Instead of matching milliseconds, then seconds, etc., I suggest you calculate the difference between the timestamps, limiting your matches to those below 60,000 milliseconds if you want (otherwise a record at 2010dec31:23:59:59.999 will match any record from that day instead of matching the 2011jan01:00:00:00.000 record). You can then keep the match with the lowest time difference.
SQL can obviously do this using
group by a.DATESTAMP, a.REFERENCE
having calculated DIF_MILLISECOND = min(calculated DIF_MILLISECOND)
but SQL is a resource hog and you might have to split the data into small bits (by REFERENCE?) depending on your platform.
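As a rough sketch of that query, assuming T1 and T2 share DATESTAMP, REFERENCE and MILLISECOND, that AMT comes from T1, and that DATESTAMP is a SAS date with MILLISECOND a millisecond-of-day (adjust the difference formula to your actual variables; the names here mirror the exact-match step further down):

proc sql;
  create table T2_FUZZY as
  select a.*, b.AMT,
         abs((b.DATESTAMP - a.DATESTAMP)*86400000
             + (b.MILLISECOND - a.MILLISECOND)) as DIF_MILLISECOND
  from T2 as a
       inner join T1 as b
       on a.REFERENCE = b.REFERENCE
  where calculated DIF_MILLISECOND < 60000
  group by a.DATESTAMP, a.REFERENCE
  having calculated DIF_MILLISECOND = min(calculated DIF_MILLISECOND);
quit;

Note that grouping by a.DATESTAMP and a.REFERENCE only works if that pair identifies a T2 record; otherwise include whatever key makes each T2 record unique.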
To limit the size of the fuzzy match data, you can do the exact matches first and work with the rest from then onward:
data T2_MATCH T2_NOMATCH;
  merge T2(in=A)
        T1(drop=AMT in=B);
  by DATESTAMP REFERENCE MILLISECOND;
  if A;                        /* keep T2 records only            */
  if B then output T2_MATCH;   /* exact match found in T1         */
  else output T2_NOMATCH;      /* left over for the fuzzy step    */
run;
Instead of SQL, I would try a hash table with REFERENCE as the key and the multidata option. You can then loop over all the T1 records sharing a REFERENCE, calculate the time differences, and keep the best match.
Work on a subset first to see whether this is faster than SQL in your case.
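A sketch of the hash approach, under the same assumptions as above (T1 fits in memory, DATESTAMP is a SAS date, MILLISECOND a millisecond-of-day; variable names are illustrative):

data T2_BEST;
  /* define the T1 host variables in the PDV without reading data */
  if 0 then set T1(keep=REFERENCE DATESTAMP MILLISECOND AMT);
  if _N_ = 1 then do;
    declare hash h(dataset: 'T1', multidata: 'yes');
    h.defineKey('REFERENCE');
    h.defineData('DATESTAMP', 'MILLISECOND', 'AMT');
    h.defineDone();
  end;
  set T2_NOMATCH(rename=(DATESTAMP=T2_DATE MILLISECOND=T2_MS));
  BEST_DIF = .;
  /* walk every T1 record with this REFERENCE */
  rc = h.find();
  do while (rc = 0);
    DIF = abs((DATESTAMP - T2_DATE)*86400000 + (MILLISECOND - T2_MS));
    if DIF < 60000 and (BEST_DIF = . or DIF < BEST_DIF) then do;
      BEST_DIF = DIF;
      BEST_AMT = AMT;
    end;
    rc = h.find_next();
  end;
  if BEST_DIF ne . then output;   /* best match within 60 seconds */
run;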