The best answer may depend on whether HAVE2 is already stored in a known, sorted order.
Here is an approach to consider. I'm not sure whether it's feasible, because I'm not sure how you want to handle duplicate entries, but there should be a few posters on this thread who can take the idea and run with it. (Sorry, I can't spend enough time to look up the details.)
Rather than creating a hash table, create an informat. The advantage: the hash table has to reload the 300M records on every run (i.e., for each year), while an informat can be created once and permanently saved. Building it also forces you to clean out any duplicates from HAVE2.
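To sketch the idea (all names here are assumptions, since I don't know your actual variables): suppose HAVE2 has a character key ID and a numeric value VAL. You can build a CNTLIN data set and feed it to PROC FORMAT, saving the informat in a permanent catalog:

/* De-duplicate first: CNTLIN input cannot contain duplicate START values. */
proc sort data=have2 out=cntl nodupkey;
   by id;
run;

/* Build the control data set that defines a numeric informat (TYPE='I'). */
data cntl;
   set cntl end=eof;
   retain fmtname 'keylkp' type 'I';      /* hypothetical informat name    */
   length start label $ 32 hlo $ 1;       /* widen if your keys are longer */
   start = id;                            /* string to look up             */
   label = strip(put(val, best12.));      /* value the informat returns    */
   output;
   if eof then do;                        /* catch-all row: unmatched keys */
      hlo   = 'O';                        /* return missing quietly        */
      label = '.';
      output;
   end;
   keep fmtname type start label hlo;
run;

/* Create the informat once and store it in a permanent catalog. */
libname library '/some/permanent/path';   /* hypothetical location */
proc format cntlin=cntl library=library;
run;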
Once that is done, each yearly run would only need to append to the smaller (300K-record) data set, and the processing should be swift.
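Each yearly run then replaces the hash lookup with a call to the INPUT function (again, the data set and variable names below are placeholders):

options fmtsearch=(library);     /* point SAS at the saved catalog      */

data want;
   set yearly;                   /* hypothetical 300K-record data set   */
   val = input(id, keylkp.);     /* informat lookup replaces the hash   */
run;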