In one of my first (non-realtime) 3D graphics software projects in the mid-90s, we used a screen-space reflection method rather than true ray-traced reflections, because our procedural geometry exceeded the memory of the machines we ran on. We rendered the scene, capturing RGBA + depth, and deferred the reflective surfaces until all other rendering was complete. Then we'd come back and ray trace each reflective fragment against the world-space-projected depth buffer, and plot the reflected color captured in the corresponding RGBA buffer. It was a good trick because our reflective surfaces were almost always disturbed planes near the bottom of the image, and the rest of the geometry was convex along the Z axis (no real overhangs with undersides that weren't visible in the screen-space render).
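To make that concrete, here's a minimal standalone sketch of the second pass, assuming a pinhole camera and a fixed world-space step; the structure and all the names (GBuffer, Camera, marchReflection, the wall/water setup in main) are purely illustrative, not our original code:

```cpp
// Minimal sketch of reflections via ray marching a captured depth buffer.
// Everything here is an illustrative assumption, not the original 90s code.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <optional>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 a)      { return scale(a, 1.0f / std::sqrt(dot(a, a))); }

// Reflect incident direction I about unit surface normal N.
static Vec3 reflect(Vec3 I, Vec3 N) { return sub(I, scale(N, 2.0f * dot(I, N))); }

// First-pass capture: one packed RGBA sample and one camera-space depth per pixel.
struct GBuffer {
    int width, height;
    std::vector<uint32_t> rgba;
    std::vector<float>    depth;   // camera-space Z, positive into the screen
};

// Simple pinhole camera used to reproject world-space points into the captured buffers.
struct Camera {
    Vec3  position;
    Vec3  forward, right, up;      // orthonormal basis
    float focalLength;             // in pixels
};

// Project a world-space point to pixel coordinates plus camera-space depth.
// Returns false if the point is behind the camera or off screen.
static bool projectToScreen(const Camera& cam, const GBuffer& gb,
                            Vec3 p, int& px, int& py, float& camDepth) {
    Vec3 rel = sub(p, cam.position);
    camDepth = dot(rel, cam.forward);
    if (camDepth <= 0.0f) return false;
    px = (int)std::floor(dot(rel, cam.right) / camDepth * cam.focalLength + gb.width  * 0.5f);
    py = (int)std::floor(dot(rel, cam.up)    / camDepth * cam.focalLength + gb.height * 0.5f);
    return px >= 0 && px < gb.width && py >= 0 && py < gb.height;
}

// March the reflected ray through the captured depth buffer. When a sample point
// falls behind the depth stored at its pixel, call it a hit and return the color
// captured there in the first pass.
static std::optional<uint32_t> marchReflection(const Camera& cam, const GBuffer& gb,
                                               Vec3 fragPos, Vec3 fragNormal,
                                               float stepSize = 0.05f, int maxSteps = 256) {
    Vec3 viewDir = normalize(sub(fragPos, cam.position));
    Vec3 rayDir  = reflect(viewDir, fragNormal);
    Vec3 p = fragPos;
    for (int i = 0; i < maxSteps; ++i) {
        p = add(p, scale(rayDir, stepSize));
        int px, py; float camDepth;
        if (!projectToScreen(cam, gb, p, px, py, camDepth))
            return std::nullopt;                         // ray left the captured frame
        if (camDepth > gb.depth[py * gb.width + px])     // passed behind captured geometry
            return gb.rgba[py * gb.width + px];          // reuse the first-pass color
    }
    return std::nullopt;                                 // ran out of steps without a hit
}

int main() {
    // Hypothetical captured frame: a uniform wall of pixels at depth 10.
    GBuffer gb{256, 256,
               std::vector<uint32_t>(256 * 256, 0xFF8040FFu),
               std::vector<float>(256 * 256, 10.0f)};
    Camera cam{{0, 0, 0}, {0, 0, 1}, {1, 0, 0}, {0, 1, 0}, 256.0f};

    // A reflective "water" fragment below the camera, facing straight up.
    auto hit = marchReflection(cam, gb, {0.0f, -1.0f, 5.0f}, {0.0f, 1.0f, 0.0f});
    if (hit) std::printf("reflected color: 0x%08X\n", (unsigned)*hit);
    else     std::printf("no screen-space hit\n");
    return 0;
}
```

The real projection and step handling were presumably more careful than this, but the core loop is the same idea: step along the reflected ray, reproject into the captured frame, compare depths, and reuse the color that was already rendered there.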
Someone asked me about the technique the other day, and I realized we kinda just made it up at the time; we didn't really have a reference (unlike most of the other techniques we used, where we were just implementing someone else's paper, like Worley's cellular basis function or fBm or whatever).
There was a lot of interest in SSR for real-time rendering in the 2000s, but I can't unearth anything earlier. Was it a known technique back in the 90s?