What’s Next for Frame Generation?
NVIDIA says that a GeForce RTX 5070 can approximate a high-end RTX 4090 gaming experience with Frame Generation. Is this the future of gaming?

NVIDIA will tell you that a GeForce RTX 5070 can approximate a high-end gaming experience that you'd normally need an RTX 4090 for. The catch is that this requires generating frames using DLSS Frame Generation to fool your brain into thinking your game is running more smoothly. Their shiny new Blackwell GPU can now generate three frames at once. But what can you actually do with that?
Frame Smoothing For Dummies
Hardware Unboxed refers to related technologies like G-Sync, FreeSync, DLSS, FSR, and frame generation as “frame smoothing” technologies, and the term is quite good at describing what they all do. Ultimately, you’re using these settings to run a game at a higher frame rate and achieve a smoother presentation.

To start, Vertical Sync (V-Sync) is the term for when the output from a game runs at the same refresh rate as the monitor displaying it. At 60Hz, there are sixty redraws of the image on the display, and between each redraw there is a brief pause before the next one begins. This is called the vblank interval.
When a GPU completes a render of a frame, it immediately starts work on the next one. The completed frame is held in a section of memory called a buffer, which is then scanned out to the monitor. If the GPU is fast enough, it can render two or three frames ahead of time and store each frame in its own buffer.
Double buffering holds two frames in buffers; triple buffering holds three. Regardless of your settings, in an ideal world, your game never drops below 60fps and the output is always perfectly synced to the monitor.
If it drops below that level with V-Sync enabled, the monitor will redraw the previous frame again while waiting for the next one to arrive from the GPU, dropping the effective frame rate to 30fps for a brief moment. The game visibly hitches and seems to lag, and your mouse and keyboard inputs feel disconnected from what's happening on the display.
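For illustration, here's a minimal sketch of why a missed vblank deadline cuts the frame rate in half; the 60Hz panel and the render times are assumptions picked for the example, not measurements from any real game.

```python
# A minimal sketch of how V-Sync quantizes frame delivery (hypothetical numbers).
# If a frame misses the vblank deadline, it waits for the next one, so the
# effective frame rate drops to a divisor of the panel's refresh rate (60 -> 30 -> 20...).
import math

REFRESH_HZ = 60
VBLANK_INTERVAL_MS = 1000 / REFRESH_HZ  # ~16.7ms between redraws

def effective_fps_with_vsync(frame_render_ms: float) -> float:
    """Return the rate at which new frames actually reach the screen."""
    # A frame can only be shown on a vblank boundary, so its effective cost is
    # rounded up to a whole number of refresh intervals.
    intervals_used = math.ceil(frame_render_ms / VBLANK_INTERVAL_MS)
    return REFRESH_HZ / intervals_used

print(effective_fps_with_vsync(15.0))  # 60.0 - the GPU hits every refresh
print(effective_fps_with_vsync(17.5))  # 30.0 - just missing the deadline halves the rate
```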

Variable Refresh Rate (VRR) was the first attempt at fixing this. The game runs at any given framerate and, so long as that framerate is within the monitor's VRR window (typically from 48Hz to the display's maximum), the monitor syncs up and displays each new frame as soon as it is ready from the GPU. The result is a near-perfect presentation with minimal input lag and no tearing.
The problem with VRR, however, is that games running on a desktop PC or laptop don't always run at a fixed framerate. Provided the game holds a high average framerate this isn't an issue, but a game that runs at 120fps and suddenly drops below 80fps will have a noticeable increase in input delay that is jarring to many people.
You may find, for example, that your mouse sensitivity settings are perfectly suited to running Apex Legends at 120fps, but you start missing shots at 80fps because of the higher input lag. The game still looks smooth, but something feels wrong.
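A rough sketch of that behaviour, assuming a hypothetical 48Hz to 165Hz VRR window: inside the window the panel simply follows the GPU, and outside it the driver has to fall back to other tricks.

```python
# A rough sketch of how a VRR display tracks the game's frame rate.
# The 48Hz-165Hz window is an assumption for illustration; real windows vary by monitor.
VRR_MIN_HZ = 48
VRR_MAX_HZ = 165

def panel_refresh_hz(game_fps: float) -> float:
    """The panel follows the GPU while the frame rate sits inside the VRR window."""
    if game_fps > VRR_MAX_HZ:
        return VRR_MAX_HZ  # frames arriving faster than this can't be shown any sooner
    if game_fps < VRR_MIN_HZ:
        return VRR_MIN_HZ  # below the window, drivers typically repeat frames or fix the refresh
    return game_fps        # each new frame is displayed the moment it is ready

for fps in (120, 80, 60, 40):
    print(f"{fps}fps -> panel refreshes at {panel_refresh_hz(fps)}Hz")
```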

To try to fix the input latency problem, AMD and NVIDIA both worked on similar techniques that also work with monitors that don't support VRR. AMD calls their solution Enhanced Sync, while NVIDIA's is called Fast Sync.
In both cases, if the GPU is pushing more frames per second than the monitor can display (say, 120fps for a 100Hz monitor), the additional rendered frames that do not sync up with a 100Hz refresh cycle are discarded, as they will never be displayed.
The result is a smooth 100Hz gaming experience with the input delay of a game running with V-Sync disabled, but without the tearing. You may notice some microstutter, because the dropped frames affect in-game animations, but for the most part the problem is solved.
Later on, these features were enhanced with driver settings that would turn VRR on if the framerate dipped below the monitor's maximum refresh rate, trading a small increase in input lag for a smoother experience.
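A simplified sketch of the core behaviour, using the 120fps-on-a-100Hz-panel example from above: at each refresh only the newest completed frame is scanned out, and anything older is thrown away.

```python
# A simplified model of the Fast Sync / Enhanced Sync idea: at every refresh the display
# scans out the most recently completed frame; older, unused frames are discarded.
RENDER_FPS = 120   # how fast the GPU is producing frames
PANEL_HZ = 100     # how fast the monitor can show them
DURATION_S = 1.0

frame_times = [i / RENDER_FPS for i in range(int(RENDER_FPS * DURATION_S))]
refresh_times = [i / PANEL_HZ for i in range(int(PANEL_HZ * DURATION_S))]

displayed = set()
for t in refresh_times:
    ready = [i for i, finished_at in enumerate(frame_times) if finished_at <= t]
    if ready:
        displayed.add(ready[-1])  # only the newest completed frame is shown

print(f"rendered: {len(frame_times)}, displayed: {len(displayed)}, "
      f"discarded: {len(frame_times) - len(displayed)}")
# -> 20 of the 120 rendered frames never reach the screen
```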
All seemed well after that, but the honeymoon did not last long.
Let's Just... Fake It?
Today we have high-refresh monitors that are affordable and very capable. Panels running at 100Hz and faster are available everywhere you look. But modern games with all the eye candy enabled aren't going to run at those refresh rates. Cyberpunk 2077 with RT Overdrive settings on a GeForce RTX 4090 struggles to hold 80fps most of the time.
If you want that smooth V-Sync presentation again, you have to do some black magic. You’re either forced to reduce the quality settings in an attempt to boost performance, or use DLSS to run the game at a lower internal resolution.
As it happened, there was something that could help.
An old technology was dusted off and proved useful once more: Motion Interpolation. Modern TVs call this "motion smoothing" in their settings (and you should turn it off), but the feature has been around on PCs for a very long time.
In a nutshell, you render two real frames on the GPU and hold the second one in a back buffer. Because the framerate is already low, an algorithm smooths out the presentation by generating an additional frame that combines the contents of frame 1 and frame 2. Frame 1 is displayed, then the generated frame, and finally frame 2. The process repeats.
You now have three visible frames, and one of them is fake. And technically, the framerate is now doubled.
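As a toy illustration, here's the basic interpolation idea in a few lines of Python. Real implementations use motion estimation rather than a plain blend, but the presentation order is the same.

```python
# A toy sketch of motion interpolation: the generated frame is a blend of the two
# real frames on either side of it in time.
import numpy as np

def interpolate(frame1: np.ndarray, frame2: np.ndarray, weight: float = 0.5) -> np.ndarray:
    """Blend two frames; weight=0.5 sits halfway between them in time."""
    return (frame1 * (1 - weight) + frame2 * weight).astype(frame1.dtype)

frame1 = np.zeros((1080, 1920, 3), dtype=np.uint8)      # real frame, held in the front buffer
frame2 = np.full((1080, 1920, 3), 255, dtype=np.uint8)  # real frame, held in the back buffer
generated = interpolate(frame1, frame2)                  # the fake frame in between

present_order = [frame1, generated, frame2]  # three visible frames, one of them generated
```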
Because this was originally designed to smooth out video shot at a lower framerate, viewers may perceive interpolated motion as having a kind of "soap opera" effect. Soap operas were typically shot on video at 60 fields per second, which gives that hyper-real look of real humans doing human things with realistic movements.
Now Fake Things Look Weird
Motion interpolation has some fundamental problems. Fast-moving action scenes can show artifacts and ghosting in the generated frame. Fine details get lost as the algorithm tries to work out which parts of the two frames it analyses are text or static background.
If you're scaling up the video from 720p to 1080p, or even 1080p to 4K, you're also adding artifacts from aliasing in the final image, which need to be removed as well.
All of these problems are repeated today in the frame generation techniques used for video games. They haven't gone away. While NVIDIA and AMD have made great strides in reducing the number of artifacts shown on the display by using generative AI and other software tricks, the end result is that generated frames are always lower in quality than natively rendered ones.
At first, frame generation only added a single extra frame. A game running at 60fps could suddenly appear as smooth as the same game running at 120fps. Just like motion interpolation for video, the generated frame was derived from the contents of the first and second frames, and was inserted after frame 1 was displayed to double the effective framerate.
At this stage, generative AI was also adopted into NVIDIA's version of the technology, bundled in as part of DLSS. NVIDIA spent years and thousands of engineering hours, with hundreds of GPUs running games, to train a neural network to process and fix artifacts in generated and upscaled frames.
Using AI dramatically improved the quality of the generated image, to NVIDIA's credit, but AMD gets close with traditional software approaches using image sharpening and other old-school tricks. It didn't hurt that these tricks worked on more than just AMD's hardware, either.
Let's Just Fake It (But More)
Running games like this is very different from film. Because your inputs can't be predicted, visual artifacting as a result of fast inputs is always going to be an issue.
NVIDIA tried to fix this in later versions of their frame generation tech by adding motion vectors alongside their Optical Flow technology. These are parameters from the game engine itself, combined with feedback from an algorithm that analyses the final output image: how fast your character is moving, where you're looking in-game, what objects are coming into view, and so on.
The generated frame uses those motion vectors to calculate what the likely output should be, based on your movements in-game. The result is something closer to reality, but it's still not perfect. Fast-moving background objects can stick out like a sore thumb, and text can look like it's popping in and out of existence.
Later versions of DLSS, namely DLSS 3, fixed text and UI rendering to a large degree, and it is further improved with DLSS 4 (which, luckily, can run on older NVIDIA RTX hardware).
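To make the idea concrete, here's a heavily simplified sketch of motion-vector reprojection: pixels from the last real frame are shifted along their motion vectors to predict the in-between image. The real DLSS pipeline layers optical flow analysis and AI cleanup on top of this and isn't publicly documented at this level, so treat this purely as an illustration of the principle.

```python
# A heavily simplified sketch of reprojection using per-pixel motion vectors.
# Real frame generation also handles occlusion, disocclusion, and UI elements.
import numpy as np

def reproject(frame: np.ndarray, motion_vectors: np.ndarray) -> np.ndarray:
    """Shift pixels by half their motion vector to approximate the halfway frame."""
    h, w, _ = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # motion_vectors[..., 0] is x displacement, [..., 1] is y displacement, in pixels per frame
    src_x = np.clip((xs - motion_vectors[..., 0] * 0.5).astype(int), 0, w - 1)
    src_y = np.clip((ys - motion_vectors[..., 1] * 0.5).astype(int), 0, h - 1)
    return frame[src_y, src_x]

frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)  # stand-in for a rendered frame
motion = np.zeros((720, 1280, 2), dtype=np.float32)
motion[..., 0] = 8.0  # the whole scene panning 8 pixels to the right per frame
generated = reproject(frame, motion)  # the predicted in-between frame
```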

Now that Multi Frame Generation has debuted with the GeForce RTX 50 Series, these issues are somewhat moderated by how briefly each generated frame is seen on the display. NVIDIA can now generate up to three additional frames from the base output.
There are benefits to this approach. First, the output from one generated frame to the next is more similar, and thus less jarring. There's more time to figure out what the next image should look like. Second, NVIDIA weaves in multiple tricks to do more advanced filtering and removal of artifacting, including a new generative AI algorithm using a transformer model.
All the usual input lag and artifacting problems are still there, but at least the presentation is improved because of the higher perceived framerate. You would not want to play a fast-moving game at a 60fps base framerate boosted to 240fps, but this is all a stepping stone to the maturation of frame generation technology.
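Some back-of-the-envelope numbers for that 60fps-to-240fps scenario show why: every frame is only on screen for a few milliseconds, but just a quarter of them are real, and your inputs are still sampled at the base rate.

```python
# Rough frame-pacing arithmetic for Multi Frame Generation at a 60fps base.
BASE_FPS = 60
GENERATED_PER_REAL = 3  # up to three extra frames on the RTX 50 series

presented_fps = BASE_FPS * (GENERATED_PER_REAL + 1)
frame_on_screen_ms = 1000 / presented_fps
native_share = 1 / (GENERATED_PER_REAL + 1)

print(f"presented framerate: {presented_fps}fps")                # 240fps
print(f"each frame is visible for ~{frame_on_screen_ms:.1f}ms")  # ~4.2ms
print(f"natively rendered share of frames: {native_share:.0%}")  # 25%
print(f"inputs are still sampled at the {BASE_FPS}fps base rate")
```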
Target Fixation Is The Future
AMD and NVIDIA are almost tied for advances in frame generation, but the next leap forward will require one of them to step back from the race to simply generate ever more frames. NVIDIA's graphs look comical at this stage, and everyone knows they're unrealistic. An RTX 5070 can never hope to match an RTX 4090's output.
Instead, the advantage will be handed over to whoever develops a target frame generation feature first. AMD is technically closer because they have an existing driver feature called Radeon Chill, which could be updated to support frame generation.
Let's assume you have a 1440p monitor capable of a 180Hz refresh rate. You cap a game at 70fps with max settings and enable VRR. The monitor displays each frame as it arrives, refreshing the panel at 70Hz. This looks smooth, but has noticeable input lag at the lower frame rate.
Because the display is running at a lower refresh rate, the overall image also appears slightly darker.
Then you uncap the game's framerate and disable VRR. With the game now running at 100fps most of the time, you set frame generation to add in however many generated frames are necessary to present the game at 180fps. Your monitor is perfectly synced to the output, and input lag is negligible (because the game was already running at 100fps). The game will now always look and feel as if it is running at 180fps.
The closer you are to the maximum refresh rate of your monitor, the fewer generated frames are needed, and the vast majority of the images seen by your eyeballs are natively rendered. This may be an acceptable tradeoff for most people. A slight bit of noise in the overall presentation is arguably worth it for the perceived smoothness.
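Nobody ships a mode like this today, but the arithmetic behind it is simple. Here's a small sketch using the 100fps-base, 180Hz-panel example above; the function name and behaviour are assumptions about how such a feature could work, not anything AMD or NVIDIA has announced.

```python
# A sketch of a hypothetical "target" frame generation mode: generate only enough
# frames to fill the gap between the game's real framerate and the panel's refresh rate.
def generation_budget(base_fps: float, target_hz: float) -> tuple[float, float]:
    """Return (generated frames per second, share of displayed frames that are native)."""
    generated_per_second = max(0.0, target_hz - base_fps)
    native_share = min(1.0, base_fps / target_hz)
    return generated_per_second, native_share

for base in (70, 100, 144, 170):
    generated, native = generation_budget(base, 180)
    print(f"{base}fps base -> {generated:.0f} generated frames/s, "
          f"{native:.0%} of what you see is natively rendered")
```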
You paid for a 180Hz monitor, after all. Might as well use up all the Hertz.
That's the future of frame generation we can hope for from all three GPU makers: AMD, Intel, and NVIDIA. One day you'll have the choice of how close you want to run to your monitor's refresh rate by enabling frame generation, with low input lag and dramatically improved image quality thanks to advanced AI algorithms.
The future is generated.