Nvidia DLSS has been around since the RTX 2080 dropped back in 2018, but while it started as a way to use machine learning to upscale games, it’s grown to be so much more than that.
Now, eight years after the Tensor Cores that power DLSS first debuted in Nvidia’s Volta GPU architecture, DLSS will upscale your games, generate entirely new frames, and, when DLSS 5 comes out later this year, even redraw each frame of your game. Some of these features are more divisive than others, but it’s hard to argue that DLSS isn’t one of the most important GPU software suites in years, and it’s a major part of why Nvidia graphics cards are so good.
DLSS Upscaling
DLSS ostensibly stands for “Deep Learning Super Sampling,” and that’s exactly what it did at the beginning. The whole idea was to render the game at a lower resolution, and then use an AI algorithm, trained on Nvidia super computers and running on Tensor Cores, to accurately upscale to a higher resolution.
For most games, DLSS is available in one of four presets, each changing the scaling factor of the game.
Ultra Performance: 33%
Performance: 50%
Balanced: 58%
Quality: 66.7%
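As a rough illustration of what those percentages mean in practice, here’s how each preset’s scaling factor translates into an internal render resolution. The helper function and its name are mine for illustration, not part of any Nvidia API:

```python
# Hypothetical helper (illustrative, not Nvidia's API): derive the internal
# resolution DLSS renders at for each quality preset's scaling factor.
PRESET_SCALE = {
    "Ultra Performance": 1 / 3,   # 33%
    "Performance": 1 / 2,         # 50%
    "Balanced": 0.58,             # 58%
    "Quality": 2 / 3,             # 66.7%
}

def render_resolution(output_w, output_h, preset):
    scale = PRESET_SCALE[preset]
    return round(output_w * scale), round(output_h * scale)

# A 4K output on the Performance preset renders internally at 1080p
print(render_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```

In other words, "Performance" at 4K is upscaling a 1080p image, which is why the higher presets look better: the algorithm has more real pixels to work from.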
Obviously, the higher the preset, the better the image quality is going to be, but the lower your frame rate will be. As a general rule of thumb, I’d recommend ‘Performance’ for 4K, ‘Balanced’ for 1440p, and ‘Quality’ for 1080p gaming. I personally turn DLSS on whenever I can, but that’s because the algorithm is so good these days that I can’t tell the difference unless I’m actively looking for it. That wasn’t always the case, though.
In the beginning, this was rough. The first iteration of DLSS upscaling was noisy and had a ton of issues with artifacts and general image softness. Coupled with the fact that it only ran in a handful of games, that made it hard to take Nvidia’s upscaling seriously. However, with DLSS 2.0, released in 2020, Nvidia improved the algorithm by taking motion vector data from the game engine into account, letting it generate pixels that were truer to the motion on screen.
DLSS 2 also shifted Nvidia’s upscaling algorithm from needing to be trained on each game individually to an algorithm that could be applied to many different games, assuming developers worked the DLSS files into their game. This made adoption skyrocket, and a ton of games started supporting the technology.
While the upscaling tech got a little better over the following few years, the next biggest jump came with DLSS 4, which debuted with the RTX 5080 in 2025. This moved the upscaling algorithm from a CNN (convolutional neural network) to a transformer model, which drastically improved accuracy.
This update worked on every RTX graphics card, and you could even force-enable it in the driver software, allowing it to run in basically any game that already supported DLSS 2 or 3. The only downside was that the model was a tad heavier than the older CNN model, so it could lessen the performance boost you got from enabling DLSS – particularly on older RTX 3000 and 2000 graphics cards.
Then, in 2026, Nvidia released a weird little mid-generation update, with DLSS 4.5. This changed both upscaling and frame generation – which I’ll get into later – but for upscaling it just made the transformer model a bit more accurate when you set it to “Performance” or “Ultra Performance” modes. The only problem was that it increased the performance footprint on older cards – so I’d only recommend enabling it with an RTX 5060 or newer graphics card.
Frame Generation
DLSS upscaling has become quite popular over the last few years, but with DLSS 3 and the RTX 4090, Nvidia went from using AI to generate extra pixels to generating entire frames. Frame gen is incredibly simple in theory: an algorithm generates extra frames and inserts them between actually rendered frames in the render queue. In practice, of course, it’s a bit more complicated than that.
By their very nature, video games are unpredictable, so Nvidia had to find a way to generate the extra frames that wouldn’t turn games into a mess of hallucinations and artifacts. And, just like the upscaling bit, frame generation was rough at first.
Rather than being the free performance Nvidia initially claimed, frame generation actually increases latency. One of the main reasons to play games at a higher frame rate is to lower the delay between whatever you’re doing – clicking a mouse, moving your character, or whatever – and that action being reflected on the screen. Because frame generation isn’t actually making the game run faster, it can only ever add latency: the algorithm needs a couple of milliseconds to generate each frame.
Nvidia offsets this increase in latency with Reflex, a system that essentially zeroes out the render queue and has the CPU sync with the graphics card, so that frames aren’t just sitting there waiting for the GPU to render them.
Reflex has been out for years, and it really does improve latency quite a bit, but it’s necessary to stop frame generation from being straight up terrible. With frame generation, Nvidia basically added a new render queue that exists on the GPU itself, taking a rendered frame, holding it until a new frame is generated, and then pacing them out to keep the motion smooth.
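That GPU-side queue can be sketched as a toy model – my illustration of the concept, not Nvidia’s actual pacing logic. Each rendered frame is held until its generated partner exists, then the pair is presented at even intervals across the original frame time:

```python
# Toy model of GPU-side frame pacing (illustrative, not Nvidia's code):
# each rendered frame is held, paired with generated frames, and the set
# is presented at even intervals so motion stays smooth.
def present_times(render_times_ms, multiplier=2):
    out = []
    for i in range(len(render_times_ms) - 1):
        start, nxt = render_times_ms[i], render_times_ms[i + 1]
        step = (nxt - start) / multiplier
        # one rendered frame plus (multiplier - 1) generated ones,
        # spaced evenly across the original frame interval
        out.extend(start + step * k for k in range(multiplier))
    out.append(render_times_ms[-1])
    return out

# Rendered frames arriving every ~16.7 ms, with 2x frame gen:
print(present_times([0.0, 16.7, 33.4]))
```

The key takeaway is the "holding" step: a rendered frame can’t be shown until its generated partner is ready, which is where the extra latency comes from.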
This is also why frame gen doesn’t exactly double your frame rate, even though it is generating a new frame off of every one sent by the game. And it’s also why it hits latency a bit. In my experience, the latency will typically go from, for example, 30 ms to 40 ms in Cyberpunk 2077 when frame generation is enabled.
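To make the "doesn’t exactly double" point concrete, here’s some back-of-the-envelope arithmetic. The per-frame generation cost is an assumption I’ve picked for illustration, not a measured figure:

```python
# Illustrative arithmetic (the 3 ms cost is an assumption, not a measurement):
# frame generation taxes the GPU, so the base render rate drops slightly
# before each rendered frame is paired with generated ones.
def framegen_fps(base_fps, gen_cost_ms, multiplier=2):
    # time per rendered frame grows by the per-frame generation cost
    frame_time_ms = 1000 / base_fps + gen_cost_ms
    rendered_fps = 1000 / frame_time_ms
    return rendered_fps * multiplier

# 60 fps base with ~3 ms of generation work per rendered frame:
# 2x frame gen lands near 102 fps, not a clean 120.
print(round(framegen_fps(60, 3.0)))
```

The generation work slows the real render rate down a touch, so the multiplied output always lands a bit under the headline 2x (or 4x) figure.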
The added latency isn’t ideal, but it’s rarely noticeable, especially in single-player games where you’re not trying to compete.
As controversial as frame generation was when it launched with the RTX 40 series, though, Nvidia upped the multiplier with DLSS 4 in 2025, adding 3x and 4x frame gen options. Now, RTX 50 series graphics cards can generate up to three frames per rendered frame, theoretically boosting frame rates by 4x. Again, that number doesn’t exactly work out in practice, but it does vastly improve performance.
Generating all of those rendered frames could have been a nightmare, but Nvidia worked an AMP core, or AI Management Processor, into its Blackwell (RTX 50 series) graphics cards. This little chip works as a sort of taskmaster for the GPU and takes over frame pacing, which was typically handled by the CPU previously. Because that work is now handled on the same die as the GPU, there is less latency when scheduling out the frames, which is why multi-frame generation is able to work.
What’s more impressive, though, is how little the extra generated frames impact latency beyond the initial latency impact of 2x frame gen. In the same Cyberpunk section I mentioned earlier, turning frame gen to 4x only bumps the PC latency up to 43ms, which is barely more. And while adding latency at all is a downside, the upside is that it’s much easier to use a 4K 240Hz monitor and get your money’s worth.
But 4x frame gen wasn’t enough for Nvidia. At CES 2026, the company announced DLSS 4.5, a sort of half-generation refresh of DLSS 4. With it, 6x frame gen is now possible, but the more exciting part of the equation is Dynamic Multi Frame Generation. When enabled, this will have the GPU, um, dynamically change the frame gen multiplier, to keep your frame rate as close to your monitor’s refresh rate as possible.
Think of it as a sort of spicy V-Sync, but instead of limiting frame rate to your refresh rate, it just generates extra frames to keep your monitor fully fed. With this update, Nvidia also updated the AI model that frame generation runs on. Creatively called “Model B” in the Nvidia App, it is much better at handling UI elements, which can mess up in some games when frame gen is active.
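The multiplier-picking logic behind Dynamic Multi Frame Generation might look something like this sketch. This is my illustration of the concept – Nvidia hasn’t published its actual heuristic:

```python
# Sketch of how a dynamic frame-gen multiplier could be chosen
# (illustrative, not Nvidia's actual logic): pick the multiplier that
# lands output fps closest to the monitor's refresh rate, capped at
# the 6x maximum DLSS 4.5 supports.
def dynamic_multiplier(rendered_fps, refresh_hz, max_mult=6):
    best = 1
    for mult in range(1, max_mult + 1):
        if abs(rendered_fps * mult - refresh_hz) < abs(rendered_fps * best - refresh_hz):
            best = mult
    return best

# 55 rendered fps on a 240Hz monitor: 4x (220 fps) is the closest fit
print(dynamic_multiplier(55, 240))  # 4
```

The appeal over a fixed multiplier is that as your rendered frame rate dips and recovers from scene to scene, the multiplier shifts with it instead of over- or undershooting your refresh rate.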
Nvidia seems intent on constantly increasing the number of frames DLSS generates, and who knows where it will decide to hit the brakes. But no matter how high that multiplier gets, just keep in mind that this feature definitely isn’t for everybody.
If you’re still rocking a 60Hz display, frame generation is not going to do anything for you. And if you’re struggling to hit 50-60 fps in the first place, frame generation is going to be a laggy, artifact-riddled mess.
Instead, frame generation is best for people who are already getting 50-60 fps on a high-refresh display, where it’ll make games look much smoother. Even then, though, the added latency means that a lot of competitive gamers should probably just ignore it.
The Future of DLSS
While both DLSS upscaling and frame generation use AI to improve performance, they don’t meaningfully change the way the game itself looks. While there might be little mistakes here and there, the final product is extremely faithful to what the game is outputting. But that might be changing later this year when DLSS 5 launches.
All we’ve really seen from DLSS 5 so far is a short demo on-stage at GTC 2026 (Nvidia’s GPU Technology Conference), so there’s not a lot to go off of. But, based on what was there, this new algorithm seems to massively impact the aesthetics of a game, particularly when it comes to character models.
Nvidia claims that the model is grounded in the geometry and “scene semantics” of the game. But when the model is taking the game’s final output along with motion vector data to generate an image that’s overlaid on top, it sure looks like it’s altering the aesthetics of the game.
In theory, this means that the better the underlying game looks, the better DLSS 5 will be at generating a final image, but because it doesn’t take data from the game engine into account, it’s very possible that it will mess up. Though, Nvidia will likely keep refining it over the half-year or so before it’s available to download.
But beyond the impact to a game’s aesthetic, it remains to be seen whether or not it will actually improve performance. After all, since the beginning, one of the guiding lights of DLSS has been maximizing performance while preserving image quality. The early model of DLSS 5 needed two RTX 5090s to run. And, sure, Team Green will find a way to optimize it to run on a single graphics card, but only time will tell if it’ll be something that gamers turn on. If it just changes the way the game looks while costing performance, I don’t think a lot of people are going to choose to use it.
Jackie Thomas is the Hardware and Buying Guides Editor at IGN and the PC components queen. You can follow her @Jackiecobra
