Unless you’ve been living under a rock for the last 48 hours, you will have seen the reviews of Nvidia’s new RTX 2080 and RTX 2080 Ti graphics cards go live. As part of our analysis, we looked at Deep Learning Super Sampling (DLSS) – a new form of anti-aliasing technology that leverages the cards’ Tensor Cores. We’ve had a few comments and messages from readers about DLSS, and it seems there is a fair amount of confusion about how it works and what it does, so we reached out to Nvidia to clarify the situation.
When our reviews went live, we noticed that some readers seemed to think that DLSS is simply upscaling technology with a fancy name. That would mean that instead of rendering at native resolution, a game would render each frame at a lower resolution and upscale it to the display’s resolution to improve performance. While we have been told this is part of what makes up DLSS, there is more to it than simple upscaling.
Rick Napier, Senior Technical Product Manager at NVIDIA, told us that at its core, DLSS is a post-processing technique that improves performance over traditional anti-aliasing (AA) methods in two main ways. First, it simply takes fewer samples per pixel than current AA methods – reducing demand on the GPU. Secondly, Rick emphasised that DLSS is executed on the Tensor Cores within the Turing GPU, rather than the CUDA cores. In Rick’s words, this is “critical” as it means DLSS is “freeing up the shaders to focus on rendering and not applying an AA technique.”
In sum, DLSS can increase game performance because traditional GPU shaders are not being leveraged for AA, while DLSS is also taking fewer samples per pixel.
So that’s what DLSS does, but how does it work under the hood? In essence, DLSS uses a neural network that has been trained to take input frames from a game and output higher-quality versions of them. As Andrew Edelsten, Director of Developer Technologies at NVIDIA, puts it, the “DLSS model is fed thousands of aliased input frames and its output is judged against the ‘perfect’ accumulated frames. This has the effect of teaching the model how to infer a 64 sample per pixel supersampled image from a 1 sample per pixel input frame.”
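To make that training objective concrete, here is a toy sketch of the idea (our own illustration, not Nvidia’s code): a cheap 1-sample-per-pixel render is pushed toward a 64-sample accumulated target by gradient descent. A single learned blend weight with a 3x3 box filter stands in for the neural network, and the jittered point-sample renderer is a hypothetical stand-in for a game engine.

```python
# Toy sketch of a DLSS-style training objective. All names here are
# illustrative stand-ins, not Nvidia's actual pipeline.
import numpy as np

rng = np.random.default_rng(0)

def render(scene, samples_per_pixel, size=32):
    # Hypothetical renderer: average jittered point samples of a
    # continuous scene function within each pixel.
    ys, xs = np.mgrid[0:size, 0:size].astype(float)
    acc = np.zeros((size, size))
    for _ in range(samples_per_pixel):
        acc += scene(xs + rng.random((size, size)),
                     ys + rng.random((size, size)))
    return acc / samples_per_pixel

# A high-frequency checker-like pattern that aliases badly at 1 spp.
scene = lambda x, y: (np.sin(0.9 * x) * np.cos(0.9 * y) > 0).astype(float)

aliased = render(scene, 1)    # cheap, noisy 1 spp input frame
target = render(scene, 64)    # "perfect" 64 spp accumulated frame

def box3(img):
    # 3x3 box filter; stands in for the neural network's output.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0

# Train: fit a blend weight w so that (1-w)*aliased + w*box3(aliased)
# minimises squared error against the supersampled target.
blurred = box3(aliased)
w = 0.0
for _ in range(200):
    out = (1 - w) * aliased + w * blurred
    grad = 2 * np.mean((out - target) * (blurred - aliased))
    w -= 0.5 * grad
```

The point of the sketch is the loss, not the model: the network never sees the 64 spp frame at runtime, it has simply been adjusted so its output lands closer to one.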
So when you’re gaming with DLSS on, it uses an encoder to extract “multidimensional features from each frame to determine what are edges and shapes and what should and should not be adjusted.” In other words, it knows what areas of your frame should and shouldn’t get the ‘DLSS treatment’ as it were. Once it knows that, the high-quality frames from the DLSS neural network can be combined to provide a final image.
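That combining step can be sketched in the same toy spirit (again our own illustration: a simple gradient-magnitude edge detector stands in for the encoder’s learned “edges and shapes” features, and `smoothed` stands in for the network’s high-quality output):

```python
# Hedged sketch of selectively combining frames: apply the 'DLSS
# treatment' only where edges are detected, leaving flat areas alone.
# The edge detector and blend rule are illustrative, not Nvidia's.
import numpy as np

def edge_mask(frame, threshold=0.1):
    # Stand-in for the encoder: mark pixels with strong local gradients.
    gy, gx = np.gradient(frame)
    return (np.hypot(gx, gy) > threshold).astype(float)

def combine(frame, network_output):
    # Use the high-quality output at edges, the original frame elsewhere.
    m = edge_mask(frame)
    return m * network_output + (1.0 - m) * frame

frame = np.kron(np.eye(4), np.ones((8, 8)))  # blocky image with hard edges
smoothed = 0.5 * frame + 0.25                # stand-in for network output
final = combine(frame, smoothed)
```

Flat regions of `frame` pass through untouched, while pixels along the block boundaries take the network’s values.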
That’s DLSS as simply as we can put it. At the moment we have 25 games that are set to support the technology, and you can see the full list over here. Be sure to read our RTX 2080 and RTX 2080 Ti reviews as well for the full low-down on the new Turing-based cards.
KitGuru says: There’s quite a lot going on with Nvidia’s DLSS technology, but it is more than simple upscaling. We look forward to testing it in games soon.