Saturday, July 6, 2024

NVIDIA’s Vision: The DLSS 10 Gaming Future

DLSS 10: Elevating Visuals in Gaming

NVIDIA was one of the first companies to place big bets on artificial intelligence (AI), and it is now reaping the benefits of those investments, as shown by its phenomenal growth over the past year. Deep Learning Super Sampling (DLSS) is a technology developed to boost video game performance: the game renders at a lower internal resolution, and a trained neural network reconstructs the image at the target resolution.
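To make the idea concrete, here is a minimal, purely illustrative PyTorch sketch of that principle: render at a lower resolution, then let a small convolutional network reconstruct the full-resolution frame with the help of the previous (motion-vector-warped) output. The real DLSS network, its inputs, and its training are proprietary; every layer size and name below, including ToyUpscaler itself, is invented for clarity.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyUpscaler(nn.Module):
    """Toy DLSS-style reconstruction: refine a naive upscale using a previous frame."""
    def __init__(self, scale: int = 2):
        super().__init__()
        self.scale = scale
        # 6 input channels: 3 (upscaled current frame) + 3 (previous output,
        # re-projected with the game's motion vectors -- temporal feedback).
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, low_res_color, warped_history):
        # Naively upsample the low-resolution render to the target resolution,
        # then let the network predict a residual correction on top of it.
        upsampled = F.interpolate(low_res_color, scale_factor=self.scale,
                                  mode="bilinear", align_corners=False)
        x = torch.cat([upsampled, warped_history], dim=1)
        return upsampled + self.net(x)

model = ToyUpscaler(scale=2)
low_res = torch.rand(1, 3, 540, 960)    # frame rendered at half the output resolution
history = torch.rand(1, 3, 1080, 1920)  # previous output, warped by motion vectors
high_res = model(low_res, history)      # (1, 3, 1080, 1920)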

NVIDIA has included Tensor Cores in every GeForce graphics card since the RTX series. This was done largely because, with the arrival of real-time ray tracing, there was a significant need to recoup as much performance as possible.

NVIDIA has refined DLSS over time. Version 2.0 delivered much higher image quality while remaining a performance accelerator; version 3.0 added Frame Generation, which unlocked new levels of performance, particularly in CPU-bound games; and version 3.5 focused on improving the quality of ray tracing under upscaling with the new Ray Reconstruction feature, which recently debuted in Cyberpunk 2077 to widespread acclaim.

NVIDIA DLSS 10 Gaming (Image credit: WCCFTECH)

During the concluding portion of the recent ‘AI Visuals’ roundtable discussion organized and moderated by Digital Foundry, Bryan Catanzaro, Vice President of Applied Deep Learning Research at NVIDIA, said he expects that future versions of DLSS, perhaps by version 10, will handle all aspects of rendering in a neural, AI-based system.

We actually put up a really cool demo back in 2018 at the NeurIPS conference of a world that was being drawn entirely by a neural network but driven by a game engine. What we were doing was basically leveraging the game engine to produce information about where objects are located, and then using that information as an input to a neural network that performed all of the rendering.

Because of this, the neural network was basically responsible for every step of the rendering process. Even getting that demo to work in real time in 2018 was a rather remarkable accomplishment in and of itself. Although the image quality we obtained from it was in no way comparable to that of Cyberpunk 2077, I believe that in the long term this is the direction the graphics industry will be heading.

The graphics creation process will increasingly make use of generative artificial intelligence (AI). Again, the rationale is the same as for every other use of AI: we can learn significantly more complex functions from enormous data sets than we can by manually engineering algorithms from the bottom up.

By shifting to a rendering style that relies significantly more on neural networks, I believe we will be able to achieve a higher level of realism while simultaneously lowering the cost of creating fantastic AAA environments. I believe that will be a progression that takes place over time. The standard 3D pipeline and game engines have one thing in common: they are controllable. This means you can have teams of artists produce things, and those things will have consistent stories, locations, and everything else. These technologies genuinely allow you to construct a world from scratch.

Those are definitely going to remain useful to us in the future. I do not believe that artificial intelligence will one day be able to construct games in such a way that all you have to do is write a line about making a cyberpunk game, and out of nowhere something as fantastic as Cyberpunk 2077 will appear. I do believe that in the far future, let’s say DLSS 10, there will be a completely neural rendering system that interfaces with a game engine in different ways. As a result, I believe it will be more beautiful and immersive.

Catanzaro is referring to the “driving game” that was shown for the first time at the NeurIPS conference held in Montreal, Canada, in December 2018. It goes without saying that the level of quality wasn’t very high, but AI is capable of making significant advancements in a relatively short period of time.
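In that demo, the game engine supplied per-pixel labels describing where things are, and a generative network painted the actual pixels. The toy sketch below (with an assumed label set and an invented ToyNeuralRenderer class) only illustrates that division of labor; the real system was a far larger, adversarially trained video-to-video model with temporal consistency.

import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 20  # assumed label set: road, car, building, sky, and so on

class ToyNeuralRenderer(nn.Module):
    """Maps a semantic label map from the engine to an RGB frame."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, kernel_size=3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, label_map):
        # label_map: (B, H, W) integer class ids -- the engine says *where* things are.
        one_hot = F.one_hot(label_map, NUM_CLASSES).permute(0, 3, 1, 2).float()
        return self.net(one_hot)  # the network decides *what they look like*

renderer = ToyNeuralRenderer()
labels = torch.randint(0, NUM_CLASSES, (1, 270, 480))  # layout produced by the game engine
frame = renderer(labels)  # (1, 3, 270, 480) rendered frame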

It is not at all a stretch of the imagination to think that within the next five to ten years, DLSS may completely supplant more conventional approaches to rendering. NVIDIA is already working on additional neural techniques, such as neural radiance caching and neural texture compression, that may be added to the DLSS suite as it expands to replace further parts of the rendering pipeline. However, if this turns out to be the path taken, NVIDIA will likely need to significantly expand the number of Tensor Cores included in its GPUs.
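As an illustration of one of those directions, the core idea behind neural texture compression is to replace stored texels with a small network whose weights effectively become the compressed texture. NVIDIA’s actual technique (small feature grids decoded by a tiny MLP in the shader) is more sophisticated; the hypothetical NeuralTexture class below is only the simplest possible neural-field version of the idea.

import torch
import torch.nn as nn

class NeuralTexture(nn.Module):
    """A tiny MLP whose weights act as the 'compressed' texture."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, uv):
        # uv: (N, 2) texture coordinates in [0, 1]; sampling the texture = running the MLP
        return self.mlp(uv)

# "Compressing" a texture would mean fitting these weights to a source image with a
# reconstruction loss; here we only show the sampling path a shader would mimic.
texture = NeuralTexture()
uv = torch.rand(4096, 2)
texels = texture(uv)  # (4096, 3) reconstructed colors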

To get the clearest possible picture of what NVIDIA has in store for the future of neural rendering, we will keep a close eye on newly published research papers.

