Mile-High AI: NVIDIA Research to Showcase Simulation and Generative AI Advancements at SIGGRAPH
SIGGRAPH 2024
NVIDIA is bringing a range of rendering, simulation, and generative AI innovations to SIGGRAPH 2024, the world's premier computer graphics conference, taking place in Denver from July 28 to August 1.
NVIDIA Research has more than 20 papers at the conference, introducing new developments in inverse rendering tools and synthetic data generators that can help train next-generation models. NVIDIA's AI research is advancing simulation by improving image quality and unlocking new ways to create 3D representations of real or imagined worlds.
The papers focus on diffusion models for visual generative AI, physics-based simulation, and increasingly realistic AI-powered rendering. They include two technical Best Paper Award winners and collaborations with universities in the United States, Canada, China, Israel, and Japan, as well as researchers at companies such as Adobe and Roblox.
These projects will help create tools that developers and businesses can use to generate complex virtual objects, characters, and environments. Synthetic data generation can then be harnessed to tell compelling visual stories, aid scientists' understanding of natural phenomena, or assist in simulation-based training of robots and autonomous vehicles.
Diffusion Models Improve Text-to-Image Generation and Texture Painting
Diffusion models, a popular technique for turning text prompts into images, can help artists, designers, and other creators rapidly generate visuals for storyboards or production, reducing the time it takes to bring ideas to life.
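As an illustration of that workflow, the following minimal sketch uses the open-source Hugging Face diffusers library (an illustrative stand-in; the papers do not prescribe a specific tool) to turn a single text prompt into a storyboard frame:

```python
# Minimal text-to-image generation with a pretrained diffusion model.
# Assumes the open-source diffusers and torch packages and a CUDA GPU.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Fewer inference steps trade quality for speed, which is the knob that
# faster samplers and distilled models improve on.
image = pipe(
    "concept art of a red rover crossing a dusty canyon at sunset",
    num_inference_steps=30,
).images[0]
image.save("storyboard_frame.png")
```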
Two NVIDIA-coauthored papers advance the capabilities of these generative AI models.
ConsiStory, a collaboration between researchers at NVIDIA and Tel Aviv University, makes it easier to generate multiple images featuring the same main character, an essential capability for storytelling use cases such as illustrating a comic strip or developing a storyboard. The researchers' approach introduces a technique called subject-driven shared attention, which cuts the time needed to generate consistent imagery from 13 minutes to about 30 seconds.
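The paper describes the exact architecture; as a rough, hypothetical sketch of the general idea (not NVIDIA's implementation), a self-attention layer can keep a character consistent by letting every image in a batch attend to the subject tokens of every other image:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shared_attention(Q, K, V, subject_mask):
    """Toy 'shared attention': each image attends to its own tokens plus the
    subject tokens gathered from every image in the batch, which nudges the
    subject's appearance to agree across images.
    Q, K, V: (batch, tokens, dim); subject_mask: (batch, tokens) booleans."""
    b, n, d = Q.shape
    shared_K = np.concatenate([K[i][subject_mask[i]] for i in range(b)])
    shared_V = np.concatenate([V[i][subject_mask[i]] for i in range(b)])
    outputs = []
    for i in range(b):
        K_ext = np.concatenate([K[i], shared_K])
        V_ext = np.concatenate([V[i], shared_V])
        attn = softmax(Q[i] @ K_ext.T / np.sqrt(d))
        outputs.append(attn @ V_ext)
    return np.stack(outputs)

rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((4, 16, 8))      # 4 images, 16 tokens each
mask = np.zeros((4, 16), dtype=bool)
mask[:, :3] = True                               # first 3 tokens are the subject
print(shared_attention(Q, K, V, mask).shape)     # (4, 16, 8)
```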
NVIDIA researchers won the Best in Show award at last year's SIGGRAPH Real-Time Live event for AI models that generate personalised textured materials from text or image prompts. This year, they are presenting a paper that applies 2D generative diffusion models to interactive texture painting on 3D meshes, letting artists paint complex textures in real time based on any reference image.
Advances in Physics-Based Simulation
Graphics researchers are closing the gap between physical objects and their virtual representations with physics-based simulation: a range of techniques that make digital objects and characters move and behave as they would in the real world.
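As a toy picture of what "physics-based" means in practice, the sketch below (a generic example, not taken from any of the papers) advances a mass on a spring with semi-implicit Euler integration, the kind of time-stepping loop that more sophisticated character, hair, and fluid simulators build on:

```python
def step_mass_spring(pos, vel, rest_len, stiffness, mass, dt, gravity=-9.8):
    """One semi-implicit Euler step for a mass hanging on a vertical spring.
    pos and vel are the height and vertical velocity of the mass."""
    spring_force = -stiffness * (pos - rest_len)
    accel = spring_force / mass + gravity
    vel = vel + dt * accel   # update velocity first (semi-implicit / symplectic)
    pos = pos + dt * vel     # then position, for better stability
    return pos, vel

pos, vel = 1.5, 0.0
for _ in range(240):         # simulate one second at 240 steps per second
    pos, vel = step_mass_spring(pos, vel, rest_len=1.0, stiffness=50.0,
                                mass=1.0, dt=1.0 / 240.0)
print(round(pos, 3), round(vel, 3))
```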
Several NVIDIA Research papers highlight breakthroughs in the field, including SuperPADL, a project that tackles the challenge of simulating complex human motions from text prompts (see the video up top).
Using a combination of supervised and reinforcement learning, the researchers demonstrated how the SuperPADL framework can be trained to reproduce the motion of more than 5,000 skills, and showed that it can run in real time on a consumer-grade NVIDIA GPU.
Another NVIDIA paper features a neural physics method that applies AI to learn how objects (whether represented as a 3D mesh, a NeRF, or a solid shape generated by a text-to-3D model) would behave as they are moved within an environment.
A paper developed in collaboration with researchers at Carnegie Mellon University introduces a new kind of renderer that, beyond modelling physical light, can handle thermal analysis, electrostatics, and fluid dynamics. Recognised as one of the top five papers at SIGGRAPH, the method is easy to parallelize and doesn't require laborious model cleanup, opening new opportunities for speeding up engineering design cycles.
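The article doesn't spell out the method, but a classic example of applying Monte Carlo, rendering-style estimators to non-light physics is the walk-on-spheres algorithm for the Laplace equation. The sketch below (an illustrative stand-in, not the paper's renderer) estimates a harmonic function inside a disk purely from its boundary values, with no meshing or model cleanup:

```python
import numpy as np

def walk_on_spheres(x, boundary_dist, boundary_value, eps=1e-3,
                    max_steps=1000, rng=None):
    """One walk-on-spheres sample of the Laplace equation's solution at x,
    given a distance-to-boundary function and Dirichlet boundary values."""
    rng = rng or np.random.default_rng()
    p = np.array(x, dtype=float)
    for _ in range(max_steps):
        r = boundary_dist(p)
        if r < eps:                       # close enough: read off the boundary value
            break
        angle = rng.uniform(0.0, 2.0 * np.pi)
        p = p + r * np.array([np.cos(angle), np.sin(angle)])  # jump to the sphere
    return boundary_value(p)

# Unit disk with boundary value u = x; the harmonic solution is u(x, y) = x.
dist = lambda p: 1.0 - np.linalg.norm(p)
value = lambda p: p[0] / max(np.linalg.norm(p), 1e-9)
samples = [walk_on_spheres([0.3, 0.2], dist, value) for _ in range(2000)]
print(np.mean(samples))   # approaches 0.3
```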
Further simulation papers introduce an improved technique for modelling hair strands and a pipeline that accelerates fluid simulation by 10x.
Raising the Bar for Rendering Realism and Diffraction Simulation
In another set of publications, NVIDIA presents novel methods for modelling visible light and for simulating diffraction effects, such as those used in radar modelling to train self-driving cars, up to 1,000 times faster than current methods.
In one paper, researchers from NVIDIA and the University of Waterloo address free-space diffraction, an optical phenomenon in which light spreads or bends around the edges of objects. The team's method, which delivers up to 1,000x acceleration, can be integrated with path-tracing techniques to increase the accuracy of simulating diffraction in complex scenes. Beyond visible light, the model could also be used to simulate the longer wavelengths of radar, sound, or radio waves.
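For intuition about what a diffraction model has to reproduce, the textbook single-slit Fraunhofer pattern shows how intensity falls off as a wave bends around an aperture. This is standard physics included only for illustration, not the free-space edge-diffraction method in the paper:

```python
import numpy as np

def single_slit_intensity(theta, slit_width, wavelength):
    """Relative Fraunhofer intensity I(theta) = (sin(b) / b)**2,
    where b = pi * slit_width * sin(theta) / wavelength."""
    b = np.pi * slit_width * np.sin(theta) / wavelength
    return np.sinc(b / np.pi) ** 2   # np.sinc(x) = sin(pi x) / (pi x)

# Example: a 0.1 mm slit lit by 500 nm light, sampled within +/- 2 degrees.
angles = np.linspace(-0.035, 0.035, 7)
print(single_slit_intensity(angles, slit_width=1e-4, wavelength=500e-9))
```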
Path tracing generates a lifelike image by sampling many paths, or multi-bounce light rays travelling through a scene. Two SIGGRAPH papers improve the sampling quality of ReSTIR, a path-tracing algorithm first presented at SIGGRAPH 2020 by NVIDIA and Dartmouth College researchers, which has been key to bringing path tracing to real-time rendering products such as games.
One of these papers, a collaboration with the University of Utah, shares a new way to reuse calculated paths that increases the effective sample count by up to 25x, significantly improving image quality. The other boosts sample quality by randomly mutating a portion of the light's path, which helps denoising techniques perform better and reduces visual artefacts in the final render.
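Both papers build on the same core primitive as the original ReSTIR work: weighted reservoir sampling, which streams candidate samples, keeps one in proportion to its weight, and lets reservoirs from neighbouring pixels or previous frames be merged cheaply. Below is a bare-bones sketch of that primitive (illustrative only, not the papers' new estimators):

```python
import random

class Reservoir:
    """Weighted reservoir: stream candidates, keep one chosen with probability
    proportional to its weight. This is the building block behind
    ReSTIR-style spatial and temporal sample reuse."""
    def __init__(self):
        self.sample = None
        self.weight_sum = 0.0
        self.count = 0

    def update(self, candidate, weight):
        self.weight_sum += weight
        self.count += 1
        if weight > 0 and random.random() < weight / self.weight_sum:
            self.sample = candidate

    def merge(self, other):
        """Reuse another reservoir (e.g. a neighbouring pixel or the previous
        frame) as a single weighted candidate."""
        if other.sample is not None:
            self.update(other.sample, other.weight_sum)
            self.count += other.count - 1   # update() already counted one

# Pick one light out of 32 candidates, favouring higher estimated contributions.
r = Reservoir()
for _ in range(32):
    light_index = random.randint(0, 99)   # hypothetical scene with 100 lights
    contribution = random.random()        # stand-in for the estimated radiance
    r.update(light_index, contribution)
print(r.sample, r.count)
```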
Teaching AI to Understand 3D
NVIDIA researchers are also showcasing versatile AI tools for 3D representation and design at SIGGRAPH.
One paper introduces fVDB, a GPU-optimized framework for 3D deep learning that matches the scale of the real world. The fVDB framework provides the AI infrastructure needed for the large spatial scale and high resolution of city-scale 3D models and NeRFs, as well as for the segmentation and reconstruction of large-scale point clouds.
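fVDB itself is a full GPU framework; the toy structure below simply illustrates why sparsity matters at city scale, since only occupied voxels are stored and memory grows with the data rather than with the bounding volume. This is a hypothetical example, not the fVDB API:

```python
from collections import defaultdict
import numpy as np

class SparseVoxelGrid:
    """Toy sparse voxel grid: occupied cells only, keyed by integer (i, j, k)."""
    def __init__(self, voxel_size):
        self.voxel_size = voxel_size
        self.cells = defaultdict(list)

    def insert(self, point):
        key = tuple(np.floor(np.asarray(point) / self.voxel_size).astype(int))
        self.cells[key].append(point)

    def occupied(self):
        return len(self.cells)

grid = SparseVoxelGrid(voxel_size=0.5)
points = np.random.rand(10_000, 3) * 100.0     # synthetic 100 m point cloud
for p in points:
    grid.insert(p)
print(grid.occupied(), "occupied voxels out of", 200 ** 3, "possible cells")
```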
A Best Technical Paper award winner, coauthored with Dartmouth College researchers, introduces a theory of how 3D objects interact with light, unifying a diverse spectrum of appearances into a single model.
Additionally, a collaboration between Adobe Research, the University of Toronto, and the University of Tokyo introduces a real-time algorithm that generates smooth, space-filling curves on 3D meshes. Whereas earlier methods took hours, this framework runs in seconds and offers users a high degree of control over the output, enabling interactive design.
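Space-filling curves are easiest to picture in 2D: the classic Hilbert curve visits every cell of a grid exactly once while keeping consecutive cells adjacent. The sketch below generates one on a flat grid; the SIGGRAPH paper's contribution is producing such curves directly, and controllably, on 3D mesh surfaces:

```python
def hilbert_d2xy(side, d):
    """Map a distance d along the Hilbert curve to (x, y) on a side-by-side
    grid, where side is a power of two (standard iterative construction)."""
    x = y = 0
    t = d
    s = 1
    while s < side:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Visit every cell of an 8 x 8 grid; consecutive cells are always neighbours.
path = [hilbert_d2xy(8, d) for d in range(64)]
print(path[:5])
```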
NVIDIA at SIGGRAPH
Learn more about NVIDIA at SIGGRAPH. Special events include a fireside chat between Jensen Huang, founder and CEO of NVIDIA, and Lauren Goode, senior writer at WIRED, on robotics and artificial intelligence (AI) in industrial digitalization.
NVIDIA researchers will also present OpenUSD Day by NVIDIA, a full-day event showcasing how developers and industry leaders are adopting and extending OpenUSD to build AI-enabled 3D pipelines.
NVIDIA Research employs hundreds of scientists and engineers worldwide, with teams focused on AI, computer graphics, computer vision, robotics, and self-driving cars. View more of their recent work.
SIGGRAPH 2024 Location
SIGGRAPH 2024 will take place from July 28 to August 1, 2024, at the Colorado Convention Center in Denver, Colorado, USA.