NVIDIA to Power the Metaverse, Cars, and Robots with Powerful Omniverse Updates, Thor Superchip, Grace CPU, and More


NVIDIA kicked off its GTC Developer Conference today with a keynote from Founder and CEO Jensen Huang. The new GeForce RTX 40 Series cards announced at the start are sure to grab most of the headlines, but the company's announcements span many industries. Today's keynote also introduced the world to new advancements in NVIDIA's Omniverse, including cutting-edge AI platforms and hardware such as the Grace CPU and Grace Hopper Superchips, the Drive Thor automotive processor, and the Ada Lovelace L40 data center GPU.

Huang's talk progressed from the introduction of Ada Lovelace gaming GPUs to fully simulated worlds in Omniverse, NVIDIA's digital platform for industries. He explained how hugely diverse industry domains leverage Omniverse for everything from design and development collaboration to marketing and delivery in a single pipeline.

Omniverse is a simulation engine whose objects are built on Pixar's Universal Scene Description (USD) format. Described as "the HTML of 3D", USD offers a consistent way to create, manage, and analyze the immense amounts of 3D data involved, all in real time. Omniverse serves as a platform on which companies build and maintain their own applications.
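For a sense of what USD looks like under the hood, here is a minimal scene in USD's human-readable .usda text form (the prim names are illustrative, not taken from NVIDIA):

```usda
#usda 1.0
(
    defaultPrim = "Car"
)

def Xform "Car"
{
    def Mesh "Body"
    {
        float3[] extent = [(-1, -1, -1), (1, 1, 1)]
    }
}
```

Every tool in a pipeline reads and writes this same layered scene graph, which is what makes the "HTML of 3D" comparison apt.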

Many of these applications use it to create "digital twins" of real-world things. Digital twins range from molecules for pharmaceutical research, to objects for prototyping, to entire municipal landscapes for tuning self-driving behavior. Huang said that in the near future, everything created will have a digital twin.

This is possible thanks to the new Omniverse Cloud service, a suite of software and hardware distributed in data centers around the world. It ties into Omniverse applications running locally on edge devices as needed. Because Omniverse Cloud offloads rendering, developers can deliver immersive, accurate experiences to a wide range of devices.

Rimac auto-configurator on Omniverse

Huang used an example from Rimac Automobili, which leverages Omniverse Cloud to build a configurator app. The configurator lets buyers select options and view changes in real time, drawn from actual "ground truth" engineering files rather than pre-rendered images of every possible option. This merges the design, engineering, and marketing pipelines into a single source of truth, with Omniverse Nucleus Cloud as the database engine.

Omniverse Cloud is built on NVIDIA's HGX and OVX platforms. NVIDIA HGX is designed for AI workloads, while NVIDIA OVX is designed for real-world visualization and simulation.

The HGX platform is enhanced with the new Grace CPU and Grace Hopper Superchips. As disclosed by Arm, the Grace processor is built from 144 Neoverse V2 cores and delivers 1 terabyte per second of memory bandwidth, making it ideal for high-performance computing workloads.

The Grace Hopper Superchip combines the Grace CPU with the Hopper GPU architecture over NVLink-C2C to tackle the most complex AI workloads. The coherent memory interconnect runs roughly seven times faster than PCIe Gen 5, at approximately 900 GB/s, and the system delivers 30 times the aggregate memory bandwidth of the DGX A100 before it.
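As a sanity check on those figures, a quick back-of-the-envelope calculation (the assumption that NVIDIA compares against a bidirectional PCIe Gen 5 x16 link is ours, and encoding overhead is ignored):

```python
# Rough check of NVIDIA's "7x PCIe Gen 5" claim for NVLink-C2C.
GT_PER_LANE = 32  # PCIe Gen 5 raw rate: 32 gigatransfers/s per lane
LANES = 16        # assumption: the comparison baseline is an x16 link

# GB/s per direction (8 bits per byte), then both directions combined
pcie5_x16 = GT_PER_LANE * LANES / 8 * 2   # = 128.0 GB/s bidirectional
nvlink_c2c = 7 * pcie5_x16                # = 896.0 GB/s

print(f"PCIe Gen 5 x16 (bidirectional): ~{pcie5_x16:.0f} GB/s")
print(f"7x that: ~{nvlink_c2c:.0f} GB/s")  # close to the quoted ~900 GB/s
```

The 7x multiplier lands at roughly 896 GB/s, consistent with the quoted figure of approximately 900 GB/s.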
NVIDIA OVX Server with L40 GPU

Conversely, the NVIDIA OVX system relies on the new Ada Lovelace L40 GPUs to tackle digital twin workloads. The base unit is the NVIDIA OVX server, which combines up to eight L40 GPUs in a 4U rack chassis. These can be aggregated into an OVX POD of 4 to 16 OVX servers with an "optimized network fabric and storage architecture" to scale capacity. That in turn scales up to the 32-server OVX SuperPOD, which handles massive simulations with low real-time latency.
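Multiplying those figures out shows how GPU counts grow across the tiers (simple arithmetic on the numbers above):

```python
# GPU counts per OVX tier, from the figures in the announcement.
L40_PER_SERVER = 8                 # up to eight L40s per 4U OVX server
POD_SERVERS = range(4, 17)         # an OVX POD spans 4 to 16 servers
SUPERPOD_SERVERS = 32              # the OVX SuperPOD is 32 servers

pod_gpu_range = (min(POD_SERVERS) * L40_PER_SERVER,
                 max(POD_SERVERS) * L40_PER_SERVER)
superpod_gpus = SUPERPOD_SERVERS * L40_PER_SERVER

print(f"OVX POD: {pod_gpu_range[0]}-{pod_gpu_range[1]} L40 GPUs")
print(f"OVX SuperPOD: {superpod_gpus} L40 GPUs")
```

A fully built-out SuperPOD therefore tops out at 256 L40 GPUs.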
Ada Lovelace L40 Datacenter graphics card

The L40 GPU offers a whopping 48GB of ECC-capable GDDR6 memory to handle large data sets. NVIDIA states that its CUDA cores support 16-bit math (BF16) for mixed-precision workloads. The L40 also incorporates third-generation RT cores and fourth-generation Tensor cores to navigate design and data science models with aplomb. Of course, this tier also grants access to NVIDIA virtual GPU software to support more powerful remote virtual workstations.
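BF16 keeps float32's 8-bit exponent but only 7 mantissa bits, trading precision for range. A minimal illustration of the format in pure-Python bit manipulation (our sketch, not NVIDIA code):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to its top 16 bits (bfloat16, round-toward-zero)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def bf16_bits_to_f32(b: int) -> float:
    """Widen bfloat16 back to float32 by zero-filling the dropped mantissa bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

# 3.140625 fits exactly in bf16's 7 mantissa bits, so it round-trips losslessly;
# most values instead lose their low-order mantissa bits to truncation.
assert bf16_bits_to_f32(f32_to_bf16_bits(3.140625)) == 3.140625
```

Because the exponent field is unchanged, bf16 covers the same numeric range as float32, which is why it is popular for mixed-precision training and inference.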

Finally, the tech titan unveiled the new Drive Thor to serve as a centralized computer for automotive needs. With its multi-domain computing support, it combines the functionality of several traditionally discrete chips on a single SoC. According to NVIDIA, this simplifies the design of autonomous vehicles while reducing cost and weight.

Drive Thor is a powerful processor that supplants the previously planned 1,000-TOPS Atlan, doubling its projected compute to 2,000 TOPS. Drive Thor brings many other features and safety functions as well. It houses an inference transformer engine for the complex AI workloads autonomous driving requires, and its NVLink-C2C interconnect further allows the system to run multiple operating systems simultaneously.

NVIDIA reports robust support for its Omniverse ecosystem: the company says there are already 150 connectors to Omniverse, and that it supports open software platforms from companies around the world. Omniverse Cloud works much like a content delivery network, but for graphics, so NVIDIA has dubbed it a Graphics Delivery Network (GDN). Omniverse Cloud infrastructure is now available in a SaaS model on AWS and will reach other cloud platforms in the near future.

