Nvidia's Omniverse Cloud brings Digital Twins to the Apple Vision Pro in real time

Nvidia continues to invest in the Industrial Metaverse and demonstrates new workflows between Omniverse Cloud and Apple Vision Pro.

FACTS

Nvidia has unveiled a new software framework based on Omniverse Cloud APIs that allows developers to send their OpenUSD scenes from creation tools to the Nvidia Graphics Delivery Network (GDN). According to Nvidia, this cloud-based approach allows real-time renderings to be streamed directly to Apple Vision Pro without compromising the details of large, technically demanding datasets.

During today's GTC keynote, Nvidia streamed an interactive, physically accurate digital twin of a car to the Apple Vision Pro. The designer wearing the Vision Pro used a vehicle configuration app developed by CGI studio Katana on the Omniverse platform to toggle between paint and trim options and even enter the vehicle.

The workflow also introduces a hybrid rendering technique that combines on-device and cloud rendering. Developers can build fully interactive experiences in a single application using Apple's native SwiftUI and RealityKit, while the Omniverse RTX Renderer streams in from the GDN.

CONTEXT

Nvidia sees the metaverse as the 3D evolution of the internet and wants to make it easier for companies in particular to work in it. This "industrial metaverse" can be used to create digital twins of factories or products, simulate rail networks or enable collaborative work on 3D developments.

In 2022, CEO Jensen Huang introduced Omniverse Cloud to give more companies and individuals access to the platform. Teams can, for example, set up 3D workflows with a single click and use Omniverse features such as physics simulation, ray tracing, and AI without needing powerful local hardware.

Source: Nvidia