NVIDIA isn’t done with COMPUTEX 2022 just yet: the latest video prepared for the forum is all about Team Green’s real-time development platform, Omniverse, and the wave of revolutionary applications it promises to bring forward.
The presentation is hosted by Richard Kerris, Vice President and GM of the Omniverse Developer Platform. In it, he walks through the enormous opportunities simulation brings to 3D workflows and the next evolution of AI, and explains how enterprises can get a head start by joining the NVIDIA Omniverse platform, gaining access to next-gen real-time 3D simulation, design collaboration, and the creation of virtual worlds.
Here’s what he shared, broken down in an easy-to-digest way.
NVIDIA’s embrace of the Metaverse is like no other
Kerris started with a word that many people are tired of these days: the Metaverse. He is fully confident that the M-word will shape the next generation of the Internet. His reasoning: the Internet is already a vast digital web of connected networks, and if you transform it into a three-dimensional state, you get the Metaverse.
One example he envisioned is traveling seamlessly between unique virtual worlds, with these digital landscapes becoming the foundation of next-generation AI. The potential applications, from training robots to monitoring and managing digital twins of factories and buildings, are practically limitless.
Creating the Metaverse requires tools, and NVIDIA positions Omniverse as the industry’s most innovative platform for building those blocks, essentially serving as the connector of virtual worlds. Only by pooling the power of industry experts, each with a highly specific skill set, can a digital twin be materialized.
No such thing as incompatibility in Omniverse
The VP explained the situation plaguing the design, modeling, and even coding industries: hundreds of thousands of tools are used by millions of experts, and while each tool may be brilliant in its own way, collaboration between two or more people who have adopted different sets of programs and applications quickly creates friction.
For example, the tedious process of importing and exporting directly wastes time, and file conversion tools sometimes discard detailed data deemed “irrelevant” when, in fact, every bit matters. The result is a choice between two evils: stick to an integrated product suite from a single vendor, which limits creativity and options, or keep burning time and money adopting different slews of tools.
NVIDIA Omniverse Create, on the other hand, is built to accelerate 3D workflows and make every participant’s life easier by offering a wide set of direct app integrations, so each artist can immediately insert elements or craft new ones from scratch while their changes are reflected to the other artists in real time.
Given how much work it takes to apply effects like global illumination and ray tracing, experts should be able to channel their skills and creativity into the output itself rather than worrying about technical issues, and that is exactly what the Omniverse platform enables.
To describe the Omniverse experience, Kerris offered a two-chapter analogy: the first chapter is creation, while the second, more interesting one involves building a fully identical digital twin of a real-world subject and observing how it behaves in the virtual world to anticipate how it will behave in the real one.
Digital twins by Omniverse Enterprise
While no longer news, the adoption of digital twins by Amazon Robotics and BMW remains one of the most impactful moves in popularizing the idea of a completely identical digital copy of a building or system. In short, both companies use digital twins to work out how to build something, how it will operate afterwards, and what scenarios might arise during mass production. Anything involving risk and uncertainty can be de-risked by simulating it first in a digital 3D space.
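The de-risking idea is easy to demonstrate with a toy model. The sketch below is purely illustrative and unrelated to any NVIDIA API: it simulates a single-station production line with random failures, so two repair strategies can be compared entirely in simulation before anything is built.

```python
import random

def simulate_line(hours, failure_rate=0.02, repair_hours=1,
                  units_per_hour=60, seed=0):
    """Toy 'digital twin' of a one-station production line.

    Each operating hour the station may fail; while it is being
    repaired, no units are produced. Returns total units produced.
    """
    rng = random.Random(seed)
    produced = 0
    downtime_left = 0
    for _ in range(hours):
        if downtime_left > 0:
            downtime_left -= 1            # station under repair, no output
            continue
        if rng.random() < failure_rate:
            downtime_left = repair_hours  # failure: schedule repair time
            continue
        produced += units_per_hour
    return produced

# Compare two repair strategies without touching a real factory:
baseline = simulate_line(hours=1000, repair_hours=4)
faster_repair = simulate_line(hours=1000, repair_hours=1)
```

A real digital twin models far more (geometry, physics, sensor feeds), but the payoff is the same: a "what if" question is answered in software instead of on the factory floor.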
Another point Kerris highlighted is the training of AI algorithms. As is well known, training an AI to do something requires tons of input data to teach it what earns a green tick and what earns a red cross. In the current environment, however, generating that input data from the real world is both costly and, at times, risky in terms of privacy.
But if you already have a digital twin that can generate the same data, why not move everything into the virtual world, including the creation and feeding of the precious data that machine learning craves?
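The key advantage of synthetic data is that ground-truth labels come for free: the simulator knows what it generated. The toy sketch below (hypothetical, standing in for a real renderer) produces labeled 1-D signals instead of images, but the principle is the same, so no human annotation step is needed.

```python
import math
import random

def render_synthetic_sample(rng):
    """Generate one labeled training sample from a 'virtual sensor'.

    A real digital twin would render images with pixel-perfect labels;
    here a noisy sine wave stands in, and its frequency class is known
    by construction because we chose it ourselves.
    """
    label = rng.choice(["slow", "fast"])  # ground truth is free in simulation
    freq = 1.0 if label == "slow" else 4.0
    signal = [math.sin(freq * t / 10.0) + rng.gauss(0, 0.1)
              for t in range(100)]
    return signal, label

def make_dataset(n, seed=42):
    rng = random.Random(seed)
    return [render_synthetic_sample(rng) for _ in range(n)]

dataset = make_dataset(1000)  # every sample arrives pre-labeled
```

Swapping the sine-wave generator for a physically accurate renderer is what tools like Omniverse Replicator are built for, but the labeling logic stays this simple.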
Hence, digital twins of everything from residential buildings to 5G antennas, and even the entire planet, literally dubbed Earth-2, are what NVIDIA and its partners seek to achieve.
Leverage the power of RTX
Tracing back to the roots, it is actually RTX that powers the basis of Omniverse. Here, artists rely on the platform’s physics simulation to get realistic movement that obeys the laws of physics, while the RTX Renderer takes care of ray tracing and path tracing for graphics that closely match the real world.
Taking a page out of Kerris’s book: one of Team Green’s partners, Siemens, created wind farm digital twins via super-resolution simulations powered by NVIDIA Modulus and physics-ML models. A simple optimization inside the digital copy resulted in 10% lower operating costs and enough additional electricity for 20,000 homes in the real world.
RTX in all shapes and sizes
With RTX having become a household term for NVIDIA’s modern GPUs, invading the rigs of consumers and prosumers alike, it is also present in workstations and even data centers. On that note, Kerris touched on the recently announced OVX, a dedicated system made just for Omniverse that scales from single workstations all the way up to SuperPOD deployments, demonstrating the platform’s scalability and flexibility in handling enormous data pools at once.
NVIDIA Omniverse’s Future Vision
As the presentation entered its final phase, Kerris attributed Omniverse’s enormous year-over-year improvements to four factors:
- RTX – True to reality simulation in real-time from small devices to landscape-scale servers
- GPU Scalability – Ability to handle a large number of data sets just by simply adding more computing power
- Universal Scene Description (USD) – Open-source file description format developed by Pixar
- AI Revolution
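On the USD point above: it helps to see just how small a valid USD stage can be. Below is a minimal hand-written ASCII (.usda) scene, following Pixar’s documented format, written out as plain text so the pxr (OpenUSD) Python bindings are not required; the prim names are arbitrary choices for illustration.

```python
# A minimal USD ASCII (.usda) stage: one transform carrying a sphere
# with an authored display color. In practice you would author this
# through the pxr (OpenUSD) API rather than by hand.
minimal_stage = """\
#usda 1.0
(
    defaultPrim = "World"
)

def Xform "World"
{
    def Sphere "Ball"
    {
        double radius = 2.0
        color3f[] primvars:displayColor = [(0.1, 0.6, 0.2)]
    }
}
"""

# Any USD-aware tool (or an Omniverse connector) can open this file.
with open("hello.usda", "w") as f:
    f.write(minimal_stage)
```

Because the format is open and human-readable, every tool that speaks USD can read and layer the same scene, which is precisely what makes it the interchange backbone Kerris describes.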
As of now, there are already 82 Omniverse connectors spanning asset libraries, applications, and more, and they are being used by more than 120,000 individuals in a wide variety of ways.
Kerris wrapped up with absolute confidence in the bright future NVIDIA Omniverse is set to deliver.
You can catch the video below if you prefer to hear Kerris’s voice directly.