In August of this year, a video of Nvidia CEO Jensen Huang’s “virtual human” went viral on social platforms: in a public keynote, Huang had used a “virtual human” stand-in for 14 seconds, and the stand-in was so lifelike that no one noticed.
The video also drew attention to Omniverse, Nvidia’s infrastructure platform. Reportedly, it is a software platform for creating virtual spaces, and the “virtual human” was one of its creations.
At today’s NVIDIA GTC conference, Huang once again appeared in his signature look. Many speculated whether a “virtual human” would stand in for him again this time, but the answer was disappointing: the “virtual human” drama was not restaged. That, however, did nothing to obscure Nvidia’s determination to enter the metaverse.
In a new-product keynote lasting more than an hour and a half, Huang mainly discussed how Nvidia is advancing AI across industries, covering the latest technologies for enterprises and data centers, conversational AI, and natural language processing, as well as AI applications in edge computing and virtual worlds, such as robotics, healthcare, self-driving cars, and digital-twin factories. The metaverse did not get the most airtime, but it left a deep enough impression to become one of the highlights of the entire conference.
One of the conference’s major releases was Omniverse Avatar, a technology platform for generating interactive AI avatars.

The platform integrates speech AI, computer vision, natural language understanding, recommendation engines, and simulation technologies. Avatars created on the platform are interactive characters rendered with ray tracing; they can not only see, but also converse on a wide range of topics and understand the intent behind spoken language.
The first demonstration featured Huang himself: an animated toy version of Huang, expressive and cute, answered questions about climate change and protein production. The second featured a cute “eggshell man” avatar that talked with a couple through a fast-food restaurant kiosk and understood their order of vegetarian burgers, fries, and drinks.

The avatar uses facial-tracking technology to maintain eye contact with customers and respond to their facial expressions. “This will help smart retail, drive-throughs and customer service,” Huang said.

The third demonstration raised the bar once more, applying the technology to videoconferencing software. For example, someone on a conference call may be dressed casually, but a realistic animated avatar can stand in for them and keep up a presentable appearance. In another example, a woman makes a video call from a noisy coffee shop: when she speaks English, she can be heard clearly without background noise, and as she speaks, her words are transcribed and translated into German, French, and Spanish in real time, in her own voice and intonation.
It should be noted that some have questioned the user experience of these demo scenarios, for instance whether real-time conversation is really preferable to tapping selections on a kiosk. Huang himself acknowledged in the keynote that a virtual human’s response time is slower than a person’s; for a customer in a hurry, that inevitably makes for a poor experience. Likewise, although the conferencing technology and applications look promising, we have yet to see them make a significant impact on the real world.
Beyond these personal scenarios, Huang also discussed new applications at larger scales, such as how NVIDIA and its partners build digital twins of factories, cities, and even entire regions. Omniverse reportedly lets engineers and designers build accurate digital twins and create large-scale, realistic simulation environments for training robots or self-driving cars before they are deployed to the physical world.
For example, to better understand how 5G networks perform in the real world, Ericsson has built a city-scale digital-twin model in Omniverse.
“You will see a constant theme: how Omniverse is used to simulate digital twins of warehouses, plants and factories, of physical and biological systems, the 5G edge, robots, self-driving cars and even avatars,” Huang said.
At the end of the conference, NVIDIA announced that it would build a digital-twin model called E-2, or Earth-2, to simulate and predict climate change.
It is not hard to see that, in Nvidia’s view, the metaverse is likewise a shared virtual world that enables remote collaboration. But unlike the concept put forward by Meta (formerly Facebook), Nvidia is more concerned with replicating industrial environments as digital twins and creating virtual avatars that interact with people. And that approach has been welcomed and recognized by the market.
Available figures show that since its open beta release, Omniverse has been downloaded by more than 70,000 individual creators and is used at more than 700 companies, including BMW Group, CannonDesign, Epigraph, Ericsson, architecture firms HKS and KPF, Lockheed Martin, and Sony Pictures Animation.
It is worth mentioning that Nvidia’s market value has grown by nearly $530 billion since its low on March 16, 2020.
Judging from its current business landscape, Nvidia’s achievements are naturally inseparable from its strong chip business, and with the rise of the metaverse, the market has all the more reason to grant Nvidia a high premium. This calls to mind Forbes’s bold prediction that Nvidia’s market value would surpass Apple’s within five years. With both chips and the metaverse behind it, that process may well be accelerating.