Nvidia unveils Omniverse Machinima
Wednesday, September 2, 2020 | Written by Jim Thacker
A demo created “in a few days” using assets from Mount & Blade II: Bannerlord in Omniverse Machinima, Nvidia's upcoming app for creating 3D animations from game models with new AI-based tools.
Nvidia has introduced Omniverse Machinima, a new app for creating CG animations from game assets and other stock content, built on its real-time collaboration platform Omniverse.
The app, due in beta next month, offers AI-based systems for generating character animation from webcam footage and audio recordings, as well as real-time effects and rendering tools.
New Nvidia tools for game streamers and machinima developers
Although, unsurprisingly, the biggest news of yesterday's launch event for Nvidia's new GeForce RTX 30-series GPUs was the cards themselves, Nvidia also announced two new software products.
The first, Nvidia Broadcast, is aimed at livestreaming and video conferencing, and offers AI-driven tools for noise removal, webcam background replacement, and automatic video reframing.
The other was Omniverse Machinima, an upcoming app for generating CG animations from game assets and other stock content via a semi-automated, AI-driven workflow.
While dedicated machinima authoring tools aren't new, Omniverse Machinima's selling point over Valve's Source Filmmaker or the built-in recording modes of titles like GTA V seems to be its flexibility.
It is based on Omniverse, Nvidia's work-in-progress open collaboration framework for 3D production, which the company described as "essentially Google Docs for 3D design" when it was announced last year.
Animate stock game characters with new AI-driven tools
With Nvidia's demos of Omniverse to date focusing more on architectural visualization and industrial design, Omniverse Machinima is one of the first demonstrations of the technology for entertainment work.
Users can import assets from games, from online marketplaces for stock content, or from DCC applications – there are connectors for Autodesk and Adobe software, as well as Unreal Engine.
As with Omniverse itself, files appear to be exchanged in USD format, although it isn't clear whether the app has a built-in file converter: USD is not yet as common for game assets as it is in VFX pipelines.
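For readers unfamiliar with the format, USD's ASCII encoding (.usda) is plain, human-readable text, which a minimal sketch can illustrate using only Python's standard library; the prim names below are purely illustrative, and a real pipeline would author files through Pixar's `pxr` USD Python bindings rather than writing them by hand:

```python
# Minimal sketch of USD's human-readable .usda encoding, using only the
# standard library. Prim names ("Root", "Hero") are illustrative, not from
# any real asset; production pipelines would use Pixar's pxr.Usd API instead.
from pathlib import Path

usda = """#usda 1.0
(
    defaultPrim = "Root"
)

def Xform "Root"
{
    def Mesh "Hero"
    {
    }
}
"""

path = Path("hero_asset.usda")
path.write_text(usda)
print(path.read_text().splitlines()[0])  # → #usda 1.0
```

The `#usda 1.0` header on the first line is how USD tools identify the ASCII variant of the format, which is one reason it travels well between applications compared with game engines' proprietary asset formats.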
Characters can then be animated using two AI technologies: Pose Estimator, which generates body motion from video footage of a live actor, and Audio2Face, which generates facial animation from recorded speech.
In the demo at the top of the story, the results look functional, although they are definitely further into uncanny valley territory than the output of professional applications like Adobe Character Animator.
Users can also add effects like smoke, fire and dirt based on PhysX, Nvidia's real-time dynamics system, and Flow, its real-time gaseous fluid solver.
The results can be output using the integrated path-traced RTX renderer, which is accelerated by Nvidia's current generation of RTX GPUs and enables final-quality cinematics to be exported.
Prices and availability
Omniverse Machinima is expected to enter beta in October 2020. Nvidia has not announced pricing or system requirements.
For more information on Omniverse Machinima, visit the Nvidia website.
Tags: 3D animation, Adobe, AI-based, AI-driven, Audio2Face, Autodesk, automated, CG animation, Character Animator, collaboration, create animation from game models, dynamics, effects, facial animation, Flow, fluid simulation, full-body motion, gaseous fluid, GPU rendering, lip sync, Machinima, Nvidia, Nvidia Broadcast, Omniverse, Omniverse Machinima, path tracing, PhysX, Pose Estimator, price, real time, release date, RTX, system requirements, UE4, Unreal Engine, VFX