Finding the Secret SAUCE for Asset Re-use

SAUCE is a three-year EU research and innovation project between Universitat Pompeu Fabra, Foundry, DNEG, Brno University of Technology, the Animation Institute of Filmakademie Baden-Württemberg, Saarland University, Trinity College Dublin and Disney Research Studios that aims to enable the creative industries to re-use existing digital assets in future productions.

SAUCE's goal is to produce, test and demonstrate a range of professional tools and techniques that lower the cost of producing enhanced digital content for the creative industries, both by increasing the potential for re-use of content and by deploying significantly improved technologies for the production and management of digital assets.

The SAUCE project has its roots in an earlier EU virtual production project, DreamSpace, led at the time by Jon Starck at Foundry. The aim of that project was to make virtual production easy to edit and operate. It was built on the assumption that production was no longer linear, and involved a team of companies from across Europe, including the Filmakademie, as Prof. Volker Helzle recalls: "Our work package was about editing objects, lights and animations in virtual production scenarios in a direct and user-friendly fashion." The new SAUCE project is also a multi-company EU project. It began by developing a range of tools, including a tablet controller, and has evolved into a centralised production system that uses Katana together with Epic's Unreal Engine, synchronised virtual production environments, AR, Google Tango, and Apple and Android devices to make virtual production a more stable and functional tool for the creation and management of enhanced digital content in real productions.

The team presented SAUCE at DigiPro this year and outlined the project. While some parts of the broader SAUCE project focus on areas such as lightfields and crowd animation, the DigiPro presentation focused on asset pipelines. SAUCE stands for Smart Asset re-Use in Creative Environments. With regard to asset pipelines, SAUCE is divided into three central research questions:

  • Smart assets: what are "smart assets", what makes them smart, and how can they be defined?
  • Re-use: how can assets be made re-usable, not only in a virtual production environment but also retrospectively?
  • Creative environments: why does this matter for creatives and productions?

As most people in production know, missed opportunities to re-use assets, and the way asset libraries are created and managed, represent a tremendous waste of effort. It is not uncommon for an asset to be rebuilt because rebuilding is considered easier than finding, converting or modifying an older asset from a previous project. Re-using assets across projects becomes even more problematic when sharing between companies, or across siloed boundaries such as VFX, games, or licensing and marketing.

Within the project, DNEG focused on searching and retrieving data, Foundry on storing data, and the Filmakademie on production use cases.

Data storage:

Foundry wants to ensure that the actual file formats in the library are not something the user has to think about. Dan Ring of Foundry gives the example of an iPhone user who doesn't care whether a track is an MP3 or an AAC file; they just think of it as a song, and iTunes handles any issues of file formats, conversion or transcoding. As long as an asset can be versioned and is uniquely identified, it remains useful in the long term, whether stored locally or in the cloud, and the approach serves both large companies and small one- or two-person outfits.
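As an illustration only (not Foundry's actual implementation), a format-agnostic asset record of the kind described above might pair a stable unique ID and a version number with the underlying file, so that tools and artists refer to "the asset" while the container format stays an internal detail:

```python
# Hypothetical sketch of a format-agnostic asset record: the ID and version are
# what tools refer to; the concrete file format stays hidden from the artist.
import uuid
from dataclasses import dataclass, field


@dataclass(frozen=True)
class AssetRecord:
    name: str                 # human-readable label, e.g. "oak_tree"
    version: int              # monotonically increasing revision
    storage_uri: str          # local path or cloud URI of the payload
    media_format: str         # e.g. "usd", "abc", "exr"
    asset_id: str = field(default_factory=lambda: uuid.uuid4().hex)

    def bump(self, new_uri: str, new_format: str | None = None) -> "AssetRecord":
        """Return the next version, keeping the same stable asset_id."""
        return AssetRecord(
            name=self.name,
            version=self.version + 1,
            storage_uri=new_uri,
            media_format=new_format or self.media_format,
            asset_id=self.asset_id,
        )


tree_v1 = AssetRecord("oak_tree", 1, "s3://assets/oak_tree_v1.abc", "abc")
tree_v2 = tree_v1.bump("s3://assets/oak_tree_v2.usd", "usd")  # format change is invisible to the user
```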

Foundry's Katana was used extensively to build a working system that was both abstract and powerful enough to handle production workloads. Katana is already a cornerstone of many major pipelines, but it is also well suited to smaller, smart virtual productions.

Search and Retrieve:

DNEG has millions of assets, so curating them is a huge challenge. Conventional information-retrieval systems work with text, and indexing text is of course well understood, but the requirements of the VFX and animation industries go much further. As Will Greenly of DNEG explained, the visual nature of the industry means these systems cannot rely on text or specially curated metadata alone. DNEG was interested in standardising how assets could be classified, and combined this standardisation with machine learning to build a system with richer search options and better search results. The retrieval system also has a graphical, non-text-based user interface with new artist-friendly UI tools that are easy to use.
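DNEG's actual system is not public, but the general idea of combining standardised labels with learned features can be sketched roughly: each asset carries a precomputed embedding vector (for example from an image or geometry encoder), and a query is answered by cosine similarity, optionally filtered by standardised tags. Everything below, including the class and field names, is an illustrative assumption rather than DNEG's code:

```python
# Illustrative-only sketch of tag-filtered, embedding-based asset retrieval.
import numpy as np


class AssetIndex:
    def __init__(self):
        self.ids, self.tags, self.vecs = [], [], []

    def add(self, asset_id: str, tags: set, embedding: np.ndarray) -> None:
        # Store unit vectors so a dot product equals cosine similarity.
        self.ids.append(asset_id)
        self.tags.append(tags)
        self.vecs.append(embedding / np.linalg.norm(embedding))

    def search(self, query_vec: np.ndarray, required_tags=frozenset(), k: int = 5):
        q = query_vec / np.linalg.norm(query_vec)
        scores = [
            (float(v @ q), i)
            for i, v in enumerate(self.vecs)
            if required_tags <= self.tags[i]      # standardised-tag filter
        ]
        return [(self.ids[i], s) for s, i in sorted(scores, reverse=True)[:k]]


index = AssetIndex()
index.add("chair_042", {"prop", "interior"}, np.random.rand(128))
index.add("oak_tree_007", {"vegetation", "exterior"}, np.random.rand(128))
print(index.search(np.random.rand(128), required_tags={"prop"}))
```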

Use cases:

At DigiPro, Jonas Trottnow from the Filmakademie explained the use cases and, in particular, the effort involved in preparing assets up front. Assets ideally flow through a production from start to finish, so they often require varying levels of detail (LODs) and rely heavily on Foundry's work for precise versioning. As in the other areas, the Filmakademie wanted to support use cases both large and small. To support this workflow for production research, the Filmakademie used VPET (Virtual Production Editing Tools), a system originally started by the research and development team at Filmakademie Baden-Württemberg.

What makes VPET so interesting is its ease of use on set. It is an open-source editing tool that can be used to edit environments and set elements live from a tablet. It integrates with the production pipeline and even supports AR, so the on-set team can collaborate and interactively adjust the virtual world that extends the on-set footage, helping to build a more integrated world. Helzle is not aware of any other open-source solution addressing these problems for virtual production: "There is Omniverse from NVIDIA that does something similar, but it is not the same, and our system is a completely open structure… and that doesn't even include our work in procedural character animation."

Since the VPET tablet client is used as a remote control for the 3D scene, visual fidelity on the tablet is not a primary goal. VPET allows different versions of the same asset to be used on a VPET tablet and on the LED wall of a virtual production stage. This means a user can interact with a lightweight real-time version of the asset on the tablet, while all updates are applied to the high-quality version on set; the tablet only needs to offer a preview of the final appearance. This requires multiple LODs, and the system is designed to deliver exactly that: a USD scene containing several assets at different LODs can be generated, assembled and delivered automatically.
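A minimal sketch of how several LODs of one asset could be packaged in a single USD file using a variant set follows; the file names, prim paths and the "LOD" variant set name are assumptions made for illustration, not the actual SAUCE/VPET pipeline layout:

```python
# Sketch: package one asset at several levels of detail as a USD variant set.
from pxr import Usd

stage = Usd.Stage.CreateNew("asset_lods.usda")
asset = stage.DefinePrim("/Asset", "Xform")
stage.SetDefaultPrim(asset)

lod_set = asset.GetVariantSets().AddVariantSet("LOD")
for name, payload_file in [("high", "asset_high.usd"),
                           ("mid", "asset_mid.usd"),
                           ("low", "asset_low.usd")]:
    lod_set.AddVariant(name)
    lod_set.SetVariantSelection(name)
    with lod_set.GetVariantEditContext():
        # Each variant references a separately authored geometry file.
        asset.GetReferences().AddReference(payload_file)

# The tablet client would select the lightweight proxy; the LED wall the hero asset.
lod_set.SetVariantSelection("low")
stage.GetRootLayer().Save()
```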

The system works with metadata and labels from the asset library to simplify set dressing, and elements can even be edited on set via the tablet. As the set is dressed and the world updated, scene-related procedural animation updates automatically. Additional animation direction can also be given from the same tablet. The team also added "mood regions": using machine learning, the procedural animation blends between different behaviours, for example when autonomous agents enter a "scary" building or region. "When a character enters such a region, the animation engine can automatically adjust the animation, which can be used in virtual production but also for pre-viz," explained Trottnow. The ML network learned how people move in different styles (e.g. running, walking, sad, happy…). "This means that a character could react when walking past a television. He would look at the television when it is switched on, since the behaviour is triggered by the prop asset, because that is exactly what people do," explains Helzle. This can then be applied to a spline walk path defined in VPET, allowing any human character in a crowd simulation to be animated using the learned animation. The training data for the ML algorithms has also been published publicly as high-quality optical motion capture.
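The underlying idea of mood regions can be sketched simply: a region carries a style label, and a character crossing into it switches the style that the learned motion model is conditioned on. The classes below are hypothetical stand-ins for the project's actual ML-driven animation engine, and blending between styles is omitted:

```python
# Hypothetical sketch of "mood regions": entering a tagged region changes the
# animation style the character's motion model is conditioned on.
from dataclasses import dataclass


@dataclass
class MoodRegion:
    name: str
    style: str                        # e.g. "scared", "happy", "neutral"
    min_xy: tuple
    max_xy: tuple

    def contains(self, x: float, y: float) -> bool:
        return (self.min_xy[0] <= x <= self.max_xy[0]
                and self.min_xy[1] <= y <= self.max_xy[1])


def style_for_position(x, y, regions, default="neutral"):
    for region in regions:
        if region.contains(x, y):
            return region.style
    return default


regions = [MoodRegion("haunted_house", "scared", (10.0, 0.0), (20.0, 8.0))]
# As a character walks along its spline path, the selected style is fed to the
# learned motion model each frame.
print(style_for_position(12.0, 3.0, regions))   # -> "scared"
print(style_for_position(2.0, 3.0, regions))    # -> "neutral"
```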

VPET has an open streaming protocol for whole characters (including skinning weights, skeleton, etc.) at runtime. Any external animation engine can drive a character through streamed bone animation. On top of that, high-level commands can be used to control a character: commands such as "go there" or "run" can be issued from the VPET tablet.
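VPET's actual wire format is defined by the open-source project itself; purely as an illustration of the two layers described above (streamed bone transforms plus high-level commands), a pair of message shapes could look roughly like this. The field names and the JSON encoding are assumptions, not VPET's protocol:

```python
# Illustrative-only message shapes for character streaming and high-level control.
import json
from dataclasses import dataclass, asdict


@dataclass
class BonePose:
    bone: str                                     # bone name in the streamed skeleton
    translation: tuple                            # (x, y, z)
    rotation: tuple                               # quaternion (x, y, z, w)


@dataclass
class HighLevelCommand:
    character: str
    command: str                                  # e.g. "go_there", "run"
    target: tuple | None = None                   # optional world-space target


def encode(message) -> bytes:
    """Serialise a message for transport (JSON here; the real protocol may differ)."""
    return json.dumps({"type": type(message).__name__, "data": asdict(message)}).encode()


# A tablet issuing "go there", and one frame of a streamed bone pose:
print(encode(HighLevelCommand("hero_walker", "go_there", (4.0, 0.0, -2.5))))
print(encode(BonePose("spine_01", (0.0, 1.2, 0.0), (0.0, 0.0, 0.0, 1.0))))
```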

For virtual production on an LED stage, SAUCE provides a database-coherent way of feeding scenes to the LED wall while the director changes the environment and animation in real time, all within a consistent system for storing and managing assets. SAUCE offers an extensible framework for classifying and enriching production assets and has proven extremely effective on a number of Filmakademie productions.

Helzle points out that the SAUCE project goes beyond the topics covered in the DigiPro presentation, in particular the work dealing with "crowds, lightfield compression, camera calibration, etc." SAUCE runs until the end of the year, with more to come.

This builds on the vision the team presented at SIGGRAPH Asia in 2016. Most virtual production pipelines are proprietary and mainly apply to large-scale productions; with SAUCE and tools like VPET, it is now possible to implement virtual-production-style scenarios at much lower cost. Foundry's Katana, for example, can be used to define the look of a shot, while VPET can be used to set up a joint session in which all creative departments interactively edit the content.

The SAUCE team aims to establish new production processes that encourage creativity by creating collaborative environments with shorter post-production cycles and, thanks to the work of companies like DNEG, broad scope for re-use and smart management of assets. VPET is released under an open-source licence and enables any team to take advantage of collaboration in virtual production environments.
