Virtual Production Epic Deep Dive – fxguide

In September, Mike Seymour from fxguide hosted The Pulse Virtual Production (VP) panel for Epic Games. During that panel, viewers asked a number of more technical questions that we couldn't answer in the discussion, so we followed up with this audio fxpodcast, with Mike again joined by Epic Games' Matt Madden (Director of Virtual Production).

Matt has a strong background in virtual production. In addition to his leadership role at Epic, where he worked on projects such as The Mandalorian, he previously worked at Profile Studios and Giant Studios. In short, Matt has a wealth of knowledge, and on this episode of our fxpodcast you can hear Mike and Matt happily delve deep into the implementation of virtual production, and LED stages in particular.

In addition to Mike and Matt, the original Pulse episode featured Sam Nicholson (CEO and Founder, Stargate Studios) and Felix Jorge (Co-Founder, CEO and Creative Director, Happy Mushroom Studios). The panel looked at how production and post are merging at the beginning of the pipeline, with the industry introducing new ways to create and manage content, as well as the possibilities of narrative virtual production.

Bonus Pulse Lighting Video

As a bonus for fxguide readers, there is a special video of a previously unpublished part of the original Pulse discussion that we didn't have time to broadcast. (Special thanks to the Epic team for editing and providing it.)

Pulse questions

During the Pulse event, Mike opened the discussion for viewers to ask questions, and there was a deluge of questions ranging from the most basic to the most complex LED staging questions. The most technical of these couldn't be addressed in the time of the panel, so this podcast aims to fill that gap. Thanks to everyone who asked questions as there were hundreds of them! We appreciate that and it's clear that this is a really important area for a lot of artists right now.

On the podcast, Mike and Matt discuss everything from:

  • Top LED screen design issues, from the shape of the volume to which LED panels to use, and why
  • The problem of latency, and target numbers for professional use
  • Questions of color and color science
  • The complex topics of focus and virtual defocus on an LED stage
  • Stage calibration and alignment
  • Other sources for more information.

Below is the original video recording of the Pulse Panel:

Here is the transcript of the fxpodcast with Mike and Matt.

Mike: (01:04) Matt, thank you very much for joining us. It was a lot of fun doing this pulse video that we made for the live event. That was fun, wasn't it?

Matt: (01:12) Yeah, that was great. It's great to get everyone in and share some stories and ideas and just different perspectives. Really good fun.

Mike: (01:22) The problem that I had, and I'm sure you had it a bit too, is of course that we can talk about this on so many different levels. And for me, some interesting questions popped up in the chat where I thought, well, this is probably not the right forum for that, because that would be a deep dive. But a deep dive is my favorite place to dive.

Matt: (01:40) That's the compromise, isn't it? Because there are many of these areas where you really want to go deeper, but that can take an hour.

Mike: (01:51) Exactly. Yes. And when you have a group of four, it can be a little harder sometimes to cover all of these basics, but let's get into that. Now I'll start with the obvious things which are LED screens because this isn't the only form of virtual production, but it is certainly one that we have a tremendous amount of questions about.

Mike: (02:13) I'll get back to that in a moment because there are some questions about the cost, but I just thought I would ask if you would agree with me: there is kind of a base question, hey, you know, what is the standard? And of course it is still too early to say what a standard actually is, because there is no real sense of a standard yet, but I guess most of the stages people are looking at right now are about 10m wide by 4 or 5m high. Is that the kind of scale you see most of these current stages at? Or are they getting much, much bigger?

Matt: (02:48) I would say a little higher, with these large-format cameras. With something only 4m high it's easy to quickly shoot off the top, so your camera movement is a little more restricted if you stay at four meters. Usually 6 meters, I'd say, for the height, but I think you're pretty much right on the width. And the other thing about it, Mike, is the volumes, or those walls – you know, we have all these different names for what kind of LED structure it is. It really depends on the application, you know? What is a standard? Is it a standard for a full 360 world that you want to emulate? Or is it a standard for part of a world, with the rest live action? So it really depends on the type of production and how you plan to use the virtual content.

Mike: (03:39) Yes. I think the first type that most people remember is a cube, but when I talk to people, a cube probably isn't the best place to go, is it?

Matt: (03:52) I'd agree, and I've seen a lot of these cube setups that you talk about in demos, and even customers have sent us specs and feedback. The problem with the right angle of a cube is that the angles the camera sees relative to the wall get quite extreme, so you have issues like color shift to deal with because you are much less perpendicular to the actual wall. In general, a rounded wall is better, and it doesn't have to be a uniform angle, but we usually recommend a rounded wall rather than a right angle.

Mike: (04:38) Let's just dive deep right out of the gate. So there are two things there: first, you don't want to get too round or you will get crosstalk from the LEDs, right?

Matt: (04:48) That's right. That's right. So you need to consider the angle between the LED panels, so you don't get light from neighboring LEDs, as you said, crosstalking into the camera. So that's a problem: you can't get too aggressive with that angle. Sound is also a problem. When you have a consistent curve, um, especially if you're building more of a 270-degree volume, you definitely have to pay attention to the sound.

Mike: (05:19) Yes, they are hard surfaces … it bounces; it's not how you would design a sound stage!

Matt: (05:22) No, it's actually the opposite of how you design a sound stage. And so we actually talked to some "acousticians", which was a new word to me, but they are acousticians and that's what they do for a living. And we asked them, "Hey, what would you do if you had to build something like this?" And they laughed at first, because they looked at what we were dealing with – a cylinder with a hard surface, with all that noise bouncing towards the center, and that massive echo – and they just couldn't believe we were shooting inside it. But you know, we were of course working on the first Mandalorian season, and everyone in that environment was dealing with these issues, so you use all of the normal production tricks of flags and other set objects, anything you can to break up the sound. But their idea was interesting. Their proposal was to vary the angle of the arc around the room, so the sound isn't all pointed back at the center but is actually thrown to different places.

Mike: (06:27) Right?! Like a lens designed with a uniform curvature to focus at exactly one point – they're saying, make us a wonky lens, so it never focuses.

Matt: (06:34) Exactly. You want that sound spread all over the environment, not bouncing or echoing into a single place, and that's exactly what we recommend to our customers for future builds.

Mike: (06:50) The other thing you said earlier was that you have a sharp corner when you have a box – just to explain that a little further. What people may not realize is that, depending on the panels, the actual LEDs are not all in the same flat plane. So if you cut across a panel and got to within about two inches of it, the LEDs wouldn't all be exactly the same height, which means that at a very obtuse angle, as I can be when looking down the wall, it can actually look kind of pink. I am thinking of the demo at SIGGRAPH. When I was there, of course, it looked perfectly normal to my eye. I backed up and took a few photos, and I was about the same height as the ceiling panels, and in my photos the ceiling was red. And I thought, is that a strange frequency thing? And of course I realized that the red LEDs just stood out a bit more, and when I got to a glancing point of view it went kind of pink. When I was underneath, or literally underneath, the people who were underneath in my photo were perfectly lit, because the light was coming straight down from above, and it's even. It's just that when the camera sees the panel at a fleeting angle, it starts to get funky, right?

Matt: (07:58) Right. And that actually depends on how the LEDs are oriented. There will be a greater color shift either left-to-right or top-to-bottom; usually the panels are oriented so that the color shift is more top-to-bottom, so the red, green and blue emitters are stacked vertically. So if you look across it from left to right, you'll get more even color, even at those sharper angles. However, at some point, at a grazing angle, the light the LEDs throw toward the camera starts to die off, and the color starts shifting. In camera, if it's all about lighting and off-axis reflections, that's one thing, but if you're trying to do a curved pan or a big tracking shot from one surface to a perpendicular surface, all in one go, then that's a pretty big issue!

Mike: (08:57) The other thing people asked was, you know, how much does this stuff cost? Well, it's very hard to pin down the cost, but I think the first thing you would say is that it really depends on the quality of the LEDs. On that – and I think you'd probably agree – is 2.8 kind of a standard pitch that most people are working with these days?

Matt: (09:17) I'd say it's probably the most common for mid- to high-end productions right now. Yes. 2.8.

Mike: (09:24) What is the best thing you've seen and how far is that from being standard?

Matt: (09:29) Well, that's an interesting question – which is "best" – because we're seeing more LED panels with smaller pixel pitch, you know, in the 1.5, 1.7 kind of range. The other factors to consider are brightness, color shift, reliability and consistency across all the LEDs. So what we're seeing is some manufacturers, not all, but some of them, pushing to put more LEDs on a panel, which means a smaller pitch, but at the expense of those other things, like brightness and especially color consistency.

Mike: (10:08) I would have thought that color consistency was a bigger problem than the light output. Because I would have thought, with the lighting conditions, it's not like you're normally pumping those screens at 100%, is it?

Matt: (10:20) Well, that's another interesting question. Some of these panels have a maximum brightness of, say, five, six, seven hundred nits, while others are two to three times that. The actual brightness of these panels varies a lot, and that's another factor that you really need to consider.

Mike: (10:41) The reason I said color consistency is because I've obviously worked with LED light panels and learned pretty early on that LEDs are not all the same. In fact, they are really not all the same. So, two things for those listening. The companies that make LEDs literally bin them – quality bins. Like "these are the really good ones", "these are the not-so-good ones", and "those are the ones that are a bit out of spec". And the theory is that if you buy a cheaper screen it looks pretty good to your eye, because what they did is use a batch of LEDs that weren't among the best but that average out: some good, some bad, some high, some low, so you get a reasonably "even" color. The really good manufacturers – and I think the high-end brands you would know – spend more money to buy the more accurate LEDs. So that's a really big point. And then the second point is, and maybe you can speak to this Matt: when you look at the spectral response of LEDs, there are peaks and valleys in the spectrum, and how those spectral peaks affect human skin tones is an incredibly important difference compared to, say, tungsten, right? It's a very different spectral distribution. So to your eye it might look okay, but you're actually going to get shifts in skin tone that you might not want – if you buy a cheaper LED panel.

Matt: (12:08) Definitely. And you made a lot of good points there, Mike. The other thing is the consistency of how batches get mixed, at least by some manufacturers. And this is where calibration becomes a real challenge. Because, when you think about it, you have all these variations between individual LEDs on a single panel that you need to take into account, and the LED processor manufacturers realize, okay, if we are trying to put out a certain amount of light from that panel, we have to look at the whole thing. We cannot just assume that they all emit the same amount of the same color, so we have to calibrate down to the individual LED. It is a very careful process to make sure they deliver consistent color and brightness across these different panel types. So you're effectively working to the lowest common denominator in the stack if you want that consistency: if the blacks don't hold up, or the brightness isn't the same on others, then in terms of calibration you have to work within what they can all do across a panel. Otherwise, if you want consistency for the whole thing, you start getting into noise. Sorry, you were saying?

Mike: (13:27) How do you calibrate? So if you go on a Mandalorian type of stage, like a high end stage with decent quality, what is the calibration?

Matt: (13:36) Well, that usually comes with the processors. There are some new techniques using special cameras, particularly from the processor manufacturers, and it can sometimes take 10 minutes for a single panel to actually be calibrated. I haven't done it myself; we usually work with the likes of Brompton and others to actually do the calibration, but it's a very detailed spectral and luminance analysis of every single LED under every possible setting, to determine what they can actually produce. The calibration process then determines what consistent luminance and spectral values can be held for a given signal, and that is the end result. And that's really the challenge – and then you're talking about thousands of panels beyond that you need to calibrate, um, it's quite a significant undertaking. They are supposed to come out calibrated from the factory, but these processor companies say no, actually we have to do it all over again, because that's really just a kind of estimate.
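
To make the "lowest common denominator" idea above concrete, here is a minimal sketch. It is not Brompton's or any vendor's actual algorithm, and the per-LED measurements are invented; real calibration is per-channel and spectral, not a single nits number.

```python
# Sketch: drive every LED so the whole panel can hit the same target output.
# The weakest measured emitter sets the ceiling for the panel.

measured_peak_nits = [1480, 1512, 1455, 1499, 1468, 1530]   # hypothetical per-LED measurements

panel_ceiling = min(measured_peak_nits)                     # weakest LED limits the panel

def drive_level(target_nits, led_peak_nits, ceiling_nits=panel_ceiling):
    """Scale factor (0..1) for one LED so the whole panel lands on the same output."""
    target = min(target_nits, ceiling_nits)                 # never ask for more than all can do
    return target / led_peak_nits

for peak in measured_peak_nits:
    print(f"LED peak {peak} nits -> drive at {drive_level(1400, peak):.3f} for a uniform 1400 nits")
```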

Mike: (14:52) When you're building the stage, we need to know where a panel is relative to the camera, right? And we often talk about tracking, like "where is the camera in the volume?", and I think that's a technology well understood by most people – that you can do camera tracking – but you actually need to know where the screens are too. And I know that in broadcast in particular, companies like Disguise are using their OmniCal to do a spatial mapping, actually using structured light to try to figure this out. But what is the process for actually determining this? Because you can't just say "I have a 3D model of where the guys should put the panels" and hope they put them there?

Matt: (15:30) Right, right. We use that as an estimate, and then you need to do some sort of LIDAR scan, which we often recommend, and possibly photogrammetry. We have not yet done the structured-light approach. There are teams that actually do this – their job is to survey LED walls. But, to your point, you need a high-precision 3D mesh of the physical structure as built, not the design of what it was meant to be. That is just a good starting point. Yes.

Mike: (16:06) Because, I mean, obviously in some cases, like maybe for some kind of sky dome used as sky light, you might think it doesn't need to be particularly accurate, but it actually needs to be incredibly accurate, because when the camera moves, if the LEDs are not physically where the camera system thinks they are, the relationship with the virtual content is wrong and nothing lines up. Can I ask another question? While we are on this tracking topic, is there a particular type of tracking for the camera that you think is the de facto recommendation for a shooting volume like the one we've been describing, … to know where the camera is?

Matt: (16:47) I think it really depends on the conditions. For example, the broadcast groups that do LED walls with camera tracking and live compositing have very strict criteria for that compositing, but in that case the cameras are very predictable. You know, they're on rigs, they slowly push in, they pull out, they follow a little pan, but they are not like what you see on some of the hardcore film and TV productions, and in those cases it's a bit of a mixed bag. I have used many optical tracking systems, like the mocap style, with a built-in IMU (Inertial Measurement Unit). And the reason the IMU is so important is that the optical solve itself is a really good starting point, but it is prone to a bit of noise, and you don't want to over-filter that, because that can lead to a bit of looseness in the track of the camera, and then there is softness and latency. You don't want either. So a high-quality IMU, with a gyro and an accelerometer, is really important to any of these systems. One option is an inside-out camera – for example a tracking camera mounted on the Alexa or the Sony, looking up at reflective markers on the ceiling. That is a reliable approach; the problem is that you're deploying a physical device, and grip starts adding flags or something else you weren't expecting, and suddenly the tracking camera only sees about a third of its markers, right? You need to have a few tools in your belt to cope with these changing conditions in a production environment. We often used outside-in optical tracking for the large volumes, but that also has its limits: it is not as pixel-accurate as something more inside-out, like a stereo camera rig or, as I mentioned, a camera-mounted tracker looking at an array of reflective markers. Inside-out tracking is usually more accurate, but it also poses challenges in terms of consistency and robustness, because so many things can change in the field of view of that camera during production. So it's really about understanding what you're shooting. What kind of show is it, how much changes, how much can you predict where the camera will be, and what can you rely on in terms of visibility for tracking? Once you've defined those, the options become a little clearer.
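
As a toy illustration of why the IMU matters here: the gyro gives a smooth, low-latency prediction that drifts, the optical solve is absolute but noisy, and a simple complementary filter blends the two. This one-axis sketch uses assumed rates and blend factors; it is not any tracking vendor's actual solver.

```python
import random

def complementary_filter(yaw_prev, gyro_rate_dps, optical_yaw, dt, blend=0.02):
    """Integrate the gyro, then pull a small fraction of the way toward the optical measurement."""
    predicted = yaw_prev + gyro_rate_dps * dt            # smooth but drifts over time
    return (1.0 - blend) * predicted + blend * optical_yaw

true_yaw, est_yaw, rate = 0.0, 0.0, 10.0                 # camera panning at 10 deg/s
dt = 1.0 / 240.0                                         # 240 Hz tracking updates (assumed)
for _ in range(240):                                     # simulate one second
    true_yaw += rate * dt
    optical = true_yaw + random.uniform(-0.2, 0.2)       # noisy but absolute optical solve
    est_yaw = complementary_filter(est_yaw, rate, optical, dt)
print(f"true yaw {true_yaw:.2f} deg, fused estimate {est_yaw:.2f} deg")
```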

Mike: (19:36) The next related point is what the camera "sees". The camera of course sees the people on set, but it also sees the LED screens, which brings us to some of the questions we had from our Pulse event. Jesse Sperling, Scott Lynch, a whole bunch of people sent us questions along the way. And they said, "Hey, what about the color space?" or "Is ACES something we should be using?" – and other people just asked simple questions like "How good is the color reproduction on the screens?", "Do you have enough dynamic range to get final visual effects captured in-camera?"

Matt: (20:12) To the second question: yes, they do. The panels are certainly capable of wide gamut and high dynamic range. Then it's about getting that content onto the screen correctly. So if you go back to the actual asset creation process, that's where everything starts. We still see most assets currently being created in an sRGB space or something very similar. They don't have to be, but that's what we mostly see now. The first thing is to keep track of what color space those assets were created in. If you want to convert to something else, or work with them in a different form, you need to know where you are starting from for every conversion. Currently, in the standard workflow, Unreal assumes the assets come in as sRGB. That is then converted to scene-linear, so it actually removes that display-referred curve, and you work linearly. Any changes you make to the color…

Mike: (20:15) Any math you do on color changes is linear in Unreal, so the math will work properly.

Matt: (20:16) Right. Because it is a nightmare when you try to work with different color spaces at the same time, with all kinds of effects and calculations. And then, as part of the process, you have the option to convert that color to a different gamut and encoding if you want to work in a larger space downstream. For now, this is the best way to do it. And we have clients who work with their assets in ACES, so they are actually converting those images to ACEScg.

The rest of that workflow is the same until you send it out to the display device – in this case, the LED walls. You then have to convert it again to the color space and encoding the LED processors are expecting, and they also have different options: you can send Rec.709, Rec.2020, DCI-P3, …
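
Here is a minimal sketch of the "know your starting point" idea Matt describes: decode a display-referred sRGB value to scene-linear, do any math there, then re-encode for whatever the display chain expects. It illustrates the principle only – it is not Unreal's internal code path, and it ignores the gamut (primaries) conversion a real pipeline also needs.

```python
def srgb_to_linear(c: float) -> float:
    """Standard sRGB decode (per component, 0..1)."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rec709_oetf(l: float) -> float:
    """Standard Rec.709 transfer function (per component, 0..1)."""
    return 4.5 * l if l < 0.018 else 1.099 * (l ** 0.45) - 0.099

texture_value = 0.5                       # authored in sRGB
linear = srgb_to_linear(texture_value)    # working space: scene-linear
linear *= 0.8                             # e.g. an exposure tweak, done in linear
to_wall = rec709_oetf(linear)             # re-encode for the LED processor
print(f"sRGB {texture_value} -> linear {linear:.4f} -> Rec.709 signal {to_wall:.4f}")
```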

Mike: (23:07) Yeah, Rec.2020 – not necessarily the best, but it just seems to be the most popular.

Matt: (23:11) It is definitely gaining popularity now because of its wide gamut, no question. And we see that often with the newer workflows being put together, and with people who are more interested in taking advantage of the wide gamut of the walls – yes, for sure.

Mike: (23:34) For those of you listening – just to understand how complicated this is, even though it sounds incredibly simple: you have the color space the asset was made in, the color space Unreal works in, then the color space and encoding sent to the LED processors, then of course how that is emitted by the LEDs, and then how that is perceived by the CMOS sensor in the camera. And keep in mind that all of this, from the camera, is recorded and passed on to post.

Matt: (24:11) Yes. And to add to that, because that is also a really good point: there is the development of a 3D LUT that emulates the whole path – what comes from the render, through the LED processors, through the panels, to the camera sensor – on a calibrated monitor. If you want to evaluate content, let's assume you are in the art department looking at virtual worlds with the DP, the production designer and the director, and you want to evaluate those worlds on a monitor in your lab or your art department. Ideally, you can mimic what it will look like through the camera. That is not the only way the world will be seen, but when we put a camera on it, we want to know how it will look. Ideally, you want both, right? Here is the world in its natural state, and here is what it looks like through the camera, so they can evaluate it both ways.

And because the DPs are involved here, of course it's about looking through the virtual camera. When they get on stage, that's what they expect – a world that looks like what they approved. Right? And you have to take that into account. Then, on the day, you want to remove that emulation transform, because the camera, the LED processor and the wall – the whole physical chain – will naturally apply it. So you need to keep it as a separate step if you want to be able to evaluate the content as it will ultimately look through the lens.
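
A minimal sketch of that evaluation-versus-shoot-day logic: for review monitors you apply an emulation of the camera/LED chain on top of the content; on the day you remove it, because the physical chain applies it for real. The "stage emulation" transform below is a stand-in (a simple gamma and tint), not a measured LUT.

```python
def stage_emulation(rgb):
    """Hypothetical emulation of render -> LED processor -> panel -> camera sensor."""
    r, g, b = rgb
    return (min(1.0, (r ** 0.95) * 1.03), g ** 0.97, (b ** 1.02) * 0.97)

def for_review_monitor(content_rgb):
    # Art department / DP review: show the content as the camera would see it.
    return stage_emulation(content_rgb)

def for_led_wall(content_rgb):
    # Shoot day: send the untouched content; the real stage and camera apply the "transform".
    return content_rgb

pixel = (0.40, 0.42, 0.45)
print("review monitor:", for_review_monitor(pixel))
print("to LED wall:   ", for_led_wall(pixel))
```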

Mike: (25:48) I mean, the whole principle of ACES is, "Hey, my Canon camera is going to see color differently from my ARRI camera, so I have to normalize everything into one space." Well, that's the math nerd end of it that I love, but if you're listening and thinking, "Whoa, Matt lost me a little there, this is above my pay grade(!)", let me just put it this way. Matt, didn't you find, in the case of The Mandalorian, that the art department painted some grey props on set and had that same grey extended into the LED screens, because it was obviously a set extension? And of course you want that to match. It sounds like the most trivial problem in the world, but if you get it wrong, an art department is going to be repainting at midnight!

Matt: (26:33) It's not just grey, it's dirt, it's everything, because you also have to take into account the impact of the stage environment on that physical dirt on the physical side. And so when you merge the two it's always a challenge, and you will theoretically never get it completely done ahead of time, where you load it all up and go, "yes, it's a perfect one-to-one transition, you can't tell where one ends and the other begins." What you're hoping is to be at the point where you're just fine-tuning. It certainly shouldn't look drastically different once all of the set lighting arrives, and you can't exactly predict how the DP will light it anyway, but you can get as close as possible.

Mike: (27:19) Yes, "it's daylight", or, sorry, "it's a tungsten light". Yes.

Matt: (27:24) That's right. You make a good point. Another tool we have that is critical to this process is the color correction regions, which you can create very quickly and interactively.

Mike: (27:38) Please explain, because I think this is an amazing tool.

Matt: (27:41) It's really the only way I know to create a seamless blend, given all of the variables we're talking about. All the math you do ahead of time to get as close as possible is certainly necessary, but even after that there is the subjective tweak that you do at the very end, once the camera is in place. You don't do that beforehand, because things could change, and so you can't get too clever or too far ahead of yourself and start tweaking the content, because the DP will, first of all, say, "What are you doing changing my set over there?"

Once you have framed the shot and everyone sees that transition, we take the direct example of that physical dirt, sand and stone crossing over into the virtual. Then you can add these color correction regions. They are basically cubes, which you can turn into rectangles or whatever shape you want, and you place them in the camera's field of view. And then you can blend that: you can apply that color through a 3D volume, with the intensity and color values worked into the textures of that world. So it is a 3D change, not a 2D change, which is really important. And then you can also taper it, so it fades out into the rest of your actual virtual set to achieve that transition.
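
A rough sketch of the idea behind such a color-correction region: a 3D box that tints anything inside it, with a soft falloff toward its edges so the grade blends into the rest of the virtual set. This only illustrates the concept – it is not Unreal's Color Correction Region implementation, and the falloff curve and parameters are assumptions.

```python
def region_weight(point, box_min, box_max, feather):
    """1.0 deep inside the box, fading to 0.0 within 'feather' units of the faces."""
    w = 1.0
    for p, lo, hi in zip(point, box_min, box_max):
        if p < lo or p > hi:
            return 0.0
        edge_dist = min(p - lo, hi - p)               # distance to nearest face
        w *= min(1.0, edge_dist / feather)            # linear taper near the edge
    return w

def grade(rgb, point, box_min, box_max, feather, tint, gain):
    """Blend a tint/gain correction by the region weight at this 3D point."""
    w = region_weight(point, box_min, box_max, feather)
    return tuple(c * (1 - w) + (c * gain * t) * w for c, t in zip(rgb, tint))

# Example: warm up the virtual "dirt" near the physical/virtual seam (assumed coordinates).
seam_min, seam_max = (-5.0, 0.0, -0.5), (5.0, 2.0, 0.5)   # meters, stage space
print(grade((0.4, 0.35, 0.3), (0.0, 1.0, 0.0),
            seam_min, seam_max, feather=0.25, tint=(1.06, 1.0, 0.94), gain=1.1))
```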

Mike: (29:08) I think most people know the idea of a power window from the days of DaVinci in a color grade. There's a power window. Now you have, what, "power bubbles"? (laughs)

Matt: (29:16) That's right. Good analogy. Yes. Right. And sometimes you need multiple "bubbles", because the terrain in the virtual world changes relative to the physical world, and it is not necessarily a consistent relationship. So you may need to blend these bubbles together across the whole transition area from the physical to the virtual.

Mike: (29:42) I also want to touch briefly on latency, because I think it's important. One of the things that affects latency is, of course, the processing to work out where the camera is. That has nothing to do with Unreal or the scene; it's only a frame or two, but once that arrives and Unreal obviously has to do its render and we have to get things out, there are network issues. Now I just want to get your opinion on this, because people are building networks and most, I think, are NDI. But is that the standard, or is that just a stepping stone? I mean, it's not like NDI is particularly old – it has only been around for a few years – but trying to move that material from the side of the stage across your internal network is another place where you don't want to lose a frame or two, right?

Matt: (30:26) No, certainly not. But honestly, the networking – personally, I haven't seen networking be the big hit. It has been the rendering and the actual projection, the actual computations of the display system and rendering the background, and then the processing inside the LED processors, and then the wall.

Mike: (30:50) But what should I be aiming for? That's the loop, right? How long it takes for the whole system to update, in terms of frames – we have, as you said, a couple of frames here, a couple of frames there – is there a target number that you would want to hit on a professional stage?

Matt: (31:05) Ideally about six frames at 24 fps. Yes, that's fine.

Mike: (31:13) 6 – wow I’m feeling very old fashioned – six? That’s incredible.

Matt: (31:17) Like I said, ideally, but generally what we see is about eight, and I think we can shave some off – I'm saying the collective "we", all of us as an industry. So 8 is kind of the standard. What I'm seeing is 7; we got it down to 7 last year on a lot of tests and some shoots that we did. But we still had the camera tracking at two frames. Not because it took that long, but because we were only sampling every 24th of a second. So it was really just a hair over a frame, but because the sample wasn't available at that instant, it had to wait for the next one, and so you gain almost a whole additional frame unnecessarily. The camera tracking could be at 120 FPS and take six or seven frames at that rate, so it's just a little bit over a full frame at 24.
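
A back-of-the-envelope latency budget in the spirit of the 6-8 frame numbers Matt quotes. The per-stage breakdown below is illustrative, not a measured pipeline; you would replace the assumed frame counts with your own measurements.

```python
FPS = 24
FRAME_MS = 1000.0 / FPS                # ~41.7 ms per frame at 24 fps

budget = {                             # frames, all assumed for illustration
    "camera tracking sample + transport": 1.0,
    "engine render (inner frustum + wall)": 2.0,
    "sync / genlock alignment": 1.0,
    "LED processor": 2.0,
    "panel scan-out": 1.0,
}

total_frames = sum(budget.values())
for stage, frames in budget.items():
    print(f"{stage:40s} {frames:4.1f} frames  ({frames * FRAME_MS:5.1f} ms)")
print(f"{'total':40s} {total_frames:4.1f} frames  ({total_frames * FRAME_MS:5.1f} ms)")
# ~7 frames is roughly 290 ms between a camera move and the wall reflecting it.
```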

And since then, in the last year, that's been knocked down; I think most tracking systems can do it in a frame. The other area is the rendering to the wall: all of these different pieces of the wall that need to be rendered. That's something we're working on quite a bit in terms of efficiency, and then resolution. In the original setup – just a bit about this rendering process – we actually had to render the frustum, what the camera sees, on every single PC. Meaning, let's say you've got this big volume and you're mapping the background and breaking that background up into a dozen different pieces, so you have a dozen PCs with high-end Quadro cards in them. Every single one of those PCs also had to render what the camera sees. So you're rendering the same image 12 different times – not very efficient, but it worked. There are certainly better ways to do it, especially now with some of the new tech that NVIDIA has out. So we're doing a lot of R&D with GPU texture sharing now, and things like Mosaic, where you can render one big image very efficiently. The whole idea there is to not render that same picture 12 times on 12 different machines, but maybe have one machine with three or four GPUs in it. You could render one big image for the camera on one GPU and the other background images on other GPUs, then copy that inner frustum, that inner picture, to the other backgrounds and send that to the walls much faster and much more efficiently than we did in the previous generation.

Mike: (34:21) I mean, you say the previous generation like…

Matt: (34:25) Two years ago, but yeah, (laughs)

Mike: (34:27) Exactly!!! Now, we did get a lot of questions from people about cost, and I'm not ignoring them, but I think we've already established that it's very variable depending on, you know, what you're using it for and how big it is. For example, there's a lot of cost even in hanging the LEDs overhead. The overhead LEDs are a significant factor because you've got a big space – that's a big ask, to not have any pylons or anything holding up the LEDs from below. So we probably can't give anyone actual numbers. But what I would like to ask you about, in terms of budgets, is more like "opportunities" for addressing the budget. And one that I thought was interesting: somebody, Dave Smith, asked us how much of this could use the cloud? Is there any use for it? Can we use the cloud, circling back to that latency discussion? I think it is a reasonable thing to ask…

Matt: (35:21) Yeah, for sure. Certainly, the cloud is great for pre-production: designing assets, collaborating with different teams, especially in the situation we're in now, where we typically can't be together. It's wonderful for scouting, even where there's an interactive component to the live camera and set scouting, and tech checks and that type of thing – and for post, same thing, evaluating the content, evaluating the edit. We all know the cloud is really good for that kind of stuff. Once you get into production, at least what we're talking about here, where it's live in-camera content, it's not ideal. We're not there yet. We want every millisecond we can get to make sure that content gets to the screen as soon as possible. So I know there's a lot of interest in that, and we've been approached by a few big companies that want to explore it, but, yeah, I mean, we just talked about latency and we're lucky…

Just back to latency for one second: with the approach that we use – and it's really just the nature of the process with the LED wall, as opposed to a composite – we get away with a little bit, in that we intentionally render a larger area than what the camera can actually see. So if there is a bit of latency between what the physical camera is doing and what the virtual camera thinks is happening, we have that buffer zone, so that the perspective is correct from the virtual camera even though it might be slightly behind the physical camera on certain moves. With compositing, latency just adds more frame delay, right? If you're doing a comp and you have a greenscreen, and you're projecting a virtual image into the frame to cover the green, and you have all this latency in the process, there's going to be a longer and longer time before the camera operator can see the result of that comp, because you have to wait until it's done before you can send it back to the operator. And that's no good, you know, especially if they have a handheld move or Steadicam where they're really dependent on seeing that feedback interactively.
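
A quick sanity check on the "render more than the camera sees" buffer Matt mentions: how much overscan you need so the inner frustum still covers the lens during a pan, given the system latency. The numbers are assumptions for illustration only.

```python
def overscan_fraction(pan_speed_dps, latency_frames, fps, h_fov_deg):
    """Extra horizontal field (as a fraction of the FOV) the pan consumes during the latency window."""
    latency_s = latency_frames / fps
    degrees_moved = pan_speed_dps * latency_s
    return degrees_moved / h_fov_deg

# Example: 30 deg/s operator pan, 7 frames of latency at 24 fps, 40-degree horizontal FOV.
frac = overscan_fraction(pan_speed_dps=30.0, latency_frames=7, fps=24, h_fov_deg=40.0)
print(f"pan eats {frac:.1%} of the FOV before the wall catches up "
      f"-> render roughly that much extra on each side you might pan toward")
```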

Mike (37:47) Yeah. If you’re listening and you’re sort of a little confused as to why this latency is so important, that’s a great example, Matt, like I put, I put a square up just behind your head – because I want to get greenscreen of your hair. And if that square wasn’t, as you say a bit outside, what I need and the camera operator quickly pans to the right, it might be that he goes off the green before the green has a chance to move with his camera. And another way of thinking that is it feels like a bit spongy, right? Like, you know, I would stop. And then a second later, of course, not a second or two, but anyway, the second or later it would all catch me up and it would stop. And so the less, that is a perceivable amount, the better off we are, but yes, having a little bit of, um, a little bit of up your sleeve, so a bit of extra, but quite frankly, like how many times do we not do that with everything else? Law. We always have a bit extra in case we want to reframe or something. So that’s, that’s pretty reasonable.

Hey, the other thing people asked about in terms of costs, where we might be able to give some guidance, is: what are the main costs? Um, more as percentages than numbers. And I'm thinking obviously crew is vital, and probably the actors – the above-the-line costs are the biggest thing of all. But leaving that aside, if I'm using one of these sets we have been talking about – a typical set, with a 2.8 pitch, not in any way peculiar, say 5m x 10m, and done sensibly – where am I going to see most of the cost?

Matt: (39:19) Definitely the panels, for sure, and the processors to some degree, plus the graphics cards, although, like I mentioned, we're doing a lot of work to minimize the hit in the wallet on the graphics side. But it's head and shoulders the panels, for sure. Yes.

Mike: (39:38) So before I leave this, if there was somewhere that people wanted to go – let's say they'd love to learn more about this, because this is fascinating – is there a place on the Epic site that somebody can go, or is there something that we can help people with in terms of following up in some way?

Matt: (39:55) Yes, we created a pipeline document for this process, the in-camera VFX reference pipeline, on our docs page. So docs.unrealengine.com has this information. You mentioned hardware: there's also a link in our reference pipeline document. If you just go to the new releases for 4.25, you'll see this information; inside the reference pipeline document there's a link to recommended hardware for computers, switchers, network gear – basically all the bits and pieces required to put this together – and different levels of equipment, different levels in terms of complexity and resolution for graphics cards and such, so that you don't have to use only the high end. If you're testing, if you're just at home and you want to figure this out, there are different options there. So we tried to assemble all of that into a document for users, whether you're at home just working on this or you're trying to do the next big movie.

Mike: (41:09) The other question that a couple of people had was about VR. We've been discussing LED walls. Maybe you could just discuss that, because it's kind of like, if you had a Venn diagram, they overlap – like at the SIGGRAPH launch, where we had the setup, the cube, and it was the motorbike with Matt Workman. It was great, but to the side there was also a whole VR overlay on that, so that people could scout. I don't want to spend too long on it, just give us a bit of a handle, because we don't want to imply that the only virtual production is LED sets.

Matt: (41:40) That’s right. Ja. So when you build interactive content, there are lots of ways you can use it, right? You could have full CG content. It doesn’t have to be with live-action. Obviously, the VR aspect is a really useful tool when you’re trying to design your world. You can immerse yourself in that world. You can work interactively with others and build that world, get feedback on the fly. And we spent a lot of time integrating, our multi-user tools and communication process so that when you change something in VR, it’s updating other machines or other users in that same world. Uh, we also have the Vive tracker now with full, LiveLink support. So again, if you want to set up a small system at home, whether it’s tracking for VR or just, you want to put it on a camera and, and look on the monitor and create camera moves, you have a way to track at home without, you know, the big Hollywood budget prices. Matt: (42:42)So we’re trying to, and in fact, our, our staff is doing that quite a bit with the code situation. We have a lot of us have our own home studios, if you will, because we have to keep testing. We have to keep breaking things and trying to fix them and sharing results. So when we get on our zoom calls, you often see people with all kinds of gear in the background with these home studios. And so that’s a big part of our process too, –  to think about what we can do? Because we don’t want to build 30 studios that, with some ungodly amount of money on each (!) And so we’re all looking for ways to do this more cost-effectively, whether we’re testing or, or designing, uh, or just collaborating on something. So, yeah, we’re working on integrating hardware at all levels for every type of user.

Mike: (43:37) Build out from that. So there were some questions from Cameron Smith and a bunch of other people about greenscreen into UE4, because clearly one of the things I could do is just have a green screen and a virtual environment, and place my person in that. Are there any tips you could give, in terms of virtual production, for how to do that? Are we better off having some external keyer providing a key signal, or are we better off keying inside Unreal? Just give me some idea of how I would go about doing that, where I'm no longer concerned about the LED side of things. I'm just thinking, "Hey, I can do a green screen and I can do a virtual world?"

Matt: (44:15) Right. Well, you can use Composure to pull a key. So if you have a video card, like a KONA 5 card for example – Blackmagic has lots of cards too – there are different cards we could use, and actually all of that is referenced on our hardware page, the cards that we typically recommend. But absolutely, you can do either one: whether you do the keying outside, in an external hardware compositor – there's certainly nothing wrong with that – or, if you want to have it all in one system, you can use Composure, which is part of Unreal Engine, to do the actual composite.

Mike: (44:54) In terms of that workflow, that's really been the precursor to the LEDs. Before we were talking about LED screens, there were lots of companies doing really good work with virtual sets and virtual environments. And I would say, in terms of the questions we've been asked about "how do I get into this?", I think that's one of the easiest ways in. You can learn a lot from doing virtual production in a green screen environment, way before you need to worry about LED screens, if you're a relatively new user.

Matt: (45:26) Absolutely, and all the traditional filmmaking processes and tools are still valid in a sense. I mean, you mentioned getting started: if you want to be involved in virtual production and you know about cameras, if you understand lenses, if you understand video signals, those are all still super important to this process; if you understand timecode, if you're more of an engineer, if you understand computer graphics and real-time rendering. Virtual production is really a combination of all those different things. It's not its own separate thing; it's really all of those things combined. It's interactive graphics, it's filmmaking, it's engineering, it's artistry. And that's what's really exciting about it, but there's no virtual production class where you go to, quote-unquote, learn all this stuff. It's really about understanding the foundations of these different pieces. And once you have that experience and knowledge base, you can apply it to this type of production.

Mike: (46:32) So to that end, I guess people have seen the Unreal Engine UE5 demo. I'm not going to go into the details of UE5, but is there a rule of thumb for how much of this is going to be thrown out the window, so I have to start over again? Or is a lot of it going to stay relevant? (laughs) Well, you laugh, but I mean…

Matt: (46:52) No, – I know that’s the thing about technology, right? You do this cool stuff and that’s a factor is you have user base and you don’t want to have to tell them you’re starting from scratch, right? And that’s not the case with UE5, – the big thing that’s different there, is really, the storage in terms of speed and capacity because the process which content is getting to the GPU is a little bit different. But it’s not, …I’ll say that the bulk of the hardware you have now is still going to be quite useful. Certainly, the graphics cards, memory, motherboards, that type of thing, but storage, high-speed storage, is really the main area that is being leveraging far and away more than anything else. And so if you have a system that maybe needs a bit more of that, or you have a motherboard that doesn’t support the NBMe SSDs, something like that, …then you might need to invest in a new machine, but you could take the graphics card out and put it in the new machine. So you wouldn’t necessarily be starting from scratch.

Mike: (48:07) A couple of specific questions to that end. If I'm doing something like a green screen, placing someone in an environment, the question is: how much do you personally – just Matt's opinion – think we are matching a real camera's motion blur and depth of field with what's happening in UE4 at the moment? So I'm not talking about UE5, just right now: if I'm doing virtual production, do you feel those defocus blurs and motion blurs, as well as obviously the practical focus, are matching to your eye?

Matt: (48:35) If we were doing a green screen, is that what you mean? Yes. To an extent; it takes some noodling. You know, going back to what we were talking about before: you have the mathematical emulation of what we think is happening with the physical camera, and we apply those variables in the computer to try and match it. But I wouldn't say it's one-to-one, no. I think it does take a bit of fudging. Take motion blur, for example: the way engines work, there's a blur generated from a moving object and a blur generated from a moving camera. And you would think, "Oh, well, that's kind of the same thing – whether the camera's moving or an object's moving, it just takes a picture, and whatever's moving during the exposure is your blur." And in a sense that's correct in the real world, but in the graphics world they're handled a little bit differently. And it's something we are working to have more control over, to match the physical world, because there are so many variables in the camera – the type of shutter, the exposure time, the lenses – all those things affect the blur, and we're not emulating it to that level of detail.
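
A small worked example of the physical-camera side Matt is describing: how far an image feature smears during the exposure, given shutter angle and motion. The numbers are purely illustrative; this is the target a renderer's motion blur has to match, not how UE4 computes it.

```python
def exposure_time_s(fps, shutter_angle_deg):
    """Classic film relationship: exposure = (shutter angle / 360) / frame rate."""
    return (shutter_angle_deg / 360.0) / fps

def blur_pixels(object_speed_px_per_s, fps, shutter_angle_deg):
    """Pixels an object travels across the frame while the shutter is open."""
    return object_speed_px_per_s * exposure_time_s(fps, shutter_angle_deg)

# Example: feature crossing the frame at 2000 px/s, 24 fps, 180-degree shutter.
print(f"exposure: {exposure_time_s(24, 180) * 1000:.1f} ms, "
      f"blur length: {blur_pixels(2000, 24, 180):.0f} px")
# Halving the shutter angle to 90 degrees halves the streak - one reason object
# blur and camera blur need to respect the same shutter model.
```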

Mike: (49:59) Yeah, and also from a VFX point of view, you know, if I'm doing a rack defocus, if I've got a really shallow depth of field, like a 120mm lens, and I've got lights behind that turn into those sort of ovals – the number of blades in the physical aperture of a lens will affect what the bokeh looks like. So these things are not like "a blur is a blur is a blur".

But I am doing the exact thing I said I wasn't going to do, which is hitting this from a VFX point of view. And the next question I want to ask you actually came from somebody who said, "Hey, you guys are asking all these questions about VFX artists and lighting artists learning this stuff. What about a games person? How easy do you think it is for a game developer to enter virtual production? Is that something you'd recommend, or is it too steep a learning curve?"

Matt: (50:49) Oh, not at all. Game developers are more than welcome. In fact, they are ideal for this workflow because they understand interactive graphics. And that's one of the steeper curves in this process coming from a more traditional VFX background: how real-time renderers work. That is different from traditional rendering, and the demands of a real-time render are quite different. So if you understand how to build great real-time assets, there's a job for you in virtual production, I assure you. Understanding filmmaking, sure, you need to learn that; you need to understand how this process differs from game development. And it's funny, we have that within Epic. Of course we have hundreds of people from a gaming background, obviously, but now we have more and more people like myself coming from the film world, and seeing these two worlds come together is really interesting, because we will be in conversations on both sides where one group thinks they're talking about something that's very obvious and very intuitive and we're deep into it, and someone raises their hand and says, "Hold on a second. What does that phrase mean? What are you talking about there?" or "I don't quite get that" – and vice versa. So it's really interesting to see this coming together, and it's happening all over the world as we go down this road together. But absolutely there is a demand, I would say a need, for professionals from the game world to come into virtual production. And I think the best teams will have a combination of people from both backgrounds.

Mike: (52:37) Yeah. I was talking to a friend, Ben Grossman from Magnopus – I'm sure he won't mind me mentioning this – and he was totally embracing that concept. You need both, right? You don't win with a bunch of film people sitting around trying to learn stuff that is intuitive to a game developer, and similarly there's a language for a film director that a game developer might need to learn. It's an approach thing, you know, because of background. The example we always give is: if I'm doing a shot on film, I "dress" to the lens. I don't care about mathematical continuity, I just care that it looks good. But a game developer tends to build a world, so you can go anywhere in that world, and they don't think to lift the table off the floor and hover it three inches above just because of this particular camera, whereas a film guy will be like, yeah, we'll just totally lift that table and stick some wooden blocks under it.

Matt: (53:32) That’s so true. And that, that’s one of the biggest transformations of this process is this idea of world-building, because it can be done so efficiently and that’s not to say you have to create the entire world to final, but you can get very far along ,…pretty quickly as opposed to just saying ‘I’m going to do just a section of it’.  It’s amazing when you approach it that way, what you get in return. Because I’ve, now that I’ve been immersed in it for a while-  to work with people that think that way of building worlds and that, like you said, now I’m going to go lens it, it is really interesting. So what we’re seeing now is this, this collaboration between screenwriter, director, worldbuilder, so that it’s not about the shot, it’s about where we are in these worlds first, to really develop those worlds to the level that they’re needed before they worry about specific camera shots, unless it’s a special case, but generally define the world. The interesting parts of the world, where we could be, is becoming a real process in art direction, production, design, and actual asset building.

Mike: (54:51) So I’ve got two more technical questions, but I do want to sneak in one before we leave this. Because it was a great question that I hadn’t even thought of –  about this idea of roles and backgrounds and somebody said we don’t discuss the role of the producer very much? How do you feel the role of a producer is different with a virtual production? Because where we’ve discussed directors, we discussed lighting, you know, fair enough too, but producers, – it’s quite different than the sort of almost waterfall kind of one after another: pre-production, production, post. How do you find producers adapting?

Matt: (55:29) I think the biggest challenge is taking what was traditionally budgeted across art department construction and visual effects, because now what you're doing is a bit of both. You're taking some of the labor that was physical set builds and construction and grip, all the folks involved in creating physical sets, and some of the visual effects that were all created in post, and putting all of that into production. So I think the challenge has been: how much are we actually pulling from physical production versus visual effects? Of course they get the budgets, they get the schedules and all of that, but it's understanding, at the end of the day, where this work would have gone when approaching a film, because each department needs its budget, right? So giving that up, and working with the director, the visual effects supervisor and producer to break down the script, is such an important process now, more than ever – understanding, okay, "where can we best leverage this technology and this approach?", "let's circle these pages in this story where we can get the most bang for our buck".

And there’s also the concept,  now Mike, for the studios or the content holders of repurposing assets. So you start that that’s when their eyes start to light up, is this idea of, and we’ve all been talking about it as an industry forever and a day, but now that the assets can be created as final content in camera and not just a visualization tool, I think the industry is waking up and saying, Hey, these are really worth something. I can take these worlds and I can change the texture. I can change the lighting. I can rotate them around. I can move them out a little bit. And all of a sudden I have a different world. And so now I can build a library of these worlds for my storytellers to choose from. It doesn’t mean they’re all going to be used in the story, but at least to hash out ideas and start brainstorming and kitbashing different scenes together and coming up with concepts very quickly. And I can very well have assets that I can repurpose for the actual show too. That’s really an interesting evolution that we’re seeing now with all of this.

Mike: (58:08) Yeah. I think that also raises two important points. One is just the marvelous world of kitbashing, but the other – since we're talking about producers at a studio production level – is that asset management really matters. We shouldn't skip over the fact that reuse is as much an asset management, database and retrieval problem as it is a matter of technically being able to do it. I mean, I'm sure you're the same; we've all been on occasions where someone's gone: "don't we have a bunch of… well, they're on a tape somewhere… or they're backed up some… I can't… it would take me so long to find them, I'll make them again."

Matt: (58:46) No question about it. And we've all been there many times. So this is definitely a hot topic inside of Epic and within these studios too, because you're absolutely right: the value of the assets only holds insofar as you can access them intelligently and efficiently, and give content creators access to review them remotely – and not just in 2D, but actually in 3D. So there's a lot of work in terms of infrastructure that needs to be done to really facilitate that.

Mike: (59:22) Key questions to finish on, because I want this to be a deep dive, so I'm going to go back in. One of them is related to this, right? On a modern camera, like an ARRI or something running with /i data on the lenses and such, there is metadata streaming reliably now. It used to be that everything would get transcoded and you'd lose all the metadata, and even though you had this marvelous lens information on set, it was all gone by the time you got to post. I don't think we're there now, but how much are Epic and Unreal focused on that kind of ARRI, hardcore metadata that's coming out? Because obviously, if you are embedding metadata with the imagery on set, you've got a very reliable source of data, whereas if I've got a secondary file of someone saying, "Oh, we were shooting on a 35mm for that shot", you're like, well, if you say so… but I'm not going to trust you!

Matt: (01:00:15) Yeah. So we’re actually talking to pretty much all of the lens manufacturers that have smart lenses, where we can tap into that metadata on the fly and stream that to a PC, especially if we’re doing any kind of interactive lens distortion for compositing, or if we can use information like data, for example, that’s incredibly useful. Another important part for us is the camera center, or some people call it the nodal shift. So understanding where that real camera center is interactively because we have to match the virtual camera to that and so there is a calibration process to that. But when you pull focus, that center can actually change. It’s not even if you have a prime lens and you’ve calibrated it and match that true camera, that optical center to the physical camera, um, those lenses breathe And so your actual focal length can shift, which means that nodal point will actually shift slightly. And so we have to be able to track that and recreate that on the fly. Now, you mentioned coming out of the cameras, the reason we haven’t used the camera metadata on the day or in real-time today is because it does take a few frames from the image processing and debayer and all that with, especially with these large-format sensors, to come out the SDI tap in the back of the camera, and then we have to send it to a card on our machine to interpret that image and extract that metadata. And that’s just a few extra frames now. There’s other stuff we are doing, the rendering, the tracking and all that business in parallel to the image processing and all of that. But we don’t want to have to wait for that metadata to get to it before we can apply it to the virtual camera lens in real-time. It doesn’t mean we wouldn’t use it in post. Absolut. But in terms of real-time, we haven’t, use the actual metadata that’s embedded in the imagery today.

Mike: (01:02:33) On that, and this is for my own benefit, I swear to God, because I've got you here and it's such a treat. My problem is this: I've got my lens and I'm focused on you, and behind you is an LED wall, right? So the LED wall is out of focus because it's three meters behind you, just to use some numbers. But what's on the wall isn't three meters away; it's another three meters beyond that, because it's virtually further away than the wall. So if I'm focusing on you, the wall goes out of focus simply because it's not at your distance, but by the same token, the content should go even more out of focus, because it's another three meters away. Yet if you just dialed in the full six meters of defocus, it would be too out of focus, because that's not allowing for the fact that the camera is itself already providing some defocus. I don't know if this is solved, or if I just don't know how to do it, but it seems to me an interesting problem, right? The wall gives the illusion that the mountain is miles away, but it's actually only, you know, ten meters behind the actor, or five meters behind the actor.

Matt: (01:03:38) Right. So there's this idea of relative depth of field from the projection surface, the actual LED wall surface where the content is being projected. Like you said, there's a certain distance from the camera to the physical surface the content is projected on, and then you have to know, okay, how far away is that content from the LED wall in that same world space? So if that mountain is another hundred meters from the wall, then the depth of field should reflect that, relative to what the physical camera is doing. Because you're right, the physical camera is only going to change what's projected on the wall, and that's all in one space. So if you're not doing anything to the content that is supposed to be behind the wall, then that's technically wrong. It doesn't mean you can't get away with it a lot of the time, but technically that's incorrect.

We've talked about this a lot, as you can imagine, and we're fortunate that so far that flaw hasn't shown up, but you're 100% correct: it's not accurate. There are cases where it is wrong and you may have to fix it in post, but the way to handle it is to have a relative depth of field adjustment for that plane in the world space of your environment and of your physical set. In this case that plane is the LED wall, because that's what the camera sees. Then you have to understand what the camera is doing to the imagery in that plane, and how objects that are projected in that plane should change even more than what the camera is doing.
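To put some numbers on the relative depth of field Matt is talking about, here is a small Python sketch using a plain thin-lens circle-of-confusion model. It is only an illustration of the geometry Mike and Matt describe, not Epic's in-engine implementation, and the subtraction at the end is a rough simplification; the camera-to-actor distance is an assumed value.

    def coc_mm(focal_mm, f_stop, focus_m, subject_m):
        """Circle of confusion (mm on the sensor) for a subject at subject_m
        when the lens is focused at focus_m."""
        f = focal_mm / 1000.0                 # focal length in metres
        aperture = f / f_stop                 # aperture diameter in metres
        s1, s2 = focus_m, subject_m
        return abs(aperture * f * (s2 - s1) / (s2 * (s1 - f))) * 1000.0

    # Camera focused on the actor at 3 m, the LED wall 6 m from the camera
    # (three meters behind the actor), and the mountain shown on the wall
    # "really" 100 m away in the virtual scene.
    focal, stop, focus = 50.0, 2.8, 3.0
    blur_wall_gets     = coc_mm(focal, stop, focus, 6.0)    # what the lens already does to the wall
    blur_content_needs = coc_mm(focal, stop, focus, 100.0)  # what the mountain should look like

    # The content is pre-blurred by (roughly) the shortfall before it goes on the
    # wall, so optical defocus plus in-engine defocus add up to the right look.
    extra_blur = max(0.0, blur_content_needs - blur_wall_gets)
    print(round(blur_wall_gets, 3), round(blur_content_needs, 3), round(extra_blur, 3))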

Mike: (01:05:34) The reason I used the original example of three meters plus another three is best illustrated, I guess, by a window. If I've got a digital window and I'm seeing through it, then clearly the window is three meters behind you plus the notional three of virtual space, whereas the mountain scene through that window is notionally, as you said, like a hundred meters back. And it just seems to me that if things are way off in the distance, you can probably get away with it, because who can pick it when these things are just on the edge of the depth of field? Obviously there is no sharp point where something is in or out of focus…

Matt: (01:06:12) I think what you're getting at is that, let's say that window is actually at six feet if it were physically built. If that's the same distance for a second, then the defocus should be the same: if it's one to one between where it would have been built physically and where it's actually projected on the LED wall, then that distance is the same. If it's different, if it's closer, which you can do, you can actually put up content that's supposed to be closer than the wall. You just have to be careful, you can't go too aggressive with that, but you can do it, and then that content technically should be more in focus than what the camera is doing to the wall. And that's actually a bigger challenge.

Mike: (01:06:59) [Laughs] Yeah, I don't know how you would do that?

Matt: (01:07:01) That's getting into deep learning stuff, which is another kettle of fish. But what we have found, at least in our study and based on our collective experience working on set in this situation, is that you need a tool, because no matter what you do mathematically, at some point the DP doesn't care; they want to make it look the way they want it to look. So if you have a tool that calculates what it should be, that relative depth of field change from the projection based on whatever distance the wall is from your camera (and that's another reason we need the focus system streamlined, to figure all this stuff out), then you have a dial to tune it if they want to push that focus a bit more blurred, a bit further than what the camera is doing. But you start with the correct mathematical solution, so that we can show them: yes, this is what it actually would be doing. And they say, well, that's great, but I want to do this other thing here, I want to soften this up a bit, or whatever.
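The 'dial' Matt mentions can be pictured as a creative bias applied on top of the physically derived answer. A minimal sketch of that idea, with a hypothetical parameter name:

    def wall_extra_defocus(correct_extra_blur_mm, dp_bias=1.0):
        """dp_bias = 1.0 keeps the mathematically correct relative defocus;
        above 1.0 softens the background further, below 1.0 tightens it."""
        return correct_extra_blur_mm * dp_bias

    # The maths says 0.142 mm of extra blur, but the DP wants it a touch softer.
    print(wall_extra_defocus(0.142, dp_bias=1.2))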

Mike: (01:08:11) Well, very kind of you to indulge my 'rat hole' on focus, but I love that stuff. So my big last question, the one I was heading towards technically, is this: we may be wonderful and able to get final pixels in-camera, but we should also acknowledge that there are going to be a bunch of occasions where it makes sense to take things into post. What's the attitude at Epic about packaging up information in such a way that post has some way to unpackage it? Because in a sense I need continuity of what was in the shot. If I lifted the virtual table up in this shot, I need to know that, and if in the next take we decided we didn't want the table lifted, so it was put back down again, I need to know that too. It's not enough just to give me the scene, because that was a shot-specific kind of thing. It's almost like you need continuity or timecode marks on everything, so that I can mix and match, and the guy or the woman in post isn't pulling their hair out trying to reconstruct it?

Matt: (01:09:13) Yeah, it sounds like you've been in some of our R&D meetings. This is exactly the kind of stuff we're talking about. We have a new tool coming out in 4.26 called Level Snapshot that is designed to deal with exactly this type of thing, where you essentially have a delta happening of some sort. You did something, just as you described it, in the moment on the day, and that may not be a global change you're making to the scene. In fact, it probably isn't; it's something that happened in the moment that you need to track. The whole idea is a meta layer on top of your base level that lives with those files, so that you can easily apply those changes to get back to the state of that take (at this point it's still effectively a take, of course, because we're in virtual production world and we're not in shot land yet). The delivery to the VFX vendor would include that Level Snapshot, with the meta information of any transforms or modifications made to the base level.
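The delta idea behind Level Snapshot can be pictured roughly like the following Python sketch. It is a generic illustration of capturing per-take changes against a base level, not the actual Unreal Engine plugin API, and all of the actor names are made up.

    base_level = {
        "table_01":    {"location": (0.0, 0.0, 0.0)},
        "mountain_bg": {"location": (0.0, 500.0, 0.0)},
    }

    def capture_delta(base, current):
        """Record only the actors/properties that differ from the base level."""
        return {name: props for name, props in current.items() if base.get(name) != props}

    def apply_delta(base, delta):
        """Rebuild the on-the-day state from the base level plus a take's delta."""
        state = {name: dict(props) for name, props in base.items()}
        state.update({name: dict(props) for name, props in delta.items()})
        return state

    # Take four, scene 26: the virtual table was lifted during the take.
    on_the_day = {
        "table_01":    {"location": (0.0, 0.0, 30.0)},
        "mountain_bg": {"location": (0.0, 500.0, 0.0)},
    }
    take_delta = capture_delta(base_level, on_the_day)
    print(take_delta)                                       # only table_01 is recorded
    print(apply_delta(base_level, take_delta)["table_01"])  # back to the on-set state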

Mike: (01:10:21) It seems to me that that is not only incredibly helpful to the post people, but you also don't want to slow down on set because you're going, 'wait, wait, wait, if you're going to do that, I need to take a note of it. Stop for a second, I'm going to lose track here.'

Matt: (01:10:32) Yeah, absolutely. And then you're shooting again and it's, where was everything last Tuesday? What did we do, now that we're going back to that scene again? Oh, I've got it, it's take four, scene 26. So you have all of those offsets, and that's your safety net. Again, absolutely critical for post, but even for production, if you want to do a pickup shot. Absolutely. And then there's the degree to which we tweak the lighting on set with light cards and flags. You've seen some examples of this, but for those of you that haven't: in addition to the normal lighting of the virtual world, there are additional surface lights, if you will, on the LED volume that you can layer. The point of those is to optimize the lighting of the physical set and the actors; it's really not for the virtual world itself, so you don't see them in-camera. They're light cards and flags, so you can think of them as just primitive shapes that you can scale; you can change the transparency, you can change the color and the intensity. And they're used all the time.

Mike: (01:11:46) Yes, if I am on top of a real mountain, I might have my camera assistant holding up a piece of poly to bounce some extra light into your fill side, just so you don't look too contrasty. There's no reason why I have to have somebody on stage doing that. I can just put up a virtual square of poly on the wall and voilà: an area light.

Matt: (01:12:07) Right, and you have to track all of that too, of course, because that's what made the actor look the way they did, exactly how the DP wanted. So that's just another reason why it's so important to have that snapshot.
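For readers who haven't seen light cards in use, here is a minimal sketch of the idea as Matt describes it: a primitive shape layered onto the LED volume purely to light the physical set, with its own color, intensity and transparency. The fields and values are hypothetical illustrations, not the engine's actual parameters.

    from dataclasses import dataclass

    @dataclass
    class LightCard:
        shape: str                        # "rectangle", "circle", ...
        position_deg: tuple               # (longitude, latitude) placement on the LED volume
        size_deg: tuple                   # (width, height) of wall coverage in degrees
        color: tuple = (1.0, 1.0, 1.0)    # linear RGB
        intensity: float = 1.0            # relative brightness multiplier
        opacity: float = 1.0              # 0 = invisible, 1 = fully opaque

    # Mike's virtual "piece of poly": a soft warm rectangle on the fill side,
    # dialled down so it lifts the actor without overpowering the scene lighting.
    fill_bounce = LightCard(
        shape="rectangle",
        position_deg=(-70.0, 10.0),
        size_deg=(25.0, 15.0),
        color=(1.0, 0.98, 0.95),
        intensity=0.6,
        opacity=0.8,
    )
    print(fill_bounce)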

Mike: (01:12:20) We're running out of time, but I could talk to you about this stuff forever. I'm so excited, not only that this is being done, but, you know, quite often with these things you're like, oh, I just can't wait till we finally get our hands on this, because you could see it being a few years off. But this changes so quickly; these ideas come up and then, as you said, like way back in the day…

Matt: (01:12:42) Yeah.

Mike: (01:12:43) It's fast moving, you know. Some people might find it a curse; I find it a gift that we know there are things that would be great to have happen, and we're not going to have to wait until after I've retired before they appear!

Matt: (01:12:56) It is exciting, and it's a challenge. It's a lot of fun. It's a fast-moving train and we're all on it. But we are also spending a lot of time thinking about making sure that when we put out a new version, a new release, our users aren't having to do a lot of re-engineering if they've made tools in-engine, and dedicating more support so they can make that jump with us from one version to the next, rather than being bound to the previous version just because of the legwork they would have to do to make the transition. That's the other side of the coin: when things are moving this fast and you really want to take advantage of all the latest and greatest stuff, we want to make sure our users can do that, and we're interested in helping them in that regard, because they do have access to the tools at a low level. It's not just an application where they get a license to push a bunch of buttons; they are customizing it and doing great things with it. So it's just part of the process, and it's very exciting.

Mike: (01:14:06) I guess I should have said this at the beginning, Matt, but if people are new to virtual production, I totally recommend The Pulse, because it was a little less geeky than this and had terrific input from the whole panel. But having said that, man, I've had so much fun talking to you today!

Matt: (01:14:22) It was a lot of fun for me too.
