The link I gave to an explanation of the MVP matrix still works for me, but you can also find plenty of information elsewhere about matrix transforms and perspective in 3D rendering.
Matrices are the basic math tool we use to transform (e.g. move, rotate, and scale) objects in a 3D engine. These few videos (here, here, and here) explain how it all works.
What we refer to as the MVP is the model-view-projection matrix, which converts every object in your scene into a perspective-correct shape on your screen for drawing / rasterizing (through an operation called "perspective division"). Because when you think about it, every 3D object has to be "projected" onto a 2D plane, which is your screen, before you can "see" it.
When the perspective division occurs, the distance from the view position to the vertex position of your object is kept around and can be interpreted as "depth". Edit: in fact, it seems you already get this depth information when applying the projection matrix, before the perspective division. The view-space Z value is stored in the 4th component (w) of your position vector, which becomes your clip-space depth (and I assume this is the value that is compared and written against the depth buffer).
This fourth component is then used for the perspective division when you project your object onto normalized device coordinates, squashing your perspective view frustum into the flat rectangle that is your screen.
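The projection-then-division step above can be sketched in a few lines of Python. This is a minimal illustration with assumed values (an OpenGL-style projection matrix and a made-up view-space point), not UE4 or OpenGL API code; the point is just to show that clip-space w ends up holding the view-space distance, and that dividing by it yields normalized device coordinates:

```python
import math

def perspective(fov_y_deg, aspect, near, far):
    """Build a standard OpenGL-style 4x4 perspective projection matrix."""
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), (2 * far * near) / (near - far)],
        [0.0, 0.0, -1.0, 0.0],  # this row copies -z_view into clip-space w
    ]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# A point 5 units in front of the camera (OpenGL view space looks down -Z).
view_pos = [1.0, 2.0, -5.0, 1.0]
clip = mat_vec(perspective(90.0, 16 / 9, 0.1, 100.0), view_pos)

# clip[3] (the 4th component, w) now holds the view-space distance.
print(clip[3])  # 5.0 -- the "depth" available before any division

# Perspective division produces normalized device coordinates in [-1, 1].
ndc = [c / clip[3] for c in clip[:3]]
print(ndc)
```

Note how the depth information (w = 5.0) exists as soon as the projection matrix is applied, matching the "Edit" above.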
At least that seems to be how it works in OpenGL. Maybe someone who works in rendering/graphics can add details or correct me if I'm wrong!
Nothing internal to the renderer has to be changed for the "Sobel" operation to work. What I mean is that you only need to "sample" information from neighboring pixels for the process to work.
Suppose I want to check whether the red pixel is the edge of something, where the blue pixels in the image belong to a triangle that is near the camera and the white pixels are the skybox background:
I sample each neighboring pixel's value like this:
For example, if the sampled value is the pixel's depth, I can compare the depth of the current pixel to the depth of its neighbors. This is done with something like:
Result = (currentPixelDepth - leftPixelDepth) + (currentPixelDepth - rightPixelDepth) + (currentPixelDepth - topPixelDepth) + (currentPixelDepth - bottomPixelDepth)
The result is the sum of the depth differences against each neighboring pixel: the greater the depth difference, the greater the result. So you get a value that says "how different is the depth here", which is basically an edge-detection output. In my case, the depth difference between the blue triangle and the white background is very big, so I get a white output at this pixel's coordinate (don't forget to clamp / saturate!).
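Here is the same formula as runnable Python, using a small hypothetical depth buffer (the values are made up for illustration: the triangle sits at depth 5, the skybox at depth 1000):

```python
def edge_value(depth, x, y):
    """Sum of depth differences against the 4 direct neighbors,
    then saturated (clamped) to the 0..1 range."""
    current = depth[y][x]
    result = ((current - depth[y][x - 1]) +   # left
              (current - depth[y][x + 1]) +   # right
              (current - depth[y - 1][x]) +   # top
              (current - depth[y + 1][x]))    # bottom
    return max(0.0, min(1.0, abs(result)))    # don't forget to clamp/saturate!

depth_buffer = [
    [1000.0, 1000.0, 1000.0],
    [1000.0,    5.0, 1000.0],  # center pixel is on the nearby triangle
    [1000.0, 1000.0, 1000.0],
]
print(edge_value(depth_buffer, 1, 1))  # huge difference -> saturates to 1.0

flat = [[5.0] * 3 for _ in range(3)]
print(edge_value(flat, 1, 1))          # no difference -> 0.0 (no edge)
```

I added an `abs()` so the result is white regardless of which side of the edge the current pixel is on; the raw formula above can go negative.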
How do you sample a neighboring pixel?
Let's say your shader can get the coordinate of the pixel it is currently processing with this simple node:
Then, if you simply Add any number to U or V, you can get the coordinate of another pixel while your shader processes the current one, right?
But how do you get the pixel that is your exact neighbor? Adding 1 unit to the right here gives you the pixel 1 cm away, since UE4 works in centimeters. We don't want that. We want the pixel that is exactly 1 pixel away.
A great way to do this is to calculate the size of a pixel in centimeters!
This is possible with this node:
Then you simply multiply any 2D vector by it to scale that vector to a single pixel's length. So the end result of my example graph for sampling the neighboring pixel looks like this:
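The idea behind those nodes can be sketched in Python (with an assumed 1920x1080 render target; in the material the node gives you the real size). Dividing 1 by the resolution gives the size of one pixel in UV space, and multiplying a pixel offset like (1, 0) by that scales it to exactly one pixel:

```python
# Hypothetical render-target size, hard-coded for illustration.
width, height = 1920, 1080

# Size of one pixel in UV space (UVs run 0..1 across the screen).
pixel_size = (1.0 / width, 1.0 / height)

def neighbor_uv(uv, offset_px):
    """Scale a pixel offset like (1, 0) down to UV units and add it."""
    return (uv[0] + offset_px[0] * pixel_size[0],
            uv[1] + offset_px[1] * pixel_size[1])

center = (0.5, 0.5)
right = neighbor_uv(center, (1, 0))  # exactly one pixel to the right
print(right)
```

Passing the resulting UV to the scene-texture sample then reads the neighbor instead of the current pixel.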
Read again what I posted in this thread; it should answer everything. At the moment you are sampling the depth of neighboring pixels (good), but as I mentioned during my adventure, that alone is not enough to recreate the full effect we see in the game…
As @max_rudakov mentioned earlier in that thread, an authentic Sobel operation is a little more advanced because it uses so-called kernels in a two-pass calculation. Check out this video for a detailed breakdown of a real Sobel.
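For reference, here is a minimal Python sketch of a genuine Sobel operator (not UE4 node code, and the sample image is made up): two 3x3 kernels estimate the horizontal and vertical gradients, and the edge strength is the magnitude of the two combined:

```python
import math

# The two standard Sobel kernels: horizontal and vertical gradient.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

def sobel(image, x, y):
    """Apply both Sobel kernels at (x, y) and return the gradient magnitude."""
    gx = gy = 0.0
    for ky in range(3):
        for kx in range(3):
            sample = image[y + ky - 1][x + kx - 1]
            gx += SOBEL_X[ky][kx] * sample
            gy += SOBEL_Y[ky][kx] * sample
    return math.hypot(gx, gy)

# A hard vertical edge: left half dark, right half bright.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
print(sobel(img, 1, 1))  # 4.0 -- strong response on the edge

flat = [[1, 1, 1]] * 3
print(sobel(flat, 1, 1))  # 0.0 -- no response in a flat region
```

In a shader you would evaluate this per pixel, sampling the 8 neighbors with the pixel-size offset from the previous post.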