End date: May 13
By Ernesto
#366546
Mihai,

I would add two things to your explanation:

1) In order to match the PERSPECTIVE of the backplate, you also have to match the FOV; otherwise the rendered objects will look more or less deformed depending on the difference between the FOV of the backplate and the FOV of the Maxwell camera (see the sketch after this list).

2) The backplate in TIFF format is an 8-bit image, but to be used as a background it should be converted to HDR or MXI format. At least that is the limitation in Maya 2012.
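
As an illustration of point 1, here is a minimal sketch of the focal length / FOV relationship (the sensor width and focal lengths are illustrative values, not data from Maground):

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view (degrees) for a simple pinhole camera model."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

# On a full-frame (36 mm wide) sensor:
print(horizontal_fov(85.0))   # ~23.9 degrees (telephoto, "compressed" perspective)
print(horizontal_fov(24.0))   # ~73.7 degrees (wide angle)
```

If the Maxwell camera's FOV differs from the one the backplate was shot with, the rendered object will be distorted relative to the photo, which is the deformation described above.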

Ernesto
By Mihai
#366547
Matching the camera's view to the backplate is something else, separate from first rotating the HDR so it matches what the photo "sees", which is what mashium explained.

There are numerous tutorials on the web on how to match your camera to a backplate, and we will try to get some data from Maground for easier matching. But it's not imperative; probably just by eyeballing it in this case you get a pretty good match.

I was referring to this:
It makes little sense to start with a wonderful spherical HDR, convert it into an 8-bit image, and then reconvert it to fake 32 bits again....
I do not see the point!
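
The quoted point can be illustrated with a tiny numpy sketch (illustrative values, not the actual competition files): once the HDR values are clipped to 8 bits, converting back to 32-bit float cannot recover the lost highlights.

```python
import numpy as np

# A fake "HDR" pixel row with values above 1.0 (sun, bright sky, etc.)
hdr = np.array([0.2, 0.8, 1.0, 4.0, 16.0], dtype=np.float32)

# Down-convert to 8 bits: everything above 1.0 clips to 255
low = np.clip(hdr * 255.0, 0, 255).astype(np.uint8)

# "Fake" 32 bits again: the clipped highlights are gone for good
fake_hdr = low.astype(np.float32) / 255.0
print(fake_hdr)  # [0.2 0.8 1.0 1.0 1.0] -- the 4.0 and 16.0 are unrecoverable
```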
By Ernesto
#366548
I see Mihai,

From what you explained, the backplate seems to be intended as a reference so that everybody sees the same frame.
I agree that this is good for comparing all the renderings.

But I do not see the point of losing the luminance information in the backplate.

If the backplate had been provided in HDRI format, it would be just as useful as a reference so that everybody could frame the same landscape, but additionally we would still have the realistic luminance information and could work freely with whatever exposure each user needs according to the 3D model's lighting conditions.

Ernesto
By Ernesto
#366550
A strange difference.

I have just realized that the HDRI panorama has nothing to do with the backplate.
While calculating the FOV and camera height for the backplate, I discovered that the two images are not the same.
The backplate is a TELEPHOTO image that was taken from a greater distance than the HDRI panorama.
As a result of these differences, the PERSPECTIVE of the former is really COMPRESSED, while in the HDRI it is more normal.
Another difference is that the HDRI image by Maground seems to be taken from human height, but the backplate was taken from a lower viewpoint.
I think that in order to get an accurate perspective, some extra data is needed.
And if the two images are not the same, we would need the data that matches the image that will be seen in the final composition, meaning the backplate.

Ernesto
By Mihai
#366551
I'll try to explain again:

The purpose of this exercise is to integrate a 3D object into a photo, as if you had taken a photo of that object in that environment, right?

You don't need an HDR photo because you are not trying to change the exposure of this photo; you are trying to match it, using a spherical HDR taken at the same location as the light source for your 3D object.

If your 3D object now renders too dark or too bright compared to what it would look like in that environment, change your camera settings and/or the intensity settings of the illumination, reflection and refraction slots. But generally it is recommended to start with the camera settings. If your camera is set to f/22, ISO 50 and a shutter speed of 1/2000 s, then unless the photo was taken on an extremely bright, snow-covered mountain, your rendered object will probably appear too dark.
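
As a rough sanity check of that example (a sketch using the standard photographic exposure-value formula, nothing Maxwell-specific):

```python
import math

def exposure_value(f_number, shutter_seconds, iso):
    """EV at the given ISO: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number ** 2 / shutter_seconds) - math.log2(iso / 100.0)

# f/22, 1/2000 s, ISO 50 -> roughly EV 21
print(exposure_value(22.0, 1.0 / 2000.0, 50.0))

# Bright frontal sunlight is about EV 15 (the "sunny 16" rule),
# so these settings are ~6 stops under for a normal sunny scene.
```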

This has nothing to do with changing the background photo exposure. The background photo needs to stay at the provided exposure.

Is it clearer now?
By Mihai
#366552
Ernesto wrote:A strange difference.

I have just realized that the HDRI panorama has nothing to do with the backplate.
While calculating the FOV and camera height for the backplate, I discovered that the two images are not the same.
The backplate is a TELEPHOTO image that was taken from a greater distance than the HDRI panorama.
As a result of these differences, the PERSPECTIVE of the former is really COMPRESSED, while in the HDRI it is more normal.
Another difference is that the HDRI image by Maground seems to be taken from human height, but the backplate was taken from a lower viewpoint.
I think that in order to get an accurate perspective, some extra data is needed.
And if the two images are not the same, we would need the data that matches the image that will be seen in the final composition, meaning the backplate.

Ernesto
You assume too much again and dig yourself into a hole...if you don't understand, read about it first. If you still don't understand, ask. Don't conclude.

An HDR image is not taken from the same position as your backplate. For example:

I see a statue.
I take a photo of that statue from 20 m away with a telephoto lens.
I want to integrate a 3D object as if it were standing right next to the statue.
I have to take the spherical HDR image by placing the camera used to capture it right next to the statue, otherwise the lighting won't match.

So this:
The backplate is a TELEPHOTO image that was taken from a greater distance than the HDRI panorama.
is normal / irrelevant.
Another difference is that the HDRI image by Maground seems to be taken from human height, but the backplate was taken from a lower viewpoint.
It doesn't matter. It doesn't matter where the photo was taken from (meaning, where I was standing when I took the photo of that statue, with my 3D object next to it). What matters is that my camera points in the direction of the spot from which the HDR spherical image was recorded, so that the lighting matches.
By Mihai
#366554
Say you've set up your tripod camera to take a spherical HDR of this beach, to use later for lighting:

Image

Your idea is that you want to put a CG car on this beach and take a photo of it, as if it were there on the beach.

You can go anywhere you want to take this photo (this "backplate"). You can go in the sand, you can go in the water, you can go up on those hills in the jungle, you can go up in the air, you can go on those distant islands and take a photo of the beach with a super telephoto lens.

The only thing that matters is that you point your camera at the spot where the tripod that took the HDR was standing, because that's the "origin" of the lighting info contained in the HDR.

This is one of the beauties and constraints of HDR lighting: the observer's point of view doesn't matter (since the HDR is spherical), but the virtual object itself cannot move (too far) from the original position of the HDR image, or the lighting won't match anymore.
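
As a toy illustration of that constraint (made-up coordinates, nothing from this competition), the backplate camera can stand anywhere as long as it looks toward the HDR capture point:

```python
import math

# Made-up ground-plane coordinates (x, y) in metres
hdr_capture_pos = (0.0, 0.0)   # where the spherical-HDR tripod stood
camera_pos = (-120.0, 45.0)    # wherever you choose to shoot the backplate from

# Heading the backplate camera must face so that the CG object placed
# near the HDR capture point receives matching light
dx = hdr_capture_pos[0] - camera_pos[0]
dy = hdr_capture_pos[1] - camera_pos[1]
print(f"Point the camera at heading {math.degrees(math.atan2(dy, dx)):.1f} degrees")
```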

Makes sense?
By m-Que
#366604
Image
Sorry for being a bit off-topic, but just curious:
...
EDIT:
Nevermind, got it.
Just forgot to uncheck the box... Sorry
Last edited by m-Que on Sat Mar 30, 2013 2:50 am, edited 1 time in total.
By Ernesto
#366605
m-Que, that is not off-topic!
That was the main concern stated here: http://www.maxwellrender.com/forum/view ... 32&t=40247
Perhaps I am not the best person to answer this question, but since we have been thinking about this problem for some time, I will tell you what I am assuming now, based on all the help and descriptions other users have given here.

There are basically two ways of recreating this composition.

a) The so-called Dirty way, where you place a real ground plane in the scene with a Camera UV, and the same background map projected onto it in a way that makes it look invisible.
http://www.maxwellrender.com/forum/view ... 28#p366526

b) The Flexible way: rendering shadow and alpha channels and creating the final composition in photo-editing software (a minimal sketch of this composite follows the links below).
As this is the most popular way, there are many tutorials:
http://www.maxwellrender.com/forum/view ... 47#p366425
http://support.nextlimit.com/display/ma ... ow+channel
http://hdri-spherical.com/tutorial-usin ... art-2.html
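
For reference, the core of the Flexible method's composite can be sketched in a few lines of numpy (the array names and the shadow-channel convention are assumptions for illustration, not the exact output of Maxwell's channels):

```python
import numpy as np

def composite(bg, fg, alpha, shadow):
    """Composite a render over a backplate using alpha and shadow channels.

    bg, fg: HxWx3 float arrays in [0, 1] (backplate photo, object render).
    alpha:  HxW float mask, 1 where the object is.
    shadow: HxW float mask, 1 = no shadow, 0 = full shadow (assumed convention).
    """
    a = alpha[..., None]          # broadcast masks over the RGB channels
    s = shadow[..., None]
    shadowed_bg = bg * s          # darken the plate where the object casts shadow
    return fg * a + shadowed_bg * (1.0 - a)  # standard "over" onto the darkened plate
```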

From what has been written in this competition subforum, most users seem to prefer the second way.
But it seems to me that the first way could solve your problem. I was thinking of a 3D model with light sources.
Something like this:
Image
or this:
Image
Something like these samples would interact with the ground plane, like the idea you mentioned. That is easily solved using the first method, but I really cannot imagine how to achieve it using the second method.

Unfortunately, I am not succeeding in recreating the Dirty way method, which is supposedly the easiest way!
I really do not know if I am doing it the wrong way, or if I am dealing with a bug in Maya 2012 / Maxwell 2.7.
I wish we had a complete tutorial on this subject.

Ernesto
By dariolanza
#366657
Hello Ernesto,

In another thread I've explained this topic:

This is not a bug; you simply have to subdivide the ground plane mesh.

This is not caused by a malfunction in Maxwell, but by the UV coordinates being transferred to Maxwell differently than to other renderers (e.g. the Maya software renderer).

So, whenever you need camera projection mapping, subdivide the ground plane mesh, especially close to the camera. The closer the vertices, the better. This way you will get a perfect match of your texture (see the sketch below).
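
For example, in Maya such a densely subdivided ground plane can be created with a couple of lines of Python (the size and subdivision counts below are arbitrary, not values from this thread):

```python
import maya.cmds as cmds

# A ground plane with a dense vertex grid, so the camera-projected UVs
# have enough samples to interpolate accurately near the camera
plane = cmds.polyPlane(
    name='groundPlane',
    width=100, height=100,
    subdivisionsX=80, subdivisionsY=80,
)[0]

# For an existing plane you could instead add divisions, e.g.:
# cmds.polySmooth(plane, divisions=2, keepBorder=True)
```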

In addition, keep in mind that the HDR image should be used for illumination, reflections and refractions, but only as a reference for the background. The final composition should be done over a clean planar plate (rather than over the HDRI itself) to get the best resolution and background image quality.

No VFX studio composites any render over the HDRI image, for this reason.

Greetings

Dario Lanza
By dariolanza
#366681
Hi,

This is in fact the ultimate purpose of these competitions: to force all users to struggle with a challenge that is outside their day-to-day procedure, and to take a step forward in their skills and knowledge.

Once you have finished the competition (preferably using the Clean/Flexible method rather than the Dirty one), you will have pushed yourself and defined your own workflow for creating cool, realistic compositions.

Keep going, it is looking fine!

Dario Lanza