By f64cg
#341391
Sanity check guys.
Can you check this out and tell us if we're missing something or if our findings seem correct...

When dealing with IBL/HDRI lighting, the reflections of the environment remain constant on the model in the scene regardless of how far the virtual camera is from said model.

We've tested this with several of our first HDRI captures inside our office. When we use these as a reflection map for our car model, the desk fan and windows that were close to the camera appear giant when we pull back the virtual camera 3-7 meters from the car. It's hard to notice the effect with the majority of HDRIs available for purchase from the various vendors because, for the most part, they are exterior captures and there's nothing within 7 meters of the capture point (except the ground).

This would seem to indicate that one must be really careful capturing HDRI scenes where there is anything close to the camera. If there is anything close to the camera during scene capture, the reflections in the model will only be correct if the virtual camera is as close to the model as the HDRI capture cam was to that object.

I guess another way of looking at it would be that the HDRI capture cam is acting as a stand-in for your model. The relationship between the size/position of the camera and the scene it's capturing around it will determine how the reflections look on your object. In essence, our model ends up being the size of our capture SLR (i.e. it appears to be toy size) and anything that's relatively close to the cam appears huge in the reflection of the object....unless the virtual camera is moved in on the model to the approximate distance that the capture cam was from the 'relatively close' parts of the captured scene. In our HDRI there was a desk fan about one meter away from the capture cam...only when we bring our virtual camera to within one meter does the reflection size seem correct on our model.

We haven't been able to find any discussions about this issue anywhere on this or the other major CG or HDRI-capture forums....maybe it's just obvious information, but it would be surprising that all of the "How to capture HDRI" threads brush over this.

The only thread on this forum that we could find that mentions this issue (sort of) is

http://www.maxwellrender.com/forum/view ... eflections

and it's hard to see the effect because it's an exterior capture with nothing very close to the capture cam.

It would seem that if we need to manipulate the HDRI reflections on our model we would need to go the route of creating an actual sphere around our model and mapping the HDRI emitter to the sphere. Thus giving us control of the sphere size in relation to our model.
By JDHill
#341393
f64cg wrote:This would seem to indicate that one must be really careful capturing HDRI scenes where there is anything close to the camera. If there is anything close to the camera during scene capture, the reflections in the model will only be correct if the virtual camera is as close to the model as the HDRI capture cam was to that object.
Yes, when you capture an environment using a camera, the perspective contained therein is inherent to the capture, being based on the location of the camera. This is the reason why the theoretical IBL sphere always remains centered on the camera.
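A minimal sketch of that point, assuming an equirectangular (lat-long) environment map: the lookup depends only on the view direction, so scaling a view vector (i.e. looking at something nearer or farther along the same line of sight) can never select a different pixel.

```python
import math

def latlong_uv(direction):
    """Map a view direction to (u, v) coordinates in an equirectangular
    environment map. Only the direction matters; the length (distance)
    of the vector cancels out when it is normalized."""
    x, y, z = direction
    length = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / length, y / length, z / length
    u = 0.5 + math.atan2(x, -z) / (2.0 * math.pi)  # azimuth -> horizontal
    v = math.acos(y) / math.pi                     # 0 at zenith, 1 at nadir
    return u, v

# The same direction, at any distance, samples the same pixel:
near = latlong_uv((1.0, 0.5, -2.0))
far = latlong_uv((10.0, 5.0, -20.0))  # same direction, 10x farther away
```

This is why the theoretical IBL sphere behaves as if centered on the camera at any radius: there is simply no distance term anywhere in the mapping.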
f64cg wrote:It would seem that if we need to manipulate the HDRI reflections on our model we would need to go the route of creating an actual sphere around our model and mapping the HDRI emitter to the sphere. Thus giving us control of the sphere size in relation to our model.
If you manually map your environment to a sphere and then proceed to locate that sphere arbitrarily, the perspective contained in the image will no longer correlate with the perspective seen by the camera with which you are shooting. You may be okay with this, depending on the particular circumstance, but ultimately, it will not be a correct perspective match -- the foreground objects (in fact, everything in the environment map) you are concerned about will appear distorted. Taken to the point of absurdity, imagine that you moved your manual IBL sphere so far as to put the camera outside of it; this would clearly result in a useless 'environment' mapping.
By f64cg
#341397
Thanks JD.

All of what you said makes sense.

So if we were wanting to get the truly correct proportions between our IBL-captured reflections and our model, say for this example a 30' long truck, we would need to capture images in a 30' arc/spherical/circular motion...as if the nodal point of the capture was the same size as the truck.
If we were putting the truck in our office here we would need to find the center point of where the truck would sit, then figure out the maximum distance from the center of the truck to the edge of truck and capture images in a circle around that point. Much different than moving the camera on our pano head around its nodal point....

So it's the proportion of the nodal point (capture camera size) to the environment it's capturing that would affect the reflections on the object more than anything.....right?

[Image: f64cg's diagram of the capture geometry, including a purple line marking the camera-to-object distance]
By JDHill
#341409
The diameter of the IBL sphere is irrelevant, so long as the sphere remains centered on the camera (obviously it must be large enough to encompass the objects in the scene, but this is taken care of for you automatically). From the camera's point of view, the sphere could be infinitely small or infinitely large; whatever the size, any given view vector from the camera eye will encounter the same pixel as mapped on the sphere. Therefore, where you say:

f64cg wrote:we would need to capture images in a 30' arc/spherical/circular motion
This makes very little sense when using a correctly centered IBL sphere; an arc subtending a given angle is only 30' long at one particular diameter. The proper way to think of it is that a given object will occupy a given angle of view, and this is naturally related to its distance from the camera. As such, the only relationship you should really be concerned with is the one denoted by your purple line -- if the object appears too close in your render, you can be sure that it was too close to the camera in the real world during the capture.
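To put hypothetical numbers on "a given object will occupy a given angle of view": the subtended angle depends only on the object's size and its distance from the camera, and that angle is what gets baked into the capture. (The 0.3 m fan size and the distances below are made-up figures for illustration.)

```python
import math

def angular_size_deg(size, distance):
    """Angle of view (in degrees) subtended by an object of a given
    width at a given distance from the camera."""
    return math.degrees(2.0 * math.atan(size / (2.0 * distance)))

# A hypothetical 0.3 m desk fan sitting 1 m from the capture camera:
fan = angular_size_deg(0.3, 1.0)      # ~17 degrees, frozen into the HDRI
# Had it been 5 m away at capture time, it would have subtended only:
fan_far = angular_size_deg(0.3, 5.0)  # ~3.4 degrees
```

So whether the fan "appears too close" in the render comes down entirely to the angle it subtended from the capture position, not to any property of the IBL sphere itself.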
By f64cg
#341469
Ok.
I think it's all starting to make sense.

I guess my question would be, how do we know if something is "too close to the camera in the real world during the capture"....?

It would seem that if we really want to be critical about matching reflections on our model, we must put the virtual camera the same distance from the model as the capture cam was from the closest object to it. As we pull the virtual camera farther away from the model, those reflections (which remain constant relative to camera position) begin to appear the incorrect size on the model.
By JDHill
#341515
f64cg wrote:As we pull the virtual camera farther away from the model, those reflections (which remain constant relative to camera position) begin to appear the incorrect size on the model.
Yes, but I do not think that it should be called incorrect. Consider: say that you accidentally had your thumb over part of the camera lens when you made the capture. The size of your thumb in the resulting image (and in the environment when it is used for IBL) will not change as you back the camera away from the model, and it should not -- in comparison to the diminishing size of the model, it will quite correctly seem to be getting larger.

That is, you cannot expect to shoot two different camera positions (i.e. in the render, with respect to the 3D model) with the same environment map and expect to obtain a correct result in both cases, since the environment map permanently encodes the relationship between your foreground objects and the camera at the time of capture. This is not usually a problem, since HDRs don't generally have much in the way of foreground objects, but for what you seem to want to do, you're going to have to pay attention to it -- you need to set the shot up in 3D in order to know where the foreground objects should be located for capture. If they are immovable, then you need to work the other way around and use the capture position to position your 3D camera.
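Working "the other way around" can be sketched as solving for distance: given the angle a foreground object should subtend in the planned 3D shot, you can compute how far from the capture camera it (or, if the object is immovable, the camera) needs to be. All sizes and angles below are assumed for illustration.

```python
import math

def distance_for_angle(size, angle_deg):
    """Distance at which an object of the given size subtends the given
    angle of view -- i.e. how far from the capture camera a foreground
    object should sit so the HDRI matches the planned 3D shot."""
    return size / (2.0 * math.tan(math.radians(angle_deg) / 2.0))

# If the 3D layout says a hypothetical 0.3 m fan should subtend about
# 5 degrees from the camera position, it should be captured at roughly:
d = distance_for_angle(0.3, 5.0)  # ~3.4 m
```

This is just the angular-size relation inverted, so placing the object at the computed distance reproduces exactly the angle the 3D shot calls for.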