- Thu Jan 29, 2009 4:40 pm
#290455
If you're rendering still images (for mattes), then Maxwell would be great (especially if you used Multilight somehow). However, texture baking has no bearing whatsoever on that pipeline. A matte is very different from a live environment in CG.
I think there's a bit of general misunderstanding about texture baking, so I'll attempt to clarify: Lightscape was a rendering engine that opened the door to texture baking back in the early 90's. This was because Lightscape was a radiosity renderer that rendered the model as a whole and was not view-dependent. In other words, every surface in the model received and reflected light using a simple 'adaptive' type algorithm that kept running until all the original light energy was used up. Once complete, you could navigate around the model (in OpenGL) and create images from any number of viewpoints with a raytraced pass.
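For anyone curious what that "run until the light energy is used up" loop looks like, here's a minimal sketch of progressive-refinement radiosity, the family of algorithm Lightscape used. The patch structure, form-factor function, and threshold here are all invented for illustration, not Lightscape's actual internals:

```python
def progressive_radiosity(patches, form_factor, threshold=1e-3):
    """Distribute light between patches until almost no unshot energy remains.

    patches: dict mapping a patch name to {'emission': float, 'reflectance': float}
    form_factor: function (shooter, receiver) -> fraction of shooter's energy
                 that geometrically reaches the receiver (a stand-in here)
    """
    # Every patch starts with its own emitted energy, all of it "unshot".
    radiosity = {p: patches[p]['emission'] for p in patches}
    unshot = dict(radiosity)

    # The adaptive loop: keep shooting until the leftover energy is negligible.
    while sum(unshot.values()) > threshold:
        # Shoot from the patch holding the most undistributed energy.
        shooter = max(unshot, key=unshot.get)
        energy = unshot[shooter]
        unshot[shooter] = 0.0
        for receiver in patches:
            if receiver == shooter:
                continue
            # Energy arriving at the receiver, scaled by how much it reflects.
            gathered = (energy * form_factor(shooter, receiver)
                        * patches[receiver]['reflectance'])
            radiosity[receiver] += gathered
            unshot[receiver] += gathered  # will be re-shot on a later pass
    return radiosity


# Toy two-patch scene: a light source and a 50%-reflective wall.
scene = {
    'light': {'emission': 1.0, 'reflectance': 0.0},
    'wall':  {'emission': 0.0, 'reflectance': 0.5},
}
result = progressive_radiosity(scene, lambda a, b: 0.5)
# The wall ends up holding lighting independent of any camera position -
# exactly the per-surface result that texture baking stores.
```

The key point for this discussion: the result is stored per surface, with no camera anywhere in the loop, which is why it can be baked and then viewed from any angle.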
Using Lightscape it was possible to output the model, with the lighting 'baked' on, into 3dsmax. In its raw form texture baking is relatively crude, as it does not include reflections, refractions, caustics and other view-dependent effects. These would be added in a second pass.
Now, unless I'm wrong (I stand to be corrected here), Maxwell is an unbiased rendering system, which means it is 'strictly' view-dependent. Natural phenomena are NOT controllable and simply get calculated regardless (including caustics, despite the option to turn them off - it doesn't work), so there is no logical way texture baking would work.
Texture baking is used primarily for real-time 3d (games, interactive models etc...) or for improving efficiency in animation pipelines.
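To make the real-time use case concrete, here's a hedged sketch of what baking boils down to: sampling view-independent lighting into a lightmap once, offline, so the runtime only has to do a cheap texture lookup. The `irradiance_at` function is a hypothetical stand-in for whatever solver produced the lighting:

```python
def bake_lightmap(width, height, irradiance_at):
    """Sample a lighting function into a 2D grid (the 'baked' texture).

    irradiance_at: function (u, v) -> brightness for UV coordinates in [0, 1].
    Returns a row-major list of rows, each a list of floats.
    """
    return [
        [irradiance_at(u / (width - 1), v / (height - 1))
         for u in range(width)]
        for v in range(height)
    ]


# Bake a simple left-to-right lighting gradient into a 4x4 lightmap.
lightmap = bake_lightmap(4, 4, lambda u, v: u)

# At runtime (in a game engine, say), shading is just a lookup and multiply -
# no light transport is computed per frame:
def shade(albedo, u, v, lightmap):
    h, w = len(lightmap), len(lightmap[0])
    return albedo * lightmap[int(v * (h - 1))][int(u * (w - 1))]
```

This is exactly why baking suits games and interactive walkthroughs: the expensive, view-independent part is paid once, and only the cheap view-dependent layer (if any) is left for the frame loop.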
Anyone looking for the level of control required for something like texture baking would be far more sensibly served by a biased solution like vray, whose IRMap is a perfect example of how texture baking could work.
check it out.....
