- Mon Oct 08, 2012 12:11 pm
#361412
Hi All,
I have a few questions regarding the nature of unbiased render engines, and what the future possibilities are.
Although I do not really know how these rendering engines work internally, I have formed my own theories based on the functionality of Maxwell Render and Fryrender.
One of the biggest problems with these engines is how incredibly slow they can be, but I have always been impressed by their render quality and by how intuitive they are.
In my opinion, one of the most impressive features of these engines is the Multilight function and Fryrender's Swap: http://vimeo.com/2406473
It seems clear that this is all possible because light is purely additive (white light is just a mixture of other wavelengths), and it makes me wonder how much it is possible to pre-calculate. For example:
If I made a complex scene with various emitters and made every surface diffuse grey with 50% roughness, could I not, in theory, change almost anything in the scene afterwards based on the data the engine has already calculated?
It's fair to say that things like reflected/refracted caustics would be out of the question, because there would never be enough samples focused in that direction to produce clear caustics, but I can't see why, with a high enough sample level, far more than just the lighting could be changed in post.
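To make the additivity point concrete, here is a minimal sketch (not how Maxwell or Fryrender actually store things internally, just an illustration of the principle): if a renderer keeps one contribution image per emitter, each rendered at unit intensity, then any new combination of emitter intensities is just a weighted sum of those buffers, with no re-rendering. The buffer names and `relight` helper are hypothetical.

```python
import numpy as np

# Hypothetical per-emitter contribution buffers, one HDR image per
# light, each rendered at unit intensity. Small random stand-ins here.
rng = np.random.default_rng(0)
light_a = rng.random((4, 4, 3))   # contribution of emitter A
light_b = rng.random((4, 4, 3))   # contribution of emitter B

def relight(intensities, buffers):
    """Recombine per-emitter buffers with new intensities in post.

    Because light transport is linear, scaling an emitter by k scales
    its contribution image by k, and the final image is the sum of
    all contributions -- the Multilight idea in one line.
    """
    return sum(k * buf for k, buf in zip(intensities, buffers))

# Original render: both emitters at full power.
original = relight([1.0, 1.0], [light_a, light_b])

# "Post" relight: dim A to 20%, boost B to 3x -- no re-render needed.
tweaked = relight([0.2, 3.0], [light_a, light_b])
```

Changing geometry or materials is a different matter, since that alters the transport itself rather than just scaling it, which is presumably why Multilight stops at emitter intensity and colour.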
This came to mind again recently when I was trying to figure out how to create an animation of a room illuminated by a large video wall. I can't see why the whole animation could not be done in one render, since only the colour and intensity of the screen are changing.
(I am still trying to work out whether there is a way I could 'code' the video screen material so that these changes could be applied in post.)
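The video-wall case can be framed as the same relighting trick: split the screen into patches, render the room's response to each patch emitting unit white light once, then compose each animation frame as a per-channel weighted sum of those responses. This is only a sketch under the assumption that each colour channel scales independently (true for a simple RGB pipeline, not for full spectral rendering); the patch count, buffer shapes, and `room_frame` helper are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: the video wall is split into 3 patches, and the
# room is rendered once per patch with that patch emitting unit white
# light. patch_buffers[i] is the room's response to patch i, stored
# per RGB channel (linear light, so channels scale independently).
n_patches = 3
patch_buffers = rng.random((n_patches, 4, 4, 3))

def room_frame(screen_colors, buffers):
    """Compose one animation frame from one video frame's patch colours.

    screen_colors: (n_patches, 3) RGB intensity of each wall patch.
    Each patch's unit-response buffer is scaled per-channel by that
    frame's colour and the results are summed -- pure additive mixing.
    """
    return np.einsum('pc,phwc->hwc', screen_colors, buffers)

# Two frames of "video": the whole wall goes from red to blue.
frame_red  = room_frame(np.tile([1.0, 0.0, 0.0], (n_patches, 1)), patch_buffers)
frame_blue = room_frame(np.tile([0.0, 0.0, 1.0], (n_patches, 1)), patch_buffers)
```

The up-front cost is one render per patch rather than one per video frame, so it only pays off when the animation is much longer than the patch count, and finer patches mean sharper shadows from the screen at the price of more renders.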
This is something I have always been curious about, so please share your thoughts!
Jules