brodie_geers wrote:
max3d wrote:
Richard wrote:
Wow Max, if GPU rendering of Maxwell using Maxwell by Maxwell happens, things look like they'll rip in 2013 (assuming the earth doesn't split in 2012). Like Brodes, I thought you may have been suggesting 2013 until MR went GPU, so I got confused again!
I can half imagine though, considering the price of Tesla cards, that Maxwell (wouldn't you think they would have picked a different name there) may well be ex'y!

It will be built in, so the price of Maxwell Render will no longer be important. $200 will buy you Maxwell, the polaroid version. Remember, you heard it here first. Another design win for Maxwell.

Sounds cool. If we shake our monitor will it render faster?
-brodie
Shake was discontinued after being taken over by Apple, so that will no longer work. Nothing Real, actually.
To be serious again. I have a large collection of preview renderers: some on CPU, some on GPU, and nothing real (pun intended) on hybrid, as the CPU is just too slow in those implementations. From what I have seen, most of them seem to be at least usable on current hardware and a huge improvement in the CG workflow. How useful they are will, however, depend on the actual users. I do most of my testing with low- to mid-range poly models and simple textures, and I try to avoid custom materials, as they would make it more difficult to compare speeds and usability.
However, the users of Maxwell will only be interested in their own previews, and they will use high-poly models that are texture-rich and use the complex Maxwell materials. My premises in the testing work I do are:
- preview rendering will be most useful in complex cases, as every experienced user will already know how his model will behave under the lighting conditions and materials he often uses, etc.
- ergo you will want to concentrate on the spots where you expect problems and sort those out, to be certain you're on the right track
- convergence to the final render output is not important while setting up the scene, as it's not a big deal to push the final render button instead of just letting the preview run, BUT you want to be able to preview the end result, otherwise it makes no sense. So a preview renderer in which you can enable/disable certain features would be fine to speed things up, since the user knows what he's concentrating on, but it should be completely clear and predictable to the user what the difference will be. Noise and fireflies, for instance, are in my view acceptable, as it's easy to forecast how the image will look without the noise. So the preview renderer could even produce more noise than the final renderer would in the same time, if that's what it takes to get the more or less instant feedback you are looking for (a rough sketch of what I mean follows below).
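Just to illustrate the kind of preview loop I mean in that last point, here is a minimal Python sketch. This is my own toy code, not the API of Maxwell, Thea or any other renderer mentioned here: progressive sample accumulation under a time budget, with feature toggles you can switch off while you concentrate on something else. Fewer samples simply means more noise, which by my premise is acceptable as long as the feedback stays fast and the difference is predictable.

```python
# Toy sketch of a progressive preview loop with feature toggles.
# Purely illustrative; the shading function is a stand-in, not a real path tracer.
import random
import time

def shade(x, y, features):
    """Stand-in for a real shading/path-tracing call; returns a noisy estimate."""
    base = 0.5 + 0.5 * random.random()          # fake direct lighting
    if features.get("caustics", True):
        base += 0.2 * random.random()           # fake caustic contribution
    if features.get("sss", True):
        base += 0.1 * random.random()           # fake subsurface contribution
    return base

def preview_render(width, height, time_budget_s, features):
    """Accumulate samples until the time budget runs out, then return the image."""
    accum = [[0.0] * width for _ in range(height)]
    samples = 0
    start = time.time()
    while time.time() - start < time_budget_s:
        for y in range(height):
            for x in range(width):
                accum[y][x] += shade(x, y, features)
        samples += 1
    # Average the accumulated samples; fewer samples = more noise, faster feedback.
    image = [[px / max(samples, 1) for px in row] for row in accum]
    return image, samples

if __name__ == "__main__":
    # Disable the expensive feature I am not concentrating on right now.
    img, n = preview_render(64, 64, time_budget_s=2.0,
                            features={"caustics": False, "sss": True})
    print(f"preview finished with {n} samples per pixel")
```

The point of the toggles is exactly the predictability I mentioned: if you switch caustics off yourself, you know what will be missing from the preview, so the extra noise or the missing effect never surprises you.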
Now these are my premises, and that's why I tested Thea somewhere above with the kind of model and material I did, using the time frames 15 sec, 2 m 35 s and long... as measuring points.
I would be very interested in what others expect from a preview renderer and whether they agree with my premises or have completely different demands. Maybe they really want fully interactive handling of complex scenes, or are willing to sit for more than 2.5 minutes to get proper feedback, etc. I'm all ears.
Max.
* I didn't update the Thea post as I'm still waiting for support to clarify things. I know how I can get caustics etc. with different rendering technologies, but the model I use is something they don't like, and I don't want to revert to the simpler case as provided by them. I consider the intersection of glass with another material just an example of where you would want to use your preview. I have made loads of new renderings and used the suggested settings, but the results, although different, are not very satisfactory. I still hope for further clarification about some issues in the produced images which are still there after hours of cooking. I always try to be very supportive of new start-ups, so I will wait a bit longer before updating the images.