Half Life wrote:24 logical cores = 12 physical cores + hyper-threading.
I'll be more than happy to reassess the situation when and if an open standard like OpenCL becomes the norm in GPU computing... right now I remain less than enthused about Nvidia's policies, and therefore CUDA -- which is just my personal preference.
When I talk about pushing limits I'm talking primarily about material creation, as there are some very serious limitations to material creation (and therefore realism) right now... it's the difference between the visual complexity of a real hardwood floor and a laminate "hardwood floor" -- or a real weave on cloth (complete with fuzz) instead of a facsimile... etc. This is the real limit to realism right now... everything has the surface quality of realism, assuming you don't look too hard or too close. The other thing that would help realism considerably is to expand beyond "3D" (meaning traditional 3-point perspective) into a more realistic mode like curvilinear perspective (5-point perspective).
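(To illustrate the difference, here is a quick toy sketch of my own in Python -- nothing to do with Maxwell's camera code: a standard rectilinear/pinhole projection keeps straight lines straight, while an equidistant "fisheye" mapping, one simple flavor of curvilinear perspective, pulls points in more and more the further they sit from the view axis.)

[code]
import math

def rectilinear_project(x, y, z, focal=1.0):
    # Standard pinhole / rectilinear projection: straight lines stay straight.
    # Assumes the camera looks down +z; returns normalized image coordinates.
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal * x / z, focal * y / z)

def curvilinear_project(x, y, z, focal=1.0):
    # Equidistant "fisheye" mapping, one simple form of curvilinear
    # perspective: image radius is proportional to the angle off the view
    # axis, so straight lines away from the centre bow into curves.
    r = math.hypot(x, y)
    theta = math.atan2(r, z)   # angle between the ray and the view axis
    if r == 0.0:
        return (0.0, 0.0)
    scale = focal * theta / r
    return (scale * x, scale * y)

# The same off-axis point lands much further from the image centre under
# the rectilinear mapping than under the curvilinear one:
p = (2.0, 0.0, 1.0)
print(rectilinear_project(*p))   # (2.0, 0.0)
print(curvilinear_project(*p))   # roughly (1.107, 0.0)
[/code]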
SSS is key, as is volumetric lighting in general.
All of that together requires tremendous amounts of processing complexity, and I see GPU-only solutions as taking CG away from the places I want to go and toward a simplistic realm that is fast but without much depth of realism.
I have no problem with the camera paradigm -- I am after all just a 2D artist at heart.
My purpose for Maxwell (and 3D in general) is to act as reference in my paintings in much the same way as I would using a model and props and taking photos in the real world... in some instances it is easier to do in 3D, and in some it is easier to do in the real world. The expense of hiring models and building props (not to mention studio photography equipment) makes it far more attractive to try to get most of what I need from 3D. Maxwell suits my purposes exceedingly well and I find their company philosophy to be very attractive.
Best,
Jason.
Ah, then I misunderstood you regarding the artistic aspects. If what you need is a camera and a studio, then Maxwell is of course an excellent program. I would never criticize them for doing what they set out to do: building an unbiased renderer. I just wanted to warn you that this puts severe limits on what's available, and it's why VFX directors will never use it.
Regarding material realism: yes, there is a lot to gain there, but shouldn't the solution be to model in more detail? Real hardwood versus laminate is quite easy and was solved a long time ago, but cloth-like materials are elusive. You can emulate them reasonably well, but for realism you need polygons. In my view that should not be the task of the renderer but of the modeler; no automation can substitute for it. There are excellent examples of people modeling to extreme precision, which delivers what you want: full realism under all kinds of lighting conditions. For arch/viz work this will never be done. For movies it is done, and some amateurs take great pride in modeling in extreme detail.
The bottom line is that, just as you avoid occlusion tweaks, cut-off path tracing and the thousands of other very advanced tricks the biased renderers use, you will also need to approach modeling in this way if you want real realism.
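To give a feel for what modeling that detail means in practice, here is a toy sketch of my own in Python -- not tied to any particular renderer or cloth tool, and all the parameter names are invented for illustration -- that grows cloth fuzz as actual fibre polylines instead of faking it in a shader. Even a tiny patch quickly needs tens of thousands of extra points, which is exactly the price of doing it 'for real':

[code]
import random

def scatter_fuzz(patch_size=0.1, fibre_count=50000, fibre_length=0.002, lean=0.3):
    # Toy example: grow short fibre polylines on a flat patch of cloth.
    # Each fibre is real geometry (a 3-point polyline), not a shader effect.
    # Units are metres; the parameter names are made up for illustration.
    fibres = []
    for _ in range(fibre_count):
        rx = random.uniform(0.0, patch_size)              # random root on the patch
        ry = random.uniform(0.0, patch_size)              # (z = 0 is the cloth surface)
        dx = random.uniform(-lean, lean) * fibre_length   # random lean so the
        dy = random.uniform(-lean, lean) * fibre_length   # fuzz is not combed flat
        root = (rx, ry, 0.0)
        mid = (rx + 0.5 * dx, ry + 0.5 * dy, 0.5 * fibre_length)
        tip = (rx + dx, ry + dy, fibre_length)
        fibres.append([root, mid, tip])
    return fibres

fuzz = scatter_fuzz()
print(len(fuzz), "fibres,", sum(len(f) for f in fuzz), "control points")
[/code]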

So there go all those handy features everyone demands from commercial renderers to avoid modeling detail. I don't think that approach will be very popular, but the good news is that you can model in as much detail as you want and simply not use the box of tricks while rendering. That will get you much closer to the realism you're looking for.
In real rendering situations this will create one new problem, however: a lack of internal precision. Large scenes with sub-centimeter details will lead to errors. It would go too far to fully explain the solutions here, but afaik most or all commercially available renderers can't handle this. The fix is not too difficult but will slow down any render, so it's not hugely popular with developers; it could well be offered as a switch in the renderer, though.
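To make the precision problem concrete, here is a small numerical sketch of my own using Python/NumPy (not taken from any renderer's code): at 32-bit precision a vertex ten kilometres from the origin can barely resolve a millimetre offset, while 64-bit or re-centred coordinates handle it easily -- which is roughly the speed-versus-precision trade-off I mean:

[code]
import numpy as np

# A vertex about 10 km from the scene origin, offset by 1 mm (0.001 m).
far_position = 10000.0
tiny_offset = 0.001

# In 32-bit floats the spacing between representable numbers near 10 000
# is already about a millimetre, so the offset is only coarsely resolved:
a32 = np.float32(far_position)
b32 = np.float32(far_position + tiny_offset)
print("float32 difference:      ", float(b32 - a32))    # not a clean 0.001
print("float32 spacing at 10 km:", float(np.spacing(a32)))

# In 64-bit floats the same offset survives with room to spare:
a64 = np.float64(far_position)
b64 = np.float64(far_position + tiny_offset)
print("float64 difference:      ", b64 - a64)           # ~0.001

# A common workaround is to re-centre geometry around a local origin
# before rendering, so coordinates stay small and precision stays high:
local32 = np.float32((far_position + tiny_offset) - far_position)
print("re-centred offset:       ", float(local32))      # ~0.001 again
[/code]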
Five-point (actually six-point) perspective is a much-discussed thing, but I don't believe it would be a huge advantage; this is more a psychological issue than a technical one. If your eye distorts the image, it will distort a 2D representation of reality in the same way. It hasn't been hugely popular in the arts, and there have hardly even been complaints about the single-point perspective that renderers (not artists) use to draw scenes. I assume you are aware of this issue.
In my next post I'll get back to GPU vs CPU, as that is a very interesting discussion but a sidetrack of a sidetrack. I don't mind, since discussions about realistic previews are closely tied to realism itself, but the moderators and readers may think otherwise. If they find it of interest, I'm happy to participate in the 'realism' discussion, as it's one of my favorite subjects.