All posts related to V2
By sacslacker
#329603
Personally, I'm just happy we are going to have a fast preview engine in the near future. Maxwell v2 has been quite awesome!
By Voidmonster
#329608
juan wrote:(After siggraph my doctor told me to avoid gpu-cpu discussions for a while so I tried to not extend my answer too much, with zero success)
To find out if avoiding gpu-cpu discussions is right for you, please consult with your healthcare professional. Or some random stranger on the internet.

Complications of avoiding gpu-cpu discussions can include but are not limited to: a general sense of well-being, a desire to wake up in the morning, less coffee drinking, fewer aneurysms, what the Germans call ungötterweltschmerz, a profusion of kittens/puppies, a cleaner house, increased energy, fewer internet posts, excessive salivation, one-too-many tapas, delicious sangria and occasional controllable bowel syndrome.
By max3d
#329624
juan wrote:Hi,

Thanks for your input max3d :)
max3d wrote:I still don't know why the GPU route has not been taken. I do realize that the development time for a full CUDA implementation would be huge, so this could well be the only feasible solution for this year. Nothing wrong with that, but does that mean there is a fundamental reason Next Limit would ignore multi-core programming?
We have not revealed any decision about whether we are moving to GPU or not; we never make announcements regarding medium- or long-term strategy. The only thing we have said is that we will release an interactive engine very soon, and it is CPU-based, because under the current circumstances we think it is the best way to go. The reasons are already explained on our website (http://www.maxwellrender.com/pdf/Maxwel ... w_Info.pdf)

I attended many talks about raytracing at Siggraph and unfortunately many of them were sponsored by hardware manufacturers. Some were pure advertisement and a waste of time, except that you could sit and rest for a while, something priceless at such an exhausting event. And I am not only talking about Nvidia and Intel here; it's a general trend at these kinds of events. (Anyway, occlusion calculations are not that difficult at all; their complexity is orders of magnitude lower than that of an unbiased raytracer, and I am talking here about a general unbiased path tracer, not about Maxwell, which goes far beyond that.)
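
To give a rough feel for that gap, here is a toy sketch in Python (nothing to do with Maxwell's internals, just a made-up one-sphere-over-a-plane scene): the occlusion estimate fires a single visibility ray per sample, while even the most basic unbiased diffuse path tracer has to carry a whole random walk of bounces per sample, and that is before spectra, real materials or camera simulation enter the picture.

Code:
# Toy comparison only: a made-up scene, not any real renderer's algorithm.
import math, random

SPHERE_C = (0.0, 0.0, 0.6)   # one sphere resting above the ground plane z = 0
SPHERE_R = 0.5
SKY = 1.0                    # constant white sky, the only light source
ALBEDO = 0.7                 # diffuse reflectance of both surfaces

def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def add(a, b): return (a[0]+b[0], a[1]+b[1], a[2]+b[2])
def mul(a, s): return (a[0]*s, a[1]*s, a[2]*s)
def norm(a):
    l = math.sqrt(dot(a, a)); return (a[0]/l, a[1]/l, a[2]/l)

def hit_sphere(o, d):
    """Distance along o + t*d to the sphere, or None."""
    oc = (o[0]-SPHERE_C[0], o[1]-SPHERE_C[1], o[2]-SPHERE_C[2])
    b, c = dot(oc, d), dot(oc, oc) - SPHERE_R*SPHERE_R
    disc = b*b - c
    if disc < 0: return None
    t = -b - math.sqrt(disc)
    return t if t > 1e-4 else None

def hit_plane(o, d):
    """Distance along o + t*d to the plane z = 0, or None."""
    if abs(d[2]) < 1e-6: return None
    t = -o[2] / d[2]
    return t if t > 1e-4 else None

def cosine_sample(n):
    """Cosine-weighted random direction in the hemisphere around normal n."""
    r1, r2 = random.random(), random.random()
    phi, r = 2*math.pi*r1, math.sqrt(r2)
    x, y, z = r*math.cos(phi), r*math.sin(phi), math.sqrt(max(0.0, 1.0-r2))
    up = (1.0, 0.0, 0.0) if abs(n[2]) > 0.9 else (0.0, 0.0, 1.0)
    t = norm((up[1]*n[2]-up[2]*n[1], up[2]*n[0]-up[0]*n[2], up[0]*n[1]-up[1]*n[0]))
    b = (n[1]*t[2]-n[2]*t[1], n[2]*t[0]-n[0]*t[2], n[0]*t[1]-n[1]*t[0])
    return norm(add(add(mul(t, x), mul(b, y)), mul(n, z)))

def ambient_occlusion(p, n, samples=256):
    """Occlusion: ONE binary visibility query per sample, no recursion."""
    open_sky = sum(1 for _ in range(samples)
                   if hit_sphere(p, cosine_sample(n)) is None)
    return open_sky / samples

def radiance(o, d):
    """Unbiased diffuse path tracing: a whole random walk per sample."""
    ts, tp = hit_sphere(o, d), hit_plane(o, d)
    if ts is None and tp is None:
        return SKY                                  # escaped to the sky
    if random.random() > ALBEDO:
        return 0.0                                  # Russian roulette: path absorbed
    if ts is not None and (tp is None or ts < tp):  # hit the sphere
        p = add(o, mul(d, ts))
        n = norm((p[0]-SPHERE_C[0], p[1]-SPHERE_C[1], p[2]-SPHERE_C[2]))
    else:                                           # hit the plane
        p, n = add(o, mul(d, tp)), (0.0, 0.0, 1.0)
    return radiance(p, cosine_sample(n))            # keep bouncing

if __name__ == "__main__":
    p, n = (0.3, 0.0, 0.0), (0.0, 0.0, 1.0)         # a shading point on the plane
    print("sky visibility (AO):", ambient_occlusion(p, n))
    est = sum(radiance(p, cosine_sample(n)) for _ in range(2000)) / 2000
    print("reflected radiance :", ALBEDO * est)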

Regarding who is more informed, this is a small world. Most of us are on the same mailing lists, attend the same events, and afterwards we go to the same places for dinner, we have drinks together... maybe Brad knows as much as a chief scientist. (I love how that sounds; I wish there were more scientists in CG, as in the good old days... now there are more sponsors than scientists :) )
max3d wrote:If what you need is a camera and a studio then Maxwell is of course an excellent program. I would never criticize them for doing what their intention was: building an unbiased renderer. I just wanted to warn you that this puts severe limits on what's available and why VFX directors will never use it.
Just a minor note here: Maxwell has been used in VFX since the beginning, and the number of studios moving to it is growing a lot, especially since v2 was released. Of course, as with any other product, we do not pretend it is used everywhere for every purpose; I just wanted to point out this fact.

Juan

(After siggraph my doctor told me to avoid gpu-cpu discussions for a while so I tried to not extend my answer too much, with zero success)
Hi Juan,

I didn't quote your complete answer as it was quite long. I did read it though, and I fully understand what you are saying. On most issues I would make the same choice if I had to release something today. Whether users of renderers should be aware of the technology behind them is something I personally have a different opinion about, but that's me. I love painters who actually study light, develop their own paints, etc. Yes, that's very technical, but if your living depends on using a specific technology it would be smart to learn about it. That the average architect who occasionally needs a rendering doesn't care is of course normal. There is a sharp distinction between these two.

I differ on some of the technical issues regarding CUDA programming you mention, but that would turn this thread into a programmers' thread, which is really outside the scope of this topic, and even I don't think CG artists should bother with branch prediction, stalls, etc. Nice for a private discussion or a different forum. You're right that it's a small world. I missed this year's Siggraph due to a summer flu, but like you I know just about everyone involved in programming 3D renderers, as well as the VFX directors. It's indeed a pity that the old days, when technology was freely shared by scientists and programmers, are gone, but there are still interesting papers, and over a beer we still talk like we did in the past.

The paper I quoted from was an example of people still publishing papers when they could just as well keep their research private. There was one sentence in that article which helped me enormously: I had been wondering about it for twenty years and these guys just did the experiment for me. Finally I know! So there still is a helpful community out there.

Regarding Maxwell as a VFX tool, I disagree. Most VFX guys want full control over their output: if you can't bias the renderer, you can't achieve what you want. There are some small studios with different ideas, but the big players all want that. It got even worse as ILM and Pixar no longer want to work with outside solutions. Neither do they sell the fruits of their research to a larger audience (except for Renderman of course). They just gave up and do everything in house.

You see similar developments at other large studios. It's in-house only now, or maybe a good deal with mental images and Nvidia as research partners. This development threatens to stall the core of this industry. There is no longer a bi-directional flow of information between the large content producers and the much smaller development companies. NL won't be used (just like almost all other renderers), so they can't learn and develop based on animation production and VFX needs.

Anyway, I didn't blame NL for not jumping on a GPU solution when you already have a well-developed CPU-based renderer. Why do it now, when it would be something for a new release anyway? I just want to mention two things:

- Mental images produced Iray and they certify that you will get identical solutions from the GPU real-time solution and from their classic CPU offline renderer (a minimal sketch of what such a parity check amounts to is below). So even if NL currently finds CUDA programming a constraint, it's clearly a solvable problem with current software and hardware.
- Intel promised in 2005 that within five years we would have 80 cores (remember Larrabee). Well, it's 2010 now and there are no Intel processors with even a fraction of those 80 cores.

So yes, Intel told everyone to go for the SIMD model and offered optimization help, but they didn't deliver the goods, while a $189 card offers you 320 cores!
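
Coming back to the Iray point above: "identical" presumably means identical up to floating-point noise rather than bit-for-bit, and checking that is trivial once both backends hand you their float buffers. A minimal sketch in Python (the buffers and the tolerance here are stand-ins, not anything from Iray or Maxwell):

Code:
# Illustrative parity check between a CPU-rendered and a GPU-rendered float buffer.
import numpy as np

def buffers_match(cpu_img: np.ndarray, gpu_img: np.ndarray,
                  rel_tol: float = 1e-4) -> bool:
    """True if the two buffers agree within rel_tol plus a tiny absolute slack."""
    if cpu_img.shape != gpu_img.shape:
        return False
    return np.allclose(cpu_img, gpu_img, rtol=rel_tol, atol=1e-6)

if __name__ == "__main__":
    # Stand-in data; in reality these would come from the two render backends.
    cpu = np.random.rand(4, 4, 3).astype(np.float32)
    gpu = cpu + np.float32(1e-6) * np.random.randn(4, 4, 3).astype(np.float32)
    print("buffers match:", buffers_match(cpu, gpu))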

(And I still wonder if Paul really wrote the current Modo renderer; I suspect it is a recycled Steve Worley product, so no wonder they can't make the jump to GPU. And that is the big difference between NL and Luxology: Brad brags while NL keeps quietly working. I prefer the latter, and between the lines you gave enough information to understand the future of NL. Good enough for me.)

Max

who just finds this industry debate interesting and sometimes amusing, but it's easy if you still get a substantial part of your income from an extremely fast, easy-to-implement-in-hardware, really 'outdated' scanline renderer ;)
By juan
#329646
Hi Max, et al.,
max3d wrote:Whether users of renderers should be aware of the technology behind them is something I personally have a different opinion about, but that's me. I love painters who actually study light, develop their own paints, etc. Yes, that's very technical, but if your living depends on using a specific technology it would be smart to learn about it.
Of course. I did not mean that, and I am sorry if it came across that way. I meant that the marketing hype around all these matters is polluting the technical discussion under the hood and confusing users, which is what we are trying to prevent. But the more you know about the technology behind the tools you use, the better, in CG or any other area.
max3d wrote:Regarding Maxwell as a VFX tool, I disagree. Most VFX guys want full control over their output: if you can't bias the renderer, you can't achieve what you want. There are some small studios with different ideas, but the big players all want that.
I just wanted to reply to that sentence ("VFX directors will never use it") by pointing out that Maxwell has already been used in VFX, and the trend, based on our sales, is that it is becoming more and more widespread in this market. Why? There is no question that sometimes people need to control things that, by the nature of Maxwell, cannot be adjusted; but most of the time people are forced to tweak parameters just to make a render look right, and that comes with Maxwell for free. For example, on Benjamin Button several people struggled for weeks trying to match rendered backgrounds to real footage; then someone did a quick test with Maxwell and they got what they wanted in a few hours (which brings up another interesting discussion about render times, but that's completely off-topic..). Of course sometimes you need to put in non-realistic stuff, and you have many tools for doing that instead of Maxwell, but not everything in VFX is about introducing bias, and more and more people are finding that out. Traditional cinematographers love Maxwell because now they can understand much better how real and virtual images will compose together. Our aim is that lighting TDs can focus only on lighting problems, without needing to be computer gurus who know that increasing the resolution of the irradiance cache (or whatever other parameter that has no meaning in real optics) will make their system run out of memory. Again, I repeat that there is room for both biased and unbiased technologies, but do not underestimate how useful the latter can be.
max3d wrote:It got even worse as ILM and Pixar no longer want to work with outside solutions. Neither do they sell the fruits of their research to a larger audience (except for Renderman of course). They just gave up and do everything in house.

That is what people on the outside tend to believe, but things are very different behind the scenes, and believe me, after more than a decade of experience with RealFlow we have seen enough of this (although of course we can't say a word when we are not allowed to ;) ). Ferrari, BMW, Red Bull: all of them claim to use their own CFD, but they all also use third-party software. A VFX studio, no matter its size (bear in mind that they are usually not such big companies in the big picture; most of them, even if they look enormous, are actually pretty small), is in general not interested in permanently recruiting a bunch of fluid experts; they can rarely compete with groups of people who have been working 24/7 in that area for many years. Of course there are exceptions, and a few people in some studios have very good know-how in very specific areas. But it is much cooler to say "oh, we did that 5-second effect by coding a specific tool for a month; three thousand developers wrote 2 million lines of code just for that" than to say "we bought X and had a couple of guys working on that shot". The truth usually lies somewhere between those two extremes, but in the end this is marketing too, and big VFX studios have to sell and justify their prices.

Juan
By tom
#329649
yolk wrote:GPU rendering is so last year. I want my sound card to render my images :lol:
No! A sound card should instead contribute to the GPU/CPU discussion. Louder! :D
By zdeno
#329656
sandykoufax wrote: I only have an on-board sound chip. :lol:

Great for you, Sandykoufax! It would render much faster without the PCI-to-northbridge delay.

So what about the network card? Is it an on-board chip too?
By Richard
#329806
Hmmmm? The sound card idea would be cool; I can even imagine using the ML channels as a sound mixer!

"Tonight your RenderJockey is RJ drippy dick!"
By jurX
#329863
That rocks the shit!!!Yeeeesssss!!!
By max3d
#329916
juan wrote:Hi Max, et al.,
max3d wrote:Whether users of renderers should be aware of the technology behind them is something I personally have a different opinion about, but that's me. I love painters who actually study light, develop their own paints, etc. Yes, that's very technical, but if your living depends on using a specific technology it would be smart to learn about it.
Of course. I did not mean that, and I am sorry if it came across that way. I meant that the marketing hype around all these matters is polluting the technical discussion under the hood and confusing users, which is what we are trying to prevent. But the more you know about the technology behind the tools you use, the better, in CG or any other area.
max3d wrote:Regarding Maxwell as a VFX tool, I disagree. Most VFX guys want full control over their output: if you can't bias the renderer, you can't achieve what you want. There are some small studios with different ideas, but the big players all want that.
I just wanted to reply to that sentence ("VFX directors will never use it") by pointing out that Maxwell has already been used in VFX, and the trend, based on our sales, is that it is becoming more and more widespread in this market. Why? There is no question that sometimes people need to control things that, by the nature of Maxwell, cannot be adjusted; but most of the time people are forced to tweak parameters just to make a render look right, and that comes with Maxwell for free. For example, on Benjamin Button several people struggled for weeks trying to match rendered backgrounds to real footage; then someone did a quick test with Maxwell and they got what they wanted in a few hours (which brings up another interesting discussion about render times, but that's completely off-topic..). Of course sometimes you need to put in non-realistic stuff, and you have many tools for doing that instead of Maxwell, but not everything in VFX is about introducing bias, and more and more people are finding that out. Traditional cinematographers love Maxwell because now they can understand much better how real and virtual images will compose together. Our aim is that lighting TDs can focus only on lighting problems, without needing to be computer gurus who know that increasing the resolution of the irradiance cache (or whatever other parameter that has no meaning in real optics) will make their system run out of memory. Again, I repeat that there is room for both biased and unbiased technologies, but do not underestimate how useful the latter can be.
max3d wrote:It got even worse as ILM and Pixar no longer want to work with outside solutions. Neither do they sell the fruits of their research to a larger audience (except for Renderman of course). They just gave up and do everything in house.

That is what people on the outside tend to believe, but things are very different behind the scenes, and believe me, after more than a decade of experience with RealFlow we have seen enough of this (although of course we can't say a word when we are not allowed to ;) ). Ferrari, BMW, Red Bull: all of them claim to use their own CFD, but they all also use third-party software. A VFX studio, no matter its size (bear in mind that they are usually not such big companies in the big picture; most of them, even if they look enormous, are actually pretty small), is in general not interested in permanently recruiting a bunch of fluid experts; they can rarely compete with groups of people who have been working 24/7 in that area for many years. Of course there are exceptions, and a few people in some studios have very good know-how in very specific areas. But it is much cooler to say "oh, we did that 5-second effect by coding a specific tool for a month; three thousand developers wrote 2 million lines of code just for that" than to say "we bought X and had a couple of guys working on that shot". The truth usually lies somewhere between those two extremes, but in the end this is marketing too, and big VFX studios have to sell and justify their prices.

Juan
Hi Juan,

Agree with your point about marketing polluting the issue.

VFX I disagree about. I'm privy to a lot of the work done there, and the larger studios need custom shaders, and they do produce them. Matching backdrops or mattes to 3D output is a difficult area, and realism is often an obstacle there. I will believe there are some small studios who work with Maxwell (or other unbiased renderers), but it will be rare. Besides, you want to take your renderers apart and control them with pieces of homegrown software; as far as I know Maxwell doesn't have these facilities, which you really need in a production pipeline. And yes, there is also the issue of render times for a full movie: the resolution and number of frames make Maxwell simply unusable for that purpose.
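
Just to put rough numbers on that last point (every figure below is a made-up assumption, not a benchmark of Maxwell or anything else):

Code:
# Back-of-the-envelope film render budget; all numbers are assumptions for illustration.
frames = 24 * 60 * 100            # roughly 100 minutes of film at 24 fps
hours_per_frame = 4.0             # assumed time for one final-quality 2K frame
nodes = 500                       # assumed render-farm size
farm_days = frames * hours_per_frame / nodes / 24
print(f"{frames} frames at {hours_per_frame} h/frame on {nodes} nodes: ~{farm_days:.0f} days")

Even with those fairly generous assumptions the farm is tied up for well over a month, and every extra hour per frame scales that linearly.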

That studios sometimes like to concentrate on a few small interfacing blocks and boast about their 'in-house solution', while they did no more than develop a few custom shaders and fit Renderman into their pipeline, I know. But believe me, there is a lot of custom work done inside studios. The fluid world will be different; I know a lot less about that world, except that I'm an avid Formula One fan and I have researched the software a few times as a possible basis for water and smoke simulations. What I believe is that Nicholas Wirth, the ultimate CFD advocate, doesn't use NL software. That some others are still way behind and only got onto the CFD wagon thanks to the severe testing restrictions is true; they didn't develop all of that in house, and they adopted it reluctantly. However, I think that by 2010/2011 the big players will have their own software in place, although often based on existing core functionality.

Anyway, CFD for car design is outside the scope of computer graphics, so it's more about fire, smoke and water. I know for a fact that ILM developed a custom solution for that. What is in the scope of this discussion are, for instance, Scanline VFX and ILM, who both create special effects. Both of them moved to GPUs for fire, smoke and similar effects and use their own in-house tools, which you can't buy. The Harry Potter scene shows the new fire tool by ILM, which was used by Weta with great success. Unfortunately for NL they didn't make their simulation software available, as they used a novel approach, but again GPU instead of CPU.

I think it's safe to say that all large studios with their own render farms are in the process of converting to GPU farms. Their new tools are already based on them and the older stuff is being ported. The savings in time, space and energy per result are enormous. Actually, I would expect RealFlow and your newer product to move to GPU even before Maxwell does :)