All posts related to V2
By max3d
#329310
Half Life wrote:I've said most of what I'd like to say about CPU vs GPU -- in this and the previous thread on the topic... my endpoint in the discussion is that I'm interested in hybrid, not just one or the other, but if I have to choose, it's CPU as of now.

I have my reasons for believing that modelling materials is not the long term solution but since that is outside the scope of the thread I'll save it for another day... just bear in mind I'm thinking on the scope of years and decades (not months) in regards to use of the technology. In my opinion any technology that puts a barrier between how light works in the real world and the output is going down the wrong path. Computational power will catch up, it always does.

Programming isn't my thing -- I'm an artist... and my interest in this discussion stops at the practical consideration of having my tools perform the way I expect. I leave the "how" to people who enjoy and are skilled at that.

Best,
Jason.
Hi Jason,

I try to avoid being too technical, as I am neither an artist nor a programmer but someone who sits in the middle. I know how layers of paint work, what the difference is between painting wet versus dry, or fresco versus egg tempera, and why you would choose one; but I also know how to achieve the visual effect in programming terms. Among many challenging jobs, I have served as technical advisor on recreating a Renaissance painting in the digital realm, so I try to translate between the two worlds.

Part of my job is explaining to artists why things cannot be done the way they imagine, but how the same effect could be achieved with a different method; the other part is developing algorithms that implement reality the way artists see it and want to manipulate it. If I failed in my effort to walk the fine line between the two worlds, I'm sorry.

Max.

P.S. I noticed your last reply. Who would be more in the know and agree with you? I would find it interesting to know some names, as I consider myself an insider (I don't think you know my background or 'status' in this industry).

I cited a paper describing a technology actually used in producing one of the best-selling movies of this decade. Nothing hypothetical: the result has been seen by many millions of people. That's not near-term potential but last year's result.

Furthermore, I have nothing against Maxwell, nor am I restricted to only that renderer. Through my work I have access to most of the renderers in production or in early stages of development. I realize you won't be able to afford PantaRay or RenderMan, but have a look at mental images' iray. That is normal technology within reach of individual artists, and it uses the GPU to the maximum.

http://www.mentalimages.com/index.php?id=634

No need for a bitter tone. For me this is just an open debate.
By Mihai
#329311
max3d wrote:
a) that there is no doubt that the speed advantage of a GPU solution will be from 20 up to 50 times (more figures in the article). Whatever that nice looking chap in that silly movie wanted to tell his audience of non-experts is not acknowledged by scientists with a much better understanding of rendering real production stuff.

b) that quite complex code can be implemented on GPU cores.
It seems Nvidia's marketing is far more effective than Intel's :P If your main scientist/mathematician/programming guru comes to you and says "this is the best we can do with our renderer on a GPU", you would tell him: just go back and work harder, because obviously you're too dumb; Nvidia says miracles can happen. Good luck running that company. I have next to zero programming knowledge, but just thinking rationally (!= marketing): wouldn't Luxology or any other company jump on this with all their resources if it was clear to them from the beginning that 'there is no doubt' their renderer would suddenly be 20-50 times faster (and 100 times faster if you buy 3 inexpensive cards)? Wouldn't they want to be the first company to offer this? Yet we now have about 6-8 GPU renderers, and all of them have too many limitations to be considered full production render engines. Why is that? Why can't anybody understand Nvidia's simple instructions and clear examples?

Quite complex code? How much more vaguely can you put it? I think the video from Luxology was very interesting and an honest account of what they (and Next Limit and others) currently think about GPU rendering. They at least took the time to investigate, knowing displacement won't work, irradiance caching won't work, etc. Of course they would welcome any technology that speeds up rendering, but their product cannot currently be accommodated by GPUs, and they decided that it has too many valuable features that would need to be sacrificed, making a worse product. What I really don't understand is why you think you're in such a clear position to disagree with them. Can you back up your views with more than Nvidia's marketing and some SIGGRAPH presentations on a specific piece of code for a specific situation?
max3d wrote:is not acknowledged by scientists with a much better understanding of rendering real production stuff.
I'm sorry "our" scientists want to kill your dream, but they obviously missed that PowerPoint presentation from Nvidia..... :lol:
The tone isn't bitter, just a bit arrogant...
By Half Life
#329312
There is a big difference between knowing how to paint in egg tempera and knowing the exact method of creating the brush you paint with, why it works or doesn't, or the molecular structure of the egg yolk or pigment you apply... and of course those are not the point anyway -- the point is the art that results.

Most creatives couldn't care less about how to make the tools; they just want tools that fit their needs at a reasonable cost.

I'm tired of reading through the massive amount of words you type to get to anything meaningful to me -- the gist of which is that you do not agree with Maxwell or Modo in their decisions about how to handle their products... and your posts are thinly veiled attacks on them. You are correct that I do not know your status in the biz, but that in itself is very telling... combined with the fact that you are slumming it here with us regular-type users, it tells me all I need to know about who and what you are.

The contents of several similar threads will illuminate the names you seek... Try the search function.

Best,
Jason.
By max3d
#329316
Mihai wrote:The tone isn't bitter, just a bit arrogant...
Hi Mihai,

I'm sorry if it comes across as arrogant. I have said several times that I completely understand NL's decision to postpone a move to GPU, and I have no problem at all with that decision. I asked something about the future, without demanding a road map or implying any criticism. I did ridicule the video from Luxology, but I gave solid arguments, which can be countered if people think otherwise.

I have also put forward a lot of arguments for why the limitation of GPUs suggested by Luxology isn't there. I haven't seen Nvidia's PowerPoint presentations, but I do read all the academic work in CG as far as it concerns rendering technology, and I have done so for twenty years or so. That also helps in knowing what the future of rendering will be, because the road from academic paper to actual working product is very long. The basis for Maxwell was laid down ages ago. The writers of renderers read all this material and then find clever ways to optimize the described general solution. And they have to turn it into a working product instead of a tech demo. That all takes years and years. But it also means the future of rendering is not a mystery; it can be predicted reasonably well.

Would you agree that the chief scientist of Weta is possibly a bit better informed than someone at Luxology? The SIGGRAPH ACM paper I quoted from is a peer-reviewed contribution, not some marketing blurb. If you have read my comments you can see that I hate marketing blurb and usually just ignore it. I actually don't understand your post. If you review that video with my notes at hand, can you point out where I made a mistake in my interpretation?

I'm open to criticism. You post to learn, and boasting about reputations never works, so I never do that. What would be the use if I told you with whom I work? People would either not believe it, or ignore it and criticize those people. The internet is a place where you can discuss ideas without any restriction. I have often learned a lot from people who were working on their theses in CG and were half my age.

Max.
By Bubbaloo
#329318
max3d wrote:I'm sorry if it comes across as arrogant. I have said several times that I completely understand NL's decision to postpone a move to GPU and have no problem at all with that decision. I asked something about the future, without demanding a road map or implying any criticism.
max3d wrote:Anyway, nothing new in that video, nothing Next Limit didn't already know, just as they will know and understand what I just wrote. So for me it's back to the fundamental question of why Maxwell doesn't move to GPUs. Being fair and saying that they don't have a complete solution yet, and that CPU-GPU traffic and memory management is a difficult subject, is completely acceptable of course. My question was not about the current release, nor a critique of the new preview facility, but just about the future direction and whether there are unforeseen barriers I'm overlooking.
By max3d
#329320
Half Life wrote:...You are correct in that I do not know your status in the biz but that in itself is very telling... combined with the fact that you are slumming it up here with us regular type users, tells me all I need to know about who and what you are.
I don't know what you mean by "slumming it up here with us regular type users". I posted here as early as 2005, which seems to be a few years before you joined; I discussed real light versus artistic needs in very old topics on this forum; I tried to shed light on your need for better, physically correct materials (e.g. in http://www.maxwellrender.com/forum/view ... 918#p98918, which by the way are finally correctly implemented in v2.0, although I didn't test them), etc. It's not that I come out of the blue onto the Maxwell forum.

Most of my posts have been constructive attempts to help people with the technical complexities behind a real physically correct renderer. So I turn up again when there are interesting developments; what's wrong with that? I post on more forums when I have time to spare, but Maxwell was one of the first of its kind and I still have a warm feeling for the user group that was active in those days.

Max.
By JTB
#329357
Well, instead of trying to show us you are clever (we know you all are, because you chose Maxwell Render :D )...
it would be better to ask for a new video! Since the one and only sample was shown by mistake, I hope we see another one soon, showing something new!
By zdeno
#329364
Bubbaloo wrote:
:lol: Are you sure?
+1

There is no chance it was an accident. No one puts his secrets on YouTube.
By Voidmonster
#329365
The thing I'd really like to know, and didn't have a chance to try out, is how the preview reacts to objects being moved around. Does it need to re-voxelize the scene? (And my stupid brain has forgotten whether or not it needed a voxel pass to begin with, though I think it did...)

I can think of a couple of possibilities for sidestepping that 'problem'.
By m-Que
#329369
Voidmonster wrote:The thing I'd really like to know, and didn't have a chance to try out, is how does the preview react to objects being moved around? Does it need to re-voxelize the scene? (and my stupid brain has forgotten whether or not it needed to do a voxel pass to begin with, though I think it did)...

I can think of a couple of possibilities for sidestepping that 'problem'.
You can see it in that sample video in a robot scene.
By Voidmonster
#329380
m-Que wrote:You can see it in that sample video in a robot scene.
Noted. My stupid brain wins.

So, it does need to voxelize. I wonder if the re-voxelization could be avoided when moving objects and lights (or at least greatly sped up) if Maxwell used voxel transforms similar to what ZBrush does. My understanding is that those algorithms come from a public paper by, I think, Mitsubishi, and were originally designed for medical imaging.
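To make the idea concrete, here is a toy sketch of one way re-voxelization could be sidestepped; this is purely my illustration of the concept, not how Maxwell or ZBrush actually work. Each object keeps its voxels in object space plus a transform, so moving the object updates only the transform, and world-space occupancy is computed lazily when queried.

```python
# Toy sketch: per-object voxel grids with a stored transform, so moving
# an object does NOT require re-voxelizing the scene. (Hypothetical
# design for illustration; translation-only transform for simplicity.)

class VoxelObject:
    def __init__(self, local_voxels):
        # local_voxels: (i, j, k) integer coordinates in object space
        self.local_voxels = set(local_voxels)
        self.translation = (0, 0, 0)

    def move(self, dx, dy, dz):
        # O(1): only the transform changes, the voxel data is untouched
        tx, ty, tz = self.translation
        self.translation = (tx + dx, ty + dy, tz + dz)

    def world_voxels(self):
        # Transform applied lazily, when the renderer queries occupancy
        tx, ty, tz = self.translation
        return {(i + tx, j + ty, k + tz) for (i, j, k) in self.local_voxels}

def scene_occupancy(objects):
    # Union of all objects' world-space voxels
    occupied = set()
    for obj in objects:
        occupied |= obj.world_voxels()
    return occupied

cube = VoxelObject([(i, j, k) for i in range(2)
                    for j in range(2) for k in range(2)])
cube.move(10, 0, 0)
assert (10, 0, 0) in scene_occupancy([cube])
```

A real renderer would use rotations as well and a hierarchical structure rather than flat sets, but the principle is the same: re-voxelize only when an object deforms, not when it moves rigidly.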
By renbry
#329553
Luxology just released a great video about CPU vs GPU rendering here:

http://www.cgchannel.com/2010/08/gpu-vs ... gle+Reader

A great watch with interesting results. Obviously they're comparing a greatly optimized renderer against a CUDA-based renderer currently in alpha development, but the points he makes are interesting nonetheless.

matt
By samsam
#329567
Look at this demo of Vray RT GPU:

http://shop.vray.info/siggraph-2010-cha ... demos.html

- the SIGGRAPH 2010 - Chaos Group V-Ray RT CPU/GPU Demo link

(incidentally running on 3 x GTX 480 cards.)

This video further supports the notion that GPU renderers can produce very high levels of realism at breathtaking speed.

It must be difficult for software rendering programmers to know where to focus their energies right now.

From an end user's perspective these changes bring both good and bad aspects: good in that rendering gets quicker and better; bad in that they make us question whether we should jump ship and buy/learn/integrate a new renderer.
By juan
#329602
Hi,

Thanks for your input max3d :)
max3d wrote:I still don't know why the GPU route has not been taken. I do realize that the development time for a full CUDA implementation would be huge, so this could well be the only feasible solution for this year. Nothing wrong with that, but does that mean there is a fundamental reason Next Limit would ignore multi-core programming?
We have not revealed any decision about whether we are moving to GPU or not; we never make announcements regarding mid-to-long-term strategies. The only thing we have said is that we will release an interactive engine very soon and it is CPU based, because under the current circumstances we think that is the best way to go. The reasons are already mentioned on our website (http://www.maxwellrender.com/pdf/Maxwel ... w_Info.pdf)

In that link we don't say we are going the CPU way; we just say that from our point of view it is not the time to force users to spend a lot of money *now* on hardware that might become obsolete very soon. Maybe we are working on a GPU engine, or maybe not; as I say, we don't talk about that. But it has nothing to do with "ignoring multicore programming". We have embraced multicore- and SIMD-friendly algorithms from the very beginning. Maxwell scales pretty well, almost linearly even with a very high number of cores (see our announcement here: http://www.maxwellrender.com/version2/workflow.html). Maybe one day all the renderers in the world will be GPU based, but I am sure it will not happen with today's hardware; we have to wait a bit (or more than a bit). We are not interested in releasing a product that supports only a small subset of the features Maxwell currently handles, using a material model with just a few static parameters, clipping the GI, skipping many indirect lighting effects such as caustics... we are Maxwell, and by definition we do our best to reach the highest quality in every little detail. And that is mainly why our current interactive engine is CPU based: we don't sacrifice anything supported by the final production engine.
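The near-linear scaling follows from the structure of the problem: pixels are independent, so a renderer can hand out image tiles to workers with no shared mutable state. A minimal sketch of that decomposition (toy shading function standing in for actual path tracing; not Maxwell's engine):

```python
# Tile-parallel rendering sketch: each tile is rendered independently,
# so the work distributes across workers with no locking or shared
# mutable state, which is why throughput scales with core count.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 64, 64, 16

def shade(x, y):
    # Stand-in for tracing one pixel; the real per-pixel work is
    # likewise independent of every other pixel.
    return (x * 31 + y * 17) % 256

def render_tile(tile_rect):
    x0, y0, x1, y1 = tile_rect
    return tile_rect, [[shade(x, y) for x in range(x0, x1)]
                       for y in range(y0, y1)]

def make_tiles():
    return [(x, y, x + TILE, y + TILE)
            for y in range(0, HEIGHT, TILE)
            for x in range(0, WIDTH, TILE)]

def render(workers):
    image = [[0] * WIDTH for _ in range(HEIGHT)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for (x0, y0, x1, y1), pixels in pool.map(render_tile, make_tiles()):
            for row, scan in enumerate(pixels):
                image[y0 + row][x0:x1] = scan
    return image

# Same image regardless of worker count: the decomposition is exact.
assert render(1) == render(8)
```

A production renderer would use one process or native thread per core for true parallelism; the point here is only that the tile decomposition has no inter-tile dependencies.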
max3d wrote:The biggest problem I see is that you need to develop an algorithm to smartly use the available memory per GPU.
The problem cannot be reduced to a memory-use issue, ignoring other critical aspects such as branch divergence, memory fetching, thread synchronization, cache misses, sharing issues, access patterns, context switches, and much more fun. Dress it with the lack of a mature standard and of good development, profiling, and instrumentation tools, and you get a nice picture of how things are right now. Besides this, the market is still hot and it would not be strange to see more dramatic movements soon.
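Branch handling is a concrete example of why the problem goes beyond memory. The toy model below (my own illustration, not anything from Maxwell or Nvidia) encodes the SIMT cost rule: when lanes in a warp take different branches, the warp serializes and pays for both branch bodies, which is exactly what happens when incoherent rays hit different materials.

```python
# Toy model of SIMT branch divergence. GPU warps execute in lockstep:
# if any lane takes the 'true' path and any lane takes the 'false'
# path, the warp steps through BOTH bodies, masking off inactive lanes.

WARP_SIZE = 32

def warp_cost(conditions, cost_if_true, cost_if_false):
    # conditions: one bool per lane in the warp
    any_true = any(conditions)
    any_false = not all(conditions)
    cost = 0
    if any_true:
        cost += cost_if_true    # warp executes the 'true' body
    if any_false:
        cost += cost_if_false   # ...and then the 'false' body
    return cost

# Coherent rays (e.g. all hitting the same material): one body executed.
coherent = [True] * WARP_SIZE
# Incoherent rays scattering across shaders: both bodies executed.
divergent = [i % 2 == 0 for i in range(WARP_SIZE)]

assert warp_cost(coherent, 10, 40) == 10
assert warp_cost(divergent, 10, 40) == 50  # divergence: pay for both
```

Renderers with deep, data-dependent shading trees hit this constantly, which is one reason a CPU design does not port one-for-one to a GPU.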
samsam wrote: It must be difficult for software rendering programmers to know where to focus their energies right now.
The problem is that it's not the developers but the users who don't know where to focus their energies, and the marketing hype around all these things is not helping at all. Unfortunately all this puts a lot of pressure on the final user to decide where to invest his money: he has to choose not only the tools he uses, but must also become a guru of the technologies that run under the hood, and from my point of view this is a pity. That's why we try to isolate users from this storm as much as possible, trying to deal with these difficulties ourselves and avoiding users being too tied to a specific hardware manufacturer, be it Intel, Nvidia, or a fan fabricator.
max3d wrote: ...
Would you agree that the chief scientist of WETA is possibly a bit more informed than someone at Luxology? The siggraph ACM paper I quoted from is a peer reviewed contribution, not some marketing blurb.
...
Source: PantaRay: Fast Ray-traced Occlusion Caching of Massive Scenes
This shows three things:...

I attended many talks about ray tracing at SIGGRAPH, and unfortunately many of them were sponsored by hardware manufacturers. Some of them were pure advertisement and a waste of time, except for the fact that you could sit and rest for a while, something priceless in such an exhausting event. And I am not only talking about Nvidia and Intel here; it's a general trend happening at these kinds of events. (Anyway, making occlusion calculations is not such a difficult thing at all; the complexity of that is orders of magnitude lower than an unbiased ray tracer, and I am talking here about a general unbiased path tracer, not about Maxwell, which is far beyond that...)
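Juan's complexity point can be made concrete with a back-of-the-envelope ray count (toy numbers, my own illustration, not figures from the PantaRay paper or from Maxwell): occlusion caching needs roughly one visibility query per sample, while an unbiased path tracer pays at every bounce of every sample.

```python
# Rough ray-count comparison: occlusion caching vs unbiased path tracing.
# Occlusion needs only a short "any-hit" visibility ray per sample, with
# no shading or recursion. A path tracer walks a multi-bounce path and
# casts continuation plus shadow rays at every path vertex.

def occlusion_rays(samples_per_pixel):
    # One visibility query per sample
    return samples_per_pixel

def path_tracing_rays(samples_per_pixel, avg_bounces,
                      shadow_rays_per_bounce=1):
    # Per sample: avg_bounces vertices, each casting the continuation
    # ray plus direct-light shadow rays
    return samples_per_pixel * avg_bounces * (1 + shadow_rays_per_bounce)

assert occlusion_rays(256) == 256
assert path_tracing_rays(256, avg_bounces=8) == 4096
```

And an unbiased engine typically needs far more samples per pixel to converge than an occlusion pass does, which multiplies the gap further; hence "orders of magnitude".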

Regarding who is better informed: this is a small world. Most of us are on the same mailing lists, attend the same events, and after them we go to the same places for dinner and have drinks together... maybe Brad knows as much as a chief scientist. (I love how that sounds; I wish there were more scientists in CG, as in the good old days... now there are more sponsors than scientists :) )
max3d wrote:If what you need is a camera and a studio then Maxwell is of course an excellent program. I would never criticize them for doing what their intention was: building an unbiased renderer. I just wanted to warn you that this puts severe limits on what's available, and why VFX directors will never use it.
Just a minor note here: Maxwell has been used in VFX since the beginning, and the number of studios moving to it is growing a lot, especially after v2 was released. Of course, as with any other product, we do not pretend it's used everywhere for every purpose; I just wanted to point out this fact.

Juan

(After SIGGRAPH my doctor told me to avoid GPU-CPU discussions for a while, so I tried not to extend my answer too much, with zero success.)