All posts related to V2
By numerobis
#329256
Thanks Jason! :D

I've just ordered ONE 980x... was expensive enough... :lol:
looking forward to clocking it... :mrgreen:
By Half Life
#329258
No doubt :lol: - I'm still running a 920... I was just pointing out that the test they ran was pretty reasonable hardware-costwise when compared to a similar level GPU.

I'm not saying anybody has to spend that -- just that you are going to get more bang for your buck on pro-level gear with CPU than GPU... but I've been making this point for months :wink:

Best,
Jason.
By Voidmonster
#329260
The problem with this comparison is that the Quadro card that's getting compared here sucks for GPU rendering. It's got just a bit over half the processing cores of new gaming cards.

The card that all the GPU rendering guys use costs $500 and has roughly the same performance as two of those Quadros. GPU rendering works best on cheap gaming cards.

I totally understand why this would be confusing. As computer users we're trained to believe that more expensive = faster. It's just not currently the case with GPU rendering, and nVidia isn't helping the situation with their stratified marketing.

My main point is that, considering the cost of an Octane license is trivial and that it leverages hardware I would already have, I would be stupid to treat this as a binary choice. I am going to back both of these horses. GPU rendering may be limited, but limitations can make for interesting art, and it gives me stuff in return for its limitations.

That will change some when the Maxwell realtime preview comes out. Having poked around with it at SIGGRAPH, I can say that it's really excellent and it gives me a lot of the things I like about Octane. Whether it runs fast enough on the hardware I've got is currently a big question mark. I suspect it will make me want to upgrade to a faster CPU.
By Half Life
#329261
It's not raw speed you are paying for with Quadro, it's the drivers, particularly OpenGL support... the components are more or less the same (though probably higher binned) as previous-generation gaming cards.

I don't game -- so I have no use for gaming cards and their flaky drivers... and for that matter I have no use for Nvidia. As I've said before, I use FirePro cards, which give a much better bang for the buck.

Best,
Jason.
By max3d
#329263
Half Life wrote:It's not raw speed you are paying for with Quadro, it's the drivers, particularly OpenGL support... the components are more or less the same (though probably higher binned) as previous-generation gaming cards.

I don't game -- so I have no use for gaming cards and their flaky drivers... and for that matter I have no use for Nvidia. As I've said before, I use FirePro cards, which give a much better bang for the buck.

Best,
Jason.
So you can't check the CUDA implementations. That's a shame, as they are really fast and your calculations are really way off. The scalability of current CUDA implementations is unlimited, as they replicate all data per card. So it's a matter of cooling, PCIe extenders and some already-announced 19-inch racks. Every $250 GTX460 with 2 GB will improve performance in a way unimaginable with CPUs.
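To make that concrete, here is a minimal sketch of the brute-force "copy the whole scene to every card" scheme; all the numbers are invented, and the only real constraint is that the scene has to fit in each card's memory:

    # Purely illustrative: why per-card scene replication scales almost linearly.
    scene_size_gb = 1.5        # must fit on EVERY card, e.g. a 2 GB GTX460
    total_samples = 4096       # samples per pixel for the whole frame
    one_card_rate = 60.0       # samples per pixel per minute on one card (invented figure)

    def minutes_to_render(num_cards, vram_per_card_gb=2.0):
        # the real limitation of the scheme: the whole scene must fit on each card
        assert scene_size_gb <= vram_per_card_gb, "scene no longer fits -- needs memory management"
        samples_per_card = total_samples / num_cards   # the cards never talk to each other
        return samples_per_card / one_card_rate

    for n in (1, 2, 4, 8):
        print(n, "card(s):", minutes_to_render(n), "minutes")  # halves with every doubling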

Now I do consider this a stupid, brute-force approach, as memory management would make things much more interesting, but it would not be too difficult to start with poly culling depending on the viewport, texture sharing over PCIe 3.0 connections, etc. That would already ease the current scene limitations and get these renderers to a fully usable setup.

A next step would be to use the multi-core CPU as a control unit for some really advanced memory and scene management on the GPUs. Again, not too hard, and I'm pretty sure Mental Ray is already working on it for their Iray renderer. They already have features to get to a full solution, including a live viewport where, by using a sort of brush, you can select where you want your computing power concentrated. As a user you know best where you expect results or problems with material and light tweaks in the scene, so you just focus on those parts.

I will respond to the video Mihai posted later; a slow internet connection gives me ample time to write these posts while waiting.

*Btw, Jason, why do you use Maxwell if you as an artist want to lift the quality level? Like VFX directors, you should look elsewhere, as Maxwell, just like the GPU wave of renderers, tries to act like a photo camera. Your creativity is limited to your light setup, just as in real photography.*

Max.
By max3d
#329264
Half Life wrote:It's not raw speed you are paying for with Quadro, it's the drivers, particularly OpenGL support... the components are more or less the same (though probably higher binned) as previous-generation gaming cards.

I don't game -- so I have no use for gaming cards and their flaky drivers... and for that matter I have no use for Nvidia. As I've said before, I use FirePro cards, which give a much better bang for the buck.

Best,
Jason.
The OpenGL-versus-crappy-game-drivers discussion is a bit outdated. If your application supports it *sorry, Maya users* you get fantastic speed and great precision with the DirectX 11 approach. OpenGL is dying in the workstation apps market.
The urban myth about flaky just-in-time game drivers has some basis, but your app can nowadays demand a final result instead of a constant frame rate, and Nvidia long ago took the flak for their lack-of-precision tricks. Have a look at DirectX 11 and its possibilities and requirements. It's quite revealing.

I don't game either and I am nobody's fan, but I like facts. If I can't use GPU acceleration in modern apps because ATI is way behind in producing a mature platform, then every ATI pro card is a waste of money, as it locks you out of an important range of application speed-ups. I don't like Nvidia because of some of their cheats in the past, but like I said, I buy what I need for my professional purposes. I don't like Sony either, so my Vaio has been replaced, but for professional cameras and broadcasting I don't ignore them. Only gamers can afford to be fanboys.
By max3d
#329265
Mihai wrote:This video from Luxology explains pretty well what the current situation is with GPU vs CPU rendering:
http://www.luxology.com/tv/training/view.aspx?id=536
Well, I have watched it and I'm actually a bit amazed at Luxology now. It seems like they missed out on years of technology. I can't understand that the chief scientist had only 'picked up signals that there were some limitations on GPU rendering'. Did he actually read any ACM paper in the last ten years, I wonder? Going to talk with Intel about possible optimizations by using SIMD instructions better? They carry the Intel logo prominently in that video, and no doubt they participate in Intel's performance program. I really don't understand how they could have missed all that.

Anyway, the presentation by Greg is a bit silly. He outfits a box, okay a BOXX, with as much CPU hardware as he can throw at it, and then adds first one and later in the video two Quadro cards, which are quite slow. It's not even clear if the benchmark comparisons are with that one Quadro 4800, but the way the story unfolds it must be.

So he compares a very outdated card with just 192 cores to his CPU monster. What if he had taken three GTX480s *the new one, with 512 cores each*? We would have seen roughly an 8-times speed-up with the GPU solution *yes, they scale linearly: 3 x 512 = 1536 cores versus the Quadro's 192*. It would have dwarfed the performance of his 12-core box.

The price for a proper workstation with room for three GTX480s and, let's say, a decent i7/9xx *a 930 would do* with just 6 GB of internal memory would have been a fraction of the cost of his outfit.

So that video just proved that Intel still has deep pockets to spend on marketing, and that Luxology either lost its way or is acting like it has and simply has no proper solution.

I almost forgot that he mostly compared his irradiance cache with an unbiased renderer, except for some scenes where the flaws were visible. The performance with MLT in modo was already only on a par with the Octane results, and that at a multiple of the cost.

His talk about the future runs along the same lines I sketched in my earlier posts, and we agree on that, but the rest was just silly.

Do they really think anybody with some technical knowledge will fall for this? It's just a bad excuse for missed R&D on their part. Does anybody at Luxology believe that Autodesk is stupid, that Mental Ray can't already beat the quality of Octane with Iray? He knows all too well, but has to cover it up, as they will lose out big time otherwise. Mental Ray is doing too well to need these marketing blurbs, but they could easily beat the quality and the performance on lower-end hardware, and such a video would make modo look silly. And they know it, just like they know Nvidia bought Mental Ray, so it's clear that the future direction of Mental Ray as well as Nvidia will be to use more of the GPU cores and to address the 'simplicity and shallowness' of graphics cores. Does Greg really have a technical job at Luxology, or is he just a talking head for the marketing department?

Anyway, nothing new in that video, nothing Next Limit didn't already know, just as they will know and understand what I just wrote. So for me it's back to the fundamental reason why Maxwell doesn't move to the GPU. Being honest and saying that they don't have a complete solution yet and that CPU-GPU traffic and memory management is a difficult subject would of course be completely acceptable. My question was not about the current release, nor a critique of the new preview facility, but just about the future direction and whether there are barriers I'm overlooking.

Max
By max3d
#329268
Half Life wrote:No doubt :lol: - I'm still running a 920... I was just pointing out that the test they ran was pretty reasonable hardware-costwise when compared to a similar level GPU.

I'm not saying anybody has to spend that -- just that you are going to get more bang for your buck on pro-level gear with CPU than GPU... but I've been making this point for months :wink:

Best,
Jason.
In your calculation you claimed 12 cores per Xeon, while they actually have only six. It will be just a slip of the keyboard, but it could confuse others. If you really want 24 cores on a motherboard you will have to get X7560s with 8 cores each. It would be silly to use just three, so I would go for the 32 cores in the HP ProLiant DL580 G7. It can be had for $30,000 if you negotiate well. The 32 cores instead of 24 make sense, as the MP models are conservatively clocked, so it will be about the equivalent of the non-existent 24-core machine. You need to add some hard disks of your own as it comes diskless, but that's peanuts. Just like a proper video card.

Max
By numerobis
#329271
max3d wrote: Anyway, the presentation by Greg is a bit silly. He outfits a box, okay a BOXX, with as much CPU hardware as he can throw at it, and then adds first one and later in the video two Quadro cards, which are quite slow. It's not even clear if the benchmark comparisons are with that one Quadro 4800, but the way the story unfolds it must be.

So he compares a very outdated card with just 192 cores to his CPU monster. What if he had taken three GTX480s *the new one, with 512 cores each*? We would have seen roughly an 8-times speed-up with the GPU solution *yes, they scale linearly: 3 x 512 = 1536 cores versus the Quadro's 192*. It would have dwarfed the performance of his 12-core box.

The price for a proper workstation with room for three GTX480s and, let's say, a decent i7/9xx *a 930 would do* with just 6 GB of internal memory would have been a fraction of the cost of his outfit.
...
I almost forgot that he mostly compared his irradiance cache with an unbiased renderer, except for some scenes where the flaws were visible. The performance with MLT in modo was already only on a par with the Octane results, and that at a multiple of the cost.
...
So for me it's back to the fundamental reason why Maxwell doesn't move to the GPU. Being honest and saying that they don't have a complete solution yet and that CPU-GPU traffic and memory management is a difficult subject would of course be completely acceptable. My question was not about the current release, nor a critique of the new preview facility, but just about the future direction and whether there are barriers I'm overlooking.

Max
+1

...but I have to add that the BOXX is really cool :mrgreen:
max3d wrote: In your calculation you claimed 12 cores per Xeon, while they actually have only six.
Max
I think he means 6 real cores + 6 hyper-threaded = 12 cores... x 2 CPUs = 24 cores
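For anyone who wants to check what their own box reports, a quick sketch (assuming Python with the third-party psutil package installed):

    import psutil

    print(psutil.cpu_count(logical=False))  # physical cores, e.g. 12 on a dual six-core Xeon box
    print(psutil.cpu_count(logical=True))   # logical cores the OS shows with hyper-threading on, e.g. 24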
By Half Life
#329293
24 logical cores = 12 physical cores + hyper-threading.

I'll be more than happy to reassess the situation when and if an open standard like OpenCL develops into the norm for GPU computing... right now I remain less than enthused about Nvidia's policies and therefore CUDA -- which is just my personal preference.

When I talk about pushing limits I'm talking primarily about material creation, as there are some very serious limitations there (and therefore to realism) right now... it's the difference between the visual complexity of a real hardwood floor and a laminate "hardwood floor" -- or a real weave on cloth (complete with fuzz) instead of a facsimile... etc. This is the real limit to realism right now... everything has the surface quality of realism assuming you don't look too hard or too close. The other thing that would help realism considerably is to expand beyond "3D" (meaning traditional 3-point perspective) into a more realistic mode like curvilinear perspective (5-point perspective).

SSS is key as is volumetric lighting in general.

All that together requires tremendous amounts of processing complexity, and I see GPU-only solutions as taking CG away from the places I want to go and more into a simplistic realm that is fast but without great depth of realism.

I have no problem with the camera paradigm -- I am after all just a 2D artist at heart.

My purpose for Maxwell (and 3D in general) is to act as reference in my paintings in much the same way as I would using a model and props and taking photos in the real world... in some instances it is easier to do in 3D, and in some it is easier to do in the real world. The expense of hiring models and building props (not to mention studio photography equipment) makes it far more attractive to try to get most of what I need from 3D. Maxwell suits my purposes exceedingly well and I find their company philosophy to be very attractive.

Best,
Jason.
By max3d
#329300
Half Life wrote:24 logical cores = 12 physical cores + hyper-threading.

I'll be more than happy to reassess the situation when and if an open standard like OpenCL develops into the norm for GPU computing... right now I remain less than enthused about Nvidia's policies and therefore CUDA -- which is just my personal preference.

When I talk about pushing limits I'm talking primarily about material creation, as there are some very serious limitations there (and therefore to realism) right now... it's the difference between the visual complexity of a real hardwood floor and a laminate "hardwood floor" -- or a real weave on cloth (complete with fuzz) instead of a facsimile... etc. This is the real limit to realism right now... everything has the surface quality of realism assuming you don't look too hard or too close. The other thing that would help realism considerably is to expand beyond "3D" (meaning traditional 3-point perspective) into a more realistic mode like curvilinear perspective (5-point perspective).

SSS is key as is volumetric lighting in general.

All that together requires tremendous amounts of processing complexity, and I see GPU-only solutions as taking CG away from the places I want to go and more into a simplistic realm that is fast but without great depth of realism.

I have no problem with the camera paradigm -- I am after all just a 2D artist at heart.

My purpose for Maxwell (and 3D in general) is to act as reference in my paintings in much the same way as I would using a model and props and taking photos in the real world... in some instances it is easier to do in 3D, and in some it is easier to do in the real world. The expense of hiring models and building props (not to mention studio photography equipment) makes it far more attractive to try to get most of what I need from 3D. Maxwell suits my purposes exceedingly well and I find their company philosophy to be very attractive.

Best,
Jason.
Ah, then I misunderstood you regarding the artistic aspects. If what you need is a camera and a studio, then Maxwell is of course an excellent program. I would never criticize them for doing what they intended: building an unbiased renderer. I just wanted to warn you that this puts severe limits on what's available, and that is why VFX directors will never use it.

Regarding material realism: yes, there is a lot to be gained there, but shouldn't the solution be to model in more detail? Real hardwood versus laminate is quite easy and was solved a long time ago, but cloth-like materials are elusive. You can emulate them reasonably well, but for realism you need polys. That should, in my view, not be the task of the renderer but of the modeler. No automation can substitute for this. There are excellent examples of people modeling to extreme precision, which delivers what you want: full realism under all kinds of lighting. For arch/viz work this will never be done. For movies it is done, and some amateurs take great pride in modeling in extreme detail.
The bottom line is that, just as you have to avoid occlusion tweaks, cut-off path tracing and the thousands of other very advanced tricks the biased renderers use, you will also need to approach modelling this way if you want real realism :) So there go all those handy things everyone demands from commercial renderers to avoid modeling detail. I don't think that approach will be very popular, but the good news is that you can model in as much detail as you want and simply not use the box of tricks while rendering. That will get you much closer to the realism you're looking for.

In real rendering situations this will create one new problem, however: the lack of internal precision. Large scenes with sub-centimetre details will lead to errors. It goes too far to fully explain the solutions here, but afaik most or all commercial renderers available can't handle this. The solution is not too difficult but will slow down any rendering, so it's not hugely popular with developers, though it could well be offered as a switch in the renderer.
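To illustrate the precision point, a small sketch of my own, assuming positions are stored as 32-bit floats (the common case):

    import numpy as np

    for metres in (1.0, 100.0, 10000.0):
        step = np.spacing(np.float32(metres))   # smallest representable coordinate step at that distance
        print(metres, "m from origin ->", step * 1000, "mm resolution")
    # at 10 km from the origin the grid is already ~1 mm coarse, so sub-centimetre detail starts to break down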

Five-point *actually six-point* perspective is a much-discussed thing, but I don't believe it would be a huge advantage; this is more a psychological issue than a technical one. If your eye distorts the image, it will do the same when it looks at a 2D representation of reality. It hasn't been hugely popular in art, and there have hardly even been complaints about the single-point perspective that renderers *not artists* use to draw scenes. I assume you are aware of this issue.

Next post I'll get back to GPU vs CPU, as this is a very interesting discussion but a sidetrack of a sidetrack. I don't mind, as discussions about realistic previews are closely tied to realism itself, but the moderators and readers may think otherwise. If they find it of interest I'm happy to take part in the 'realism' discussion, as it's one of my favorite subjects.
By Half Life
#329305
I've said most of what I'd like to say about CPU vs GPU -- in this and the previous thread on the topic... my endpoint in the discussion is that I'm interested in hybrid, not just one or the other, but if I have to choose it's CPU as of now.

I have my reasons for believing that modelling materials is not the long-term solution, but since that is outside the scope of the thread I'll save it for another day... just bear in mind I'm thinking on the scale of years and decades (not months) in regard to the use of the technology. In my opinion any technology that puts a barrier between how light works in the real world and the output is going down the wrong path. Computational power will catch up, it always does.

Programming isn't my thing -- I'm an artist... and my interest in this discussion stops at the practical consideration of having my tools perform the way I expect. I leave the "how" to people who enjoy and are skilled at that.

Best,
Jason.
By max3d
#329307
To sort out some confusion about the number of cores: hyper-threading is nice (sometimes, that is), but these are not real cores. They are marketing cores. Nice and spectacular for the old benchmarks with the typical usage patterns of average users, where one program stalls and another thread can then just carry on. It works wonders on those benchmarks, and it sidestepped the hopeless barriers the CPU makers ran into when pushing frequencies. A very effective weapon for Intel in its then war with AMD over the performance crown.

They were laughed at when they introduced it, and now it's taken completely seriously and people really believe that 8 virtual cores at 2 GHz are the same as 4 at 4.0 GHz. Poor AMD: they finally caught up and even beat Intel at their own game, and then Intel changed the rules and, to my huge surprise, succeeded. People nowadays believe it.

For rendering performance, depending on the efficiency of the implementation etc., I would estimate the maximum performance gain of HT marketing cores at 15%. So a machine with 12 real cores plus HT will perform like roughly a 14-core machine. NOT 24, not by far.
I assume Maxwell is more efficient due to the nature of its algorithms, so 15% is probably already too high, but I never tested it.
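As a back-of-envelope version of that estimate (the 15% is my assumption above, not a measurement):

    physical_cores = 12
    ht_gain = 0.15                          # assumed best-case hyper-threading benefit for rendering
    print(physical_cores * (1 + ht_gain))   # 13.8 -> "about a 14-core machine", nowhere near 24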

The other confusion in the topic is that the Luxology marketing blurb actually uses (in a never-to-be-delivered experiment) 24 cores *real ones* spread over two machines with a high-speed link. So some people drooled over the idea while Jason quoted figures for half of that experimental 'machine'. The cost of that imaginary machine can't be established, but my quote for a machine with 32 real cores is simply lifted from HP's website. You can't buy it at your local hardware shop and afaik there are no motherboards freely available to build your own system, so it's a good reference number for the kind of performance Luxology wanted to demo.

It was btw even worse that they actually dared to compare a low-cost beta program with a well-developed, not-yet-released modo 501 in a setup which they admit will never be delivered and can't work. Everyone with some inside knowledge knows why a 1.8x speed increase in real-time previews over a network link will never work the way they did it. I'm pretty sure Next Limit knows exactly how this was done and can reproduce the same 'internal experiment'. As long as the person sitting at the console knows exactly what he can do, and you do some tricks in advance, you can escape the speed-of-light limitation of a networked real-time preview.

Jason insists that GPU solutions are crappy, but has no basis for that. You can do some pretty advanced programming in CUDA. If you look at what Weta Digital together with Nvidia produced with PantaRay, you can hardly call it shallow, as our friend at Luxology does. Out-of-core models of billions of polygons were handled by their new algorithm. A great breakthrough, which coincidentally fits well with my remarks in the earlier post about a higher level of modelling.

The interesting thing in this discussion, however, is that this solution was also ported to CUDA processors, and according to a SIGGRAPH paper WETA and Nvidia produced: "On complex scenes, a CPU implementation of the LOD-based ray tracing kernel achieves roughly 700K rays per second per core on the CPU. On the same scenes, the CUDA implementation achieves 15M rays per second per GPU."


Source: PantaRay: Fast Ray-traced Occlusion Caching of Massive Scenes -- Jacopo Pantaleoni (NVIDIA Research), Luca Fascione (Weta Digital), Martin Hill (Weta Digital), Timo Aila (NVIDIA Research)
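Putting those quoted figures side by side (my own back-of-envelope arithmetic, using the 12-core boxes discussed above purely as a reference):

    cpu_rays_per_core = 700_000        # from the paper, per CPU core
    gpu_rays_per_card = 15_000_000     # from the paper, per GPU

    print(gpu_rays_per_card / cpu_rays_per_core)           # ~21x one CPU core
    print(12 * cpu_rays_per_core)                          # 8.4M rays/s for a whole 12-core box
    print(gpu_rays_per_card / (12 * cpu_rays_per_core))    # one GPU is still ~1.8x that box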


This shows three things:

a) that there is no doubt the speed advantage of a GPU solution will be from 20 up to 50 times (more figures in the article). Whatever that nice-looking chap in that silly movie wanted to tell his audience of non-experts is not acknowledged by scientists with a much better understanding of rendering real production work.

b) that quite complex code can be implemented on GPU cores.

c) that Nvidia is currently way ahead in GPU development for mathematical purposes. CUDA is already very advanced while OpenCL is a slow, limited solution. All the expertise at Nvidia (and don't underestimate the large number of PhDs they put on it) has given them a huge advantage. This has led to all the big players jumping on the CUDA wagon, so Nvidia can optimize processor cores and CUDA for real-world problems. There is simply no equivalent on the OpenCL side of things. Apple doesn't have the knowledge, the OpenCL platform struggles with the different manufacturers' demands, and it will prove extremely difficult for AMD to catch up as they don't control OpenCL.

Max
By Half Life
#329308
My basis is testing the software in question before making any decision to purchase -- I'm not into scientific papers, as that is theoretical and I'm interested in what I can buy and use right now. I never said that GPU does not have strong potential, just that the implementations I've tried and seen, and the hardware I want to use, do not make GPU a practical consideration right now.

You are obviously very into the GPU deal and excited by the near-term potential... great for you -- people who are much smarter than me and better known and more respected than you agree with me, so I guess you'll have to take it up with them.

Till then I hope you enjoy your time with non-GPU Maxwell.

Best,
Jason.