Please post here anything else (not relating to Maxwell technical matters)
By sebbo
#234021
I've just read about Nvidia's Tesla GPU-based supercomputing, and now I'm pretty interested in what you guys think about rendering hardware other than the traditional CPU.
Do any of you use dedicated rendering hardware? Would you like to? And what would a render farm based on GPUs look like?
User avatar
By Maxer
#234092
I'm all for using as much power as possible; this has been talked about before and I think it's a good idea. I think the problem is that coding for GPUs is different from coding for CPUs, which is why this hardware isn't taken advantage of. I don't know if NL has any plans to pursue this avenue, but it would help in closing the gap between Maxwell and other engines in terms of speed.
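Just to make that concrete, here's a rough sketch (purely illustrative, nothing to do with NL's code) of the same trivial operation written the CPU way and the CUDA way. On the CPU a single thread loops over the data; on the GPU you launch thousands of threads and each one handles a single element, which is why a renderer can't simply be recompiled for the GPU - it has to be restructured:

// Hypothetical example only: scale an array of values on the CPU and on the GPU.
#include <cstdio>
#include <cuda_runtime.h>

// CPU version: one thread, an explicit loop.
void scale_cpu(float* data, int n, float s) {
    for (int i = 0; i < n; ++i)
        data[i] *= s;
}

// GPU version: no loop; each thread computes exactly one element.
__global__ void scale_gpu(float* data, int n, float s) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= s;
}

int main() {
    const int n = 1 << 20;
    float* host = new float[n];
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float* dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    scale_gpu<<<(n + 255) / 256, 256>>>(dev, n, 2.0f);
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("first element after GPU scale: %f\n", host[0]);
    cudaFree(dev);
    delete[] host;
    return 0;
}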
User avatar
By b-kandor
#234096
It seems the thing to code for is a PS3.

See this page:

http://fah-web.stanford.edu/cgi-bin/mai ... pe=osstats

And note the following:

1: PS3 clients are currently delivering 456 TFLOPS from 25,180 CPUs
2: Windows clients are currently delivering 180 TFLOPS from 189,649 CPUs

With roughly 1/8 the number of CPUs that the Windows users have, the PS3s are out-computing them by about 2.5 times in total - which works out to a PS3 being roughly 19 times faster than an (albeit average) PC.
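Working it out from the figures above:

456 TFLOPS / 25,180 PS3s         = ~18.1 GFLOPS per PS3
180 TFLOPS / 189,649 Windows PCs = ~0.95 GFLOPS per PC
18.1 / 0.95                      = ~19x per machine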

More info:

http://folding.stanford.edu/FAQ-PS3.html
User avatar
By b-kandor
#234100
No, it's using the Cell processor. There is a company now marketing supercomputers built from multiple PS3s running as a Beowulf cluster - they really are very powerful machines...
By codygo
#234124
There is some confusion over exactly how powerful the PS3 is after all the marketing hype around its release. After all, the PS3 uses Nvidia graphics based on the GeForce 7 architecture, which accounts for a big part of the 1.8 teraflops the PS3 claims. A 7-series GeForce is claimed to deliver about 550 GFLOPS, which is really a pie-in-the-sky theoretical number; in desktop, gaming or professional use it is limited by other bottlenecks, and few workloads could actually feed it that much data. The next anticipated G92 line of video cards is claimed to reach 1 teraflop by itself.

It's a bit like comparing MHz between different CPUs: PC vs. Cell is comparing the floating-point strength of RISC processing against the integer performance of CISC (and nowadays CISC has a lot of RISC functionality). RISC breaks work into many simple instructions, which makes the programming more extensive, whereas CISC uses fewer, more complex instructions to do the same task, and like almost everything else there are trade-offs.

I think a good analogy would be to ask how many "1.0 x 1.0" operations you can do in a minute versus how many random questions you can answer in the same minute.
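For what it's worth, those theoretical peaks are usually just arithmetic: parallel units x clock rate x floating-point operations each unit can issue per clock. With made-up numbers, purely for illustration:

peak = units x clock x FLOPs per unit per clock
     = 128 x 1.5 GHz x 2 (one multiply-add per clock)
     = 384 GFLOPS

It's a ceiling, not a measurement - it says nothing about whether memory bandwidth or the rest of the system can actually keep those units fed.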
User avatar
By KRZ
#234126
BTW, the Cell processor can be found in several IBM products too. It would be nice if NL had a research team that evaluates whether a switch to that platform could make sense.
User avatar
By b-kandor
#234135
codygo wrote:There is some confusion over exactly how powerful the PS3 is after all the marketing hype around its release. After all, the PS3 uses Nvidia graphics based on the GeForce 7 architecture, which accounts for a big part of the 1.8 teraflops the PS3 claims. A 7-series GeForce is claimed to deliver about 550 GFLOPS, which is really a pie-in-the-sky theoretical number; in desktop, gaming or professional use it is limited by other bottlenecks, and few workloads could actually feed it that much data. The next anticipated G92 line of video cards is claimed to reach 1 teraflop by itself.

It's a bit like comparing MHz between different CPUs: PC vs. Cell is comparing the floating-point strength of RISC processing against the integer performance of CISC (and nowadays CISC has a lot of RISC functionality). RISC breaks work into many simple instructions, which makes the programming more extensive, whereas CISC uses fewer, more complex instructions to do the same task, and like almost everything else there are trade-offs.

I think a good analogy would be to ask how many "1.0 x 1.0" operations you can do in a minute versus how many random questions you can answer in the same minute.
True, but I believe that folding@home is not including the GPU in those stats - I think those numbers are based on empirical data...

Of course different processors handle different sorts of instructions in more or fewer clock cycles. But given that folding@home is processor-intensive, as is rendering, and given that the PS3 is burning through folding@home work units - it seems like a match made in heaven! :)
