
4000 EUR CUDA-Based Supercalculator

Posted: Wed Nov 19, 2008 1:44 pm
by def4d
http://fastra.ua.ac.be/en/index.html

Again, will Maxwell run on GPUs?

Posted: Wed Nov 19, 2008 3:25 pm
by Micha
:shock: Good question!

Posted: Wed Nov 19, 2008 3:27 pm
by Maximus3D
Nice piece of expensive hardware, and no, Maxwell will not run on that, as it's GPU-based hardware.

/ Max

Posted: Wed Nov 19, 2008 4:18 pm
by MS
Maximus3D wrote: Nice piece of expensive hardware
What is the cost of a dual quad-core Mac?
Maximus3D wrote: and no, Maxwell will not run on that, as it's GPU-based hardware.
But that's the question: why not?

I'm certainly not buying such hardware now, but maybe in the near future, when the price comes down...

CUDA (and GPU-based computation in general) should be considered a good option for render machines/farms.

Posted: Wed Nov 19, 2008 4:29 pm
by Maximus3D
I don't know what the cost is, but it's most likely also pretty high. As for why not: I remember Nicole replied to a similar question not long ago; her reply was no, and I guess that reply is still valid here. My own guess as to why we won't see this is that it would require a massive code rewrite, and that goes for all hardware and OS platforms Maxwell runs on; CUDA/GPU support might not work well enough on all those platforms right now. Maybe in a few years, when it has matured more, but not right now..

/ Max

Posted: Wed Nov 19, 2008 5:15 pm
by MS
Thank you, Max; I did not know about Nicole's announcement.

Posted: Wed Nov 19, 2008 9:18 pm
by def4d
If I'm not wrong, that was back when the very first information about CUDA came out, so I can understand the answer at that time.
Today it looks like a pretty near-future solution for us, and at a seriously ridiculous price compared to the performance...

Do you really think renderers won't use it?

That's the next limit, guys, go go go!!!

Maxwell, CUDA and GPGPU

Posted: Thu Nov 20, 2008 10:21 am
by rmw
Personally, I've been really disappointed by how little benefit these GPU 'supercomputers' appear to have for rendering. As the lack of interest from companies such as NextLimit shows, this seems to be just another one of those many algorithms that can't be accelerated by present GPUs.

I'm definitely not a GPGPU expert, but from what I've gathered, the cost/effort involved in developing such a solution for Maxwell would be a waste of time, because:
1. Although a GPU/Tesla may have as many as 960 'cores' running at 600 MHz+, each 'core' (stream processor) only has access to a very small amount of its own fast memory in the form of registers, while the larger device memory, which all of them share, has relatively low bandwidth. Reads and writes to that shared memory are therefore very slow and limit the floating-point performance actually achieved.

2. To compensate for that memory-transfer overhead, many operations would have to be performed on the small piece of data held in each stream processor's own registers before the result is transferred back to main memory (the first sketch after this list tries to illustrate exactly this).

3. I'm guessing that Maxwell's calculations would need to access the shared device memory quite often (because of the light transport from one area of the image to another, etc.), and therefore they don't merely consist of a large number of operations being done on small pieces of data, but rather depend on continuous memory-transfer operations.

4. In this way the acceleration achieved would be severely limited by the memory and would, at best, be equivalent to, or even slower than, the performance of a CPU.

5. Finally, GPU calculations with double-precision numbers are very slow (about 8x slower than single precision), and although I'm not sure whether Maxwell requires its calculations to be double precision, this could also be a contributor to the lack of adoption/plans for a GPGPU implementation of Maxwell (see the second sketch below).
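
To make points 1-4 a bit more concrete, here is a rough CUDA sketch (entirely my own toy example, nothing from NextLimit's code; the kernel names and the 256-iteration count are made up purely for illustration). The first kernel does a single multiply per element it reads and writes, so its speed is set almost entirely by memory bandwidth regardless of how many stream processors the card has; the second performs many operations on a value held in a register before writing it back, which is the only way these chips get anywhere near their peak:

Code:

// Toy contrast between a bandwidth-bound and an arithmetic-bound kernel.
#include <cstdio>
#include <cuda_runtime.h>

// One global read, one multiply, one global write per element:
// throughput is dictated by memory bandwidth, not core count.
__global__ void memoryBound(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * 2.0f;
}

// Same memory traffic, but 256 multiply-adds on a register in between:
// the transfer cost is amortised and the arithmetic units do real work.
__global__ void computeHeavy(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = in[i];                // one global read into a register
        for (int k = 0; k < 256; ++k)   // many operations on that register
            x = x * 1.000001f + 0.5f;
        out[i] = x;                     // one global write
    }
}

int main()
{
    const int n = 1 << 20;
    float *in, *out;
    cudaMalloc(&in,  n * sizeof(float));
    cudaMalloc(&out, n * sizeof(float));
    cudaMemset(in, 0, n * sizeof(float));

    dim3 block(256), grid((n + block.x - 1) / block.x);
    memoryBound<<<grid, block>>>(in, out, n);   // bandwidth-bound
    computeHeavy<<<grid, block>>>(in, out, n);  // arithmetic-bound
    cudaThreadSynchronize();                    // wait for both kernels

    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(in);
    cudaFree(out);
    return 0;
}

Time the two kernels and the first one barely runs faster than a plain memory copy. A renderer's light-transport loop looks a lot more like the first kernel than the second, which is the whole problem.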
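And for point 5, a similar toy comparison of single versus double precision (again my own sketch, and it needs a card with compute capability 1.3 or higher, since older ones cannot do doubles at all). On the current GT200 parts each multiprocessor has eight single-precision units but only one double-precision unit, so the double instantiation of this kernel runs at roughly 1/8 the arithmetic rate:

Code:

// Same compute-bound polynomial evaluated in float and in double.
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void polyEval(T *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        T x = data[i];
        for (int k = 0; k < 512; ++k)      // enough work to be compute-bound
            x = x * (T)0.999 + (T)0.001;   // one multiply-add per iteration
        data[i] = x;
    }
}

int main()
{
    const int n = 1 << 20;
    float  *f; cudaMalloc(&f, n * sizeof(float));
    double *d; cudaMalloc(&d, n * sizeof(double));
    cudaMemset(f, 0, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(double));

    dim3 block(256), grid((n + block.x - 1) / block.x);
    polyEval<float ><<<grid, block>>>(f, n);   // full-rate arithmetic
    polyEval<double><<<grid, block>>>(d, n);   // ~1/8 rate on GT200
    cudaThreadSynchronize();

    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(f);
    cudaFree(d);
    return 0;
}

Compile with nvcc -arch=sm_13, as anything lower silently demotes doubles to floats. If Maxwell's integrator really does need double precision, that factor of 8 alone would eat most of the GPU's theoretical advantage.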