GPU Supercomputer
Posted: Sat May 31, 2008 1:35 pm
by numerobis
The next supercomputer built from consumer hardware: 4 GeForce 9800 GX2 cards = 8 GPUs à 128 stream processors = 1024 stream processors, in total about as fast as 350 CPU cores...
Used for large-scale scientific computations
http://fastra.ua.ac.be/en/index.html
Double-precision support and 2 GB of video RAM are coming in the near future...
soooo... WHEN will mxw be able to make use of this power?!?
...maybe this article can help
http://numod.ins.uni-bonn.de/research/p ... double.pdf
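The core count quoted above is just multiplication (the "as fast as 350 CPU cores" figure is the FASTRA team's own benchmark claim, not something derivable from the specs); a quick sanity check:

```python
# Sanity check of the FASTRA core math: 4 dual-GPU 9800 GX2 cards,
# each GPU exposing 128 stream processors.
cards = 4
gpus_per_card = 2    # the 9800 GX2 is a dual-GPU board
sps_per_gpu = 128    # stream processors per G92 GPU

total_sps = cards * gpus_per_card * sps_per_gpu
print(total_sps)  # 1024 stream processors in the whole box
```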
Posted: Sat May 31, 2008 3:11 pm
by piem
I guess they (NL and many others) will have to rewrite the whole thing.

Posted: Sat May 31, 2008 5:52 pm
by wagurto
Or maybe someone else will write a new piece of software that takes advantage of this power. Some day we will have Maxwell quality in real time! Cool!
Posted: Sat May 31, 2008 8:47 pm
by KurtS
Posted: Sat May 31, 2008 10:27 pm
by -Adrian
I think CUDA is far more interesting, because anyone can make use of it.
Posted: Sun Jun 01, 2008 2:10 pm
by Hybaj
GPUs are looking great for fluid and other physics simulations
Check the videos
http://www.youtube.com/watch?v=RqduA7myZok
http://www.youtube.com/watch?v=D4FY75Gw ... re=related
http://www.youtube.com/watch?v=ZgoDypGM ... re=related
I really do wonder what's keeping NL from implementing GPU solving in their RealFlow.

Posted: Fri Jun 06, 2008 4:35 am
by deadalvs
licensing per core ...

Posted: Fri Jun 06, 2008 1:33 pm
by deadalvs
can cuda be called a standard so far?
i guess NL won't start developing for any specific hardware unless there's a secure base to develop on. and as long as nvidia and ati are still fighting for the throne... hmmmm.
we surely agree that it'd be great to get the speed of many standard cpus on one graphics card for rendering with maxwell.
dedicated hardware rendering... i've learned my lesson with three ART VPS PURE rendering cards (around $8000 used)
but hey... future's coming ... hehe.
Posted: Sat Jun 07, 2008 3:26 pm
by Hybaj
CUDA doesn't actually sound that "hardware specific". It just sounds "Nvidia specific", which actually isn't that bad if you look at their market share.
The future holds only 2 options. The first is Nvidia's CUDA and the second is ATI's CTM. CPUs seem out of the question, since they are hardly any competition to GPUs in the physical simulation field.
That means in the end you'll just have 2 checkboxes for GPU acceleration, ATI or Nvidia (or just autodetect).
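Just to illustrate, here is a hypothetical sketch of what that "autodetect" option could boil down to. Everything in it (the vendor strings, the `pick_backend` helper) is made up for illustration; it only shows the dispatch between a CUDA path, a CTM path, and a CPU fallback:

```python
# Hypothetical backend selection, assuming the application can query
# the GPU vendor string from the driver. All names are illustrative.
def pick_backend(vendor: str) -> str:
    """Map a detected GPU vendor to a simulation backend."""
    vendor = vendor.lower()
    if "nvidia" in vendor:
        return "cuda"   # Nvidia path
    if "ati" in vendor or "amd" in vendor:
        return "ctm"    # ATI's Close To Metal path
    return "cpu"        # fallback: plain CPU solver

print(pick_backend("NVIDIA Corporation"))  # cuda
print(pick_backend("ATI Technologies"))    # ctm
```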
With GPUs doing simulation tasks so much faster, it just makes no sense not to start using CUDA right now. Just look at the many applications that have already been ported to CUDA, especially fluid simulation (
http://www.youtube.com/watch?v=Yb-C4CuvlXU). The speed gains are just wonderful.
Can we get any info from Nicole on how far NL is into the whole GPU idea???
Posted: Sun Jun 08, 2008 10:41 am
by deadalvs
that was one of the most relieving posts i've ever read on this forum...
i'm not as familiar with graphics hardware as you, Hybaj. i remember you mentioning a long time ago that you'd programmed on them yourself. am i correct?
i mean, it sounds just great to have a «two checkbox» solution. but isn't there a wide gap between the techniques used, so everything would have to be programmed twice?
or how technically complex do you think it is?
Posted: Sun Jun 08, 2008 2:39 pm
by Hybaj
My programming knowledge is very limited, but you don't have to understand programming much to see the things that have already been done. I've got a guy right next to me who just a few days ago finished his diploma thesis on GPU-computed genetic algorithms. From what he says and what I see, it's not really insanely hard to write the needed code.
For example, the guys at Cebas have already implemented Ageia's PhysX in thinking particles, so these extra hardware implementations are really not that impossible or inefficient.
http://cebas.com/products/feature.php?U ... 15&FID=627
The point is that the sooner you grasp the technology, the better for you, your business and of course your customers. NL came along at just the right time with Maxwell, since CPUs had finally become fast enough to do some decent pictures in, let's say, some decent time.
Too bad the applications for GPUs in path-tracing renderers seem to be quite limited.

Posted: Sun Jun 08, 2008 7:54 pm
by jurX
Nvidia Gelato 2.2 Pro is for free now.
http://www.nvidia.com/page/gz_home.html
cu jurX
Posted: Mon Jun 09, 2008 8:36 am
by deadalvs
Hybaj wrote:
Too bad the applications for GPUs in path-tracing renderers seem to be quite limited.

it's time to «toss around» those caustics...
* * *
mental ray going GPU would certainly be a big opponent, as a biased renderer. there are not many unbiased renderers around (yet), but rendering speed seems to be about the only obstacle to their breakthrough (next to controllability, for example light linking). as you say: the sooner such a new technology is adopted, the faster enemies can be tossed out of the market.
it's astounding how closely today's game graphics engines can simulate reality in realtime. the question is... will both sides merge one day?
if i were a developer, i'd take that chance very, very seriously: the chance to speed up my software by a factor of 50 or even much more.
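For what it's worth, the reason unbiased renderers are attractive targets for massively parallel hardware at all: a path tracer is a Monte Carlo estimator, and Monte Carlo samples are independent, so they spread trivially across hundreds of cores. A toy 1-D analogue (estimating π, not an actual renderer) shows the structure:

```python
import random

def estimate_pi(samples: int) -> float:
    """Monte Carlo estimate of pi: the fraction of random points in
    the unit square that land inside the quarter circle, times 4.
    Each sample is fully independent of the others -- the same
    property that lets a path tracer farm rays out to many GPU cores.
    """
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(200_000))  # roughly 3.14
```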
Posted: Mon Jun 09, 2008 12:24 pm
by deadalvs
Hybaj wrote:
Too bad the applications for GPUs in path-tracing renderers seem to be quite limited.

what a pity!

Posted: Thu Jun 12, 2008 4:46 am
by deadalvs
that link is just too sweet.
can't stop looking at it ...
http://www.nvidia.com/object/tesla_comp ... tions.html