Please post here anything else (not relating to Maxwell technical matters)
By numerobis
#271277
The next supercomputer using consumer hardware, based on 4 GeForce 9800 GX2 cards = 8 GPUs à 128 stream processors = 1024 stream processors, in total as fast as ~350 CPU cores... :shock:

Used for large-scale scientific computations
http://fastra.ua.ac.be/en/index.html

Double precision support and 2 GB of video RAM in the near future...

soooo... WHEN will mxw be able to make use of this power?!? :roll:


...maybe this article can help :lol:
http://numod.ins.uni-bonn.de/research/p ... double.pdf
Last edited by numerobis on Sat May 31, 2008 3:52 pm, edited 1 time in total.
User avatar
By piem
#271280
I guess they (NL and many others) will have to rewrite the whole thing :lol:
By wagurto
#271291
Or maybe someone else will write a new piece of software that takes advantage of this power. Some day we will have Maxwell quality in real time! Cool!
User avatar
By -Adrian
#271310
I think CUDA is far more interesting, because anyone can make use of it.
User avatar
By deadalvs
#271888
licensing per core ... :lol:
User avatar
By deadalvs
#271917
can cuda be called a standard so far ?

i guess NL won't start developing for any specific hardware unless there's a secure base to develop on. and as long as nvidia and ati are still fighting for the throne... hmmmm.

we surely agree that it'd be great to get the speed of many standard cpus on one graphics card for rendering with maxwell.

dedicated hardware rendering... i've learned my lesson with three ART VPS PURE rendering cards (cost around 8000 $)

but hey... future's coming ... hehe.
User avatar
By Hybaj
#272008
CUDA doesn't actually sound that "hardware specific". It just sounds more "Nvidia specific", which actually isn't that bad if you look at their market share ;)

The future holds only 2 options: the first is Nvidia's CUDA and the second is ATI's CTM. CPUs seem out of the question, since they are hardly any competition for GPUs in the physical simulation field.

That means in the end you'll just have 2 checkboxes for GPU acceleration, ATI or Nvidia (or just autodetect).

With GPUs doing simulation tasks so much faster, it just makes no sense not to start using CUDA right now. Just look at the many applications that have already been ported to CUDA, especially fluid simulation (http://www.youtube.com/watch?v=Yb-C4CuvlXU). The speed gains are just wonderful :)
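To give an idea of what this kind of programming looks like (just an illustrative sketch, not code from any of the applications mentioned): a minimal CUDA vector add, where the GPU runs one lightweight thread per array element — the same data-parallel pattern a fluid solver uses, only with a fancier per-cell update rule:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one output element.
__global__ void add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Host-side input/output buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device-side buffers, plus copies to the card.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    add<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %.1f\n", hc[0]);        // 1.0 + 2.0 = 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The speedups come from running thousands of these threads at once; the hard part in a real renderer or fluid solver is restructuring the algorithm so the threads don't need to talk to each other much.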

Can we get any info from Nicole on how far NL is into the whole GPU idea???
User avatar
By deadalvs
#272030
that was one of the most relieving posts i ever read on that forum...

i am not as familiar with graphics hardware as you, Hybaj. i remember you mentioned a long time ago having programmed on them yourself. am i correct ?

i mean it sounds just great to have a «two checkbox» solution. but isn't there a wide gap between the techniques used, so everything would have to be programmed twice ?

or how technically complex do you think it is ?
User avatar
By Hybaj
#272039
My programming knowledge is very limited, but you don't have to understand much programming to see the things that have already been done. I've got a guy right next to me who just a few days ago finished his diploma thesis on GPU-computed genetic algorithms. From what he says and what I see, it's not really insanely hard to write the needed code.

For example, the guys at Cebas have already implemented Ageia's PhysX into Thinking Particles, so these extra hardware implementations are really not that impossible or inefficient ;)

http://cebas.com/products/feature.php?U ... 15&FID=627

The point is that the sooner you grasp the technology, the better for you, your business and of course your customers. NL came just at the right time with Maxwell, since CPUs have finally become fast enough to produce some decent pictures in, let's say, some decent time.

Too bad the applications for GPU in path-tracer renderers seem to be quite limited :P
User avatar
By jurX
#272045
Nvidia Gelato 2.2 Pro is for free now.

http://www.nvidia.com/page/gz_home.html

cu jurX
User avatar
By deadalvs
#272062
Hybaj wrote: Too bad the applications for GPU in path-tracer renderers seem to be quite limited :P
it's time to «toss around» those caustics... :lol:

* * *

mental ray going GPU would certainly be a large opponent as a biased renderer. there are not many unbiased renderers around (yet), but rendering speed seems to be about the only issue in their breakthrough (next to controllability, like for example light linking). as you say: the sooner such a new technology is adopted, the faster competitors can be tossed out of the market.

it's astounding how closely today's game graphics engines can simulate reality in realtime. the question is... will both sides merge one day ?

if i were a developer i'd take that chance very, very seriously when given the chance to speed up my software by factors of over 50 or even much more.
User avatar
By deadalvs
#272072
Hybaj wrote: Too bad the applications for GPU in path-tracer renderers seem to be quite limited :P
what a pity ! :lol:
