
Utilize the GPGPU?

Posted: Wed Nov 30, 2005 6:44 pm
by marco333
The big problem for architects and 3D freelancers is time! :cry:

Render time is crucial for them.
So..
Does Maxwell Render use the GPGPU?
If not, why not?
I have seen that computation with the GPU is like a turbo, and I have seen C++ libraries for compiling such software. Not simple, I know, but I think it is very, very important.
Is it possible to see Maxwell make use of the GPGPU?
If it reduced Maxwell's render time (10x or more), architects and 3D freelancers could use Maxwell in their pipeline. :D

Sorry for my bad English :wink:
Ciao, everyone

Posted: Wed Nov 30, 2005 6:48 pm
by Maximus3D
Maxwell is not using any GPU for its calculations now, and I have doubts it will be doing that in the future. But it sure would be very good if Maxwell used the GPU's raw power for rendering.

/ Max

Example of the power

Posted: Wed Nov 30, 2005 6:53 pm
by marco333
Follow this link and you will see the power of the GPGPU :idea:

http://graphics.cs.ucf.edu/caustics/ and see the videos

http://www.gpgpu.org/cgi-bin/blosxom.cg ... index.html
for more examples

Posted: Thu Dec 01, 2005 12:59 am
by najamd
GPU rendering... ATI and NVIDIA are both putting time into it. Gelato is NVIDIA's GPU-based 3D rendering system, and ATI is coming up with a video compression utility that uses the GPU. I pray and hope that one day both of these companies provide a standard system where a renderer like Maxwell, or any other app, can just connect to the GPU and tell it what to render (a rough sketch of that idea is below, after the links). The reasons for my interest in companies using the GPU: 1) faster updates, since new video cards come out much faster than new processors do; 2) pure power, roughly 6 times more than a CPU, if I have my numbers right. Maxwell running 6 times faster... I don't know what else any person would want. :D

http://film.nvidia.com/page/gelato.html << Gelato
http://www.extremetech.com/article2/0,1 ... 749,00.asp << ATI
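
To make the idea concrete, here is a purely hypothetical C++ sketch of what such a common "connect to the GPU and tell it what to render" interface could look like from the renderer's side. None of these class or function names exist in any NVIDIA, ATI or Next Limit product; it only illustrates a renderer picking a CPU or GPU back end through one shared abstraction.

Code:
// Purely hypothetical sketch: none of these names come from any real product.
// It only shows one shared abstraction a renderer could target, with
// interchangeable CPU and GPU back ends.
#include <iostream>

// Interface a renderer such as Maxwell could be written against.
class RenderBackend {
public:
    virtual ~RenderBackend() {}
    virtual const char* name() const = 0;
    virtual void renderTile(int x, int y, int size) = 0;
};

// Today's path: a plain CPU loop over the tile's pixels would go here.
class CpuBackend : public RenderBackend {
public:
    const char* name() const { return "CPU"; }
    void renderTile(int x, int y, int size) {
        std::cout << name() << " tile at " << x << "," << y
                  << " (" << size << "x" << size << " px)\n";
    }
};

// Hoped-for path: upload the tile as a stream and run it through the GPU.
class GpuBackend : public RenderBackend {
public:
    const char* name() const { return "GPU"; }
    void renderTile(int x, int y, int size) {
        std::cout << name() << " tile at " << x << "," << y
                  << " (" << size << "x" << size << " px)\n";
    }
};

int main() {
    CpuBackend cpu;
    GpuBackend gpu;
    RenderBackend* backend = &gpu;  // the renderer would pick this from its settings
    backend->renderTile(0, 0, 64);
    backend = &cpu;
    backend->renderTile(64, 0, 64);
    return 0;
}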

ND

Posted: Thu Dec 01, 2005 1:28 am
by smeggy
That would use up our licenses even faster: 1 for the CPU, 1 for the GPU.. eek
:shock:

Posted: Thu Dec 01, 2005 10:12 am
by marco333
One solution could be to introduce a plugin (specific to NVIDIA or ATI) that offloads the calculation to the GPGPU over the network, rather than integrating it into the core of the application.
Sh is a library that acts as a language embedded in C++, allowing you to program GPUs (Graphics Processing Units) and CPUs for graphical and general-purpose computations in novel ways. For more information about Sh, see the About Section or read the FAQ.
from http://libsh.org/
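
For anyone wondering what kind of work would actually be offloaded: below is a minimal CPU-side sketch in plain C++ (not actual Sh code; the scene and numbers are made up) of the sort of per-pixel, data-parallel loop that a layer like Sh maps onto the GPU's fragment units, one element per shader invocation. That independence between pixels is where the hoped-for 10x would come from.

Code:
// Plain C++ illustration (not Sh): a per-pixel loop where every pixel's work is
// independent of every other pixel, which is exactly what a GPGPU layer such as
// Sh can spread across the GPU's parallel fragment units.
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// Does the ray from 'origin' along 'dir' (unit length) hit a unit sphere at the
// world origin? One small, independent test per ray.
bool hitsUnitSphere(const Vec3& origin, const Vec3& dir) {
    float b = 2.0f * (origin.x * dir.x + origin.y * dir.y + origin.z * dir.z);
    float c = origin.x * origin.x + origin.y * origin.y + origin.z * origin.z - 1.0f;
    return b * b - 4.0f * c >= 0.0f;  // discriminant of the ray/sphere quadratic
}

int main() {
    const int width = 256, height = 256;
    std::vector<unsigned char> mask(width * height);

    // Serial on the CPU; on a GPU each pixel would be one fragment-program run.
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Vec3 origin = { (x - width / 2) / 128.0f, (y - height / 2) / 128.0f, -5.0f };
            Vec3 dir = { 0.0f, 0.0f, 1.0f };
            mask[y * width + x] = hitsUnitSphere(origin, dir) ? 255 : 0;
        }
    }

    std::printf("Computed a %dx%d hit mask on the CPU\n", width, height);
    return 0;
}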

Posted: Fri Dec 02, 2005 11:58 pm
by alexcount
Not to mention running 4 cards in PCIe quad-SLI.

I hope the programmers at Next Limit are reading up on development for these and similar processors (e.g. Cell).
Would anyone at Next Limit comment, after RC?