Any features you'd like to see implemented into Maxwell?
By etienne
#14528
Hi all, this is my first post here.
Reading about GPU arithmetic implementations, I started thinking it would be a great feature for MR to support such a thing... in fact I suspect that 90% of MR's calculations are suitable for parallel vector processors ;) .... just imagine the power of an nVidia 6800GT series GPU or an ATI X800 series put to good use to drastically increase rendering speed :roll: .

What do you think about this?
By etienne
#14593
...disappointing... after reading dozens of posts on this forum and seeing rendering times as long as days, I thought that making MR 2-3 times faster (even 15-20% is significant, I say) using a technology which exists NOW, and which some of you already own, would make a big difference...
...well, I see that most of you don't mind waiting 10 hours instead of 8-9, or maybe just 2-3...
Think about it a little more while you wait for the render to finish ;)
By Mihai
#14598
I'm not a programming expert, but from what I understand it is extremely difficult to write efficient multithreaded code. So there is more involved than just rewriting a few lines of code to take advantage of GPUs. I suppose you have to write your code specifically for a particular type of hardware, so there's a limiting factor which perhaps NL would not like to gamble on. Or make a version of Maxwell that can run certain code on a GPU? I think that would take a LOT of development time. Also, perhaps there are things which GPUs still cannot do in terms of calculations.

I would much prefer that, if in the future we see Cell processors on a workstation, they concentrate on making a version for it. It depends of course a lot on what tools there will be to program for the Cell, but it seems Sony and its partners are intent on making it available not just for game consoles:

"IBM launches formal initiative to push Cell beyond gaming"
http://arstechnica.com/news.ars/post/20050330-4757.html


Sony and Transmeta sitting in a [Cell Processor] tree...
http://arstechnica.com/news.ars/post/20050401-4765.html
"Notorious semiconductor company/thinktank, Transmeta announced yesterday that it entered into a multi-year "strategic alliance" with Sony. The deal as outlined: 100 of Transmeta's 208 engineers will now work in conjunction with Sony to adapt Transmeta's LongRun2 leakage prevention technology to Sony's semiconductors, specifically Cell and its derivatives."
By tom
#14599
patience people, years ago I was rendering just a cube for hours :mrgreen:
let the new technology have its time to be efficient...
as I always say : impossible is impossible
By etienne
#14626
Mihai Iliuta, multithreaded code is not the point of GPU arithmetic support... what GPUs do best is parallel processing (or, if you want, vector arithmetic)... most of you know that modern GPUs can do multiple arithmetic operations at the same time (on a CPU we usually do 1 at a time; on a GPU we can do 4, 8 or even more ops in parallel)... now that would be a great speedup for (theoretically) any renderer. Yes, one could say that CPU ops are pipelined too, so the CPU can do more ops at a time, but that's not quite true given the specifics of code execution on x86 processors (you feed them 16 multiplies one at a time and they return 16 results, again one at a time)... whatever, whoever knows what I'm talking about understands this better than I can explain here.
....and tom, the technology I'm pointing to IS efficient right now; what we need is just to use it :D ...links:

http://www.gpgpu.org - full of info
gamma.cs.unc.edu/DB/main.pdf - this paper shows just how powerful GPU arithmetic is
...there is much more info on the web; I just wanted to point out here that it can be done

Have a nice day everybody
By Mihai
#14630
hmmm... parallel processing, multithreading, isn't that the same thing? Meaning you have to write your code to know how to spread the calculations across multiple threads, at least that's what I thought it meant.

What you are talking about are, for now, experiments. Practically the only thing I've seen implemented in an app we all use is the collaboration between mentalray and nvidia. But what they've done there is implement some mentalray shaders (I don't think any raytracing is involved, only scanline calculations) in nvidia's hardware.

All this is still very much in the area of research. What would you like NL to do? They have to first finish and finalize features, then maybe after a 1.0 release start to think about these things. I'm just saying don't get too disappointed if they haven't replied to your thread with great enthusiasm.
By etienne
#14633
....hmmm.... in this situation parallel != multithreading. I'm not talking about multiple threads doing the same kind of operations, but about, let's say, 1 thread that can process multiple blocks of data at the same time. From my point of view it is not the same to do 4 multiplies in 4 threads (each thread doing 1 mul) as to do 4 multiplies in 1 thread :D ... the former requires 4 times the processing power and can be done on a CPU (well yes, multicores anyone? :D); the latter can be done on a GPU and it's available now :D
....the mentalray-nv stuff... well, it's not what I was thinking of... they're trying to move a lot of the rendering process (or more exactly, a lot of the final result) onto the GPU in a traditional way (both the host app and the visualization part use the GPU in that approach), but let's not talk about it here; it's not the place.
....and no, I don't expect a reply from the NL team; they have enough to do right now, so my "wish" is just a wish which I'd be more than happy to see considered by the time MR is ready (and the technology efficient :D) ... the point of this thread was/is just to let users know about this possibility (GPU math), that it can be done even now, and to discuss it... I bet there are many who didn't even know ;)

Thank you.

Please excuse my English, as it's not my native language.
