All posts related to V3
By feynman
#391894
So for each render node, one should buy:

1 - an Nvidia GPU
2 - from the GeForce range (http://www.geforce.com/hardware)
3 - with as many CUDA cores as possible, the higher the better (see the device-query sketch below)
4 - with enough VRAM fitted, because the entire scene must fit into it (is that so?)
5 - and a Titan card if one can afford it

Is that how one should go about the move away from AMD?

GeForce GTX TITAN Z, 5760 CUDA cores, 12GB VRAM
GeForce GTX TITAN X, 3072 CUDA cores, 12GB VRAM
GeForce GTX TITAN Black, 2880 CUDA cores, 6GB VRAM
GeForce GTX 780 Ti, 2880 CUDA cores, 3GB VRAM
GeForce GTX 980 Ti, 2816 CUDA cores, 6GB VRAM
GeForce GTX 1080, 2560 CUDA cores, 8GB VRAM
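
(Not Maxwell-specific, just for orientation: the numbers above can be read straight off any installed card with the CUDA runtime. Below is a minimal device-query sketch of my own; the cores-per-SM table is an assumption based on Nvidia's published Kepler/Maxwell/Pascal specs, so treat the core count as an estimate.)

```cpp
// device_query.cu -- minimal sketch, not Maxwell Render code.
// Prints SM count, estimated CUDA cores, and VRAM per GPU.
// Build: nvcc device_query.cu -o device_query
#include <cstdio>
#include <cuda_runtime.h>

// Cores per SM by compute capability (assumed from published specs:
// Kepler = 192, Maxwell = 128, Pascal GP100 = 64, other Pascal = 128).
static int coresPerSM(int major, int minor) {
    if (major == 3) return 192;                      // Kepler
    if (major == 5) return 128;                      // Maxwell
    if (major == 6) return (minor == 0) ? 64 : 128;  // Pascal
    return 128;                                      // fallback guess
}

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp p;
        cudaGetDeviceProperties(&p, i);
        printf("GPU %d: %s | %d SMs | ~%d CUDA cores | %.1f GB VRAM\n",
               i, p.name, p.multiProcessorCount,
               p.multiProcessorCount * coresPerSM(p.major, p.minor),
               p.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}
```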

Then there are all these second-tier vendors - Zotac, EVGA, MSI, Gainward, Asus, Gigabyte - what on Earth is the difference? The last video game I played was Pong in 1974, so I have zero clue.
Last edited by feynman on Fri Aug 12, 2016 2:24 pm, edited 2 times in total.
By luis.hijarrubia
#391895
As the GPU development slides from the conference say, multi-GPU is still in development, so I won't spend money on a multi-GPU configuration yet.
And I won't buy a TITAN card; I don't think the price difference is worth it.
By T0M0
#391896
@feynman:

Most of them make their own versions with better cooling systems, sometimes with better power-delivery stages and different circuit-board layouts than the reference model. Personally I prefer EVGA for their low RMA rate and good customer support; they are also an Nvidia-exclusive vendor, so no AMD cards from them.
By Asmithey
#391901
I will beat the Blender dog again... in-house Blender plug-in support, please. But it has to be like N.Ildar's amazing add-on, just a bit more polished and with some more features. Fast scene export. The squeaky wheel gets the grease, right?
By burnin
#391903
@feynman
That Titan Z is a dual-GPU card, meaning 2x 6GB VRAM (not 12GB usable per GPU) with 350W consumption (Kepler generation) and no out-of-core (OoC) memory support ever, unless you move to Redshift.
-----------------------------------------

In short...
Compared to CPUs, GPUs aren't that energy-efficient yet, except Quadros (Kepler, Maxwell) and the new generation of gaming cards (Pascal), which also have on-board tech for OoC memory. All older cards also run LOUD! Since they run many more cores (individually slower, but hot) and burn lots of calories, they need to breathe more. ;)
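
(To make that "on-board tech" concrete, a sketch of my own, not Maxwell code: on Pascal, CUDA unified memory can oversubscribe the card and page data in from host RAM on demand, which is the hardware basis an out-of-core renderer can build on. Assumes CUDA 8+ and enough host RAM to back the allocation; pre-Pascal cards simply fail the allocation.)

```cpp
// oversubscribe.cu -- sketch of Pascal unified-memory paging.
// Allocates 2x the card's VRAM as managed memory and touches it all;
// on Pascal (sm_60+) pages migrate on demand, on older cards the
// allocation itself fails. Build: nvcc -arch=sm_60 oversubscribe.cu
#include <cstdio>
#include <cuda_runtime.h>

__global__ void touch(float* a, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) a[i] += 1.0f;
}

int main() {
    size_t freeB = 0, totalB = 0;
    cudaMemGetInfo(&freeB, &totalB);
    size_t n = (totalB / sizeof(float)) * 2;   // 2x physical VRAM
    float* a = nullptr;
    if (cudaMallocManaged(&a, n * sizeof(float)) != cudaSuccess) {
        printf("Allocation failed: no oversubscription on this card.\n");
        return 1;
    }
    touch<<<(unsigned)((n + 255) / 256), 256>>>(a, n);
    cudaError_t err = cudaDeviceSynchronize();
    printf("Touched 2x VRAM: %s\n", cudaGetErrorString(err));
    cudaFree(a);
    return 0;
}
```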
Using SSS & volumes can bring a GPU to a crawl (to the same or even slower speeds than CPUs). Taking power usage into account, such renderings then become more expensive than on a CPU (from observations of GPU-based unidirectional path tracers).
Also note that users need to be wary of the number and size of textures & HDRIs used in the scene... since the main marketing target is gamers, there are many limits that architectural illustrators & filmmakers have hit before and will notice (again). A rough budget is sketched below.
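
(A back-of-the-envelope sketch of why this bites, with my own assumption that textures sit in VRAM as uncompressed 32-bit float RGBA, which a physically based renderer may well require:)

```cpp
// texture_budget.cpp -- plain host-side arithmetic, no GPU needed.
// One 8192x4096 HDRI as float RGBA: 8192 * 4096 * 4 ch * 4 bytes.
#include <cstdio>

int main() {
    const long long w = 8192, h = 4096;
    const long long channels = 4, bytesPerChannel = 4;  // float RGBA
    long long bytes = w * h * channels * bytesPerChannel;
    printf("One 8K float HDRI: %.0f MiB\n", bytes / (1024.0 * 1024.0));
    return 0;
}
```

That is 512 MiB for a single environment map; a handful of those plus geometry can exhaust a 6-8GB gaming card before the render even starts.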
Finally, I'm very skeptical about this release, as I've been experiencing constant crashes with GPU-accelerated Multilight ever since it was implemented (no fix yet?). With all this in mind, I doubt this will be a pleasant ride.

Why must clients, users & potential customers plead? Why do developers & representatives hide their knowledge, or their ignorance? Has buying a product that one intends to use and earn a living from come down to begging for a peek at the damn cat in the sack? Must a user/client give up their own precious time to support the developers, or is one providing a service to one's clients?
I know my priorities & I'm well acquainted with the tools, so things had better be good.
As for Next Limit, these days I expect something more... at least some technical papers, videos, explanations of the development, its limitations and such. All of which would be very welcome, and appreciated in turn by spreading the word and sharing the wealth.

Otherwise just look over the fence... or up,
to the sky where stars shine bright.
Powered by the highly burning fuel... light is then & there shifted
by the gravity of myriad of suns to run to where gods reside
and star children are born...
... to storm on.


enjoy waiting
;)
Last edited by burnin on Fri Aug 12, 2016 9:40 pm, edited 4 times in total.
By ptaszek
#391906
feynman wrote: GPU rendering after all, which should be good news for MR popularity... Maxwell GPUs for Maxwell Render makes sense :lol:

Now, all that many existing users have to do is throw away the MacBook Pros (AMD graphics), the Mac Pros (AMD graphics) and the PCs with AMD FirePro cards. There goes $23,000 out the window.

What a beautiful day :mrgreen:

To replace the AMD FirePro cards in the render-node PCs, which Nvidia GPU should one buy? Does the VRAM limit on the GPU cause problems when the whole scene doesn't fit into its memory? Should one buy several GPUs for each render-node PC?
OK, so we have a small problem here :) I will try to explain from my point of view.
For GPU rendering, as the guys said before, Nvidia is the only option.
Then yes, CUDA cores and VRAM are the two main parameters.
CUDA cores - the more you have, the faster you render. It can be very fast on a GPU!
VRAM is like your RAM: right now you probably have 64GB of system RAM, while a GTX Titan gives you 12GB.
If you use 2x Titan, it won't be 24GB but still 12GB (see the sketch below).
The thing is, if your scene is heavy (many polygons, many high-res textures), it probably just won't render on the GPU, and then you will have to use the CPU, which you already have. So you will still use your $23k investment :)
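
(To make the "2x Titan is still 12GB" point concrete, a little sketch of my own, not Maxwell code: each card is asked separately for its memory, because there is no pooled total to ask for.)

```cpp
// vram_per_device.cu -- shows VRAM is per card, not pooled:
// two 12GB Titans report 12GB each, never one 24GB pool.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int i = 0; i < count; ++i) {
        cudaSetDevice(i);              // memory info is per device
        size_t freeB = 0, totalB = 0;
        cudaMemGetInfo(&freeB, &totalB);
        printf("GPU %d: %.1f of %.1f GB free (this card only)\n",
               i, freeB / 1e9, totalB / 1e9);
    }
    return 0;
}
```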

Right now GTX (gaming) cards are very popular, and that will probably be what gets targeted first. A Quadro is too slow for the same price, and I don't know much about Teslas because of their high price. GTX cards are named by generation and power: the first number is the generation - the old 6xx, 7xx, 8xx, 9xx and the newest 10xx - while the rest indicates the power and memory tier: x60, x70, x80 and Titan. Keep in mind that newer cards also consume less energy and are more efficient.

So:
GTX 660 - 6xx, quite old (2012), and x60, quite slow
GTX 680 - old and fast (but using more watts)
GTX 980 - not that old (2014) and fast
GTX 1070 - newest (2016) and fast (faster than the 980)
GTX 1080 - newest (2016) and super fast (faster than the 1070)
GTX Titan (I guess it's like a "1090") - the fastest of the GTX range

Keep in mind that newer versions also have more VRAM.

The 1070, which I'm aiming for, has 8GB VRAM, which is quite OK for the stuff I do (4GB was enough when I was using a different GPU renderer) (https://www.behance.net/MaciekPtaszynski).


The brand: I would go for the most popular (ASUS, EVGA, Gigabyte), and I would pick a non-reference design, which has 2 or 3 fans on top. The reference design has one blower fan at the back, which in most cases means more noise. Then again, for a Mac Pro the reference design may not be a bad idea, since the warm air goes out the back rather than up. In the end it's a personal choice, but I like non-reference construction.
http://myfilehost.weebly.com/uploads/1/ ... 1_orig.png

So in theory a 500-euro card will render faster than 3000-4000 euros' worth of processors, and that's the deal.
That's the good news.

Now the bad news.
Apple likes AMD cards, which is a huge mistake in my opinion. It means you don't really have an option for the new Mac Pro (maybe some external Thunderbolt options, but I guess that's not a great solution).
For the old Mac Pro (I have one too) it's a bit better, but still not great. Officially you have two options: the EVGA GTX 680 Mac Edition and the Quadro K5000.

Both are old and not that attractive anymore (the GTX 680 has 2GB of VRAM). There is also a company in the US where you can buy good GTX cards modified for Macs. They have a good reputation and people are quite happy with them. They charge around $90 extra for a mid-range card. Keep in mind that you are buying a new card for more money, but with a bit more compatibility in OS X (you get all ports working and a boot screen with the Apple logo instead of black). You will still have to install the web drivers, which is not a big deal. Please read more on their webpage: http://www.macvidcards.com.
One more piece of bad news:
since the 10xx series is the best option for us now... it won't work on a Mac yet. Drivers don't exist, but they should be out soon, I hope (1-2 months): http://www.tonymacx86.com/threads/nvidi ... 70.192399/
So I would wait for them. Also, we are limited by the power supply, so an old Mac Pro can handle only one heavy card :(

BUT!

We don't know how the GPU will work in Maxwell yet! Maybe on OS X it will come later, or something like that.
Maybe the GPU will first be used only for preview mode, not for final rendering. Maybe you will be limited in some options (like SSS not working on the GPU yet). There are still too many open points to make any hardware changes.
If you want to stay with Macs like me, let's wait first for Maxwell v4. Then maybe the 10xx drivers will be released, and if we are lucky, everything will work well! :)

If not, and Maxwell GPU rocks, I'll go for Windows :(

Good luck ;)
By Max
#391912
I can't wait to see more about what NL has done on the GPU. Whether they fully translated the whole engine to the GPU, or whether it's CPU+GPU computing, is a bit confusing to me and needs to be clarified.

From the website: "Maxwell 4's major new feature - a GPU render engine! The new engine runs on nVidia graphic cards and uses all the power that GPUs provide to accelerate the render process. All the technology under the hood is identical to the classic CPU engine - which means your images are exactly the same, unbeatable Maxwell quality."

So it looks like it won't use the CPU anymore.
By chedda
#391914
I saw them demo the GPU in a webinar; as I remember, it's a toggle between engines - CPU or GPU, not combined like Thea.
By eric nixon
#391919
A few thoughts...

The GP100 Pascal chip (currently only available in the Tesla P100 card) is what you will want for Maxwell, because it has a 2:1 ratio of single-precision to double-precision performance.
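
(That ratio is easy to measure; here's a rough micro-benchmark sketch of my own, not a Maxwell test, timing a dependent FMA loop in float and then in double. On a GP100 the double pass should take roughly 2x the float pass; on GeForce Pascal cards, where DP runs at 1/32 rate, expect something closer to 32x.)

```cpp
// sp_vs_dp.cu -- rough SP vs DP throughput probe.
// Build: nvcc -O2 sp_vs_dp.cu -o sp_vs_dp
#include <cstdio>
#include <cuda_runtime.h>

template <typename T>
__global__ void fma_loop(T* out) {
    T a = (T)threadIdx.x, b = (T)1.000001, c = (T)0.5;
    for (int i = 0; i < 10000; ++i) a = a * b + c;  // dependent FMAs
    out[blockIdx.x * blockDim.x + threadIdx.x] = a;
}

template <typename T>
static float timeKernel(T* out) {
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    fma_loop<T><<<1024, 256>>>(out);   // warm-up launch
    cudaEventRecord(t0);
    fma_loop<T><<<1024, 256>>>(out);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);
    cudaEventDestroy(t0); cudaEventDestroy(t1);
    return ms;
}

int main() {
    float* f = nullptr; double* d = nullptr;
    cudaMalloc(&f, 1024 * 256 * sizeof(float));
    cudaMalloc(&d, 1024 * 256 * sizeof(double));
    float msF = timeKernel(f), msD = timeKernel(d);
    printf("float %.2f ms | double %.2f ms | DP:SP ~ %.1f : 1\n",
           msF, msD, msD / msF);
    cudaFree(f); cudaFree(d);
    return 0;
}
```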

A workstation with four of those cards (hopefully the later 32GB version) is too expensive for almost everyone.
It will take a long time before we see that chip in cheaper consumer gaming cards, and I'm not sure how much VR will benefit from that DP efficiency, so it's possible Nvidia will keep it for the Tesla cards only :(

So maybe in two years we can pick up second-hand cards, or maybe there will be cheaper cloud rendering options via GPU farms... meh.

So far I've only invested in a new case... a Lian Li with 11 PCI slots! I will need to use water-cooled cards to keep it quiet...

[image: the new Lian Li case]
By Aniki
#391920
OT: Eric, are you happy with your case? Which exact model is it?

Thanks.

I am waiting for the updated Titan based on the 1080 chip, hoping for at least 24GB of (single-card) VRAM... else I see all the GPU rendering solutions going down the drain...