All posts related to V2
#325557
Hybaj wrote: A single GTX 480 is almost like having ~105 GHz of Intel's i7 processing power.
No it's not. We can play the numbers game all day long if you want and skew the results in any direction, but in the end it's just marketing aimed at people who don't actually understand how it works. The GFlop is the new PMPO watt.
Hybaj wrote: 30 core intel cpu's with the power of i7 per core in the next few years? I don't really think so .. the architecture would have to be changed dramatically.
Right, and Nvidia has this magic insight on multicore architecture that Intel is completely missing, which is why they're able to put the power of 30 i7 chips into a $500 add-on card. Intel has no chance of catching up, it's all over, GPU wins.
Hybaj wrote: It's all about the way people talk about other software which is not really rational or fair.
I can agree on this, but for different reasons. The thing is, many people posting in this thread are incredibly excited by a new and unproven piece of tech which might just as well disappear in 5 years. They see some prototypes using this technology and then announce the end of CPU raytracers and the beginning of a new era. The point of posting quality comparisons is that it's not at all clear, at this moment in time, that you can achieve the same level of quality with a GPU renderer while maintaining the speed boost (despite those ~105 GHz of i7 power). This has been mentioned a few times in the discussion already, but it has been conveniently ignored or reinterpreted as hypocrisy or what have you.

Any programmer with a raytracing book can whip up a GPU renderer in a relatively short time. However, getting from there to a full-featured photorealistic engine is a long way (and if you look at the Octane PR, the effort can drive people pretty close to the edge ;) ). Given that the architectures in this area tend to change rapidly, it may be unwise to dedicate resources to them so early. After all, I see many people treating the current implementations as previz solutions and then going to a stable, mature engine for the final render. If you're asking for this, then you're asking Next Limit to develop a new render engine (and then, I assume, release it at no additional cost, as part of Maxwell). I mean, seriously now.
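
To make "whip up in a relatively short time" concrete, here is a minimal sketch of the kind of book-exercise path tracer I mean - diffuse spheres under a constant sky. Every name, constant and scene choice below is my own illustration, not anyone's actual engine; note how little of a production renderer it covers (no camera model, no real materials, no textures, no geometry beyond spheres, no sampling controls).

Code: Select all
# Toy unidirectional path tracer: diffuse spheres lit by a constant sky.
# Purely illustrative - scene, constants and structure are hypothetical.
import math, random

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def add(a, b): return [a[i] + b[i] for i in range(3)]
def mul(a, s): return [x * s for x in a]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def norm(a):
    l = math.sqrt(dot(a, a))
    return [x / l for x in a]

SPHERES = [  # (center, radius, diffuse albedo)
    ([0.0, -100.5, -1.0], 100.0, [0.8, 0.8, 0.8]),  # huge sphere acts as "ground plane"
    ([0.0,    0.0, -1.0],   0.5, [0.7, 0.3, 0.3]),
]
SKY = [1.0, 1.0, 1.0]  # constant environment radiance

def hit(orig, d):
    """Closest ray/sphere hit: (t, point, normal, albedo) or None. d must be unit length."""
    best = None
    for c, r, alb in SPHERES:
        oc = sub(orig, c)
        b = dot(oc, d)
        disc = b * b - (dot(oc, oc) - r * r)
        if disc <= 0.0:
            continue
        t = -b - math.sqrt(disc)
        if t > 1e-4 and (best is None or t < best[0]):
            p = add(orig, mul(d, t))
            best = (t, p, norm(sub(p, c)), alb)
    return best

def radiance(orig, d, depth=0):
    """Monte Carlo estimate of radiance along one ray (Lambertian surfaces only)."""
    if depth >= 5:
        return [0.0, 0.0, 0.0]        # hard path-length cap instead of Russian roulette
    h = hit(orig, d)
    if h is None:
        return SKY                    # ray escaped: collect sky light
    _, p, n, alb = h
    while True:                       # rejection-sample a random unit vector
        v = [2.0 * random.random() - 1.0 for _ in range(3)]
        if 0.0 < dot(v, v) < 1.0:
            break
    bounce = norm(add(n, norm(v)))    # ~cosine-weighted direction about the normal
    li = radiance(p, bounce, depth + 1)
    return [alb[i] * li[i] for i in range(3)]  # albedo * incoming (pdf cancels cos/pi)

Averaging radiance() over a few hundred camera rays per pixel produces the image; everything a production engine adds (materials, displacement, instancing, a scene format, robustness) sits on top of this loop.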

If the day comes when there is a production-ready GPU renderer that clearly outruns CPU-only engines and Next Limit doesn't have an answer to it, then yes, they've made a wrong move. In the meantime, this discussion is just watercooler talk. Also, I really don't get why people assume they have a right to be notified of every research avenue or strategic direction a company is pursuing just because they bought a product. No company works like that. NL may just as well be 30 seconds away from a major breakthrough that will bring high-quality GPU raytracing goodness to the masses, but they don't have to tell you (and the competition) about it. In a more realistic scenario, they're probably evaluating all this and have good reasons for not joining, or talking about, the make-an-incomplete-GPU-raytracer party. I'm pretty sure they're not doing it just to spite you.
#325561
Half Life wrote: Last year 8 cores was a big deal -- it doubled in roughly a year... and a dual socket motherboard would put you at 32 i7 cores on a single machine. So yes, it will happen whether you think so or not.
8 cores is so 2009.

Image

:lol:
#325574
Mihnea Balta wrote: No it's not. We can play the numbers game all day long if you want and skew the results in any direction, but in the end it's just marketing aimed at people who don't actually understand how it works. The GFlop is the new PMPO watt.
In terms of rendering power with a path tracer, yes it is.. look at the web for comparisons with the GPU renderers, do your own calculations, and you'll see that you end up in the 95-120 GHz region.
Mihnea Balta wrote: Right, and Nvidia has this magic insight on multicore architecture that Intel is completely missing, which is why they're able to put the power of 30 i7 chips into a $500 add-on card.
To put it bluntly: no, they have no magic insight, but the difference is that their GPUs are highly specialized computing units, not broad-purpose CPUs. And that's a big, big difference in computing ability on certain tasks. But why am I explaining this to you.. you must know these facts already.

Mihnea Balta wrote: I can agree on this, but for different reasons. The thing is, many people posting in this thread are incredibly excited by a new and unproven piece of tech which might just as well disappear in 5 years. They see some prototypes using this technology and then announce the end of CPU raytracers and the beginning of a new era. The point of posting quality comparisons is that it's not at all clear, at this moment in time, that you can achieve the same level of quality with a GPU renderer while maintaining the speed boost (despite those ~105 GHz of i7 power). This has been mentioned a few times in the discussion already, but it has been conveniently ignored or reinterpreted as hypocrisy or what have you.

Any programmer with a raytracing book can whip up a GPU renderer in a relatively short time. However, getting from there to a full-featured photorealistic engine is a long way (and if you look at the Octane PR, the effort can drive people pretty close to the edge ;) ). Given that the architectures in this area tend to change rapidly, it may be unwise to dedicate resources to them so early. After all, I see many people treating the current implementations as previz solutions and then going to a stable, mature engine for the final render. If you're asking for this, then you're asking Next Limit to develop a new render engine (and then, I assume, release it at no additional cost, as part of Maxwell). I mean, seriously now.
The main problem was always speed, and suddenly there are renderers out there that manage to bring massive speed improvements to the game. If that is not a reason to be excited, then I don't know what is :P I'm personally not really jumping up and down; I'm actually quite amused :P I've got one GTX 480 and when I play with this stuff it's really fun. Hmm, massive architecture changes happen... well, umm, when you compare a GPU to a CPU - how different they are, what they specialize in - and add the market factors, technology costs and the desire to make money on every little technology step... GPUs are just years ahead of CPUs when it comes to who can be faster at certain calculations.
Mihnea Balta wrote: If the day comes when there is a production-ready GPU renderer that clearly outruns CPU-only engines and Next Limit doesn't have an answer to it, then yes, they've made a wrong move. In the meantime, this discussion is just watercooler talk. Also, I really don't get why people assume they have a right to be notified of every research avenue or strategic direction a company is pursuing just because they bought a product. No company works like that. NL may just as well be 30 seconds away from a major breakthrough that will bring high-quality GPU raytracing goodness to the masses, but they don't have to tell you (and the competition) about it. In a more realistic scenario, they're probably evaluating all this and have good reasons for not joining, or talking about, the make-an-incomplete-GPU-raytracer party. I'm pretty sure they're not doing it just to spite you.
What is actually a production-ready renderer? Maxwell? Well, compare it to Mental Ray, V-Ray, RenderMan or others. Next to them, Maxwell still looks like a slow one-trick pony - but a very nice and photorealistic one-trick pony, actually. The sad truth is that path tracers are not really suitable as full production-ready renderers.. just way too many complications... those damn photons :) but hopefully this curse will be lifted by massive future speed increases.
And high quality.. umm, I think Arion looks pretty good.. sure, the tone mapping and such look like Fryrender and it has fewer features, but damn.. it still has more features than path tracers had in the good old days. And what about iray? It even somehow does displacement - probably just by exporting it. Also, it looks like any other high-quality path tracer out there :D Give it time and the number of features will expand :P
#325589
Hybaj wrote: In terms of rendering power with a path tracer, yes it is.. look at the web for comparisons with the GPU renderers, do your own calculations, and you'll see that you end up in the 95-120 GHz region.
I don't need to look at the web; I've been doing realtime graphics programming since before shader model 1.1 GPUs, and I can tell you that multiplying GHz by the number of cores isn't going to give you the full story. Neither are attempts to estimate the performance of something by pulling a floating-point operation count out of thin air and then dividing the peak GFLOPS of a processor by that number.

Maybe you can compare things like that when you're building toy raytracers that render spheres on planes with IBL and fully specular materials, but things start getting trickier when you want to go further.
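
For the record, this is the entire derivation behind numbers like "~105 GHz of i7": take a speedup measured (or claimed) on one scene and multiply one CPU's cores x clock by it. The inputs below are assumptions of mine for illustration, and that's exactly the problem - change the scene and the "GHz" figure changes with it.

Code: Select all
# The "GHz-equivalent" game: scale a CPU's cores x clock by a render-speed
# ratio measured on some scene. Every input here is an assumption.
i7_cores = 4
i7_clock_ghz = 2.66                 # e.g. an i7 920
claimed_speedup = 10.0              # "the GPU renderer was 10x faster" on scene X

print(i7_cores * i7_clock_ghz * claimed_speedup)   # ~106 "GHz of i7"

# Pick different scenes (different speedups) and the "hardware" changes:
for speedup in (2.0, 10.0, 30.0):
    print(speedup, "->", i7_cores * i7_clock_ghz * speedup, '"GHz"')
# The metric measures the benchmark you chose, not the processor.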
Hybaj wrote: The main problem was always the speed. And suddenly there are renderers out there which manage to bring massive speed improvements to the game.
No, they don't: they don't solve the same problem. I'm not convinced they will still show "massive improvements" after all the features are implemented. Octane posts some impressive speed multipliers (100x, was it?), but they use their quick-and-dirty kernel in those tests. That's not really apples to apples, and one has to wonder why they do that. On the other hand, when you try a more realistic scene in it, not a car on a plane, you may find that the speed boost is gone, while the quality is obviously worse.

I see the point of the quality comparisons isn't getting across, so let me reverse it: I can write a simplified CPU-only path tracer that shows massive improvements over Maxwell, or anything else, but I would only be proving that if you don't render the same thing, you can be faster. Remember this image made with Octane? Of course you will get a huge speed boost when you produce that kind of image, compared to a real renderer. First make it render the front grille in the reflected image, then come tell us how much faster it is. I could probably render that better with a traditional rasterizer and SM 3.0 shaders, at 30 fps; should I go announce my brand new render engine that provides 100x speedups over Octane, and 10000x over Maxwell?

Also, you have V-Ray, which is getting 30x speedups over Maxwell without a large loss in visual quality (or any loss, in some cases). What's the use of current GPU raytracers in this context, when they provide about the same speed but clearly inferior quality?

This multicore stuff will evolve over the following years, but it's not clear how. Current GPUs are still tricky to program and there are tons of performance pitfalls. It's a waste to invest time massaging your code to fit a current-gen GPU, only to see many of the limitations lifted when a new generation arrives. You'll have no advantage over somebody who's just starting development on the new generation, so it's pretty unwise from a business standpoint. It's fun to experiment with this stuff, but it's too early to commit to it (with the obvious exception of mental ray, given who owns them).

Just think what happens if NL writes their stuff for CUDA (effectively making a second Maxwell), and Intel finally releases Larrabee 6 months later. They'll have to rewrite the entire engine, and it's not just the change in programming language, it's also a different architecture; almost all the insight they've gained writing for Nvidia GPUs will be worthless. They have to write a new engine, and then they have three things - Maxwell CPU, Maxwell CUDA, Maxwell LRB - which don't share any code. Any fix, any new feature has to be written three times, in three different ways. And then ATI comes up with something, and the fun starts again. (And people will expect the entire bundle to cost $99, obviously.) Do you see what I mean by too volatile at the moment?
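
To illustrate that maintenance problem (hypothetical names, and assuming the backends genuinely can't share kernel code): the dispatch layer is trivial to share, but the actual renderer has to exist once per architecture.

Code: Select all
# Sketch of the N-backends problem. Only the thin dispatcher is shared;
# each trace() stands for a full, separately-tuned renderer implementation.
class CpuBackend:
    def trace(self, scene):
        ...  # the existing CPU path: years of accumulated tuning

class CudaBackend:
    def trace(self, scene):
        ...  # rewritten for CUDA: new memory model, new performance pitfalls

class LrbBackend:
    def trace(self, scene):
        ...  # rewritten again for a Larrabee-style many-core part

def render(scene, backend):
    # The only shared line. Every bug fix or new feature ("add SSS",
    # "fix this firefly") still lands in all three trace() bodies.
    return backend.trace(scene)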
#325603
So you don't need to look at anything to know things.. I see.... Theorizing is nice, but...

OK, write a CPU-only path tracer with as many features and as much quality and usability as the Arion renderer that shows massive speed increases over Maxwell. And make it just as usable with interior scenes, IBL lighting, caustics and all the basic stuff, because Arion is already usable in more scenarios than path tracers were in the good old days. You know what? Write a CPU path tracer that is as usable as Octane. I'll pay you 1000 bucks for it.

And I seriously don't know what your point is about the Octane picture, but if you did your homework, had a skilled eye (like I have.. OK, OK, I'm boasting now.. no, really, I'm pretty good at this) and knew the features of that software, you would know that the picture was rendered with settings that actually make it a biased rendering, which is why it is completely missing reflections within reflections. If it were switched to the unbiased mode you could see them quite easily. Cherry-picking is really stupid when you cherry-pick the wrong stuff: you give people the impression that you see things in pictures that aren't really there. This reminds me of the few boneheads who, in the beginning, accused Fryrender of not being able to do reflective caustics (as one of the tricks to make renderings faster), when the picture in question was missing them simply because caustics had been turned off by a little check-box in the renderer's interface. Anyway, the developer is working on an MLT implementation.

But you're spot on with the different-architectures stuff, which is why CPU+GPU renderers like Arion would not sustain a fatal wound. Anyway, logic and the market tell us it would be a bad step for Nvidia to change things so completely that people would have to rewrite all their code from scratch, so it might not happen. But yeah.. who knows.. we'll see where this ship is heading :)
#325625
I would like you to check the initial question.... what we expect from Maxwell and NL, not Octane...
I believe the answer has been given; it is almost like: "we just wait and see, it is too early to say.... we can't invest many resources in such new technology, especially with all the limitations (GPU RAM etc.)... Also, the new multi-CPU systems almost do the job... We keep an eye on this new GPU thing, but we don't think we will immediately make a new GPU product."

Please let other companies and products do their job... If you don't like it, don't buy it... The same goes for Maxwell... well, for new users, not us...

I am quite sure that NL will give us something great, as always... The question, as always, is when... Soon???
#325640
"Debunking the 100X GPU vs. CPU myth: an evaluation of throughput computing on CPU and GPU"

a comparison made by Intel: Core i7 960 vs. GeForce GTX 280

http://portal.acm.org/citation.cfm?id=1 ... 1#Fulltext

direct PDF download:
http://portal.acm.org/ft_gateway.cfm?id ... N=50783980


and Nvidia's answer... :mrgreen:

http://blogs.nvidia.com/ntersect/2010/0 ... intel.html
#325664
Thanks for posting the Intel report and the Nvidia response. Intel can't claim that a 2.5x speed advantage (on average) is not significant. My calculator says that a 10-hour job on Intel's i7 960 will take 4 hours, at most, on the Nvidia GTX 280. That's huge! And that is the average, not specific to rendering. Given that I am only interested in graphics processing, the rendering advantage is probably 14x or better.

I'm rendering some jewelry right now: 8 hours a render on a Q6600 running at 2.88 GHz, and I have 18 renders to do. I am seriously thinking of buying a Tesla C2050 card with 3 GB of RAM and running Arion as my renderer. I have a lot of time and money invested in Maxwell, but I cannot ignore getting rendering jobs done in 1/14th the time. I can tell you that I'm not buying another upgrade from NL unless it's for a GPU version of Maxwell.
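
For what it's worth, here is the arithmetic behind those estimates, taking the paper's 2.5x average and my hoped-for 14x rendering figure as given - both are assumptions about my workload, not measurements:

Code: Select all
# Back-of-the-envelope check of the numbers above.
avg_speedup = 2.5                       # Intel's measured average, GTX 280 vs. i7 960
print(10.0 / avg_speedup)               # a 10-hour job -> 4.0 hours

hoped_speedup = 14.0                    # optimistic, rendering-specific assumption
renders, hours_each = 18, 8.0
print(renders * hours_each)             # 144 CPU-hours for the jewelry job
print(renders * hours_each / hoped_speedup)   # ~10.3 hours if the 14x holds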
#325672
I think this thread became too.... loaded or something. All we Maxwell users are saying is: make it faster, without us having to buy private farms to get a good SL. Clients today know that people using other software can deliver overnight, or in a couple of hours if needed, and I always have to say, "wait a few days to get the final image...". CPU, GPU, XYZ, whatever - just faster. Not 10% faster; 10x faster... My studio cannot afford to buy 20 PCs with 20 nodes, turning every small interior into a farm render just to get it after a day or two. And I think others feel the same.
#325682
Buying is one problem... updating is another, and a bigger one...
So, yes, we believe that Maxwell is a great renderer, but there is competition out there... and a limit to what a client can see when it comes to quality... So, quality is for artists; quality combined with speed is for professionals...
#325687
I think there is a ton of room for improvement in rendering technology in terms of realism, quality and ease of use... all areas I hope Maxwell will continue to dominate in the future. I don't want development in any of those areas to suffer or to regress for some phantom speed increase.

Maybe you missed this fact, but right now "GPU" isn't really GPU but rather Nvidia GPU, because of CUDA-only support in most cases -- based on past performance, I would not trust Nvidia to treat its partners or clients fairly once they have the whole rendering market locked into their tech.

To put all your eggs in Nvidia's basket is to set yourself up to be seriously gouged down the line -- if for some reason you don't already think they are gouging you... and I do believe Nvidia is behind the marketing hype. They have a lot of experience from the gaming market in whipping the user base into a froth over having the latest gear... to which I can only say "a fool and their money are soon parted".

Until OpenCL is a true option and gives developers and customers the freedom to use any GPU they choose, I will pass, and I hope Next Limit does as well... And even then, I would still only want a hybrid CPU+GPU solution.

Best,
Jason.
#325693
The beauty of Maxwell Render:
Image

Maxwell is the clear leader when it comes to quality. Those of you considering moving to a new GPU render engine led by marketing geniuses need to remember that in the highly competitive world of 3D graphics and visualization, quality is what sets you apart from the multitudes of so-called "visualization artists" churning out mediocrity on a daily basis. When there is a GPU render engine that can give me (and my clients) Maxwell quality, then I'll consider switching. Truthfully, I think that when GPU rendering technology is ready for a full-featured, Maxwell-quality engine, it will be Next Limit that creates it.