By Mihnea Balta - Thu Jun 24, 2010 1:08 pm
Any programmer with a raytracing book can whip up a basic GPU renderer in fairly short order. However, getting from there to a full-featured photorealistic engine is a long road (and if you look at the Octane PR, the effort can drive people pretty close to the edge ;) ). Given how rapidly the architectures in this area change, it may be unwise to dedicate resources to it so early. After all, I see many people treating the current implementations as previz solutions and then going to a stable, mature engine for the final render. If you're asking for this, then you're asking Next Limit to develop a new render engine (and, I assume, release it at no additional cost, as part of Maxwell). I mean, seriously now.
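To make that first sentence concrete, here is the kind of thing I mean: a toy CUDA kernel of my own (a sketch for illustration only, not anyone's actual renderer code) that shoots one primary ray per pixel at a single hard-coded sphere and writes a grayscale image. Getting this far is an afternoon with the book open; acceleration structures for real scenes, materials, light transport, color handling and robustness are the long road on top of it.

// minimal_gpu_rays.cu -- toy illustration only, not anybody's shipping renderer.
// One primary ray per pixel, one hard-coded sphere, facing-ratio shading,
// output as a binary PGM. Everything a production engine needs is what's missing.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void primaryRays(unsigned char* image, int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    // Pinhole camera at the origin looking down -Z, pixel mapped to [-1, 1].
    float dx = 2.0f * (x + 0.5f) / width  - 1.0f;
    float dy = 2.0f * (y + 0.5f) / height - 1.0f;
    float dz = -1.0f;
    float inv = rsqrtf(dx * dx + dy * dy + dz * dz);
    dx *= inv; dy *= inv; dz *= inv;

    // Sphere at (0, 0, -3) with radius 1: solve |o + t*d - c|^2 = r^2,
    // where the ray origin o is (0, 0, 0), so (o - c) = (0, 0, 3).
    float b = 3.0f * dz;                 // dot(o - c, d)
    float c = 9.0f - 1.0f;               // |o - c|^2 - r^2
    float disc = b * b - c;

    unsigned char shade = 0;
    if (disc >= 0.0f) {
        float t = -b - sqrtf(disc);      // nearest intersection
        if (t > 0.0f) {
            // Z component of the unit surface normal doubles as a facing ratio.
            float nz = t * dz + 3.0f;
            shade = (unsigned char)(255.0f * fminf(fmaxf(nz, 0.0f), 1.0f));
        }
    }
    image[y * width + x] = shade;
}

int main()
{
    const int W = 512, H = 512;
    static unsigned char img[W * H];

    unsigned char* dImg = 0;
    cudaMalloc((void**)&dImg, W * H);

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    primaryRays<<<grid, block>>>(dImg, W, H);

    cudaMemcpy(img, dImg, W * H, cudaMemcpyDeviceToHost);
    cudaFree(dImg);

    FILE* f = fopen("out.pgm", "wb");
    fprintf(f, "P5\n%d %d\n255\n", W, H);
    fwrite(img, 1, W * H, f);
    fclose(f);
    return 0;
}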
If the day comes when there is a production-ready GPU renderer that clearly outruns CPU-only engines and Next Limit doesn't have an answer to it, then yes, they've made a wrong move. In the meantime, this discussion is just water-cooler talk. Also, I really don't get why people assume they have a right to be notified of every research avenue or strategic direction a company is pursuing just because they bought a product. No company works like that. NL may just as well be 30 seconds away from a major breakthrough that will bring high-quality GPU raytracing goodness to the masses, but they don't have to tell you (and the competition) about it. In a more realistic scenario, they're probably evaluating all of this and have good reasons for not joining, or talking about, the make-an-incomplete-GPU-raytracer party. I'm pretty sure they're not doing it just to spite you.
Hybaj wrote: A single GTX 480 is almost like having ~105 ghz of Intel's i7 processing power.

No it's not. We can play the numbers game all day long if you want and skew the results in any direction, but in the end it's just marketing aimed at people who don't actually understand how it works. The GFlop is the new PMPO watt.
Hybaj wrote: 30 core intel cpu's with the power of i7 per core in the next few years? I don't really think so .. the architecture would have to be changed dramatically.

Right, and Nvidia has this magic insight into multicore architecture that Intel is completely missing, which is why they're able to put the power of 30 i7 chips into a $500 add-on card. Intel has no chance of catching up, it's all over, GPU wins.
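Since we're playing the numbers game anyway, here is my own back-of-envelope version of it, using nothing but published peak figures (so take it for exactly what it's worth):

GTX 480 peak single precision: 480 cores x ~1.4 GHz x 2 FLOPs/clock ≈ 1.35 TFLOPS
Quad-core i7 at 3.33 GHz peak single precision: 4 cores x 8 FLOPs/clock x 3.33 GHz ≈ 107 GFLOPS

From the same two spec sheets you can announce "13 i7s", or compare against a single core and announce "over 50 i7 cores", or translate it into GHz-equivalents for an even bigger headline. None of those numbers says anything about how incoherent secondary rays, big production scenes or complex materials actually behave on either architecture, which is the whole point.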
Hybaj wrote: It's all about the way people talk about other software which is not really rational or fair.

I can agree with this, but for different reasons. The thing is, many people posting in this thread are incredibly excited by a new and unproven piece of tech which might just as well disappear in 5 years. They see some prototypes using this technology and then announce the end of CPU raytracers and the beginning of a new era. The point of posting quality comparisons is that it's not at all clear, at this moment, that you can achieve the same level of quality with a GPU renderer while maintaining the speed boost (despite those 105 GHz of i7 power). This has been mentioned a few times in the discussion already, but it has been conveniently ignored or reinterpreted as hypocrisy or what have you.