All posts related to V2
By Richard
#328896
God, I remember the first PC I worked on, back in the days of AutoCAD v2, which ran on DOS (pre-Windows). The office was so excited when we upgraded from a 20 MB hard drive to 40 MB, which was a rather big investment then! Mind you, everything took so long that we ditched the CAD system and went back to drawing by hand! A simple refresh had to be left to run over lunch!
By max3d
#329228
David Solito wrote:My first computer :roll:

[image]
Me too, but I ordered 'online' for the first time to get the 16 KB extension! A wobbly block that made sure your enormously complex programs, the ones that no longer fitted in 4 KB, were always lost before you could save them to your tape recorder. Usually around dawn, after a long all-nighter.

At least you learned to program very well, but I was glad when I upgraded to somewhat more reliable hardware. I still can't believe I managed to fill that 20 KB with that shitty keyboard...

Oops, I didn't have this one at all. I had the ZX80, the predecessor. You spoiled brats already had all those luxuries...
By max3d
#329229
To get back on topic: it's great that the Maxwell interactive preview is as fast and usable as it is. However, I still don't know why the GPU route wasn't taken. I do realize that the development time for a full CUDA implementation would be huge, so this could well be the only feasible solution for this year. Nothing wrong with that, but does it mean there is a fundamental reason for Next Limit to ignore multi-core programming?

The biggest problem I see is that you need to develop an algorithm to use the available memory per GPU smartly. Consumer cards are restricted, and probably will stay restricted, to around 2 GB, so you need a way to efficiently manage the GPU-CPU traffic. Octane does everything on the GPU, so it has this problem. Indigo seems to have found a way around it with its hybrid solution, but the efficiency of the GPUs drops then. It's still a huge speed improvement though, and the easy expansion to more GPUs and the fast development in that sector make it attractive.
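Roughly, what I have in mind (just a back-of-the-envelope CUDA sketch, not anything from Next Limit, Octane or Indigo -- the buffer names and sizes are made up) is to ask the card how much memory is actually free and then stream the scene data through it in chunks:

Code:
#include <algorithm>
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

int main() {
    // Ask the driver how much memory the card has free right now.
    size_t freeBytes = 0, totalBytes = 0;
    cudaMemGetInfo(&freeBytes, &totalBytes);
    printf("GPU memory: %zu MB free of %zu MB\n",
           freeBytes >> 20, totalBytes >> 20);

    // Pretend the scene is larger than we want to keep resident on the card.
    std::vector<float> scene(64 << 20);            // ~256 MB of host-side data
    size_t chunkBytes = freeBytes / 4;             // stay well under the limit
    size_t chunkElems = chunkBytes / sizeof(float);

    float* dChunk = nullptr;
    cudaMalloc(&dChunk, chunkElems * sizeof(float));

    // Stream the scene through the card chunk by chunk -- this copy traffic
    // is exactly the overhead a hybrid GPU/CPU renderer has to hide.
    for (size_t offset = 0; offset < scene.size(); offset += chunkElems) {
        size_t n = std::min(chunkElems, scene.size() - offset);
        cudaMemcpy(dChunk, scene.data() + offset, n * sizeof(float),
                   cudaMemcpyHostToDevice);
        // ...launch a kernel that works on this chunk here...
    }

    cudaFree(dChunk);
    return 0;
}

That copy loop is where the efficiency drops; the better you hide it behind computation, the closer you get to the all-on-GPU case.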

Rendering is one of the areas where many cores can be extremely efficient; a parallel approach is almost always feasible. Is there any particular reason Maxwell couldn't go that way? I don't expect a PM with the source code, but a general remark would be appreciated.
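To show what I mean by "almost always feasible", here is a toy CUDA kernel (a made-up gradient shader, obviously nothing like Maxwell's light simulation) where every pixel is an independent job, one thread each, with no coordination at all:

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Each thread shades exactly one pixel; no locks, no shared state.
__global__ void shade(float* image, int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;
    image[y * width + x] = (float)(x + y) / (float)(width + height);
}

int main() {
    const int w = 1920, h = 1080;
    float* dImage = nullptr;
    cudaMalloc(&dImage, w * h * sizeof(float));

    // One thread per pixel, launched in 16x16 blocks.
    dim3 block(16, 16);
    dim3 grid((w + block.x - 1) / block.x, (h + block.y - 1) / block.y);
    shade<<<grid, block>>>(dImage, w, h);
    cudaDeviceSynchronize();

    printf("shaded %d pixels in parallel\n", w * h);
    cudaFree(dImage);
    return 0;
}

Swap the gradient for a path-traced sample per pixel and you have the basic shape of every GPU renderer out there.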

Max
By Voidmonster
#329233
The whole GPU-versus-CPU smackdown continues to be interesting. From the high-end perspective -- and hell, I'm using Maxwell, so I have to consider myself to have at least one foot in that camp -- he's on the money. If I have lots of cash to throw at the problem of speed, CPUs are going to get me where I want to be.

But as a user closer to the hobbyist end of the spectrum, I don't have the resources to buy a 12-core server from a premium vendor like Boxx. I don't even have the resources to build an equivalent computer myself. What I'm seeing is that for the cost of a graphics card I would want to buy anyway for games, I *also* get rendering performance in the same general ballpark as a vastly more expensive workstation, if I'm willing to live with more limited rendering capabilities -- some of which may be solved eventually, others of which may not be.

There's no conflict for me. Economically, I want *both* of these things. That is, in fact, what I have. Contrary to the way Peebler is describing it, you can get very significant performance from video cards that cost less than the motherboards for those Intel CPUs. I would have been very curious to see the results of a dollar-for-dollar comparison using 4-6 commodity gaming cards in an extender box.

And thank you very much, Next Limit, for that World Cup promotion! It was the incentive I needed to get 2.0, and I'm loving it.

But I'm also loving Octane. Not because it approaches the finish line faster (though with my CPU it most definitely does) but because its interactivity makes getting there vastly more pleasant.

Which is why I'm also super excited by the new preview in Maxwell.
By Half Life
#329234
I suppose that depends greatly on your definition of "the finish line" -- to me, 3D still has a ton of work to do before it lives up to its potential... I got into 3D (I've been a 2D artist for many years) because I saw the writing on the wall. But that's still just potential; there's so much room for improvement in realism and accuracy... until we break through those barriers, the speed barrier is meaningless.

The current GPU fad could be summed up as "how to get crap fast" -- I'm not interested in my work being the visual equivalent of fast food... I'd rather take my time and have a gourmet feast.

I realize that we all have jobs and deadlines to deal with -- and that our clients often don't have the sophisticated tastes we might... but if history has taught me anything, it is that sooner or later the sophistication of the audience will outpace you if you don't keep striving for that next level. Then you find yourself irrelevant and cast aside... therefore it's better, as a career strategy, to strive more for quality than speed.

You get what you focus on... make sure it's the right thing.

Best,
Jason.
By Half Life
#329249
You can build it with one of these:

Asus Dual LGA 1366 Motherboard
http://www.newegg.com/Product/Product.a ... 6813131378

and two of these:
Intel Xeon X5650
http://www.newegg.com/Product/Product.a ... 6819117231

And voilà, you have 12 cores (24 threads) in one machine for $2300.

To put that in context -- here's the grossly overpriced Nvidia Quadro FX 5800
http://www.newegg.com/Product/Product.a ... 6814133253

For $3000, which doesn't even include the system it needs to run in.

For the almost $700 price difference you can load up on RAM and a reasonable FirePro card, if you don't have one to re-use from your current system.

FirePro Card:
http://www.newegg.com/Product/Product.a ... 6814195096

12 gigs of ECC memory:
http://www.newegg.com/Product/Product.a ... 6820226053

(you can get a better deal on ECC memory from Mushkin's website, for even more RAM)

I don't know about you, but that math adds up to me.

Best,
Jason.