All posts related to V2
By l1407
#372553
numerobis wrote:
photomg1 wrote:I'd love that to be the case, numerobis, but l1407 is most probably correct; those 15-core CPUs are due soon. Also, I almost feel sure Next Limit would be shouting about it if it were a Xeon Phi.

We can dream though. For the record, I would have jumped on the Phi even if it was just for Linux. That's a crazy increase in speed from where I am today, although I'm 99 percent sure the benchmark speed Juan gave is from a configuration that l1407 mentioned. Not sure where he is getting the prices from.
Yes, I thought that too, but this would not even be worth mentioning... you could always build an 8-CPU E7 system for a crazy amount of money and see a logical speed increase compared to a single- or dual-socket system. But for this amount of money you could also build yourself a 50-CPU farm with 300 cores at 4-4.5 GHz and a much higher benchmark. ;)
Managing a 50-CPU farm is very difficult with the Maxwell Manager and Maxwell Monitor, hubs, etc.!
By kami
#372558
One thing I was always hoping for in terms of speed optimization would be really good multiple importance sampling (check 1.2.2 on http://www.fxguide.com/featured/the-state-of-rendering), meaning that the engine would detect which parts of the image need more time to clear up and focus on those parts. That would really save a lot of render time and make, for example, the use of SSS or even displacement much more practical.
I sometimes launch renders with carpets twice - once the whole scene without the carpet and once just the carpet.
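To make the idea a bit more concrete, here is a toy sketch in Python (nothing to do with Maxwell's internals; all names and numbers are invented): estimate the remaining noise per image tile and only keep sampling the tiles that are still above a target, instead of pushing the whole frame further.

import numpy as np

def estimate_error(per_sample_noise, n_samples):
    # standard-error style estimate: noise shrinks with the square root of samples
    return per_sample_noise / np.sqrt(n_samples)

# tiles with different per-sample noise (e.g. sunlit wall vs. dark corner)
tiles = {"sunlit wall": 0.2, "kitchen corner": 2.0, "ceiling": 0.5}
samples = {name: 256 for name in tiles}   # every tile starts at the same level

target = 0.02
while True:
    # tiles whose estimated error is still above the target
    noisy = [n for n in tiles if estimate_error(tiles[n], samples[n]) > target]
    if not noisy:
        break
    for name in noisy:                    # only these tiles get more samples
        samples[name] *= 2

print(samples)  # the dark corner ends up with far more samples than the sunlit wall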

Even though I am looking forward to v3 very much, I really understand what numerobis said. A lot of those new features I will also never use, or won't use very often, while there are still a lot of everyday situations that don't work as expected:
- weird noises or spots that take a lot of time to clean up
- having to use workarounds for certain situations
- bump creates faceting in extreme lighting situations
- very buggy network render
- hard to use displacement

I suppose/hope some of those issues are also addressed in v3, but are just not advertised as features :)
By seghier
#372560
Really excellent version!
I hope the Maxwell team adds more options:
-add icons to add objects ; lights ...
-add icons in viewport to hide ; unhide ; group ; lock objects
-add sliders in the viewports to control the camera
-matte shadow and reflection
-transparent background
-drag and drop materials directly to fire viewports
-ambient occlusion channel
-avoid intersection of objects when using object cloner
-particles support emitter material
-horizon haze
-procedural scratch
-----
Sometimes when I merge objects, the result has faces with different normals and it's impossible to flip them; can the Maxwell team solve that?
By juan
#372562
kami wrote:One thing I was always hoping for in terms of speed optimization would be really good multiple importance sampling (check 1.2.2 on http://www.fxguide.com/featured/the-state-of-rendering).
I do not understand this hype with multiple importance sampling, all of a sudden. MIS is a technique developed almost 20 years ago, not something that can be called "state of the art" at all. Any serious renderer (I do not mean any commercial renderer, but any raytracer with more than 200 lines of code) should use it everywhere. Of course, if a renderer does not solve caustics at all, or uses biased approaches like limiting light/material samples, etc., it could have fewer issues with noise derived from high-energy paths, but that does not mean some renderers use MIS better than others; in general terms it is a very well-known area and most of the know-how about this technique is public.
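For anyone who hasn't met the term: MIS just weights samples drawn from several sampling strategies so that each strategy is trusted where its pdf is high. A textbook sketch of the balance heuristic in Python (purely illustrative, not how Maxwell or any particular renderer implements it):

import math, random

def balance_weight(pdf_this, pdf_other):
    # balance heuristic: trust a strategy in proportion to its pdf at the sample
    return pdf_this / (pdf_this + pdf_other)

def mis_estimate(f, sample_a, pdf_a, sample_b, pdf_b, n=50000):
    # combine two sampling strategies to estimate the integral of f over [0, 1]
    total = 0.0
    for _ in range(n):
        xa = sample_a()
        total += balance_weight(pdf_a(xa), pdf_b(xa)) * f(xa) / pdf_a(xa)
        xb = sample_b()
        total += balance_weight(pdf_b(xb), pdf_a(xb)) * f(xb) / pdf_b(xb)
    return total / n

# integrand with a sharp peak (think of a small, bright light source)
f = lambda x: math.exp(-200.0 * (x - 0.5) ** 2)

# strategy A: plain uniform sampling of [0, 1]
uniform = (lambda: random.random(), lambda x: 1.0)
# strategy B: concentrate samples on [0.4, 0.6], right around the peak
peaked = (lambda: 0.4 + 0.2 * random.random(),
          lambda x: 5.0 if 0.4 <= x <= 0.6 else 0.0)

print(mis_estimate(f, *uniform, *peaked))  # ~ sqrt(pi/200), about 0.125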

I do not mean the article is incorrect; Mike Seymour did an outstanding job there. What I mean is that after that article some people came to us saying "hey, have you tried that MIS thing? That will speed up Maxwell!", which is like saying that x286 architectures will help to clean up subsurface scattering.

Some colleagues and I shared our thoughts about light transport in a course at last SIGGRAPH:

http://cgg.mff.cuni.cz/~jaroslav/papers/2013-ltscourse/
By feynman
#372565
juan wrote:Some colleagues and I shared our thoughts about light transport in a course at last SIGGRAPH:
"Juna Canada" <- they turned you into a female under "Publication & Presentation" ; )
By Polyxo
#372567
juan wrote: I do not understand this hype with multiple importance sampling, all of a sudden. MIS is a technique developed almost 20 years ago...
Let's not get stuck on that word - you of course know all the relevant papers and understand them. We, on the other hand - and maybe also the author of the quoted fxguide article - just needed to find a word :)
I think what's broadly meant are recently implemented features in other unbiased products which either allow using a brush to manually mark regions of interest, or which do some sort of automatic cleanup in areas that are still very noisy, whatever the math behind it might be :)
By juan
#372570
Polyxo wrote: I think what's broadly meant are recently implemented features in other unbiased products which either allow using a brush to manually mark regions of interest, or which do some sort of automatic cleanup in areas that are still very noisy, whatever the math behind it might be :)
Ah, well, that is a very different thing, which again has nothing to do with MIS. While the brush thing can be useful sometimes, I guess it could create flickering issues in animations, and often the source of noise in a region of the image comes from a different place, so sampling that area more will not really speed things up much. Anyway, we look deeply into any kind of speed optimization, be sure of that ;)

Juan
By Polyxo
#372572
Thank you, Juan! Is there time for another example? Ground plane material.

Isolated, perfectly lit objects on a white background (aka high-key images) are bread and butter in industrial design, not only for final shots but also for all the images one creates during the design process. A clean look, a simple composition, concentration on the actual item - this is what one needs when one has a stack of mostly identical images and wants to decide about subtle variations in the model, material and surface structure.
Also - which is huge - such images are perfectly suited to compositing: they blend with the paper colour, resulting in a great simplification (= time saving) in page layout. Presentations or brochures containing such images can hardly look completely terrible.

Yes, sure, one can of course already render on a white background with Maxwell today, using one's own templates, some custom materials and render passes. The problem here: there's still some setup required and - more importantly - the material chosen for ground plane and backdrop is always a fully featured scene member. It is therefore inevitably subject to highly complex and time-intensive calculations, when the sole effect one is actually after is a single subtle object shadow, just enough to make the item look plausible and not appear to float in mid-air. That is also exactly the effect one tries to achieve when actually photographing this sort of image - all one does is try to blow out shadows and undesired reflections and caustics with a lot of light, and if that didn't work sufficiently well at the shoot, one brightens the background in post. A lot of effort is spent in the real world to achieve these artificial lighting conditions.

I really don't understand why there's no ground plane material that simplifies reaching that goal in Maxwell. Seemingly a lot of your customers sit in front of CAD programs, and I believe it is not very surprising that KeyShot is quite strong in ID - software that defaults to creating exactly these simple high-key studio shots. I imagine such a ground plane material in Maxwell being defined as infinite: once applied to a planar surface (or to the planar parts of pipe/backdrop geometry) it would simply extend to the horizon. This material would give you that shadow and some colour bleed but nothing else fancy - its job would really be to be dead boring, even if that means bending physics a bit. Wouldn't that also help cut render times?
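Just to make "nothing else fancy" concrete: the kind of matte-shadow composite I mean can be sketched in a few lines from two renders - one with the subject on the plain ground plane, one of the ground plane alone. The array names and the whole workflow below are only an illustration, not an existing Maxwell feature.

import numpy as np

def matte_shadow_composite(with_subject, ground_only, subject_alpha, background):
    # with_subject  : render of the subject on the plain ground plane   (H, W, 3)
    # ground_only   : the same render without the subject               (H, W, 3)
    # subject_alpha : the subject's alpha / object mask                 (H, W, 1)
    # background    : the backdrop to composite onto, e.g. paper white  (H, W, 3)
    # how much the subject darkened the ground at each pixel (1 = no shadow)
    shadow = np.clip(with_subject / np.maximum(ground_only, 1e-6), 0.0, 1.0)
    shadowed_bg = background * shadow   # white paper carrying only the shadow
    return subject_alpha * with_subject + (1.0 - subject_alpha) * shadowed_bg

# tiny fake example: flat grey ground with a darkened patch under the "subject"
ground = np.full((4, 4, 3), 0.8)
lit = ground.copy()
lit[2, 1:3] *= 0.4                      # pretend shadow cast by the subject
alpha = np.zeros((4, 4, 1))
alpha[1, 1:3] = 1.0                     # pretend subject mask
white = np.ones((4, 4, 3))
print(matte_shadow_composite(lit, ground, alpha, white)[:, :, 0])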
By kami
#372584
What I was talking about is that some scenes might look perfectly fine at SL 16, but there is this one SSS object which needs SL 22 to look good. If I had to render the whole scene to SL 22 it would take at least about 10x longer, so I usually re-render the region with a higher SL, which is much faster. For still images this works fine, but for animations there is no easy way to render out a region when you have a moving camera.
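As a rough sketch of that still-image workflow (purely illustrative NumPy; the sizes and names are invented): render the full frame at the lower SL, re-render only the noisy window at the higher SL, and paste it back over the base frame.

import numpy as np

def paste_region(base, region, x, y):
    # overwrite the matching window of the base frame with the cleaner region
    out = base.copy()
    h, w = region.shape[:2]
    out[y:y + h, x:x + w] = region
    return out

base_sl16 = np.random.rand(480, 640, 3)     # stand-in for the full frame at SL 16
corner_sl22 = np.random.rand(120, 160, 3)   # stand-in for the corner redone at SL 22
final = paste_region(base_sl16, corner_sl22, x=40, y=320)
print(final.shape)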

I am not a big fan of that "brush" thing, as it is something you would have to do in the render window during the process (which would not work for network renders, for example).
By Mihai
#372586
Can you give an example of that, and show that it really is 10x faster? What Juan was saying is that you can't really limit the sampling to the little area you have marked, because the lighting falling on that area may be influenced by what's going on in another part of the render. Thinking about this logically, it would mean that if you split your render into 10 render regions... you would get a 10x faster render than if you had rendered the entire image in one go. I really doubt that's true...
By kami
#372588
I mean that the speed-up would only come from the fact that I don't need the whole image at SL 22 (which takes very long) but only a small part of it ...
Does it never happen to you that some parts of the image (in direct sunlight, for example) clean up very fast while a spot in a corner behind the kitchen counter is still noisy? There would be no need to render the whole image up to SL 22 when 3/4 of the image was already perfect at SL 16. But from SL 16 to SL 22 the engine still works on the whole image, which results in even less noise in the sunlit area when all you want is less noise in that corner behind the counter.
The downside is that you lose a lot of time, because that sunlit part of the image was already fine at SL 16, AND you will never have an image with a continuous noise spread across it, because the sunlit part will always be cleaner.
Of course it would need some kind of automatic detection of where the most noise in the image is, so the engine can focus more on those areas.
By choo-chee
#372590
kami wrote:I mean that the speed-up would only come from the fact that I don't need the whole image at SL 22 (which takes very long) but only a small part of it ...
Does it never happen to you that some parts of the image (in direct sunlight, for example) clean up very fast while a spot in a corner behind the kitchen counter is still noisy? There would be no need to render the whole image up to SL 22 when 3/4 of the image was already perfect at SL 16. But from SL 16 to SL 22 the engine still works on the whole image, which results in even less noise in the sunlit area when all you want is less noise in that corner behind the counter.
The downside is that you lose a lot of time, because that sunlit part of the image was already fine at SL 16, AND you will never have an image with a continuous noise spread across it, because the sunlit part will always be cleaner.
Of course it would need some kind of automatic detection of where the most noise in the image is, so the engine can focus more on those areas.
Happens to me a lot. The problem is that when I try to region-render the noisier parts, some of the reflections don't match well.