- Mon Sep 17, 2012 7:55 pm
#360729
Hello All,
Bit of an odd question but I wondered if anyone had put a lot of thought into the resolution vs SL/time curve in Maxwell?
I am sure everyone knows that when a noisy image is downscaled by a factor of 2, for example, each block of 4 pixels blends into one and produces an average colour, thereby reducing noise.
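A rough sketch of that averaging effect (a synthetic flat grey image with Gaussian noise standing in for render grain, not actual Maxwell output): averaging 4 independent noisy samples cuts the standard deviation by roughly a factor of 2 (√4).

```python
import numpy as np

rng = np.random.default_rng(0)

# Flat 0.5-grey image with per-pixel noise of std 0.1, standing in for render grain.
full = 0.5 + rng.normal(0.0, 0.1, size=(512, 512))

# Downscale by 2: average each 2x2 block into a single pixel.
half = full.reshape(256, 2, 256, 2).mean(axis=(1, 3))

print(round(full.std(), 3))  # ~0.1
print(round(half.std(), 3))  # ~0.05 -- noise roughly halved
```

So a 2x downscale buys about one "stop" of noise reduction for free, which is exactly the trade the D800-vs-5D comparison is exploiting.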
I have just bought a Nikon D800 and was happy to find (after running some tests) that its 36 MP resolution, when downscaled to that of a 5D Mk III, produces on-par or superior high-ISO performance.
Basically it's obviously the same with Maxwell: when I render at very high resolution the details come out very quickly, and an hour of 2048x1536 at low SL almost seems to look better than an hour of 1024x768 at high SL. More grainy, yes, but with more detail, and when downscaled it's barely below par.
Has anyone played around with this concept? I had another idea that it might be better to render high res at low SL, run Neat Image, and then downscale...
I would think a render's time would increase quadratically with the linear resolution (i.e., in proportion to the total pixel count), but is the increase in time more than that?
Is a pixel's colour determined on a subdivision basis? If that is the case, it almost seems one might as well render high and keep the option of resampling in Photoshop...
my head is starting to hurt...