Idea:
Remember how the benchwell scene originally had a higher SL (17, I think) and took 2 to 3 times as long to render (which everyone hated)?
Once Nehalem Skulltrail is out, and even more so once 4-way Nehalem MP Xeons are, we're going to get increasingly meaningless results, with things like hard drive access making up a quarter of the render time.
It just occurred to me that if we could accurately determine how much longer an increased SL takes to render, we could convert the old results to the new scale while keeping render times in a reasonable range.
Let's say we found out through a round of thorough testing that SL 19 takes ~4.25 times as long as the current SL 15. The results would then be adjusted as follows (new entries rendered at the new SL, old entries converted with the multiplier):
Before:
- 4x Xeon i7 - 2:50
- 2x Intel Core i7 - 5:00
- Intel Core i7 - 9:30
- Intel Core 2Q6600 - 23:30
- Intel Core 2D6600 - 45:30
After:
- 4x Xeon i7 - 12:02
- 2x Intel Core i7 - 21:15
- Intel Core i7 - 40:22
- Intel Core 2Q6600 - 1:39:52 (old * 4.25)
- Intel Core 2D6600 - 3:13:22 (old * 4.25)
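Just to make the conversion arithmetic concrete, here is a minimal sketch in Python; the 4.25 ratio and the helper names are only the hypothetical values from the example above, nothing measured or implemented yet:

```python
# Rough sketch: convert an old SL 15 result to its SL 19 equivalent.
# The 4.25 ratio is the hypothetical value from the example above, not a measured one.

SL_RATIO = 4.25  # assumed SL19/SL15 render-time ratio

def parse_time(t: str) -> int:
    """Parse 'h:mm:ss' or 'm:ss' into total seconds."""
    seconds = 0
    for part in t.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

def format_time(seconds: int) -> str:
    """Format seconds as 'h:mm:ss' or 'm:ss'."""
    m, s = divmod(seconds, 60)
    h, m = divmod(m, 60)
    return f"{h}:{m:02d}:{s:02d}" if h else f"{m}:{s:02d}"

def scale_old_result(old_time: str, ratio: float = SL_RATIO) -> str:
    """Multiply an old SL 15 result by the SL ratio (rounded down to whole seconds)."""
    return format_time(int(parse_time(old_time) * ratio))

# e.g. scale_old_result("23:30") -> "1:39:52", scale_old_result("45:30") -> "3:13:22"
```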
So the new results are calculated the normal way (and thus retain their usual accuracy), while the old ones are multiplied by the determined SL ratio.
Since users of "old" systems will want to keep adding results without rendering for hours, both scenes (old and new SL) would remain available during the transition. Users would select/define which scene was used in the entry form, and the multiplier would only be applied during queries (readouts), so the original stored results remain untouched.
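Here's a minimal sketch of how that query-time conversion could look, assuming each stored entry simply records which scene it was run with (the field names and scene labels are made up for illustration):

```python
# Minimal sketch of query-time normalization: stored results are never modified,
# the multiplier is only applied when results are read out for display.
# Field names and scene labels are made up for illustration.

SL_RATIO = 4.25  # assumed SL19/SL15 render-time ratio

def normalized_seconds(entry: dict) -> float:
    """Return the entry's render time on the new-SL scale, without touching the stored value."""
    if entry["scene"] == "old_sl15":
        return entry["seconds"] * SL_RATIO   # old scene: scale up at readout
    return entry["seconds"]                  # new scene: already on the right scale

entries = [
    {"system": "Intel Core i7",     "scene": "new_sl19", "seconds": 2422},
    {"system": "Intel Core 2Q6600", "scene": "old_sl15", "seconds": 1410},
]

# Sort a readout by normalized time; the stored 'seconds' values stay as entered.
for e in sorted(entries, key=normalized_seconds):
    print(e["system"], normalized_seconds(e))
```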
This technique could be carried forward to when we are all running Larrabee and Nvidia Tesla setups, still waiting for flying cars. If someone dared to let the scene run to SL 25 to verify accuracy, we could even compare the large render farms to our systems.
Let me know what you think: is there a logical flaw in this (prior to having run any tests)?