Network render not using all CPUs
Posted: Fri Feb 24, 2012 11:49 am
by Gary Bidwell
When rendering a Maxwell still image through PipelineFX's Qube render manager on 5 slave machines, only 3 max out; the other two use only about 5 percent of their CPU power. I am rendering a single still image with some separate passes (material, Fresnel, alpha, etc.) as 8-bit .png files. Why isn't Maxwell maxing out all machines? Is it because of the passes?
Thanks.
Mac OSX 10.6.8
Re: Network render not using all CPUs
Posted: Fri Feb 24, 2012 12:34 pm
by Mihai
Are you sure those are rendering at all? Related to your other post, maybe those nodes are running out of RAM? Are their temp folders local, or have you changed the temp folder location to point to a networked folder? It could be that they take a really long time to write their temp MXI file for each incremental SL update.
Re: Network render not using all CPUs
Posted: Wed Mar 14, 2012 4:06 pm
by Gary Bidwell
The single .mxi file is being written to a networked volume. I assume this is the merged result from the individual slave machines? In my case, does each slave write its own temp .mxi file before merging to the final .mxi on the network?
Thanks for your help.
Re: Network render not using all CPUs
Posted: Wed Mar 14, 2012 4:44 pm
by dmeyer
What is the Qube log saying? If you submit the job with high verbosity, Maxwell will send its stdout back to Qube.
Re: Network render not using all CPUs
Posted: Mon Apr 30, 2012 12:25 pm
by Gary Bidwell
None of the slave machines are writing local temp .mxi files. All appear to be writing back to the final .mxi location on the network drive, and as such are overwriting each other! How do I fix this?
Re: Network render not using all CPUs
Posted: Mon Apr 30, 2012 1:15 pm
by Mihai
You should have local temp folders for each of the nodes, NOT save each node's MXI file to a common networked folder. Besides the files overwriting each other, you will also have very long write times, since every node is trying to write to the same location.
To get a better idea of what happens when a network render is merged, see this page:
http://support.nextlimit.com/display/ma ... ork+render
(towards the bottom of the page - "Render process")
You can specify the location of each node's local temp folder from File > Preferences in the Node GUI, and also in the Manager GUI.
Re: Network render not using all CPUs
Posted: Tue May 01, 2012 1:54 pm
by Gary Bidwell
Thanks. The local temp folder issue is fixed now. We are running on a Mac network and I had issues with my (UNC) paths.
But I think my main problem arises from trying to submit cooperative single-frame still image jobs via the Qube render manager. If my desired SL for the scene is 10, Qube will render my still image to SL 10 on every slave machine, and it won't merge the results at the end. So my only option is to manually merge 'n' individual still images, all with the same name (via script or command line?), and guess the resulting combined SL.
Does Qube have provisions for single-frame co-op renders, or does this have to be done via Maxwell Render Manager/Monitor? We don't really want two render managers (Maxwell and Qube) running on the farm, as that complicates managing the resources.
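Rather than guessing the combined SL, it can be estimated. A commonly cited rule of thumb for Maxwell is that each +1 SL roughly doubles the accumulated sampling work, so merging N equal-SL MXIs adds about log2(N) to the per-node SL. A minimal sketch under that assumption:

```python
import math

def estimated_merged_sl(per_node_sl: float, num_nodes: int) -> float:
    """Estimate the SL of a merge of num_nodes MXIs, each rendered to
    per_node_sl. Assumes sampling work roughly doubles per +1 SL, so
    N nodes contribute about log2(N) extra SL. A rule of thumb only;
    the real merged SL is reported by Maxwell after the merge."""
    return per_node_sl + math.log2(num_nodes)

# e.g. 5 slaves each rendered to SL 10 merge to roughly SL 12.3
print(round(estimated_merged_sl(10, 5), 1))
```

This is only an estimate for planning; the exact value depends on how Maxwell accumulates samples across seeds.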
Thanks Mihai.
Re: Network render not using all CPUs
Posted: Tue May 01, 2012 3:42 pm
by dmeyer
Gary Bidwell wrote:Thanks. The local temp folder issue is fixed now. We are running on a Mac network and I had issues with my (UNC) paths.
But I think my main problem arises from trying to submit cooperative single-frame still image jobs via the Qube render manager. If my desired SL for the scene is 10, Qube will render my still image to SL 10 on every slave machine, and it won't merge the results at the end. So my only option is to manually merge 'n' individual still images, all with the same name (via script or command line?), and guess the resulting combined SL.
Does Qube have provisions for single-frame co-op renders, or does this have to be done via Maxwell Render Manager/Monitor? We don't really want two render managers (Maxwell and Qube) running on the farm, as that complicates managing the resources.
Thanks Mihai.
Qube does not have an out-of-the-box way to do co-op renders, unless you are able to specify the correct parameters manually via the command line, with the seed and SL level required to combine to SL 10. I presume this is possible, but I don't know.
I worked with PipelineFX to get the original SimpleCmd jobtype built but we never included co-op functionality since we never use it.
Re: Network render not using all CPUs
Posted: Fri May 04, 2012 12:29 pm
by Gary Bidwell
Thanks. I can do a co-op render (single frame) via Qube; I just need to manually merge via the mximerge command line and guess the SL.
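The manual merge step can be scripted. The sketch below builds an mximerge invocation; note that the colon-style flags (`-folder:`, `-target:`) are an assumption based on Maxwell's CLI conventions, and the paths are illustrative, so check `mximerge` locally before relying on them.

```python
import subprocess  # used by the commented-out run() call below

def build_mximerge_cmd(mxi_dir: str, target: str) -> list[str]:
    """Build an mximerge command line that merges every per-node MXI
    found in mxi_dir into a single target MXI. Flag syntax is an
    assumption; verify against your Maxwell install."""
    return [
        "mximerge",
        f"-folder:{mxi_dir}",   # directory holding the per-node MXIs
        f"-target:{target}",    # merged output MXI
    ]

cmd = build_mximerge_cmd("/renders/job42/mxi", "/renders/job42/final.mxi")
# subprocess.run(cmd, check=True)  # uncomment on a machine with Maxwell installed
```

Keeping the merge as a separate scripted step also makes it easy to hang off a render job later, instead of running it by hand.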
Re: Network render not using all CPUs
Posted: Fri May 04, 2012 2:45 pm
by dmeyer
Gary Bidwell wrote:Thanks. I can do a co-op render (single frame) via Qube; I just need to manually merge via the mximerge command line and guess the SL.
One clue on the merging: Qube does allow for dependent jobs.
So, for instance, if there is a command for Maxwell that lets you specify several MXIs to merge into one, you can daisy-chain that job off of your render job.
You could create a SimpleCmd job with the merging command and make it dependent on your still-frame jobs; the merge job will then kick off automatically once the render jobs are completed.
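The dependent merge job described above might look like this as a Qube job dict. Qube ships a Python API (the `qb` module with `qb.submit`), but the exact field names, the dependency string format, and the job id below are assumptions for illustration; verify against your Qube version's API documentation.

```python
# Sketch of a dependent merge job for Qube's Python API. The "dependency"
# string format and the job id 1234 are assumptions -- check your Qube docs.
merge_job = {
    "name": "mxi_merge",
    "prototype": "cmdline",
    "package": {
        # merge the per-node MXIs once the renders are done (paths illustrative)
        "cmdline": "mximerge -folder:/renders/job42/mxi -target:/renders/job42/final.mxi",
    },
    # only start after render job 1234 has completed
    "dependency": "link-done-job-1234",
}

# import qb
# qb.submit([merge_job])  # uncomment on a machine with the Qube API installed
```

The same pattern extends to further chained jobs, such as a copy step after the merge.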
Re: Network render not using all CPUs
Posted: Sun May 06, 2012 2:50 pm
by Gary Bidwell
Thank you. I will give that a go.
Re: Network render not using all CPUs
Posted: Mon May 07, 2012 4:08 pm
by dmeyer
Gary Bidwell wrote:Thank you. I will give that a go.
If you really want to get fancy, you can then chain another job off of that one to copy the final .mxi back to your workstation and open it in Photoshop for you.
