- Sat Feb 24, 2007 9:12 pm
#211835
hello !
* * *
a few questions about scientific computing and potential maxwell supercomputing
* * *
first, some background info and «colorful pictures»:
http://www.dalco.ch/press
http://www.fluent.com/
* * *
about using «supercomputers»
1[
software like fluent or any other scientific application: does it run under ONE large OS, with software that distributes all calculations over all processors, OR is there a «small» OS running on each rack/blade that is fed tasks for just that unit?
my question points to the shared-memory technologies (spanning the whole supercomputer) in use today, which seem to imply just ONE OS.
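for context, here is a minimal sketch of how a message-passing job typically looks on a cluster where every node runs its own OS - it uses python with mpi4py, and the process counts are just illustrative:

    # minimal mpi4py sketch (assumes an MPI runtime and mpi4py are installed);
    # every rank is a separate OS process, usually one or more per node -
    # there is no single shared-memory OS spanning the whole machine.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id within the job
    size = comm.Get_size()   # total number of processes in the job

    # each rank would work on its own slice of the calculation here
    print("rank %d of %d reporting in" % (rank, size))

launched with e.g. «mpirun -np 64 python hello.py», one process lands on each allocated core; a true shared-memory (single-OS) machine would instead just run one multithreaded process.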
2[
are these applications specially written to run on supercomputers, so that there is no way to implement a workflow in which a linux/redhat version of maxwell would «see» the machine as ONE unit with n cpus instead of just one/two/four?
i want to underline the question of whether a supercomputer in such a workflow would render an image as ONE .mxi, the way a standard computer does, OR whether such a workflow would spread .mxis with seeds (1) to (n+1) across all available racks. what would be the optimum?
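to illustrate why spreading seeds can work at all: independent renders of the same scene with different seeds can be merged into one image by a weighted average, which is statistically equivalent to one long render. a hedged sketch - the .mxi format is proprietary, so plain numpy arrays stand in for the per-seed sample buffers:

    # sketch of the merge step for per-seed renders; buffers and sample
    # counts are stand-ins, since real .mxi handling is maxwell-internal.
    import numpy as np

    def merge_seeds(buffers, sample_counts):
        """weighted average of independent per-seed radiance buffers -
        equivalent to a single render with the combined sample count."""
        total = float(sum(sample_counts))
        out = np.zeros_like(buffers[0], dtype=np.float64)
        for buf, n in zip(buffers, sample_counts):
            out += buf * (n / total)
        return out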
3[
is it possible to run «standard» software like maxwell for linux on such a machine? are there special OSes/software implementations around that would spread a maxwell task over the whole cluster?
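even without a special OS, a plain remote-launch loop could spread such a job over a cluster. purely hypothetical sketch - the node names, paths and maxwell command-line flags below are invented placeholders for illustration, not the real CLI:

    # hypothetical spread of one render over 16 nodes via ssh;
    # «maxwell», «-seed:» and the paths are placeholders, not real flags.
    import subprocess

    nodes = ["node%02d" % i for i in range(1, 17)]

    procs = []
    for seed, node in enumerate(nodes, start=1):
        cmd = ["ssh", node, "maxwell",
               "-seed:%d" % seed,
               "-o:/shared/out_seed%02d.mxi" % seed,
               "/shared/scene.mxs"]
        procs.append(subprocess.Popen(cmd))  # fire off all nodes in parallel

    for p in procs:
        p.wait()  # wait for every node to finish
    # the per-seed .mxi files would then be merged as in the sketch above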
4[
what is the max number of cores maxwell can allocate under one OS?
* * *
please don't just tell me that everything i'm asking is bullshit - i'm simply interested in the technical facts, not in economics or whatever...
i haven't really been in touch with supercomputing before. at the moment i'm in contact with one of the leaders of renderrocket.com, which has made me interested in the possible use of a supercomputer in conjunction with maxwell rendering.
thanks !

- By Mark Bell