problems with block transfer

Posted: Thu Mar 11, 2010 6:44 pm
by Josephus Holt
I ran a couple of tests.
1. I created a new file, inserted one block (embedded) composed of a single mesh into a new file, all well there, log viewer shows 1 block was transferred, it rendered out fine.
2. Created another new file, inserted one block composed of 4 meshes and 1 surface, the log viewer showed 5 blocks transferred (which is interesting that it makes a block out of each mesh/surface, is that right???), rendered out fine.
3. I opened a new file, inserted a couple of objects plus the two blocks from above. Problems here: the log viewer shows 5 blocks (#2 above) and fails to render block #1, which apparently was not transferred.
The time dealing with this stuff is just killing me. Hope you can help me. Joe

Just did another test where I exploded block #1 (the one that did not render), then made a block of it inside Rhino. Now the log viewer shows 6 blocks and everything renders fine. :shock: :shock: I also tried inserting from file using the other two insertion options, but got the same result (block not transferred).

Re: problems with block transfer

Posted: Thu Mar 11, 2010 9:04 pm
by JDHill
1) checks out here. No instance is actually exported, since there is only one block; of course, to create an instance, you first need a mesh to instance.
2) similar to above, no instances are actually exported; they are reported that way because, being blocks, they are exported in the block-export code, but what is exported here are the five source meshes that would be instanced if there were more copies of the block in the document.
3) I'm not able to duplicate the problem here; the blocks and objects are rendering as expected. Your description is nice and detailed, but unfortunately it's not failing here yet.

Re: problems with block transfer

Posted: Fri Mar 12, 2010 1:16 am
by Josephus Holt
Can I assume that your answers 1), 2), and 3) correspond to my numbers?
Let me see if I understand how blocks/instances work. A Rhino block is geometry that is described externally, which helps keep the Rhino file size down; but when it is exported by the plugin, its data is counted along with all the surfaces and meshes, so a single Rhino block does not save any RAM as far as Maxwell is concerned. The advantage, if RAM becomes limiting, is to make copies of the block whenever possible, since Maxwell will read each copy of the original as an instance (much less data required). The blocks that have given me trouble were all inserted into my Rhino scene as instances, rather than imported as objects and then converted to blocks. I don't know if at the end of the day it makes any difference in how Rhino identifies/describes the block, but once I've finished up these three renders I'll run some tests with some of the blocks I was using when Maxwell crashed. Do I understand this part OK?

Re: problems with block transfer

Posted: Fri Mar 12, 2010 1:51 am
by JDHill
Let's just start from the beginning...

1. in Maxwell we can render meshes and we can render instances of meshes
2. in Rhino, we can have objects, and we can have instances of blocks, which are in turn made up of objects

In Rhino, the concept is more abstract than it is in Maxwell; it involves:

1. a block 'definition' - geometry and/or block references
2. a block 'reference' - a reference to a block definition

When you block an object in Rhino, it gets put into a table of block definitions; the objects in the table are invisible, and they exist at 0,0,0. When you 'insert' a block, Rhino takes the specified definition and places a reference to it in the document, using the specified transformation from 0,0,0. Ditto for any other references to this particular block definition.
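The definition-table bookkeeping described above can be sketched as a toy Python model. To be clear, the class and attribute names here are mine for illustration only; this is not the Rhino SDK, just the shape of the concept:

```python
# Toy model of Rhino's block bookkeeping (illustrative only, not the Rhino SDK).

class BlockDefinition:
    """Invisible source geometry, stored once, anchored at 0,0,0."""
    def __init__(self, name, objects):
        self.name = name
        self.objects = objects  # the meshes/surfaces that make up the block

class BlockReference:
    """A placed copy: just a pointer to a definition plus a transformation."""
    def __init__(self, definition, transform):
        self.definition = definition
        self.transform = transform  # placement relative to 0,0,0

# One definition in the table...
chair = BlockDefinition("chair", ["seat_mesh", "leg_mesh"])

# ...and any number of lightweight references placed in the document.
refs = [BlockReference(chair, (x, 0.0, 0.0)) for x in range(3)]

# Every reference points at the same stored geometry.
assert all(r.definition is chair for r in refs)
```

However many references you insert, the geometry itself lives once in the definition table; each reference only carries a transformation.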

When we export an object out to an MXS file, we can write (a) its mesh, and optionally (b) any number of instances of that mesh. If the object we are exporting is a block reference, the plugin will query Rhino for its source render mesh (from the object held in Rhino's block definition table) and write that into the MXS. If there is more than one reference to this particular block definition, then the plugin will just ask Rhino for the transformations used by the remaining references, exporting each one as an instance in the MXS.
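Roughly, that mesh-once, instances-for-the-rest rule might look like the following sketch. This is illustrative pseudologic in Python, not the plugin's actual export code:

```python
# Sketch of the export rule described above: for each block definition,
# write the real render mesh once, then write cheap instances for every
# additional reference. Not the actual Maxwell plugin source.

from collections import defaultdict, namedtuple

Ref = namedtuple("Ref", "definition transform")

def export_mxs(refs):
    """Group references by definition; emit one MESH, then INSTANCE records."""
    by_def = defaultdict(list)
    for r in refs:
        by_def[r.definition].append(r.transform)

    records = []
    for name, transforms in by_def.items():
        # First reference: query the definition table and write the mesh.
        records.append(("MESH", name, transforms[0]))
        # Remaining references: write instances, carrying only a transform.
        records.extend(("INSTANCE", name, t) for t in transforms[1:])
    return records

scene = [Ref("chair", (0, 0, 0)), Ref("chair", (1, 0, 0)),
         Ref("chair", (2, 0, 0)), Ref("table", (5, 0, 0))]
records = export_mxs(scene)

# One mesh per definition; instances only for the extra chair copies.
assert sum(1 for r in records if r[0] == "MESH") == 2
assert sum(1 for r in records if r[0] == "INSTANCE") == 2
```

This also explains the log-viewer counts from the first post: a block with five source objects reports five transfers even when no instances end up in the MXS.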

So, with regards to:
the advantage, if RAM becomes limiting, is to make copies of the block whenever possible, and Maxwell will read each copy of the original as an instance (much less data required).
This is correct. If you had a hundred cubes and you individually 'Block'-ed them in Rhino, you would gain nothing; you would end up with a hundred block references, each of which points back to a different cube in the document's block definition table. Conversely, if you had one cube, blocked it, and then 'Insert'-ed a hundred copies of that block, you would use, roughly speaking, 1/100th the amount of memory at render time. Of course, a cube is a very simple object, having only 12 triangles; since an instance in Maxwell must have a unique name, a transformation, a material assignment, etc., the actual ratio depends on the size of the mesh being instanced. As I've said elsewhere, instancing something like a single blade of grass a million times is a very poor strategy, since the data size of an instance is likely larger than the blade's mesh data.
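The trade-off in the paragraph above can be shown with back-of-the-envelope arithmetic. The byte counts below are invented purely for illustration (the real per-triangle and per-instance costs depend on Maxwell's internal formats); only the qualitative comparison matters:

```python
# Back-of-the-envelope instancing cost model. The constants are ASSUMED
# for illustration; real sizes depend on Maxwell's internal data formats.

TRIANGLE_BYTES = 100   # assumed cost per mesh triangle
INSTANCE_BYTES = 300   # assumed fixed cost per instance (name, transform, material)

def render_cost(triangles, copies, instanced):
    """Estimated memory for `copies` copies of a `triangles`-triangle mesh."""
    if instanced:
        # One real mesh plus (copies - 1) lightweight instances.
        return triangles * TRIANGLE_BYTES + (copies - 1) * INSTANCE_BYTES
    # Every copy carries its own full mesh.
    return copies * triangles * TRIANGLE_BYTES

# A hundred instanced cubes (12 triangles each) beat a hundred separate cubes.
assert render_cost(12, 100, True) < render_cost(12, 100, False)

# But instancing a tiny blade of grass (say 2 triangles) a million times
# costs MORE than duplicating the mesh, because the per-instance overhead
# exceeds the blade's own mesh data.
assert render_cost(2, 1_000_000, True) > render_cost(2, 1_000_000, False)
```

The crossover point is simply where the fixed per-instance overhead exceeds the mesh data it replaces, which is why instancing pays off for heavy meshes and not for trivial ones.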

Finally:
The blocks that have given me trouble were all inserted into my Rhino scene as instances, rather than importing as objects and then converting to blocks.
This seems consistent with the odd block-related problems we've been seeing with V5. It would appear that some of the bookkeeping gets out of order internally in the Rhino document, such that various copies of blocks which should be there are simply not present (or at least not provided) at export time. That the behavior would differ depending on whether the blocks were defined in the current document does not seem far-fetched, given the nature of the issue. However, since the problem is so transient (anything cured by a simple close/re-open is terrible to debug), it's still difficult to guess what the root cause might be.

Also, MXS files crashing maxwell.exe during the render is unacceptable and should be completely unrelated to any of the above discussion; we would want to get one of these files to test (the smaller the better), in which the crash is repeatable, and is therefore not likely to be related to memory problems, etc.

Re: problems with block transfer

Posted: Fri Mar 12, 2010 3:50 am
by Josephus Holt
Thank you much for the thorough reply...hope that others can read this as well so you don't have to repeat it, which I'm sure you do anyway.
I've been running some tests, and the only issue I've found so far is this: after inserting some block instances and copying them, I inserted another block instance and made 400 copies. Those 400 copies did not transfer. I saved, closed Rhino, re-opened, and then they showed up fine.
It seems that issues started showing up only when big files were involved, either directly or as attached files. The interesting thing is that when I exploded the blocks, more ram was required but no crashing.
I am at this point more likely to move over to c4d for the exterior renders which will have to have a lot of instances.
Thank you seems kind of lame considering your gracious help all the time. Joe