#393821
https://drive.google.com/file/d/0B-YR1O ... sp=sharing

Working on a background to render boats in front of. Small sample attached. Nobody will ever see it up close like this.

However, I'm wondering if there's anything relatively easy I can do to improve the appearance/realism of the intersection of the sea and the beach. At the moment it's a really hard edge (not particularly apparent in this shot, actually, because of the reflections).

I'd think this should be more of a feathered edge with the bottom surface gradually appearing.

I thought about a transparency map to make the water transparent as it neared the intersection, but I was unable to add a mapping channel to the Maxwell sea tiles.

Any other ideas?
#393822
To custom-map the sea surface, you'd need to change it to display type = mesh, explode the block, and use the generated mesh instead. Besides that, one idea might be to paint a custom texture for the beach, to simulate how the sand is always damp within a few feet of the water's edge.
#393823
Good idea on the sand wetness. The damp band could also have different specularity, etc.

Is there anything I can do with the water material itself to make it look more realistic? The depth variation doesn't render the way it looks in reality, i.e. with the water visibly getting shallower as it nears the beach, and with colour variation as the bottom nears the surface. In the rendering it looks sort of equal depth all the way to the edge.
#393833
Apologies for the delay in responding; just after my last post I tried to update the email address in my forum profile, which screwed up my account and prevented me from logging in afterward.

On the topic, there are a few things I'd keep in mind. First, if you are using a real dielectric water material (as opposed to some form of non-refractive AGS), be aware both that this is computationally expensive and that the scene is not likely to be modeled in a strictly realistic way (which is fine).

Regarding the former, wherever we look in the world, what we see comes down largely to probability -- the probability that a given photon has reflected from the object we're viewing and reached our eye. For a perfectly smooth surface, the reflected direction is entirely predictable, since every photon leaves in the mirror direction (angle of reflection equal to angle of incidence); a perfectly diffuse surface, on the other hand, is one in which any given photon has an equal probability of reflecting in any direction. Most of the world (effectively all of it) resides somewhere between the two.
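To make that concrete, here's a tiny illustrative sketch (plain Python, nothing to do with Maxwell's internals) contrasting the single predictable direction of a mirror-like surface with the spread of a diffuse one:

```python
import math, random

def reflect(d, n):
    """Mirror reflection of incoming direction d about unit surface normal n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def diffuse_sample(n):
    """Random direction in the hemisphere around unit normal n
    (uniform here for simplicity; real renderers usually cosine-weight this)."""
    while True:
        v = tuple(random.uniform(-1.0, 1.0) for _ in range(3))
        if 1e-6 < sum(c * c for c in v) <= 1.0:
            break
    if sum(vi * ni for vi, ni in zip(v, n)) < 0.0:
        v = tuple(-c for c in v)
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

incoming = (0.0, -0.7071, -0.7071)  # a 45-degree ray hitting a flat, horizontal surface
normal = (0.0, 0.0, 1.0)

print("specular:", reflect(incoming, normal))   # always the same, predictable direction
print("diffuse :", diffuse_sample(normal))      # a different direction on every call
```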

Now let's take the scenario in question -- what do we know, and how can it help us minimize the cost of the computation? First, the sun's rays are effectively parallel, so given a very reflective water surface, there is little likelihood of any given photon taking a path much different from that of any other photon. And in order for us to see the underwater surface, we know that a photon must first pass through the water's surface, refract to encounter the surface below it, and then reflect in such a way that, upon being refracted again on exit, it enters the camera. Any photon whose calculated path ends up anywhere but the camera -- which is true for the vast majority -- represents wasted work; and while it would be nice to avoid doing that work in the first place, you can reason that it is unavoidable, since it is only the calculation itself that determines whether the work turns out to be pointless.
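For the refraction step itself, the bending at the surface is just Snell's law; a quick sketch with generic values (the 1.33 index is my assumption for plain water, not a plugin setting):

```python
import math

def refracted_angle(theta_in_deg, n1=1.0, n2=1.33):
    """Snell's law, n1*sin(theta1) = n2*sin(theta2), in degrees.
    n2 = 1.33 is a typical refractive index for water (an assumption here)."""
    s = (n1 / n2) * math.sin(math.radians(theta_in_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection; no transmitted ray
    return math.degrees(math.asin(s))

# A ray from a fairly low sun, striking the water 70 degrees from the normal,
# continues underwater at roughly 45 degrees -- bent toward the vertical.
print(refracted_angle(70.0))
```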

The upshot is that photons following this path are far less likely to be found than those which either reflect from the environment (the dome of the sky) to the camera off the water's surface, or hit the beach directly and reflect to the camera. To help balance things out, one thing we can do is make the radius of the sun disc unrealistically large; by forcing the sun's rays to be less parallel, we increase the likelihood of photons reaching the camera through the improbable sun > refract > sand > refract > camera path. In some cases this is unacceptable, but it may be a valid hack here.
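If it helps to picture why that works, you can think of sun sampling as picking directions within a cone whose half-angle grows with the disc radius. A rough, hypothetical sketch (not how the engine actually samples the sun):

```python
import math, random

def sample_cone_direction(half_angle_deg):
    """Random unit direction within a cone of the given half-angle about +Z
    (imagine +Z pointing at the centre of the sun disc)."""
    cos_max = math.cos(math.radians(half_angle_deg))
    cos_t = 1.0 - random.random() * (1.0 - cos_max)  # uniform over the spherical cap
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * math.pi * random.random()
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)

# The real sun subtends roughly a 0.27 degree half-angle, so sampled rays are
# nearly parallel; an enlarged disc spreads them out considerably more.
for half_angle in (0.27, 5.0):
    print(half_angle, sample_cone_direction(half_angle))
```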

Another thing this analysis shows is that "optimization" here largely means modeling things so that photons are killed as early as possible, because the calculation for any given light ray can be abandoned only once it is certain that the ray has no possible way of affecting what the camera sees. The main way of establishing that is for the ray to end up intersecting not geometry, but the environment. In this case, that could mean a photon either reflects from the water surface out to the environment, or refracts through the water surface and then encounters either the sea bed or -- if we reduce the size of that sea bed -- the environment below the scene. Simply put, make both the sea surface and the sea bed below it as small as possible while still achieving the image you are after.

Next, similar to what I suggested with painting a custom map for the beach sand, it may also be valid to employ another hack, where you paint a complementary map for the water's edge, to be used as an opacity mask for two material layers applied to the sea surface -- one containing a real dielectric water BSDF for the majority of the (deeper) water, and one with an AGS material using the same Nd (i.e. 1.51) as the water, to be blended in closer to the shoreline. Since AGS represents a reflective BSDF mixed with vacuum (i.e. with a refractive index of 1.0), it can reflect like water but refract like vacuum (which is to say, not at all).
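Just to illustrate why that blend can look seamless: the reflective side of both layers follows the same Fresnel falloff for a given Nd; the only difference is whether what passes through gets bent and attenuated. A generic, textbook Fresnel sketch (not Maxwell's shader code):

```python
import math

def fresnel_reflectance(theta_i_deg, n1=1.0, n2=1.51):
    """Unpolarized Fresnel reflectance at a smooth dielectric boundary."""
    ti = math.radians(theta_i_deg)
    s = (n1 / n2) * math.sin(ti)
    if abs(s) > 1.0:
        return 1.0  # total internal reflection
    tt = math.asin(s)
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n1 * math.cos(tt) - n2 * math.cos(ti)) / (n1 * math.cos(tt) + n2 * math.cos(ti))
    return 0.5 * (rs * rs + rp * rp)

# About 4% reflection looking straight down, rising sharply toward grazing angles --
# the same falloff whether or not the layer refracts what it transmits.
for angle in (0, 30, 60, 80):
    print(angle, round(fresnel_reflectance(angle), 3))
```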

However, this may not be necessary, and I would only recommend pursuing it after first playing with the attenuation value of your water material, to ensure that, given the scale of the scene and the nature of the water itself, your water material is absorbing light at a rate that makes the deeper portions appear suitably darker and the shallows suitably lighter.
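Roughly speaking, attenuation amounts to exponential absorption along the path through the water, so relatively small changes make a big difference in how quickly the shallows brighten toward the shore. A toy illustration, using an assumed 1/e-style parameterisation that may not match Maxwell's exact definition:

```python
import math

def transmittance(path_length_m, attenuation_distance_m):
    """Fraction of light surviving a path through the water, assuming simple
    Beer-Lambert absorption, with the attenuation distance taken as the
    length at which intensity falls to 1/e (an assumed parameterisation,
    not necessarily how Maxwell defines its own setting)."""
    return math.exp(-path_length_m / attenuation_distance_m)

# Light crossing the water column twice (down to the sea bed and back up),
# with an assumed 4 m attenuation distance:
for depth in (0.1, 0.5, 2.0, 10.0):
    print(f"{depth:5.1f} m deep -> {transmittance(2.0 * depth, 4.0):.2f} of the light survives")
```

If the attenuation distance is very large relative to the scene, every depth lands near the top of that table and the water reads as uniformly deep, which sounds like what you are seeing.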
#393840
This was helpful. Thanks Jeremy. Will post update for comment when I get something. The shoreline transparency map will take a bit of doing.

Actually, I prefer the look of full AGS-style water. I get the bottom contour variation I'm looking for, and it seems to be easier to control the colour... The scene looks a little "surreal", which is OK, maybe even ideal for what I'm doing.

Should there be any noticeable difference in the render between the original Extension water, and the mesh after I explode it?
#393841
It is possible there will be some difference, since to obtain the in-Rhino mesh version of the sea surface, the plugin queries the sea extension library for "proxy display" faces; these have no normals, which means normals for the mesh will be calculated by Rhino during export. In practice, any difference will likely be imperceptible.
#393844
It depends on what you are referring to; it will take longer to export the mesh from Rhino, since the extension consists of writing just a few numbers into the scene, as opposed to megabytes of mesh data. Rendering-wise, though, as I understand it, it should render at the same speed (meaning, the extension works by writing a mesh inside the engine just prior to rendering).
#393846
An alternative would be to export just the sea mesh to MXS, and then reference that. Aside from that, it would come down to reducing the size/resolution of the mesh (10 minutes suggests it is pretty large/dense), to reduce the amount of data that must be written.