General discussion in Spanish about Maxwell Render.
By mane162
#82622
ENGLISH VERSION :wink:

HOW DOES LIGHTING WORK?

Before talking about normal maps specifically, it's important that I give a general overview of the process of lighting a 3D model so you have a good foundation for understanding what normal maps are doing. This is a very simple explanation. If you want to learn more, just follow the links in the text.

So how does lighting work? How do we tell how dark or bright to make each point on the screen so that the object looks like it's being lit by the lights in the scene? First, it's important to know the direction that each point on the surface is facing. The direction that a point on the surface is facing is called a normal. You can imagine a normal as a line extending from the surface point, perpendicular to the surface. Next we need to know where the light is in our scene. We create a line from the point on the surface to the position of the light. This line is called the light vector. (Vector is a fancy math term for a direction you can draw as a line.) So now we have two vectors coming out of our surface point: the light vector and the normal. If we measure the angle between the two, then we know how to light the point.
Image
If the angle is small (the two vectors are pointing in a similar direction) then we know that the surface point needs to be bright because it’s looking almost straight at the light. If the angle is large then we know that the point needs to be dark because it is facing away from the light (assuming there’s just one light).
Image Image

That's pretty much all there is to it. (Of course there are all kinds of other cool things you can do, like specular and reflection, but we'll get into those in another tutorial.) The core mathematical formula for lighting looks like this:

brightness = N dot L
N is the direction that the surface is facing (the surface normal) and L is the line that we draw from the surface point to the light source (the light vector). "Dot" is the way we measure the angle between the two lines: it's the dot product of the two vectors, which for two unit-length vectors equals the cosine of the angle between them.
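If you like seeing this in code, here is a minimal sketch of the N dot L formula in Python (the points, normals, and light position are just made-up example values):

[code]
# Minimal sketch of the N dot L lighting formula (illustrative values only).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = dot(v, v) ** 0.5
    return tuple(c / length for c in v)

def brightness(surface_point, surface_normal, light_position):
    # L is the light vector: from the surface point toward the light.
    light_vector = normalize(tuple(l - p for l, p in zip(light_position, surface_point)))
    n = normalize(surface_normal)
    # N dot L is largest when the surface faces the light; clamp negative values to 0.
    return max(0.0, dot(n, light_vector))

# A point facing straight up with the light directly overhead is fully lit:
print(brightness((0, 0, 0), (0, 1, 0), (0, 10, 0)))   # 1.0
# Move the light off to the side and the same point gets dimmer:
print(brightness((0, 0, 0), (0, 1, 0), (10, 10, 0)))  # about 0.71
[/code]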

ENTER THE NORMAL MAP
So how does this apply to real-time models? Up until just recently, most real-time video game models have been lit using per-vertex Gouraud shading (pronounced Guh-row). That’s a big fancy title that basically means that only the vertices were lit with the N dot L formula (only the corners of the polygons) and then all the pixels on the polygons in between got their lighting by interpolation. So if my polygon had one dark vertex and one bright vertex, the pixels in between would just be a linear gradient from light to dark.
Image
It's a shortcut that allows the graphics hardware to do a lot fewer calculations, because it's only doing the N dot L thing at a few points instead of all of them. Then it makes a quick estimate of how the surface in between the verts should be lit. This method works pretty well, but it doesn't look as realistic as doing the lighting calculation at every pixel.
Image
The image above illustrates the problem with Gouraud shading. This low-poly sphere is lit per-vertex using Gouraud shading. It's obvious that the linear interpolation isn't good enough to make the lighting look convincing.

Sometimes you get the lighting you want with Gouraud shading, but sometimes you get some strange artifacts that don't look good at all. If the triangles in your model are large, your lighting will look really poor. You can only put detail in your model by using more polygons, so you're limited by the number of polygons the game engine can push.

What’s the solution to these problems? Per-pixel lighting! Starting with the GeForce2 graphics card, graphics hardware now has the ability to calculate the N dot L lighting formula at every pixel instead of at every vertex. This eliminates the problems caused by Gouraud shading and opens up the door to some really cool possibilities.
Image

This low-poly sphere is lit per-pixel. Even though it's still a low-poly sphere, its shading is nice and smooth because the lighting calculations are done for every pixel.
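To put numbers on the difference, here is a small sketch (made-up vertex data, same N dot L helper as in the first snippet) that lights one edge of a polygon both ways: Gouraud computes the lighting only at the two endpoints and blends the results, while per-pixel interpolates the normal and re-runs N dot L at every pixel.

[code]
# Per-vertex (Gouraud) versus per-pixel lighting along one edge (made-up data).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = dot(v, v) ** 0.5
    return tuple(c / length for c in v)

def brightness(point, normal, light):
    light_vector = normalize(tuple(l - p for l, p in zip(light, point)))
    return max(0.0, dot(normalize(normal), light_vector))

def lerp(a, b, t):
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

vertex_a = {"position": (0, 0, 0), "normal": (0, 1, 0)}
vertex_b = {"position": (4, 0, 0), "normal": (1, 0, 0)}
light = (0, 10, 0)

# Gouraud: N dot L at the two vertices only, then a linear blend in between.
bright_a = brightness(vertex_a["position"], vertex_a["normal"], light)
bright_b = brightness(vertex_b["position"], vertex_b["normal"], light)

for step in range(5):
    t = step / 4.0
    gouraud = (1 - t) * bright_a + t * bright_b
    # Per-pixel: interpolate the normal instead and evaluate N dot L again.
    position = lerp(vertex_a["position"], vertex_b["position"], t)
    normal = lerp(vertex_a["normal"], vertex_b["normal"], t)
    per_pixel = brightness(position, normal, light)
    print(round(gouraud, 2), round(per_pixel, 2))
[/code]

On this edge the Gouraud column just fades linearly from 1.0 to 0.0, while the per-pixel column follows the actual falloff of N dot L, which is what makes the sphere above look smooth.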

Per-pixel lighting uses an RGB texture to encode the data needed to create surface normals in a regular texture map. This texture containing surface normal data is called a normal map. The red, green, and blue channels of the normal map represent the X, Y, and Z values of the normal vector. Here's an example of a normal map that I created:
Image
Remember when I said that the surface normal always goes perpendicular to the surface? That wasn't necessarily true. When you use normal maps, you can make the normal at each pixel go in whatever direction you want. In the image above we can see that the light blue pixels (R 127, G 127, B 255) represent normals that are pointing straight out of the screen. The pink pixels represent normals that are tweaked to the right. Green pixels represent normals that are tweaked up. Purple pixels represent normals that are tweaked down, and dark blue/green pixels are normals tweaked to the left.
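In other words, each color channel squeezes a normal component from the -1 to 1 range into the texture's 0 to 255 range. Here's a quick sketch of how those colors unpack back into normal vectors (the sample colors roughly match the ones described above):

[code]
# Sketch: unpack a normal map pixel (R, G, B in 0..255) back into a normal vector.
def decode_normal(r, g, b):
    # Map each channel from 0..255 back to the -1..1 range.
    x = (r / 255.0) * 2.0 - 1.0
    y = (g / 255.0) * 2.0 - 1.0
    z = (b / 255.0) * 2.0 - 1.0
    return (x, y, z)

# The flat light blue color points straight out of the surface:
print(decode_normal(127, 127, 255))  # roughly (0, 0, 1)
# A pink pixel leans the normal to the right, a greenish one tilts it up:
print(decode_normal(200, 127, 230))
print(decode_normal(127, 200, 230))
[/code]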

You can make it look like your surface has lots of extra bumps, or scratches, or any other type of surface detail simply by editing the normals at each pixel so they make the surface appear to go in directions that it really doesn't. The tweaked normals fool the eye into believing that the surface has more detail than it really does because of the way the lighting reacts with the normal at each pixel. If you've ever painted a bump map for a non-real-time model, you already understand this principle. You can use normal maps to achieve the exact same results as a bump map, only in real time. In fact, it's very easy to just paint a plain old bump map for your real-time model and then convert it to a normal map. Then apply the normal map to your model and you've got bump mapping in real time! The first half of the tutorial will show you how to do this.
Image Image Image
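This isn't the Nvidia plug-in itself, but the sketch below shows roughly what a bump-to-normal conversion does under the hood: compare each pixel's height with its neighbors to get a slope, turn that slope into a normal, and pack it into RGB. The tiny example height map and the scale value are made up.

[code]
# Sketch of converting a grayscale bump (height) map into a normal map.
# heights is a 2D list of values in 0..1; scale exaggerates or flattens the bumps.
def bump_to_normal_map(heights, scale=4.0):
    h = len(heights)
    w = len(heights[0])
    normal_map = []
    for y in range(h):
        row = []
        for x in range(w):
            # Slope in X and Y from neighboring heights (clamped at the edges).
            dx = heights[y][min(x + 1, w - 1)] - heights[y][max(x - 1, 0)]
            dy = heights[min(y + 1, h - 1)][x] - heights[max(y - 1, 0)][x]
            nx, ny, nz = -dx * scale, -dy * scale, 1.0
            length = (nx * nx + ny * ny + nz * nz) ** 0.5
            # Pack the -1..1 components into 0..255 RGB for the normal map.
            row.append(tuple(int((c / length * 0.5 + 0.5) * 255) for c in (nx, ny, nz)))
        normal_map.append(row)
    return normal_map

# A single bright bump in the middle of a flat 3x3 height map:
bump = [[0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0]]
for row in bump_to_normal_map(bump):
    print(row)
[/code]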

The other way to make a normal map is to build a very detailed high res version of your model in addition to the low res real-time model. Next you run a special program for generating the normal map. The program puts an empty texture map on the surface of the low res model. For each pixel of this empty texture map, the program casts a ray (draws a line) along the surface normal of the low res model toward the high res model. At the point where that ray intersects the surface of the high res model, the program finds the high res model's normal. The idea is to figure out which direction the high res model's surface is facing at that point and put that direction information (the normal) in the texture map.
Image
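Here's a rough sketch of that ray-casting idea in code. It's not any particular tool (ATi's NormalMapper and the others listed at the end do this for real); it just shows one texel being baked, with a made-up low res surface point and a single made-up high res triangle:

[code]
# Sketch of baking one texel: cast a ray along the low res surface normal,
# find where it hits the high res mesh, and keep the high res normal there.
# All of the geometry below is made up just to have something to run.

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def ray_hits_triangle(origin, direction, v0, v1, v2):
    # Standard Moller-Trumbore ray/triangle test; returns the hit distance or None.
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < 1e-8:
        return None
    inv_det = 1.0 / det
    t_vec = sub(origin, v0)
    u = dot(t_vec, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = cross(t_vec, e1)
    v = dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    return dot(e2, q) * inv_det

def bake_texel(low_res_point, low_res_normal, high_res_triangles):
    # Find the closest high res triangle along the ray and return its face normal
    # (flat per-face normals keep the sketch short; a real tool interpolates them).
    best = None
    for v0, v1, v2 in high_res_triangles:
        t = ray_hits_triangle(low_res_point, low_res_normal, v0, v1, v2)
        if t is not None and (best is None or abs(t) < abs(best[0])):
            best = (t, cross(sub(v1, v0), sub(v2, v0)))
    return None if best is None else best[1]

# One texel on a flat low res surface, baked against a single tilted high res triangle.
# A real baker would normalize this result and pack it into RGB like the bump-map sketch.
high_res = [((0.0, 0.1, -1.0), (-1.0, 0.2, 1.0), (1.0, 0.3, 1.0))]
print(bake_texel((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), high_res))
[/code]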

CREATING A NORMAL MAP FROM A BUMP MAP

The easiest method of making your real-time models look more detailed is to use a normal map created from a bump map. In this method, the normal map provides, in real-time, the exact same functionality that a bump map adds to a software-rendered model. The basic principle works like this:

1. Create a real-time model.
Image
2. Paint a bump map for your model in Photoshop or another paint program.
Image
3. Convert the bump map to a normal map with Nvidia's Photoshop plug-in. Other simple programs can also do this. See the last page of the tutorial for more info.
Image
4. Apply the normal map to the model and render it with a per-pixel lighting shader.
Image

You end up with a model that looks like it has a lot of extra surface detail. This process is probably pretty simple to understand, so in this tutorial I'll just cover the step that might be new to you: converting the bump map to a normal map.

The conversion is done using a Photoshop plug-in from Nvidia. You can follow the link on page 3 of the tutorial to download this plug-in. Once you've downloaded it just run the executable to install it. Next, follow the steps below to create a normal map.

1. Open your bump map in Photoshop. Be sure that it is a "power of two" image (64x64 or 128x128 or 256x256, etc.) and that the light-colored pixels represent raised surface details and the dark pixels represent lowered surface details.

2. Choose "Save As" from the File menu. Give your texture a name and choose DDS format. DDS stands for Direct Draw Surface. It's an image file format that is used natively by DirectX. Don't worry if DDS isn't the final file format that you want. You can always open the image again and save it as another format once it's converted.
Image
3. When you click the Save button, an options dialog will appear that allows you to specify all of the parameters for saving your image.
Image
I was really impressed with how much control all of these options gave me over exactly what I wanted to do with my image. You can do a lot more with DDS format than we're going to cover in this tutorial.
4. Click on the "Normal Map Settings ..." button to access the normal map conversion options. The following window will open:
Image
5. Check the "Convert to Tangent Space Normal Map" box. Set Filter Type to "4 sample" and Scale to "4." Set Height Source to "Average RGB." (Once you've converted your bump map to a normal map, you're welcome to come back to this dialog and play around with these settings to achieve different results. For example, the scale value will make your bumps appear higher or lower off of the surface.) When you're done, click the OK button.

6. Back at the DDS format options window, click the Save button. This will convert your image to a normal map and save it in DDS format. When the dialog goes away, your image will look just the same as it did before you started. Close the image and re-open it to see the normal map that you have created.

LINKS TO ADDITIONAL INFORMATION

There is a lot of information on the web about generating normal maps using lots of different techniques and different software packages. I've compiled a list of the links that I found the most useful.

Creating Normal Maps:

http://developer.nvidia.com/object/nv_t ... tools.html
Nvidia's DDS plug-in for Photoshop. This plug-in converts bump maps into normal maps.

http://www.pixolator.com/zbc-bin/ultima ... =011260&p=
A forum thread with lots of information about creating normal maps using ZBrush.

http://www.drone.org/tutorials/normal_maps.html
A technique for generating normal maps in Maya.

http://www.pinwire.com/article82.html
A very cool method for creating normal maps for flat surfaces in just about any 3D program.

http://reblended.com/www/alien-xmp/Tuto ... alMap.html
A technique for creating normal maps using Blender.

http://members.shaw.ca/jimht03/normal.html
Creating normal maps using Cinema 4D.

http://www.ati.com/developer/tools.html
NormalMapper - ATi's normal map generator program

http://www.soclab.bth.se/practices/orb.html
A program similar to ATi's NormalMapper that generates normal maps from a high and low res model.

http://www.seanomlor.com/mikeb/
An extended version of ATi's NormalMapper program that adds OBJ support and the ability to render displacement maps.

http://www.ionization.net/tutsnorm1.htm
Another tutorial on generating normal maps.

http://sparks.discreet.com/downloads/do ... 2&wf_id=83
A free plugin for 3DS Max that renders normal maps. This one is VERY limited because the UV unwrap of both high and low res models must be the same.

http://www.mankua.com/kaldera.cfm
A commercial plug-in for max that creates normal maps.

http://be3d.republika.pl/howto_d3_normalmap.html
A tutorial that explains how to generate and display normal maps in SoftImage XSI.

http://amber.rc.arizona.edu/lw/normalmaps.html
A plugin and tutorial for generating normal maps in Lightwave.

http://www.jeffparrott.com/normalmaptut_01.html
Jeff Parrott has written a tutorial that shows how easy it is to create a normal map in Maya 6.

http://66.70.170.53/Ryan/nrmphoto/nrmphoto.html
Ryan Clark has created a clever way of creating normal maps from real world surfaces!

http://www.kennorman.com/assets/images/ ... malmap.htm
A nice tutorial for creating normal maps in Maya 6 by Ken Norman.


http://www.poopinmymouth.com/tutorial/n ... rkflow.htm
A very thorough work flow tutorial by Ben Mathis on creating normal maps in Max 7.

http://www.codeproject.com/cs/media/Nor ... ressor.asp
A tool for converting regular normal maps to DXT5 swizzled compressed normal maps. It will even do batch converts! Very cool.

Displaying and Viewing Normal Maps in Per-Pixel Lighting

http://developer.nvidia.com/object/IO_3 ... lugin.html
http://developer.nvidia.com/object/MayaCgPlugin.html
Nvidia’s Cg plug-in for Max or Maya. This plugin displays real-time shaders (including those that use normal maps) in the real-time viewport of 3DS Max or Maya.

http://be3d.republika.pl/howto_d3_normalmap.html
A tutorial that explains how to generate and display normal maps in SoftImage XSI.

http://www.drone.org/tutorials/normal_maps.html
This tutorial also shows how to display normal maps in Maya.

http://www.crytek.com/downloads/index.php?sx=polybump
A demo that shows the cool results of using normal mapping. You can also use their viewer to see the results of your own normal maps.

http://www.ati.com/developer/tools.html
ATi's NormalMapper program also comes with a simple viewer for displaying your models with the normal map applied.

http://developer.nvidia.com/object/nv_t ... tools.html
Nvidia’s DDS plugin for Photoshop displays a real-time preview of a flat surface with your normal map applied.

http://www.maxplugins.de/max5.php?search=gnormal
A plugin for 3DS Max that allows you to put your normal map in the bump map channel of a material and see the results using max's renderer.

http://www.carboneros.com/mel.htm
A shader node for Maya written by Alex Carbonero that allows you to use and render normal maps with Maya's software renderer.

I hope it's useful.

mane162
By def4d
#83075
:shock: WOW :shock: Impressive

So many many many thanks, Mane !!!!

I hope I can be helpful to you one day (or two) :D :shock:
By tom
#83079
Thank you for the English version, Mane! The Tutorials section is waiting for you! :D