
Shaders

From TWC Wiki
Revision as of 08:56, 30 March 2007 by Professor420 (talk | contribs)

Introduction

As the fidelity of graphics increases, and artists must produce more and more maps, models, and polygons to reach their intended visual result, it is easy to feel lost in a sea of terms we don't understand. Everyone knows what a normal map is, but how does it work? What is "gloss"? What is a specular texture?

Most of us know what a normal map looks like, we know what gloss does, and we know that specular means highlights. But it's been my experience that actually knowing something about these techniques, getting under the hood of shaders (the programs that drive the actual rendering, and all we are really concerned with as modelers and texture artists), has increased my ability many times over.


I'm going to ignore the traditional way of teaching vertex and pixel shaders; that is, doing your rendering with a vertex shader, and then showing you how to do it so much better with a pixel shader. For artists, this is mostly pointless.

Overview of Shaders

What is a Shader?

Everyone has heard of shaders, or of shader-based engines and software, but most people don't know what they are, or what the craze is about. The most succinct explanation I can think of for a shader is:

A shader takes something, does stuff to it, and gives you something else.

Depending on who you are, that is either the most mundane explanation, or the most intriguing explanation. Obviously, programmers find it intriguing. And I hope you will too.

Shaders come in two sorts: vertex shaders and pixel shaders. They go hand in hand, but pixel shaders are where the magic happens.

World, Object, and Tangent Space

We generally talk about shaders, and graphics in general, with regards to three different types of "spaces," World Space, Object (or Local) Space, and Tangent Space. I will introduce these briefly.

"Spaces" work very similarly to the "Reference Coordinate System" in 3ds (or the Tool Settings panel in Maya). 3ds has two "Systems" which are of importance to us: World and Local.

To test them out, create an object, then translate and rotate it. Using "World," your Move and Rotate axes will always stay aligned with the 3ds axes (Z up, X horizontal, Y into the screen). This is the equivalent of "World Space" in graphics programming.

Now switch to "Local." With an object selected, the Move and Rotate gizmos adjust themselves to the "local" axes of the object: if you rotate the object 90 degrees around the Z axis, the X axis now points into the screen, and the Y axis is horizontal. Play around and experiment. This is "Local/Object Space" in graphics programming.

Finally, select a vertex of your object (still using Local). You will see that the move/scale gizmo is now aligned with the vertex's "normal" (actually it's the averaged normal of the adjacent faces, but that's not important). This is what is referred to as "Tangent Space." (Please refer to Wikipedia if you don't understand what a normal is; essentially, it is a vector pointing perpendicular to a surface.)

So how does this apply to Shaders? Well, Vertex Shaders will 'use' these spaces, putting different 'inputs' into the same space so they can be measured and compared. For this article, we will be mostly concerned with World Space. Object space is very similar. Tangent Space is more complicated, but conceptually you should understand it after this article. We will explore Tangent Space more when we cover Normal Mapping.
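To make the idea of "the same direction, described in different spaces" concrete, here is a tiny sketch in plain Python (shader code won't run on paper, and the function name is mine, not a real API). It shows an object's local +X axis, which is always (1, 0, 0) in object space, being re-expressed in world space after the object is rotated 90 degrees around Z, exactly as in the 3ds experiment above:

```python
import math

def rotate_z(v, degrees):
    """Rotate a 3D vector (x, y, z) around the Z axis."""
    a = math.radians(degrees)
    x, y, z = v
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# The object's local +X axis is always (1, 0, 0) in object space...
local_x = (1.0, 0.0, 0.0)
# ...but after a 90-degree rotation around Z, in world space it
# points along world +Y (into the screen, in 3ds terms).
world_x = rotate_z(local_x, 90)
print([round(c, 6) for c in world_x])  # [0.0, 1.0, 0.0]
```

The vector itself never changed; only the space we describe it in did. That is all a vertex shader does when it moves inputs between spaces.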

Floats and Vectors

This is simple. A float is a real number. It is stored in a fixed number of bits, which limits its accuracy, but don't worry about that. So, a float2 is two real numbers, a float3 is three real numbers, and a float4, four real numbers.

Another name for a float is a scalar value. "4" and "0.0374832" are both floats. Floats are used for a variety of things.

Float2's are normally used for UV coordinates. An example of a float2 is "5.3 2.5493"; i.e., it's just a list of two numbers.

Float3's are Vectors, and can represent any "direction." A vector points in a certain direction, and the three components of the float3 correspond to the XYZ values of that direction. RGB colors are also stored as float3's. They are written with the suffix "name.xyz" or "name.rgb".

Though float3's are enough to hold a vector, float4's are more commonly used. They contain a fourth number, which can hold the length of a vector, or the value of an alpha channel. They are written with the suffix "name.xyzw" or "name.rgba".
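Since HLSL types don't exist outside a shader, here is a plain-Python stand-in (the variable names are mine, purely for illustration) showing that float2/float3/float4 really are just lists of numbers, and that the ".rgb"/".xyzw" suffixes just pick out components:

```python
# Plain-Python stand-ins for HLSL's float2/float3/float4: just tuples.
uv       = (5.3, 2.5493)            # float2: a UV coordinate
color    = (1.0, 0.5, 0.0)          # float3: an RGB color (orange)
position = (4.0, 2.0, -1.0, 1.0)    # float4: xyz position plus w

# HLSL's "name.rgb" / "name.xyzw" suffixes name the components:
r, g, b = color                     # like color.rgb
x, y, z, w = position               # like position.xyzw
print(r, w)  # 1.0 1.0
```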

Normalization

Normalization is setting a vector's length to 1 (dividing the vector by its magnitude, aka length), so you get a vector pointing in the same direction but with unit length. This is important for multiplying and comparing vectors (a requirement for things such as lighting).
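In code, normalization is one line of math. A minimal Python sketch (the function name is mine; HLSL has a built-in `normalize` that does the same thing):

```python
import math

def normalize(v):
    """Divide a vector by its magnitude, giving a unit-length vector
    pointing in the same direction."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

# A vector of magnitude 5 becomes a unit vector in the same direction:
print(normalize((3.0, 0.0, 4.0)))  # (0.6, 0.0, 0.8)
```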

Vertex Shader

Now that we know what a Shader does, let's take a look at one. This shader converts things to World Space. We will break down a simple shader line-by-line.

Remember our initial definition? A shader takes something, does stuff to it, and gives you something else. Well, we first have to define what "something" we take in, and what "something else" we will eventually output.

// input from application 
struct a2v { 
 float4 position  : POSITION; 
 float2 texCoord  : TEXCOORD0; 
 float3 tangent  : TANGENT; 
 float3 binormal  : BINORMAL; 
 float3 normal  : NORMAL; 
}; 

All application inputs inherited like this are in Object Space (many rendering engines also let you take variables, such as the light position, in world space or object space, but those are not part of the vertex input structure; they are separate variables). These are the things the application passes into the vertex shader. The application says: "this vector (float3) is your tangent, this is your normal, and this is your binormal; this float2 is the UV coordinate of the vertex; and this vector (float4) is the vertex's position in Object Space."

// output to fragment program 
struct v2f { 
 float4 position     : POSITION; 
 float3 lightVec     : TEXCOORD4; 
 float3 eyeVec      : TEXCOORD3; 
 float2 texCoord  : TEXCOORD0; 
 float3 worldTangent   : TEXCOORD6; 
 float3 worldBinormal  : TEXCOORD7; 
 float3 worldNormal    : TEXCOORD5;
}; 

This is what the vertex shader outputs into the pixel shader (AKA, fragment program). The actual code of the vertex shader will show us how we calculate these outputs. TEXCOORD is just a semantic for a "register"; it says to the vertex shader, "store this number in the place named TEXCOORD#."

Now, the actual Vertex Shader code (this is the "stuff we do to it," going back to our original definition):

v2f v(a2v In) 
{ 
 v2f Out = (v2f)0; 
 Out.position = mul(In.position, wvp);
 Out.texCoord = In.texCoord;

These are just your standard first steps. The first line "zeros out" your output structure to make sure the calculations start clean. Then, you convert your vertex position (in object space) to "screen space" so it shows up correctly on the screen, by multiplying it by the "world view projection matrix." Finally, you take your input UV coordinates and pass them through, unmodified.
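If you want to see what `mul(In.position, wvp)` actually does, here is a Python sketch of an HLSL-style row-vector-times-matrix multiply. The matrix here is made up for illustration (just a translation by 10 along X; a real wvp matrix also rotates and projects):

```python
def mul(v, m):
    """HLSL-style mul(vector, matrix): treat v as a row vector and
    multiply by the 4x4 matrix m (a list of 4 rows of 4 numbers)."""
    return tuple(sum(v[i] * m[i][j] for i in range(4)) for j in range(4))

# A hypothetical transform matrix: translate by (10, 0, 0).
wvp = [[1, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 0],
       [10, 0, 0, 1]]

# A float4 position (w = 1) pushed through the matrix:
print(mul((2.0, 3.0, 4.0, 1.0), wvp))  # (12.0, 3.0, 4.0, 1.0)
```

Every `mul` in the shader above is this same operation, just with different matrices.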

float3 worldSpacePos = mul(In.position, world);
Out.lightVec = lightPosition - worldSpacePos;
Out.eyeVec = eyePosition - worldSpacePos;

Matrix multiplication is a doozy... don't even try to think about it mathematically. This multiplies the object space vertex position by the "world matrix" to find the world space vertex position. Then we take the world space light and eye positions, subtract the world space vertex position, and we get vectors pointing from the vertex position toward the light (and toward the eye).

In this case, our light and eye positions are already in world space; if they weren't, we'd multiply their object space positions by the World Matrix to put them into world space before doing the subtraction.
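The subtraction itself is trivial; a Python sketch with made-up world-space positions makes the direction of the resulting vector obvious:

```python
# Both positions already in world space (values made up for illustration).
light_position  = (0.0, 0.0, 10.0)   # a light high above the origin
world_space_pos = (4.0, 0.0, 1.0)    # the vertex we are shading

# lightVec = lightPosition - worldSpacePos, component by component.
# The result points FROM the vertex TOWARD the light.
light_vec = tuple(l - p for l, p in zip(light_position, world_space_pos))
print(light_vec)  # (-4.0, 0.0, 9.0)
```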

Out.worldNormal = mul(In.normal, worldIT).xyz;
Out.worldBinormal = mul(In.binormal, worldIT).xyz;
Out.worldTangent = mul(In.tangent, worldIT).xyz;

This just converts your normal, binormal, and tangent inputs into world space, by multiplying them by the World Inverse Transpose Matrix (the inverse transpose is used so that normals stay perpendicular to the surface even when the object is scaled non-uniformly). Now, everything (vertex position, light vector, eye vector, normal, binormal, and tangent vectors) is in world space.
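Why the inverse transpose and not just the world matrix? A small Python sketch (matrices and vectors made up for illustration) shows the problem with a non-uniform scale: the plain world matrix bends the normal away from perpendicular, while the inverse transpose keeps it perpendicular to the transformed surface:

```python
def mul3(v, m):
    """Row vector times 3x3 matrix."""
    return tuple(sum(v[i] * m[i][j] for i in range(3)) for j in range(3))

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A non-uniform world matrix: scale X by 2.
world    = [[2, 0, 0], [0, 1, 0], [0, 0, 1]]
world_it = [[0.5, 0, 0], [0, 1, 0], [0, 0, 1]]  # its inverse transpose

tangent = (1.0, 1.0, 0.0)   # lies in the surface
normal  = (1.0, -1.0, 0.0)  # perpendicular to it: dot(normal, tangent) == 0

world_tangent = mul3(tangent, world)
print(dot(mul3(normal, world), world_tangent))     # 3.0 -- no longer perpendicular!
print(dot(mul3(normal, world_it), world_tangent))  # 0.0 -- still perpendicular
```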

Lighting

Before we move into Pixel Shaders, we need to understand lighting, both Vertex Lighting and Per-Pixel Lighting. Fortunately, they use exactly the same math. And since the pixel shader is, for our purposes, intertwined with normal mapping, we must understand lighting fully before we get there.

As far as we are concerned, lighting comes in two essential forms: diffuse lighting, and specular lighting. There are different techniques for both of these, and there are also interesting ways to do ambient lighting, sub-surface scattering, anisotropic lighting, etc., but since this is an intro, we will look at the most common formulas of the most common lighting types.

Diffuse Lighting

Two inputs are of importance when we consider diffuse lighting: the normal (N), and the light vector (L). The comparison between these, called the Dot Product, determines how much illumination reaches a surface (we call this NdotL). The dot product is a mathematical function of two vectors; for normalized vectors, if they point in the same direction, the result is 1. If they are perpendicular, the result is 0. If they point in opposite directions, the result is -1, but we usually "clamp" any negative values to 0.
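The whole diffuse formula fits in a few lines. A Python sketch (the function names are mine; in HLSL this is `saturate(dot(N, L))` after normalizing both vectors):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def ndotl(n, l):
    """Clamped dot product of the normal and the light vector."""
    n, l = normalize(n), normalize(l)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

up = (0.0, 0.0, 1.0)  # a surface normal pointing straight up
print(ndotl(up, (0.0, 0.0, 1.0)))   # 1.0  light directly above: fully lit
print(ndotl(up, (1.0, 0.0, 0.0)))   # 0.0  light at the horizon: unlit
print(ndotl(up, (0.0, 0.0, -1.0)))  # 0.0  light below: -1, clamped to 0
```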

So, let us look at this setup: a single light positioned directly above the vertex of a plane. The lines pointing from the vertices are the vertex normals. (IMAGE) The dot product of the vertex directly under the light is 1. The darker a vertex is, the lower its dot product. Exact values aren't important, only the idea is.

Let's also look at the same setup, but with a plane with many more vertices. (IMAGE) The lighting is done exactly the same way: NdotL is calculated per-vertex, and interpolated linearly across the plane... meaning that if one vertex has an NdotL of 1, and an adjacent vertex has an NdotL of 0, the point halfway between the two vertices will have an NdotL of .5. Because things are done per-vertex, however, this interpolation/lack of sampling creates problems, as we can see in the following image. (IMAGE) The NdotL of all vertices is exactly the same, so the surface is shaded with a solid color (the NdotL is .6 at each vertex, and thus is .6 everywhere).
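The interpolation described above is plain linear interpolation, and it also explains the solid-color problem: interpolating between equal values can only ever give that same value. A Python sketch:

```python
def lerp(a, b, t):
    """Linear interpolation between two per-vertex values
    (t = 0 at the first vertex, t = 1 at the second)."""
    return a + (b - a) * t

# One vertex has an NdotL of 1, the adjacent vertex has an NdotL of 0:
print(lerp(1.0, 0.0, 0.5))   # 0.5  halfway between the two vertices
# If every vertex has an NdotL of .6, interpolation gives .6 everywhere:
print(lerp(0.6, 0.6, 0.25))  # 0.6  a flat, solid-colored surface
```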

Enter per-pixel lighting and pixel shaders. With per-pixel lighting, we get the following result with the same exact geometry: (IMAGE) The reason is that we are finding the NdotL at each pixel instead of at each vertex. This is much more accurate and precise. Instead of being concerned with the normal of each vertex, we are concerned with the normal of each pixel, which is given to us by, you guessed it, a normal map (or the vertex shader, but those days of non-normal-mapped surfaces are behind us).
