OpenGL ES From the Ground Up, Part 4: Let There Be Light!


FRIDAY, MAY 1, 2009


Continuing on with OpenGL ES for the iPhone, let's talk about light. So far, we haven't done anything with light. Fortunately, OpenGL ES still lets us see what's going on if we don't configure any lights: it provides a flat overall lighting so that we can see our geometry. But without defined lights, things have a tendency to look rather flat, as you saw in the apps we wrote back in part 2.

iPhone SimulatorScreenSnapz001.jpg

Shade Models

Before we get into how OpenGL ES deals with light, it's important to know that OpenGL ES actually defines two shade models: GL_FLAT and GL_SMOOTH. We're not going to spend much time on GL_FLAT, because you'd only use it if you wanted your apps to look like they came from the 1990s:

GL_FLAT.jpg

GL_FLAT rendering of an icosahedron. State of the art real-time rendering... 15 years ago.

GL_FLAT treats every pixel on a given triangle exactly the same in terms of lighting: every pixel on a polygon gets the same color and shade. That gives enough visual cues to have some three-dimensionality, and it's computationally a lot cheaper than calculating every pixel differently, but flat-shaded objects tend not to look very real. There might be a certain retro value to using GL_FLAT at times, but to make your 3D objects look as realistic as possible you're going to want the GL_SMOOTH shade model, which uses a smooth but fairly fast shading algorithm called Gouraud shading. GL_SMOOTH is the default value.
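To build intuition for what GL_SMOOTH is doing, here's a tiny sketch of the idea behind Gouraud shading (my own illustration, not OpenGL's actual implementation): lighting is computed at each vertex, and each interior pixel gets a weighted blend of the three vertex colors, with the barycentric weights summing to 1.0.

```c
// Sketch of the idea behind Gouraud shading (illustration only, not
// OpenGL's code): each vertex has a lit color, and interior pixels get
// a barycentric blend of the three. The weights satisfy w0+w1+w2 == 1.0.
typedef struct { float r, g, b; } Color;

Color gouraudInterpolate(Color c0, Color c1, Color c2,
                         float w0, float w1, float w2)
{
    Color out;
    out.r = w0 * c0.r + w1 * c1.r + w2 * c2.r;
    out.g = w0 * c0.g + w1 * c1.g + w2 * c2.g;
    out.b = w0 * c0.b + w1 * c1.b + w2 * c2.b;
    return out;
}
```

At a vertex, the weight for that vertex is 1.0 and you get its lit color back; at the centroid, the weights are each one third and you get the average. GL_FLAT, by contrast, picks a single color for the whole triangle.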

The only reason I bring it up at all is that some of the stuff discussed toward the end of this article doesn't work exactly the same if you're using GL_FLAT shading. Given how antiquated and unnecessary GL_FLAT is for most purposes, I didn't want to spend time covering it.

Enabling Lights

For purposes of this article, I'm going to assume that you're continuing on with the final project from part 2, the flat-looking spinning icosahedron. If you didn't create the project then and want to follow along, you can grab the Xcode project here.

The first thing we need to do is enable lighting. By default, manually specified lights are disabled. Let's turn that feature on now. Go into GLViewController.m and add the glEnable(GL_LIGHTING) call shown below to the setupView: method:

-(void)setupView:(GLView*)view
{
    const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0; 
    GLfloat size; 
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION); 
    size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0); 
    CGRect rect = view.bounds; 
    glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / 
               (rect.size.width / rect.size.height), zNear, zFar); 
    glViewport(0, 0, rect.size.width, rect.size.height);  
    glMatrixMode(GL_MODELVIEW);
    
    glEnable(GL_LIGHTING);
    
    glLoadIdentity(); 
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f); 
}

Typically, lighting is something you enable once during setup and then don't touch again; it's not a feature you should be toggling on and off before and after drawing. There may be odd cases where you turn it on or off during your program's execution, but most of the time you just enable it when your app launches. That single line of code is all it takes to enable lighting in OpenGL ES. Any idea what happens if we run the program now?

lights_enabled.jpg



Egads! We've enabled lighting, but haven't created any lights. Everything we draw, except the grey color we used to clear the buffers, will be rendered as absolute pitch black. That's not much of an improvement, is it? Let's add a light to the scene.

The way lights are enabled is a little odd. OpenGL ES allows you to create up to eight light sources. There is a constant corresponding to each of these possible lights, running from GL_LIGHT0 to GL_LIGHT7. You can use any combination of these eight lights, though it is customary to start with GL_LIGHT0 for your first light source, then go to GL_LIGHT1 for the next, and so on as you need more lights. Here's how you would "turn on" the first light, GL_LIGHT0:
    glEnable(GL_LIGHT0);

Once you've enabled a light, you have to specify some attributes of that light. For starters, there are three different component colors that are used to define a light. Let's look at those first. 

The Three Components of Light

In OpenGL ES, lights are made up of three component colors called the ambient component, the diffuse component, and the specular component. It may seem odd to use a color to specify components of a light, but it actually works quite well because it lets you specify both the color and the relative intensity of each component of the light at the same time. A bright white light would be defined as white ({1.0, 1.0, 1.0, 1.0}), where a dim white light would be specified as a shade of grey ({0.3, 0.3, 0.3, 1.0}). You can also give your light a color cast by varying the proportions of the red, green, and blue components.
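To make "color and intensity in one array" concrete, here's a hypothetical little helper (my own, not part of OpenGL) that scales a base color by an intensity to produce the four-element component array that glLightfv() expects:

```c
// Hypothetical helper (not part of OpenGL): build a light component
// array by scaling a base color by an intensity in [0.0, 1.0]. White at
// full intensity is {1,1,1,1}; the same white at 0.3 is a dim grey.
typedef float GLfloat;  /* stand-in so the sketch compiles without GL headers */

void makeLightComponent(GLfloat out[4],
                        GLfloat r, GLfloat g, GLfloat b, GLfloat intensity)
{
    out[0] = r * intensity;
    out[1] = g * intensity;
    out[2] = b * intensity;
    out[3] = 1.0f;  /* alpha; leave it at 1.0 for now */
}
```

You could then pass the resulting array straight to glLightfv() for whichever component you're setting.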

The following illustration shows what effect each component has on the resulting image.

component.jpg

Source image from Wikimedia Commons, modified as allowed by license.

The specular component defines light that is very direct and focused, and which tends to reflect back to the viewer such that it forms a "hot spot" or shine on the object. The size of the spot can vary based on a number of factors, but if you see an area of more pronounced light, such as the two white dots on the yellow sphere above, that's generally going to be coming from the specular component of one or more lights.

The diffuse component defines flat, fairly even directional light that shines on the side of objects that faces the light source. 

Ambient light is light with no apparent source. It is light that has bounced off of so many things that its original source can't be identified. The ambient component defines light that shines evenly off of all sides of all objects in the scene.

Ambient Component

The more ambient component your light has, the less dramatic the lighting effects you achieve will be. The ambient components of all enabled lights are added together, meaning that the scene's overall ambient light will be the combined ambient light from all of the enabled light sources. If you're using more than one light, you may wish to specify an ambient component on only one of them and leave the ambient component on all the others at black ({0.0, 0.0, 0.0, 1.0}), so that it's easier to adjust the amount of ambient light in the scene; you'll only have to adjust it in one place.
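To illustrate the combining of ambient terms, here's a sketch of my own (not pipeline code): per the OpenGL spec, contributions from the enabled lights are accumulated additively and each channel is clamped to [0, 1]. The real pipeline also multiplies in the material's ambient reflectance, which we'll meet in the next installment.

```c
// Sketch (illustration only): accumulate the ambient contributions of
// several lights, clamping each color channel to [0, 1] as the
// fixed-function pipeline does.
void accumulateAmbient(const float lights[][4], int count, float out[4])
{
    out[0] = out[1] = out[2] = 0.0f;
    out[3] = 1.0f;
    for (int i = 0; i < count; i++)
        for (int c = 0; c < 3; c++) {
            out[c] += lights[i][c];
            if (out[c] > 1.0f) out[c] = 1.0f;
        }
}
```

Two lights with ambient {0.05, 0.05, 0.05, 1.0} and {0.1, 0.1, 0.1, 1.0} combine to an overall ambient of roughly {0.15, 0.15, 0.15, 1.0}, which is why it's simplest to put all your ambient on one light.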

Here's how you would set the ambient component of a light to a very dim white:

    const GLfloat light0Ambient[] = {0.05, 0.05, 0.05, 1.0};
    glLightfv(GL_LIGHT0, GL_AMBIENT, light0Ambient);

Using a low ambient value like this makes the lighting in your scene more dramatic, but it also means that the sides of objects not facing a light, or objects that have other objects between them and the light, won't be seen very well in your scene.

Diffuse Component

The second component of light that you can specify in OpenGL ES is the diffuse component. In the real world, diffuse light would be light that has, for example, passed through a light fabric or bounced off of a white wall. The rays of diffuse light have scattered, which gives a softer appearance than direct light, with less chance of hot spots or shininess. If you've ever watched a professional photographer using studio lights, you've probably seen them using softboxes or reflecting their lights into umbrellas. Both passing through a light material like white cloth and reflecting off of a light-colored material diffuse the light to give a more pleasing photograph. In OpenGL ES, the diffuse component is similar, in that it is light that reflects very evenly off of the object. It's not the same as ambient, however, because it's directional: only the sides of an object that face the light source reflect diffuse light, whereas all polygons in the scene are hit by ambient light.

Here's an example of specifying the diffuse component for the first light in a scene:

    const GLfloat light0Diffuse[] = {0.5, 0.5, 0.5, 1.0};
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light0Diffuse);

Specular Component

Finally, we get to the specular component. This is the component of light that is very direct and tends to bounce back to the viewer in the form of hot spots and glare. If you wanted to give the appearance of a spotlight, you would specify a very high specular component and very low diffuse and ambient components (you'd also define some other parameters, as you'll see in a moment).
Note: the specular value of the light is only one factor in determining the size of the specular highlight, as you'll see in the next installment. The first version of this blog posting incorrectly listed shininess as a light attribute; it's actually a material attribute only, sorry about that. Shininess is something we'll discuss next time when we start creating materials.

Here's an example of setting the specular component:

    const GLfloat light0Specular[] = {0.7, 0.7, 0.7, 1.0};
    glLightfv(GL_LIGHT0, GL_SPECULAR, light0Specular);

Position

There's another important attribute of a light that needs to be set: the light source's position in 3D space. This doesn't affect the ambient component, but the other two components are directional in nature and can only be used in lighting calculations if OpenGL knows where the light is relative to the objects in the scene. We can specify the light's position like this:

    const GLfloat light0Position[] = {10.0, 10.0, 10.0, 0.0}; 
    glLightfv(GL_LIGHT0, GL_POSITION, light0Position);

That position would place the first light behind the viewer, a little to the right and a little ways up (or a lot to the right and up, if the OpenGL units represent large values in your virtual world). Note that the fourth element is a w value: with a w of 0.0, as here, OpenGL actually treats the three values as the direction the light shines from (a directional light infinitely far away); with a w of 1.0 it treats them as a true position in space.

Those are the attributes that you'll set for nearly all lights. If you don't set a value for one of the components, that component defaults to black, {0.0, 0.0, 0.0, 1.0}. If you don't define a position, the light falls at the origin, which usually isn't what you want.

You might be wondering what the alpha component does for lights. For the ambient and specular components, the answer is: not a darn thing. It is used, however, in the calculations for diffuse light to determine how the light reflects back. We'll discuss how that works after we start talking about materials, because both material and light values go into that equation. We'll discuss materials next time, so for now, don't worry too much about alpha; just set it to 1.0. Changing it won't have any effect on the program we're writing in this installment, but it could, at least for the diffuse component, in future installments.

There are some other light components you might optionally consider setting.

Creating a Spotlight

If you want to create a directional spotlight (a light that points in a particular direction and only shines within a certain angle of view, in essence a light that shines in the shape of a frustum (remember what that is?) rather than shining out in all directions like a bare light bulb) then you need to set two additional parameters. Setting GL_SPOT_DIRECTION lets you identify the direction the light is pointing in. Setting GL_SPOT_CUTOFF defines the spread of the light, similar to the angle used to calculate the field of vision in the last installment. A narrow angle creates a very tight spotlight; a broader angle creates something more like a floodlight.

Specifying Light Direction
The way GL_SPOT_DIRECTION works is that you specify x, y, and z components to define the direction the light is pointing in. The light isn't aimed at the point in space you define, however. The three values you provide are a vector, not a vertex. This is a subtle but important distinction. A vector is represented by a data structure identical to a vertex: it takes three GLfloats, one for each of the three Cartesian axes. However, the numbers represent a direction of travel rather than a point in space.

Now, of course, everybody knows that it takes two points to define a line segment, so how can a single point in space possibly identify a direction? It's because there's a second, implied starting point: the origin. If you draw a line from the origin to the point defined in the vector, that's the direction represented by the vector. Vectors can also be used to represent velocity or distance, with a point further from the origin representing a faster velocity or a greater distance. In most uses in OpenGL, the distance from the origin is not actually used; in fact, in most cases where we're going to use a vector, we're going to normalize that vector so that it has a length of 1.0. We'll talk about vectors more as we move on, but for now, if you want to define a directional light, you have to create a vector to define the direction it is shining in. You might do that like this, which would create a light that shines straight down the Z axis:

    const GLfloat light0Direction[] = {0.0, 0.0, -1.0};
    glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, light0Direction);

Now, what if you want light pointing at a specific object? That's actually pretty easy to figure out. Take the position of the light and the position of the object and feed those to the function  Vector3DMakeWithStartAndEndPoints() that can be found in  OpenGLCommon.h provided with my OpenGL project template, and it will return a normalized vector pointing from the light to the specified point. Then you can feed that in as the  GL_SPOT_DIRECTION value for your light.

Specifying Light Angle
Specifying the direction of the light won't have any noticeable effect unless you also limit the angle the light shines in. The angle you specify for GL_SPOT_CUTOFF defines how many degrees the light spreads from the centerline on EACH side of the centerline, so if you specify 45° you actually create a spotlight with a total field of 90°. The largest cutoff angle you can specify is 90°, which gives a full 180° hemisphere of light (the special value 180°, which is the default, means no cutoff at all, so the light shines in all directions). Here's an illustration of the concept:

light field.png


Here's how you would restrict the light to a 90° field of vision (using a 45° cutoff angle):

     glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0);


There are three more light attributes that can be set. They work together, and they are WAY beyond the scope of this blog posting. However, I may do a post in the future on light attenuation, which is how light "falls off" as it moves further away from its source. You can create some very neat effects by playing with the attenuation values, but that will have to wait for a future post.

Putting it All Together

Let's take all that we've learned and use it to set up a light in the  setupView: method. Replace your  setupView: method with this new one:

-(void)setupView:(GLView*)view
{
    const GLfloat zNear = 0.01, zFar = 1000.0, fieldOfView = 45.0; 
    GLfloat size; 
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION); 
    size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0); 
    CGRect rect = view.bounds; 
    glFrustumf(-size, size, -size / (rect.size.width / rect.size.height), size / 
               (rect.size.width / rect.size.height), zNear, zFar); 
    glViewport(0, 0, rect.size.width, rect.size.height);  
    glMatrixMode(GL_MODELVIEW);
    
    // Enable lighting
    glEnable(GL_LIGHTING);

    // Turn the first light on
    glEnable(GL_LIGHT0);
    
    // Define the ambient component of the first light
    const GLfloat light0Ambient[] = {0.1, 0.1, 0.1, 1.0};
    glLightfv(GL_LIGHT0, GL_AMBIENT, light0Ambient);
    
    // Define the diffuse component of the first light
    const GLfloat light0Diffuse[] = {0.7, 0.7, 0.7, 1.0};
    glLightfv(GL_LIGHT0, GL_DIFFUSE, light0Diffuse);
    
    // Define the specular component of the first light
    const GLfloat light0Specular[] = {0.7, 0.7, 0.7, 1.0};
    glLightfv(GL_LIGHT0, GL_SPECULAR, light0Specular);
    
    
    // Define the position of the first light
    const GLfloat light0Position[] = {0.0, 10.0, 10.0, 0.0}; 
    glLightfv(GL_LIGHT0, GL_POSITION, light0Position); 
    
    // Define a direction vector for the light, this one points right down the Z axis
    const GLfloat light0Direction[] = {0.0, 0.0, -1.0};
    glLightfv(GL_LIGHT0, GL_SPOT_DIRECTION, light0Direction);

    // Define a cutoff angle. This defines a 90° field of vision, since the cutoff
    // is number of degrees to each side of an imaginary line drawn from the light's
    // position along the vector supplied in GL_SPOT_DIRECTION above
    glLightf(GL_LIGHT0, GL_SPOT_CUTOFF, 45.0);
    
    glLoadIdentity(); 
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f); 
}

Pretty straightforward, right? We just put all the pieces from above together, so we should be good to go, right? Well, try running it and see.

lights_enabled.jpg


Aw, now, come on! That's not fair! We've got lights, so why can't we see anything now? The shape pulses between black and grey, but it doesn't look any more like a 3D shape than it did before.

Don't Worry, It's Normal

And by normal, I don't mean typical or average. I mean normal in the math sense: "perpendicular to." That's what we're missing: normals. A surface normal (or polygon normal) is nothing more than a vector (or line) that is perpendicular to the surface of a given polygon. Here's a nice illustration of the concept (this one is from Wikipedia, not from me):

OpenGL doesn't need to know the normals to render a shape, but it does need them when you start using directional lighting. OpenGL needs the surface normals to know how the light interacts with the individual polygons.

OpenGL ES requires us to provide a normal for each vertex we use. Calculating surface normals for triangles is pretty easy: it's simply the cross product of two sides of the triangle. In code, it might look like this:

static inline Vector3D Triangle3DCalculateSurfaceNormal(Triangle3D triangle)
{
    Vector3D u = Vector3DMakeWithStartAndEndPoints(triangle.v2, triangle.v1);
    Vector3D v = Vector3DMakeWithStartAndEndPoints(triangle.v3, triangle.v1);
    
    Vector3D ret;
    ret.x = (u.y * v.z) - (u.z * v.y);
    ret.y = (u.z * v.x) - (u.x * v.z);
    ret.z = (u.x * v.y) - (u.y * v.x);
    return ret;
}


As you saw before, Vector3DMakeWithStartAndEndPoints() takes two vertices and calculates a normalized vector from them. So, if it's so easy to calculate surface normals, why doesn't OpenGL ES do it for us? Two reasons. First and foremost, it's costly: there's quite a bit of floating-point multiplication and division going on, plus a costly call to sqrtf() for every single polygon.

Second, because we're using GL_SMOOTH rendering, OpenGL ES needs vertex normals, not the surface normals that the function above calculates. Calculating vertex normals is even more costly, because it requires you to calculate a vector that is the average of the surface normals of all the polygons in which a vertex is used.

Let's look at an example (this is recycled from an earlier post; feel free to skip ahead if you're already comfortable with what vertex normals are).



That's not a cube, by the way. For simplicity, we're looking at a flat, two-dimensional mesh of six triangles, built from a total of seven vertices. The vertex marked A is shared by all six triangles, so the vertex normal for that vertex is the average of the surface normals of all six triangles it's part of. Averaging is done per element: the x values are averaged, the y values are averaged, and the z values are averaged, and the resulting values are recombined to make the average vector.

So, how do we get vertex normals for our icosahedron's vertices? Well, this is a simple enough shape that we could actually get away with calculating the vertex normals at run time without a noticeable delay. Usually, though, you won't be working with so few vertices: you'll be dealing with far more complex objects, and more of them. As a result, you want to avoid calculating normals at runtime except when there's no alternative. In this case, I decided to write a little command line program to loop through the vertices and triangle indices and calculate the vertex normal for each of the vertices in the icosahedron. The program dumps the results to the console as a C array, and I just copied that into my OpenGL program.
Note: Most 3D programs will calculate normals for you, though be careful about using those: most 3D file formats store surface normals, not vertex normals, so you'll still usually be responsible at least for averaging the surface normals to create vertex normals. We'll look at creating and loading objects in later installments, or you can go read some of my earlier posts about writing a loader for the Wavefront OBJ file format.
Here's the command line program I wrote to calculate the vertex normals for our icosahedron:

#import <Foundation/Foundation.h>
#import "OpenGLCommon.h"

int main (int argc, const char * argv[]) {
    NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];

    NSMutableString *result = [NSMutableString string];
    
    static const Vertex3D vertices[]= {
        {0, -0.525731, 0.850651},             // vertices[0]
        {0.850651, 0, 0.525731},              // vertices[1]
        {0.850651, 0, -0.525731},             // vertices[2]
        {-0.850651, 0, -0.525731},            // vertices[3]
        {-0.850651, 0, 0.525731},             // vertices[4]
        {-0.525731, 0.850651, 0},             // vertices[5]
        {0.525731, 0.850651, 0},              // vertices[6]
        {0.525731, -0.850651, 0},             // vertices[7]
        {-0.525731, -0.850651, 0},            // vertices[8]
        {0, -0.525731, -0.850651},            // vertices[9]
        {0, 0.525731, -0.850651},             // vertices[10]
        {0, 0.525731, 0.850651}               // vertices[11]
    };
    
    static const GLubyte icosahedronFaces[] = {
        1, 2, 6,
        1, 7, 2,
        3, 4, 5,
        4, 3, 8,
        6, 5, 11,
        5, 6, 10,
        9, 10, 2,
        10, 9, 3,
        7, 8, 9,
        8, 7, 0,
        11, 0, 1,
        0, 11, 4,
        6, 2, 10,
        1, 6, 11,
        3, 5, 10,
        5, 4, 11,
        2, 7, 9,
        7, 1, 0,
        3, 9, 8,
        4, 8, 0,
    };
    
    Vector3D *surfaceNormals = calloc(20, sizeof(Vector3D));
    
    // Calculate the surface normal for each triangle

    for (int i = 0; i < 20; i++)
    {
        Vertex3D vertex1 = vertices[icosahedronFaces[(i*3)]];
        Vertex3D vertex2 = vertices[icosahedronFaces[(i*3)+1]];
        Vertex3D vertex3 = vertices[icosahedronFaces[(i*3)+2]];
        Triangle3D triangle = Triangle3DMake(vertex1, vertex2, vertex3);
        Vector3D surfaceNormal = Triangle3DCalculateSurfaceNormal(triangle);
        Vector3DNormalize(&surfaceNormal);
        surfaceNormals[i] = surfaceNormal;
    }
    
    Vertex3D *normals = calloc(12, sizeof(Vertex3D));
    [result appendString:@"static const Vector3D normals[] = {\n"];
    for (int i = 0; i < 12; i++)
    {
        int faceCount = 0;
        for (int j = 0; j < 20; j++)
        {
            BOOL contains = NO;
            for (int k = 0; k < 3; k++)
            { if (icosahedronFaces[(j * 3) + k] == i) contains = YES; }
            if (contains)
            { faceCount++; normals[i] = Vector3DAdd(normals[i], surfaceNormals[j]); }
        }
        
        normals[i].x /= (GLfloat)faceCount;
        normals[i].y /= (GLfloat)faceCount;
        normals[i].z /= (GLfloat)faceCount;
        [result appendFormat:@"\t{%f, %f, %f},\n", normals[i].x, normals[i].y, normals[i].z];
    }
    [result appendString:@"};\n"];
    NSLog(@"%@", result);
    free(surfaceNormals);
    free(normals);
    [pool drain];
    return 0;
}

A little crude, perhaps, but it gets the job done and lets us pre-calculate the vertex normals so that we don't have to do that calculation at runtime, or more than once. When the program is run, the output is this:
static const Vector3D normals[] = {
 {0.000000, -0.417775, 0.675974},
 {0.675973, 0.000000, 0.417775},
 {0.675973, -0.000000, -0.417775},
 {-0.675973, 0.000000, -0.417775},
 {-0.675973, -0.000000, 0.417775},
 {-0.417775, 0.675974, 0.000000},
 {0.417775, 0.675973, -0.000000},
 {0.417775, -0.675974, 0.000000},
 {-0.417775, -0.675974, 0.000000},
 {0.000000, -0.417775, -0.675973},
 {0.000000, 0.417775, -0.675974},
 {0.000000, 0.417775, 0.675973},
};


Specifying Vertex Normals

You saw above the array of normals that we will provide to OpenGL. Before we can do that, though, we have to enable normal arrays. This is done with the following call:

    glEnableClientState(GL_NORMAL_ARRAY);

The way we feed the normal array to OpenGL is using this call:
    glNormalPointer(GL_FLOAT, 0, normals);

And that's all there is to it. Let's add these elements to our drawView: method, which gives us this:

- (void)drawView:(GLView*)view;
{
    
    static GLfloat rot = 0.0;
    
    // This is the same result as using Vertex3D, just faster to type and
    // can be made const this way
    static const Vertex3D vertices[]= {
        {0, -0.525731, 0.850651},             // vertices[0]
        {0.850651, 0, 0.525731},              // vertices[1]
        {0.850651, 0, -0.525731},             // vertices[2]
        {-0.850651, 0, -0.525731},            // vertices[3]
        {-0.850651, 0, 0.525731},             // vertices[4]
        {-0.525731, 0.850651, 0},             // vertices[5]
        {0.525731, 0.850651, 0},              // vertices[6]
        {0.525731, -0.850651, 0},             // vertices[7]
        {-0.525731, -0.850651, 0},            // vertices[8]
        {0, -0.525731, -0.850651},            // vertices[9]
        {0, 0.525731, -0.850651},             // vertices[10]
        {0, 0.525731, 0.850651}               // vertices[11]
    };
    
    static const Color3D colors[] = {
        {1.0, 0.0, 0.0, 1.0},
        {1.0, 0.5, 0.0, 1.0},
        {1.0, 1.0, 0.0, 1.0},
        {0.5, 1.0, 0.0, 1.0},
        {0.0, 1.0, 0.0, 1.0},
        {0.0, 1.0, 0.5, 1.0},
        {0.0, 1.0, 1.0, 1.0},
        {0.0, 0.5, 1.0, 1.0},
        {0.0, 0.0, 1.0, 1.0},
        {0.5, 0.0, 1.0, 1.0},
        {1.0, 0.0, 1.0, 1.0},
        {1.0, 0.0, 0.5, 1.0}
    };
    
    static const GLubyte icosahedronFaces[] = {
        1, 2, 6,
        1, 7, 2,
        3, 4, 5,
        4, 3, 8,
        6, 5, 11,
        5, 6, 10,
        9, 10, 2,
        10, 9, 3,
        7, 8, 9,
        8, 7, 0,
        11, 0, 1,
        0, 11, 4,
        6, 2, 10,
        1, 6, 11,
        3, 5, 10,
        5, 4, 11,
        2, 7, 9,
        7, 1, 0,
        3, 9, 8,
        4, 8, 0,
    };
    
    static const Vector3D normals[] = {
        {0.000000, -0.417775, 0.675974},
        {0.675973, 0.000000, 0.417775},
        {0.675973, -0.000000, -0.417775},
        {-0.675973, 0.000000, -0.417775},
        {-0.675973, -0.000000, 0.417775},
        {-0.417775, 0.675974, 0.000000},
        {0.417775, 0.675973, -0.000000},
        {0.417775, -0.675974, 0.000000},
        {-0.417775, -0.675974, 0.000000},
        {0.000000, -0.417775, -0.675973},
        {0.000000, 0.417775, -0.675974},
        {0.000000, 0.417775, 0.675973},
    };
    
    glLoadIdentity();
    glTranslatef(0.0f,0.0f,-3.0f);  
    glRotatef(rot,1.0f,1.0f,1.0f);
    glClearColor(0.7, 0.7, 0.7, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glColorPointer(4, GL_FLOAT, 0, colors);
    glNormalPointer(GL_FLOAT, 0, normals);
    glDrawElements(GL_TRIANGLES, 60, GL_UNSIGNED_BYTE, icosahedronFaces);
    
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    static NSTimeInterval lastDrawTime;
    if (lastDrawTime)
    {
        NSTimeInterval timeSinceLastDraw = [NSDate timeIntervalSinceReferenceDate] - lastDrawTime;
        rot+=50 * timeSinceLastDraw;                
    }
    lastDrawTime = [NSDate timeIntervalSinceReferenceDate];
    
}

Voilà, Almost

Now, if we run it, we do indeed get a rotating shape that looks like a real, gosh-honest-to-goodness three-dimensional object.

grey3d.jpg


But what happened to our colors?

That, my friends, is our segue into the next installment of this series: OpenGL ES Materials. When you're using lighting and smooth shading, OpenGL expects you to provide  materials (or textures, but we're not ready to segue there yet) for the polygons. Materials are more complex than the simple colors that we provided in the color array. Materials, like lights, are made of multiple components, and can be used to create a variety of different surface treatments. The actual appearance of objects is determined by the attributes of the scene's lights and the polygons' materials.

But, we don't want to leave off with a grey icosahedron, so in the meantime, I'll introduce you to another OpenGL ES configuration parameter that can be enabled:  GL_COLOR_MATERIAL. By enabling this option like so:
    glEnable(GL_COLOR_MATERIAL);

OpenGL will use the provided color array to create simple materials for our polygons, giving a result more like this:

colorico.jpg


If you don't feel like typing everything in (or copying and pasting), you can check out the final project  right here.


