First, let's explore what is needed to animate 3D graphics. To translate or rotate a shape, we need access to all the vertex points that make up its faces; if we move those vertices, the polygons that form the faces follow them. So first of all we need to define the (x,y,z) values of the vertices. Then we need a list of integer indices that define the vertex numbers from which the faces are made.
Let's take the simple case of a cube. The cube has 8 vertices and 6 faces. Each face is defined by 4 of the vertices, so if we transformed the vertices of each face separately, we would have to work with 24 vertices, or 3 times as many as necessary.
Canvas 0: Cube Vertices Numbering Scheme

From the diagram on Canvas 0, it is easily seen that the front face indices, looking from outside the cube and going counter-clockwise (CCW), are (1,2,3,4). It's critical to state all the face indices in the same rotary sense, i.e. CCW, because, using that scheme, it is possible to show only the faces that are toward the viewer (see the visibility test in the sketch after the vertex coordinates below). Using the CCW scheme we can easily see that the face indices are as follows:
right: [2,5,8,3]
left: [1,4,7,6]
top: [4,3,8,7]
bottom: [1,6,5,2]
back: [5,6,7,8]
front: [1,2,3,4]
or more conveniently for use in javascript:
[[2,5,8,3],[1,4,7,6],[4,3,8,7],[1,6,5,2],[5,6,7,8],[1,2,3,4]]
Finally, if the side length of the cube is labeled s, then the vertex (x,y,z) values are as specified below:
1:(0,0,s); 2:(s,0,s); 3:(s,s,s); 4:(0,s,s); 5:(s,0,0); 6:(0,0,0); 7:(0,s,0); 8:(s,s,0) or more conveniently as a 2D array for use in javascript:
[[0,0,s],[s,0,s],[s,s,s],[0,s,s],[s,0,0],[0,0,0],[0,s,0],[s,s,0]]
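As a concrete sketch of these data structures (the variable and function names are my own illustration, not from any library, and javascript arrays are 0-based so the Canvas 0 numbers are shifted down by one), the cube data and a simple CCW visibility test might look like this:

```javascript
// Cube data with side length s; vertex numbering as in Canvas 0,
// shifted to 0-based indices as required by javascript arrays.
const s = 100;
const vertices = [
  [0, 0, s], [s, 0, s], [s, s, s], [0, s, s],   // Canvas 0 vertices 1..4
  [s, 0, 0], [0, 0, 0], [0, s, 0], [s, s, 0]    // Canvas 0 vertices 5..8
];
// Faces listed counter-clockwise (CCW) as seen from outside the cube
// (each index is the Canvas 0 number minus 1).
const faces = [
  [1, 4, 7, 2],  // right  (2,5,8,3)
  [0, 3, 6, 5],  // left   (1,4,7,6)
  [3, 2, 7, 6],  // top    (4,3,8,7)
  [0, 5, 4, 1],  // bottom (1,6,5,2)
  [4, 5, 6, 7],  // back   (5,6,7,8)
  [0, 1, 2, 3]   // front  (1,2,3,4)
];

// Because every face is stated CCW from outside, a face is turned toward a
// viewer on the +z axis exactly when the z component of the cross product of
// two of its edges is positive, so only those faces need to be drawn.
function faceIsVisible(face, verts) {
  const [a, b, c] = [verts[face[0]], verts[face[1]], verts[face[2]]];
  const ux = b[0] - a[0], uy = b[1] - a[1];
  const vx = c[0] - a[0], vy = c[1] - a[1];
  return ux * vy - uy * vx > 0;   // z component of edge1 x edge2
}
```

Applying this test after each rotation means that, for a convex shape like the cube, at most three of the six faces ever need to be filled.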
Translation (displacement of all vertices and faces by a fixed vector) is not at all computer-time intensive. However, rotation involves calculating the sines and cosines of the rotation angles, and that is far more computer-time intensive. After the sines and cosines are computed, the vector of each vertex must be multiplied by a matrix like the following:
`McdotV=((m_("xx"), m_(xy),m_(xz)),(m_(yx),m_(yy),m_(yz)),(m_(zx),m_(zy),m_(zz)))((v_x),(v_y),(v_z))`
where the `m's` are the matrix elements involving the sines and cosines of the rotation angles and the `v's` are the elements of the vector of the vertex.
To be specific, for a rotation about the z axis the M matrix looks like:
`M_z = ((cz, -sz, 0),(sz, cz, 0),(0, 0, 1))`
where `cz=cos phi_z` and `sz=sin phi_z` and `phi_z` is the rotation angle. Rotations about the x and y axes have similar matrices. To rotate about two or three axes, just multiply the respective matrices. Order of multiplication DOES make a difference in the final orientation of the shape.
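A hedged javascript sketch of these rotations (rotZ, rotX, matMul and applyMatrix are illustrative names of my own, and the sign convention assumes counter-clockwise rotation when looking down the axis toward the origin) might be:

```javascript
// Rotation matrix about the z axis; cz and sz as defined above.
function rotZ(phi) {
  const cz = Math.cos(phi), sz = Math.sin(phi);
  return [[cz, -sz, 0],
          [sz,  cz, 0],
          [ 0,   0, 1]];
}

// Rotation matrix about the x axis, for combining rotations.
function rotX(phi) {
  const cx = Math.cos(phi), sx = Math.sin(phi);
  return [[1,  0,   0],
          [0, cx, -sx],
          [0, sx,  cx]];
}

// 3x3 matrix product A*B; matMul(A, B) and matMul(B, A) generally differ,
// which is why the order of rotations matters.
function matMul(A, B) {
  const C = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  for (let i = 0; i < 3; i++)
    for (let j = 0; j < 3; j++)
      for (let k = 0; k < 3; k++)
        C[i][j] += A[i][k] * B[k][j];
  return C;
}

// M·V: apply the combined rotation matrix to one vertex [vx, vy, vz].
function applyMatrix(M, v) {
  return [
    M[0][0]*v[0] + M[0][1]*v[1] + M[0][2]*v[2],
    M[1][0]*v[0] + M[1][1]*v[1] + M[1][2]*v[2],
    M[2][0]*v[0] + M[2][1]*v[1] + M[2][2]*v[2]
  ];
}

// Example: rotate all 8 cube vertices about x and then about z
// (vertices is the array from the cube sketch above).
const M = matMul(rotZ(0.3), rotX(0.5));
const rotated = vertices.map(v => applyMatrix(M, v));
```

The sines and cosines need to be computed only once per frame; after that, each vertex costs the 9 multiplies and 6 additions of applyMatrix, which is why the number of vertices to be rotated matters.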
Web Graphics Library (WebGL) uses parameters similar to those we just discussed for javascript 3D animation. A significant difference is that every vertex of every face must be listed in the "positions" buffer, so shared corners appear once per face rather than once per shape. That would seem to imply that WebGL rotates all of these duplicated vertices. Since the faces of an interesting shape like a sphere are usually 4-vertex polygons, and an interior vertex is shared by about 4 faces, this would require roughly 4 times more rotation work than the javascript method discussed above.
Another difference is that WebGL breaks each 4-vertex polygon into the two triangles obtained by drawing a diagonal from vertex 1 to vertex 3 of the face. WebGL then uses an indices array that points to the vertices of each of these triangles separately.
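As a hedged illustration of that layout (the buffer contents and names here are mine, and only one quad is shown), the front face of the cube could be handed to WebGL roughly like this:

```javascript
// One quad face with corners A, B, C, D (stated CCW) becomes two triangles,
// (A, B, C) and (A, C, D), sharing the diagonal from the 1st to the 3rd vertex.
// Positions are flattened x,y,z triples, one entry per face vertex.
const s = 100;
const positions = [
  0, 0, s,   // A  (vertex 1 of the front face)
  s, 0, s,   // B  (vertex 2)
  s, s, s,   // C  (vertex 3)
  0, s, s    // D  (vertex 4)
];
// Index buffer: two triangles that share the A-C diagonal.
const indices = [
  0, 1, 2,   // triangle A, B, C
  0, 2, 3    // triangle A, C, D
];
// These would typically be uploaded with gl.bufferData on the ARRAY_BUFFER and
// ELEMENT_ARRAY_BUFFER targets respectively, and drawn with
// gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0).
```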
WebGL DOES have some features that are not included in the javascript version of 3D graphics I have presented.
Linear Perspective
This is the feature where faces at a larger distance from the viewer are shown as diminished in size. As an example, look at the hollow cylinder shown in Canvas 1. When the cylinder axis is along the z direction, the end farther from the viewer has a smaller diameter. The reason is simple: the size of the image on our retina depends on the angular subtense of the object we are viewing, and the more distant end subtends a smaller angle than the near end, so it should be depicted as smaller. The same concept applies to the size of an image at the film plane of a camera.
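A minimal sketch of that idea, assuming the eye sits on the +z axis looking toward -z and using my own illustrative names (project, eyeZ, focal), is:

```javascript
// Linear perspective sketch: a point is projected onto a screen plane a
// distance focal in front of an eye at z = eyeZ. Points farther from the eye
// (smaller z) get a smaller scale factor, so the far end of the cylinder in
// Canvas 1 is drawn with a smaller diameter.
function project(v, eyeZ, focal) {
  const scale = focal / (eyeZ - v[2]);   // angular size ~ 1 / distance
  return [v[0] * scale, v[1] * scale];   // 2D screen coordinates
}

// Example: the same radius at two depths.
console.log(project([50, 0,    0], 400, 300));  // near point -> [37.5, 0]
console.log(project([50, 0, -200], 400, 300));  // far point  -> [25, 0]
```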
Brightness Variation with Lighting and Viewing Angle
For a shape with a perfectly smooth surface and plane-parallel lighting, the only facets that we will be able to see are the ones where the surface normal bisects the angle between the light direction and the viewing direction (called specular reflection). If the light on the smooth surface has a larger angular distribution, then we will be able to see larger portions of the shape, but the view will still be limited to the angular size of the light source (think of seeing the image of an angularly large lamp reflected by a plane mirror). Usually the shape we are viewing has a slightly rough surface. That means that the surface normal direction is a variable. Then the light will be scattered over a range of angles rather than reflected at the mirror angle from the mean local surface plane. The larger the surface slope variation, the larger the distribution of scatter angles from the surface. At some point the wavelength of the light may become larger than the spacing between surface peaks, and this will also cause some variation of the angular distribution of the scattered light. In any case, we can make the comment that the apparent brightness of a surface area element in WebGL is usually chosen to be the cosine of the angle between the surface normal and the z axis.
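A minimal sketch of that brightness rule, assuming the light direction is taken along the +z axis as stated above and using my own helper names (faceNormal, faceBrightness), could be:

```javascript
// The face normal is computed from the CCW vertex order, and the brightness of
// the face is the cosine of the angle between that normal and the z axis.
function faceNormal(face, verts) {
  const [a, b, c] = [verts[face[0]], verts[face[1]], verts[face[2]]];
  const u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
  const v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
  const n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]];
  const len = Math.hypot(n[0], n[1], n[2]);
  return [n[0] / len, n[1] / len, n[2] / len];
}

function faceBrightness(face, verts) {
  const n = faceNormal(face, verts);
  // cos(angle between normal and +z axis); clamp at 0 for faces turned away.
  return Math.max(0, n[2]);
}
```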
WebGL is faster because it uses the Graphics Processing Unit (GPU) on any computer that has one, and most do.
However, the learning curve for WebGL is steeper because it comes with a lot of software baggage that most people don't use anyway. I'm sure that I've not used anywhere near its full capability in this comparison. Also, for the spheres, I was unable to include both lines of longitude and latitude, or printed numbering of those lines, in the WebGL shapes, although that may be due to my lack of experience. In javascript these are quite easy, since you already draw lines to define the outline of each face that is to be filled.