To help go over some of the ideas, and to hopefully show that they really do work, this section shows the vector geometry behind a simple molecular modeller.
The program stores each of the atomic positions, the vertices of the wireframe image,
in a list. Each vertex knows which other vertices it is connected to by having its own
list of edges. So the only problem we need to solve to draw the molecule is:
Given:
a, the position vector of the camera at point A;
ñ, the direction in which the camera is pointing;
b, the position vector of the atom at point B;
how do we project the point B onto a screen between the atom and the camera?
If we define the screen to be the plane perpendicular to the camera direction and passing
through the point with position vector a + ñ, we can
see that the problem is to find the point of intersection, R, of the line AB with this
plane. From this point, R, we can determine the screen coordinates which we want to plot
by calculating the distances along the x and y vectors in the above
diagram (sorry about the names - don't confuse these vectors with the x, y, z axes in
3D space).
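The intersection point R can be found by combining the equation of the line through A and B, r = a + t(b - a), with the equation of the screen plane, (r - (a + ñ)) . ñ = 0. A minimal sketch in Python (the function name and the use of plain tuples for vectors are illustrative, not the modeller's actual code):

```python
# Sketch: intersect the line through A and B with the screen plane,
# i.e. the plane through a + n perpendicular to n.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project_to_screen_plane(a, n, b):
    """Points on the line AB are r = a + t(b - a).  Substituting into the
    plane equation (r - (a + n)) . n = 0 gives t = (n . n) / ((b - a) . n)."""
    ab = tuple(bi - ai for bi, ai in zip(b, a))
    denom = dot(ab, n)
    if denom <= 0:
        return None  # the atom is behind (or level with) the camera
    t = dot(n, n) / denom
    return tuple(ai + t * di for ai, di in zip(a, ab))
```

For example, with the camera at the origin looking along ñ = (0, 1, 0), an atom at (0, 2, 0) projects to (0, 1, 0), the centre of the screen.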
How do you define the x and y vectors?
A camera has six degrees of freedom - you can hold it in any position you like (the
position being defined by three coordinates); you can point it in any direction you
like (the direction being defined by two coordinates); and lastly you can rotate the
camera about the direction axis. So a and ñ take
care of the first five degrees of freedom, but the last is a bit more involved.
The program chooses x and y automatically, so it doesn't
allow you to rotate the camera, fixing them with the following definitions (here ^ is
the vector cross product and k is the unit vector pointing vertically upward):
x = ñ ^ k
y = x ^ ñ
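As a sketch of how these screen axes can be computed and then used to read off screen coordinates, assuming the axes are normalised to unit length and taking y = x ^ ñ for the vertical direction (this sign convention is an assumption and may differ from the program's own choice):

```python
# Sketch: build the screen axes x and y from the camera direction n, then
# measure a point R's distances along them from the screen centre a + n.
import math

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def normalise(u):
    length = math.sqrt(dot(u, u))
    return tuple(ui / length for ui in u)

def screen_axes(n, k=(0.0, 0.0, 1.0)):
    x = normalise(cross(n, k))   # horizontal screen axis
    y = normalise(cross(x, n))   # vertical screen axis (assumed convention)
    return x, y

def screen_coords(r, a, n):
    """Screen coordinates of a point R already lying in the screen plane,
    measured from the screen centre a + n."""
    x, y = screen_axes(n)
    from_centre = tuple(ri - ai - ni for ri, ai, ni in zip(r, a, n))
    return dot(from_centre, x), dot(from_centre, y)
```

Note that screen_axes fails when ñ is parallel to k (the cross product is zero and can't be normalised) - exactly the looking-straight-down case raised at the end of this section.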
To generate a sphere the program scans through every pixel of the image and decides
what colour it should change the pixel to. It does this by firing a ray from the camera
through the centre of each pixel in turn - a ray being a half line, that is a line with
one end point. So the general problem is simply:
Does the line passing through the camera point, a, and a given point
on the screen, call it p say, intersect with the sphere of radius s
centred at a point with position vector c? If it does intersect, choose
a colour for the pixel; if it doesn't, leave the pixel with the background colour.
To solve this we need the vector equation for a line (which you know) and for a
sphere (which you can work out if you haven't seen it before...)
Let r be a general position vector. To constrain r
to the surface of a sphere centred at c with radius s, all we need to
say is that the length of the vector from c to r is always
equal to s, or in math:
|r - c| = s
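Substituting the line equation r = a + t(p - a) into |r - c| = s and squaring both sides gives a quadratic in t; the ray hits the sphere when the quadratic has a real root with t >= 0. A minimal sketch (the function name is illustrative, not the program's):

```python
# Sketch: does the ray from the camera at a through screen point p hit the
# sphere of radius s centred at c?  Writing d = p - a and m = a - c, the
# substitution gives (d.d) t^2 + 2 (m.d) t + (m.m - s^2) = 0.
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def ray_hits_sphere(a, p, c, s):
    d = tuple(pi - ai for pi, ai in zip(p, a))   # ray direction
    m = tuple(ai - ci for ai, ci in zip(a, c))   # camera relative to centre
    A = dot(d, d)
    B = 2.0 * dot(m, d)
    C = dot(m, m) - s * s
    disc = B * B - 4.0 * A * C
    if disc < 0:
        return None                              # the line misses the sphere
    t = (-B - math.sqrt(disc)) / (2.0 * A)       # nearer intersection
    if t < 0:
        t = (-B + math.sqrt(disc)) / (2.0 * A)   # camera may be inside
    return t if t >= 0 else None                 # None: sphere behind camera
```

Returning t rather than a plain yes/no is useful: the smallest t over all spheres tells you which sphere is nearest, and the intersection point a + t(p - a) is what you need later for shading.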
Besides making the program faster, easier to use, and less likely to crash, there are
many cases that haven't been considered and which the program will get wrong. The program
draws
the spheres one at a time starting with the one furthest away and working forward, but
what happens if two spheres intersect? How can we allow spheres to cast shadows on
other spheres? How can we make the wireframe model intersect the spheres properly (at
the moment the spheres just draw over it!)? What happens to the x and
y vectors (defined above) when the camera is looking vertically down at
the molecule (how could the program be modified to cope)?
Other improvements like clipping algorithms to avoid the program messing up when it gets
too close to the molecule, and making the spheres reflective could also be added, but
all the improvements will involve vector geometry.