# A simple molecular modeller

To help go over some of the ideas, and to hopefully show that they really do work, this section shows the vector geometry behind a simple molecular modeller.

## Constructing the 3D model

### Constructing the wireframe image

The program stores each of the atomic positions, the vertices of the wireframe image, in a list. Each vertex knows which other vertices it is connected to by having its own list of edges. So the only problem we need to solve to draw the molecule is:
Given:

- a, the position vector of the camera at point A
- ñ, the direction in which the camera is pointing
- b, the position vector of the atom at point B
how do we project the point B onto a screen between the atom and the camera? If we define the screen to be the plane perpendicular to the camera direction and passing through the point with position vector a + ñ, the problem is to find the point of intersection, R, of the line AB with this plane. From this point, R, we can determine the screen coordinates we want to plot by calculating the distances along the x and y vectors in the diagram above (sorry about the names - don't confuse these vectors with the x, y, z axes in 3D space).
How do you define the x and y vectors?
A camera has six degrees of freedom: you can hold it in any position you like (the position being defined by three coordinates); you can point it in any direction you like (the direction being defined by two coordinates); and lastly you can rotate the camera about the direction axis. So a and ñ take care of the first five degrees of freedom, but the last is a bit more involved. The program chooses x and y automatically, so it doesn't allow you to rotate the camera, fixing it with the following definitions:
x = ñ ^ k

and

y = x ^ ñ
where ^ denotes the cross product and k = (0, 0, 1).
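Putting the pieces together, here is a minimal sketch (not the program's actual code - the helper names are mine) of the projection. It assumes ñ is a unit vector, and normalises x and y so the screen coordinates come out as true distances in the plane:

```python
import math

def sub(u, v): return (u[0] - v[0], u[1] - v[1], u[2] - v[2])
def add(u, v): return (u[0] + v[0], u[1] + v[1], u[2] + v[2])
def scale(u, s): return (u[0] * s, u[1] * s, u[2] * s)
def dot(u, v): return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def normalize(u):
    return scale(u, 1.0 / math.sqrt(dot(u, u)))

def project(a, n, b):
    """Project the atom at b onto the screen plane through a + n,
    perpendicular to the unit camera direction n.  Returns screen
    coordinates (sx, sy), or None if the atom is behind the camera."""
    denom = dot(sub(b, a), n)
    if denom <= 0:
        return None                    # atom behind the camera
    t = dot(n, n) / denom              # = 1/denom for unit n
    r = add(a, scale(sub(b, a), t))    # intersection point R
    # screen basis as defined above, normalised here
    x = normalize(cross(n, (0.0, 0.0, 1.0)))
    y = normalize(cross(x, n))
    offset = sub(r, add(a, n))         # R relative to the screen centre
    return (dot(offset, x), dot(offset, y))
```

For example, a camera at the origin looking along (0, 1, 0) projects the atom at (1, 2, 3) to the screen point (0.5, 1.5).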
Are these good definitions for x and y?
How would you have the program allow rotation of the camera as well?
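For reference, the vertex list described at the start of this section - each vertex holding its position and its own list of edges - might be stored like this (a sketch under my own naming, not the program's actual code):

```python
class Vertex:
    def __init__(self, position):
        self.position = position   # (x, y, z) tuple
        self.edges = []            # indices of connected vertices

class Molecule:
    def __init__(self):
        self.vertices = []

    def add_vertex(self, position):
        self.vertices.append(Vertex(position))
        return len(self.vertices) - 1

    def connect(self, i, j):
        # record the edge at both endpoints
        self.vertices[i].edges.append(j)
        self.vertices[j].edges.append(i)

# e.g. a water-like molecule: O bonded to two Hs
mol = Molecule()
o = mol.add_vertex((0.0, 0.0, 0.0))
h1 = mol.add_vertex((0.96, 0.0, 0.0))
h2 = mol.add_vertex((-0.24, 0.93, 0.0))
mol.connect(o, h1)
mol.connect(o, h2)
```

Drawing the wireframe is then just a matter of projecting every vertex and joining the projected points along each edge.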

### Raytracing the spheres

To generate a sphere the program scans through every pixel of the image and decides what colour it should change the pixel to. It does this by firing a ray from the camera through the centre of each pixel in turn - a ray being a half-line, that is, a line with one end point. So the general problem is simply:
Does the line passing through the camera point, a, and a given point on the screen, call it p say, intersect the sphere of radius s centred at the point with position vector c? If it does intersect, choose a colour for the pixel; if it doesn't, leave the pixel with the background colour.

To solve this we need the vector equation for a line (which you know) and for a sphere (which you can work out if you haven't seen it before...)
Let r be a general position vector. To constrain r to the surface of a sphere centred at c with radius s, all we need to say is that the length of the vector from c to r is always equal to s, or in math:
|r - c| = s     .....1

Let r = (x, y, z) and c = (c1, c2, c3).
Then by Pythagoras, the above equation becomes:
(x - c1)² + (y - c2)² + (z - c3)² = s²     .....1

But we also have the constraint on r that it must lie on the line joining a and p (written b in what follows):
r = a + t (b - a)     .....2
or in algebra:
x = a1 (1 - t) + t b1
y = a2 (1 - t) + t b2
z = a3 (1 - t) + t b3     .....2
where a = (a1, a2, a3) etc.
Thinking about it, there are three possible cases:
1. The line and sphere do not intersect, hence there does not exist a vector r anywhere in 3D space which satisfies both the equation for the sphere and the line simultaneously.
2. The line is tangent to the sphere, hence there exists one and only one vector r that satisfies both equations.
3. The line passes through the sphere, giving exactly two possible vectors r which simultaneously satisfy the equations.
Do the equations agree with this? Substituting the expressions for x, y and z from equation 2 into equation 1 simplifies down to a quadratic in the parameter t:
Σ(i = 1 to 3) [(ai² - 2ai bi + bi²) t² + (-2ai² + 2ai bi + 2ai ci - 2bi ci) t + (ai² - 2ai ci + ci²)] - s² = 0
It may look unpleasant, but it's only a quadratic and so can be solved by completing the square or bunging it in the formula:
t = (-v ± √(v² - 4uw)) / (2u)
where,
u = Σ(i = 1 to 3) (ai² - 2ai bi + bi²)
v = Σ(i = 1 to 3) (-2ai² + 2ai bi + 2ai ci - 2bi ci)
w = Σ(i = 1 to 3) (ai² - 2ai ci + ci²) - s²
As if by magic, this will give us either 0, 1 or 2 solutions for t, corresponding to cases 1, 2 or 3 above. Once t is found, the position along the line is determined and the problem solved.
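The three cases fall straight out of the sign of the discriminant. Here is a minimal sketch (with names of my own choosing; a, b and c are assumed to be 3-tuples) that computes u, v and w as above and returns the solutions for t:

```python
import math

def ray_sphere_t(a, b, c, s):
    """Solve the quadratic above for t.  a is the camera point, b the
    point on the screen the ray passes through, c the sphere centre and
    s its radius.  Returns a list of 0, 1 or 2 values of t,
    corresponding to cases 1, 2 and 3."""
    u = sum((bi - ai) ** 2 for ai, bi in zip(a, b))
    v = sum(2 * (bi - ai) * (ai - ci) for ai, bi, ci in zip(a, b, c))
    w = sum((ai - ci) ** 2 for ai, ci in zip(a, c)) - s * s
    disc = v * v - 4 * u * w
    if disc < 0:
        return []                     # case 1: the ray misses the sphere
    if disc == 0:
        return [-v / (2 * u)]         # case 2: the ray is tangent
    root = math.sqrt(disc)
    return [(-v - root) / (2 * u),    # case 3: entry and exit points
            (-v + root) / (2 * u)]
```

When there are two solutions, the smaller positive t is the intersection nearer the camera - the one the pixel should show.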
The shading at a given point on a sphere is done by choosing the position for a light source, and then calculating the angle that the line joining the point on the sphere to the light source makes with the normal to the surface of the sphere.
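That shading rule - scale the colour by the cosine of the angle between the surface normal and the direction to the light - is the usual Lambertian (diffuse) model. A sketch, assuming the source's setup but with hypothetical names:

```python
import math

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def shade(point, centre, light, base_colour):
    """Scale base_colour (an (r, g, b) tuple) by the cosine of the angle
    between the outward normal at `point` on the sphere and the
    direction from `point` to the light source.  The clamp to zero
    leaves the side facing away from the light black."""
    normal = [pi - ci for pi, ci in zip(point, centre)]
    to_light = [li - pi for li, pi in zip(light, point)]
    cos_theta = dot(normal, to_light) / (
        math.sqrt(dot(normal, normal)) * math.sqrt(dot(to_light, to_light)))
    k = max(cos_theta, 0.0)
    return tuple(k * ch for ch in base_colour)
```

Note the normal to a sphere is free: it is just the vector from the centre to the surface point, which is why spheres are the raytracer's favourite shape.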

### Improvements to the program

Besides making it faster, easier to use and less prone to crashing, there are many cases that haven't been considered, which the program will get wrong. The program draws the spheres one at a time, starting with the one furthest away and working forward - but what happens if two spheres intersect? How can we allow spheres to cast shadows on other spheres? How can we make the wireframe model intersect the spheres properly (at the moment the spheres just draw over it!)? What happens to the x and y vectors (defined above) when the camera is looking vertically down at the molecule (and how could the program be modified to cope)?
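The back-to-front drawing order mentioned above is just a sort on distance from the camera - a painter's algorithm. A sketch (my own names, not the program's):

```python
def painters_order(centres, camera):
    """Sort sphere centres so the one furthest from the camera comes
    first, the back-to-front drawing order described above.  As noted,
    this simple painter's algorithm cannot handle intersecting spheres
    correctly - nearer spheres just paint over further ones."""
    def dist2(c):
        # squared distance is enough for ordering, so skip the sqrt
        return sum((ci - ai) ** 2 for ci, ai in zip(c, camera))
    return sorted(centres, key=dist2, reverse=True)
```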
Other improvements, like clipping algorithms to stop the program messing up when it gets too close to the molecule, or making the spheres reflective, could also be added - but all of these improvements will involve vector geometry.