It was Galileo who dropped two objects from the Leaning Tower of Pisa in search of an underlying relationship between the mass and acceleration of falling objects. When he carried out this experiment he found that the two objects hit the ground at the same time even though they had different masses. After multiple experiments he had a revelation: the rate at which the velocity of a falling object changes must be a constant. In modern times it is known that the acceleration of freely falling objects due to gravity, when air resistance is small enough to be ignored, is approximately 9.8 m/s^{2} (often rounded to 10 m/s^{2}). At the time of Galileo's experiments this fact was not known, and his experiments were crucial to the development of physics. From them he gained insight into an object's motion and its changing velocity, or acceleration. Although Galileo did not have the tools to prove the relationship, he created a foundation on which Newton could express it mathematically. Newton did this with a simple but beautiful equation: F = ma. The "F" comes into play because Newton realized that for these falling bodies to accelerate there must be a net force acting on them. That force is gravity, and Newton's Law of Gravitation states that the magnitude of the gravitational force between two objects is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.
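The two laws fit together neatly: setting ma equal to GMm/r^2 cancels the falling object's mass m, which is exactly why Galileo's two objects landed together. A minimal numerical sketch (standard textbook constants; the function name is ours, not Newton's):

```python
# Toy check of Newton's gravitation near Earth's surface.
# g = G * M / R^2 should come out close to the familiar ~9.8 m/s^2.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def surface_gravity(mass, radius):
    """Combine F = G*M*m/r^2 with F = m*a; the small mass m cancels."""
    return G * mass / radius**2

g = surface_gravity(M_EARTH, R_EARTH)
print(f"g = {g:.2f} m/s^2")   # close to 9.8, independent of the falling mass
```

Note that the dropped object's own mass never appears in the result, which is the content of Galileo's observation.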

Gravity is a relatively weak force, but it is capable of acting on objects at very great distances. Other forces exist that have peculiar functions, such as binding quarks together to form protons and neutrons, acting when one particle decays into another, and binding electrons to nuclei to form atomic systems. Between these forces there was believed to be complete symmetry at sufficiently high energies; the weak and electromagnetic forces, for example, unify at energies of roughly 100 GeV, which is about 100 billion electron volts. The average energy in the universe was near this magnitude shortly after the big bang, around t = 10^{-12} seconds. Before this time the properties of the universe were believed to be much different than afterwards. At different points in time after the big bang the forces "froze out" and became distinguishable. The final forces to separate, or break symmetry, were the weak force and electromagnetism.

Electromagnetism has the function of binding electrons to nuclei and holding molecules together. It reveals itself through forces that exist between charges. If an atomic system has its equilibrium disturbed in any way and is then left alone, it will be set in oscillation, and the oscillations are impressed on the surrounding electromagnetic field, so that their frequencies may be observed with a spectroscope. A theory known as quantum electrodynamics, the quantum field theory of electromagnetism, pictures the force between the oscillating electrons as an exchange force arising from the exchange of virtual photons. The theory is represented using Feynman diagrams, and it was the first successful quantum field theory to incorporate ideas such as particle creation and annihilation into a single, consistent framework. It can be described as a perturbation theory of the electromagnetic quantum vacuum, meaning that it finds approximate solutions to a problem that cannot be solved exactly by starting with the exact solution of a similar, simpler problem. In quantum mechanics generally, perturbation theory is a set of approximation schemes for describing a complicated quantum system in terms of a simpler one.
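What "starting with the exact solution of a similar problem" means can be sketched concretely. In the toy example below, a hypothetical two-level system H = H0 + λV is solved exactly (a 2×2 matrix has a closed-form ground-state energy) and compared against the standard second-order perturbative estimate built from the exactly solvable H0; all numbers are invented for illustration:

```python
# Perturbation theory in miniature: approximate a hard problem by small
# corrections to an exactly solvable one. Hypothetical values throughout.

import math

E1, E2 = 1.0, 3.0   # exact energies of the unperturbed problem H0
v = 0.5             # coupling strength of the perturbation V
lam = 0.1           # small expansion parameter

# Exact ground-state energy of the 2x2 matrix [[E1, lam*v], [lam*v, E2]]
mean, half_gap = (E1 + E2) / 2, (E1 - E2) / 2
exact = mean - math.sqrt(half_gap**2 + (lam * v)**2)

# Second-order perturbation theory: E1 + (lam*v)^2 / (E1 - E2)
approx = E1 + (lam * v)**2 / (E1 - E2)

print(exact, approx)  # the two agree to a few parts in a million
```

Because λ is small, truncating the series after the second-order term already reproduces the exact answer extremely well, which is the whole appeal of the method.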

Paul Dirac was the first scientist to formulate a quantum theory that described the interaction of matter and radiation, and in the 1920s he computed the coefficient of spontaneous emission of an atom. He also carried out much research on magnetic monopoles, hypothetical particles with a single magnetic pole, and formulated the Dirac equation, which describes elementary spin-½ particles. His equation implied the existence of antimatter and provided justification for the introduction of multi-component wave functions in Wolfgang Pauli's theory of spin, which incorporated 2×2 matrices that each signified a particular spin operator. His equations, like the equations of many great scientists, made predictions that were often infinite. A collection of techniques known as renormalization was developed to work around such problems, but unfortunately Dirac never accepted it.
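The 2×2 matrices in Pauli's theory are the Pauli spin matrices. A small sketch, using plain Python lists as matrices, verifies two of their defining properties:

```python
# Verify two defining properties of the Pauli matrices without any
# libraries: each squares to the identity, and sigma_x * sigma_y = i*sigma_z.

def matmul(a, b):
    """Multiply two 2x2 matrices of (possibly complex) numbers."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I2      = [[1, 0], [0, 1]]
sigma_x = [[0, 1], [1, 0]]
sigma_y = [[0, -1j], [1j, 0]]
sigma_z = [[1, 0], [0, -1]]

assert matmul(sigma_x, sigma_x) == I2                    # sigma_x^2 = I
assert matmul(sigma_x, sigma_y) == [[1j, 0], [0, -1j]]   # = i * sigma_z
```

These algebraic relations are exactly what makes the three matrices suitable as operators for the three components of spin.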

Around the end of the 1940s, improvements in microwave technology made it possible to take more precise measurements of the energy-level shifts of the hydrogen atom, but the experiments revealed discrepancies the theory could not explain. These experimental difficulties, coupled with the infinities that kept arising in computation, troubled the theory for some time.

It wasn't until Hans Bethe developed his ideas of renormalization that the theory started to yield better results. His idea was to attach the infinities to corrections of mass and charge, whose actual values are fixed to finite numbers by experiment. The infinities were, figuratively speaking, absorbed into those constants, and the procedure produced results in good agreement with experiment.

The next major figure in the development of quantum electrodynamics was Richard Feynman, who reduced the theory to three basic actions: a photon goes from one point in space and time to another, an electron goes from one point in space and time to another, and an electron emits or absorbs a photon at a definite place and time. The theory cannot show how these things happen, but it can tell us the probabilities of these events occurring. Feynman introduced convenient shorthand for the numerical quantities that carry the probability information. If a photon moves from one place and time, A, to another place and time, B, the corresponding quantity is written P(A to B); for an electron moving from a given point C to a point D, the quantity is written E(C to D); and the quantity associated with the emission or absorption of a photon is called j. His theory assumes that complex interactions of electrons and photons can be represented by fitting together appropriate collections of these three fundamental actions, and using the probability quantities one can calculate the probability of an interaction. There are rules for combining the quantities: if an event can happen in a variety of different ways, its probability is the sum of the probabilities of the possible ways, and if a process involves a number of independent sub-processes, its probability is the product of the component probabilities. For each of the possibilities in which an electron and photon interact, there is an accompanying Feynman diagram. Because of the succinct presentation of the diagrams, one is able to visualize the quantum process without stretching the imagination beyond its "elastic limit."
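The two combination rules can be shown in a toy bookkeeping calculation. The probability values below are invented purely for illustration; only the sum and product rules follow the text:

```python
# Toy illustration of Feynman's combination rules, with made-up numbers.

# One way the process can happen, built from three independent sub-steps:
way_1 = 0.5 * 0.5 * 0.25    # product rule: independent sub-processes multiply

# A second, alternative way, built from two independent sub-steps:
way_2 = 0.25 * 0.25

# Sum rule: alternative ways of reaching the same outcome add.
total = way_1 + way_2
print(total)   # 0.125
```

The same two rules, applied to ever larger collections of the three basic actions, are all the machinery needed to assemble an arbitrarily complicated interaction.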

An important feature of QED is that it is only a matter of time and effort to find as accurate an answer as one wants to the original question. When calculating the probability of any interactive process between electrons and photons, it is a matter of first noting all the possible ways in which the process can be constructed from the three basic actions.

QED also introduces an important change in the way probabilities are computed. The quantities used to represent the probabilities are not the usual real numbers of everyday probability; instead, complex numbers called probability amplitudes are used. Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on paper, similar to vectors. The change from probabilities to probability amplitudes modifies the mathematics without changing the basic approach. On its own, however, the change is not enough, because it fails to take into account the fact that both photons and electrons can be polarized, that is, they have an orientation in space and time that must also be taken into account.
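Feynman's "arrows" are just complex numbers: an arrow has a length and a direction. Multiplying two arrows multiplies the lengths and adds the angles; adding two arrows joins them head to tail. A brief sketch with arbitrary lengths and angles:

```python
# Arrows as complex numbers: length = magnitude, direction = phase angle.
# The lengths and angles here are arbitrary, chosen only to show the rules.

import cmath, math

a = cmath.rect(0.8, math.radians(30))   # arrow: length 0.8, angle 30 deg
b = cmath.rect(0.5, math.radians(45))   # arrow: length 0.5, angle 45 deg

product = a * b
assert math.isclose(abs(product), 0.4)                       # 0.8 * 0.5
assert math.isclose(math.degrees(cmath.phase(product)), 75)  # 30 + 45

total = a + b                           # adding arrows: head to tail
assert abs(total) <= abs(a) + abs(b)    # the combined arrow is never longer
```

The observable probability of a process is the squared length of the final arrow, which is why alternatives can interfere: arrows pointing in opposite directions partly cancel when added.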

Along with this issue, another problem arose that was in agreement with the rules of probability but led to further infinities in the calculations. Start from the three basic actions above. The rules of quantum probability state that to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways it can move, which means taking into account all of the Feynman diagrams with those end points. There will always be a way in which the electron travels to some point, emits a photon there, and then absorbs it again at some other point before moving on to B. The electron can move in this manner in an infinite number of ways, and if we look closely at a line of movement, it breaks up into a collection of "fundamental" lines that are in turn composed of simpler lines. The main concern with these corrections is that they lead to infinite probability amplitudes. In time, though, the technique of renormalization was introduced to save the theory from completely failing.

The progress of science is absolutely astonishing and gives us insight into our own potential. By thinking about specific problems as imaginatively and rigorously as possible, we have been able to develop new models of our surrounding world. Existing in a world whose origin cannot yet be explained is quite mystifying, and at the same time incredibly inspiring. It is the hope of scientists everywhere that someday we will fully understand the world around us, so that our curiosity is not stopped dead in its tracks by the mere appearance of phenomena, but can transcend appearance to explain the inner workings scientifically.