Matrix Proof

Given the matrix equation \dot{u} = Au , where u = \begin{pmatrix} x \\ y \end{pmatrix} and A is a 2 \times 2 matrix, I want to show that u = e^{At}u_{0} is the solution, where e^{At} = I + At + \frac{1}{2!}A^{2}t^{2} + \frac{1}{3!}A^{3}t^{3} + ... and I is the identity matrix.

I start by writing the above series in summation notation, which gives me \displaystyle\sum_{k = 0}^{\infty} \frac{t^{k}A^{k}}{k!} . I can now take a time derivative of the sum term by term to give me \frac{d}{dt}e^{At} = \displaystyle\sum_{k = 0}^{\infty}\frac{kt^{k - 1}A^{k}}{k!} . Since the k = 0 term of the differentiated series vanishes, I can reindex and reduce the sum to \displaystyle\sum_{k = 1}^{\infty}\frac{t^{k - 1}A^{k}}{(k - 1)!} . Now, I can factor a single power of A out of the sum to give me A \displaystyle\sum_{k = 1}^{\infty}\frac{t^{k - 1}A^{k - 1}}{(k - 1)!} . Shifting the index down by one simplifies the sum once again to A \displaystyle\sum_{k = 0}^{\infty} \frac{t^{k}A^{k}}{k!} , which is equivalent to Ae^{At} . Now, letting u = e^{At}u_{0} , I can differentiate it with respect to time to give me \dot{u} = \frac{d}{dt}[e^{At}]u_{0} . I just showed that \frac{d}{dt} e^{At} = Ae^{At} , which I can plug in to give me the equation \dot{u} = Ae^{At}u_{0} . Since e^{At}u_{0} = u , I am left with my original matrix equation \dot{u} = Au .
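
As a quick sanity check, the result can also be verified numerically. The sketch below is not part of the proof; the matrix A and initial condition u_{0} are arbitrary illustrative choices, and the derivative of e^{At}u_{0} is approximated by a central difference and compared against Au .

```python
# Numerical sanity check that u(t) = e^{At} u0 satisfies du/dt = A u.
# A and u0 are arbitrary illustrative choices, not from the proof above.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
u0 = np.array([1.0, 0.5])

t, h = 1.0, 1e-6
u = expm(A * t) @ u0

# Central-difference approximation of du/dt at time t
du_dt = (expm(A * (t + h)) @ u0 - expm(A * (t - h)) @ u0) / (2 * h)

# du/dt should match A u to within the finite-difference error
print(np.allclose(du_dt, A @ u, atol=1e-5))  # expect: True
```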

Mixture of States Proof

A particle of mass m sits in a one-dimensional square well with infinitely high walls and width L . The particle is in a 50–50 mixture of states, half in the ground state and half in the first excited state. I want to show how to derive a formula for the complete, time-dependent wave-function of the particle.

I begin by using the normalization condition, \int|\Psi|^{2}dx=1 . This is because the probability of finding the particle somewhere in space must equal 1 at all times. Because the particle is in a mixture of states, my wave-function will take the form \Psi = A(\Psi_{1}+\Psi_{2}) . Combining this with the normalization condition, I get the equation A^{2}\int(\Psi_{1}+\Psi_{2})(\Psi_{1}^{*}+\Psi_{2}^{*})dx = 1 . The individual wave-functions take the forms \Psi_{1} = \sqrt{\frac{2}{L}}\sin(\frac{\pi x}{L})e^{-i\omega_{1}t} and \Psi_{2} = \sqrt{\frac{2}{L}}\sin(\frac{2\pi x}{L})e^{-i\omega_{2}t} . I can then plug these two expressions into the above normalization condition, convert the squared sine functions to cosines, and cancel like terms until I get down to the simple expression 2A^{2}=1 , which implies that A = \frac{1}{\sqrt{2}} . Now I can use this constant to write down my mixed wave-function, which looks as follows: \Psi = \frac{1}{\sqrt{L}}(\sin(\frac{\pi x}{L})e^{-i\omega_{1}t} + \sin(\frac{2\pi x}{L})e^{-i\omega_{2}t}) .

Now, I want to show that the probability of finding the particle between positions \frac{1}{4}L and \frac{1}{4}L+dx , as measured from the left-hand side of the well, as a function of time, is [\frac{3}{2}+\sqrt{2}\cos(\frac{3E_{1}t}{\hbar})]\frac{dx}{L} . This is done by finding the probability density, which is the square modulus of the wave-function: |\Psi|^{2} = \frac{1}{2}(\Psi_{1}+\Psi_{2})(\Psi_{1}^{*}+\Psi_{2}^{*}) . After filling in each wave-function and multiplying out each term, I obtain \frac{1}{L}\sin^{2}(\frac{\pi x}{L}) + \frac{1}{L}\sin^{2}(\frac{2\pi x}{L}) + \frac{1}{L}\sin(\frac{2\pi x}{L}) \sin(\frac{\pi x}{L}) ( e^{-i\omega_{1} t}e^{i\omega_{2} t}+e^{-i\omega_{2} t}e^{i\omega_{1} t}) . The pair of exponentials converts to 2\cos((\omega_{2}-\omega_{1})t) , and since E_{n} = n^{2}E_{1} in the infinite well, \omega_{2}-\omega_{1} = \frac{E_{2}-E_{1}}{\hbar} = \frac{3E_{1}}{\hbar} . Evaluating at x = \frac{1}{4}L , where \sin^{2}(\frac{\pi}{4}) = \frac{1}{2} , \sin^{2}(\frac{\pi}{2}) = 1 , and \sin(\frac{\pi}{4})\sin(\frac{\pi}{2}) = \frac{\sqrt{2}}{2} , and multiplying by dx gives [\frac{3}{2}+\sqrt{2}\cos(\frac{3E_{1}t}{\hbar})]\frac{dx}{L} , which is what I wanted to show.
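
For anyone who wants to double-check the algebra, here is a small sympy sketch of both computations. It assumes the standard infinite-well relations E_{n} = n^{2}E_{1} and \omega_{n} = E_{n}/\hbar , which were used implicitly above.

```python
# Symbolic check of the normalization constant and the density at x = L/4.
import sympy as sp

x, L, E1, hbar = sp.symbols('x L E_1 hbar', positive=True)
t = sp.symbols('t', real=True)
w1, w2 = E1/hbar, 4*E1/hbar        # omega_n = n^2 E_1 / hbar (infinite well)

psi1 = sp.sqrt(2/L)*sp.sin(sp.pi*x/L)*sp.exp(-sp.I*w1*t)
psi2 = sp.sqrt(2/L)*sp.sin(2*sp.pi*x/L)*sp.exp(-sp.I*w2*t)
Psi = (psi1 + psi2)/sp.sqrt(2)

# Normalization: the integral of |Psi|^2 over the well should be 1
norm = sp.integrate(sp.simplify(Psi*sp.conjugate(Psi)), (x, 0, L))
print(sp.simplify(norm))           # expect: 1

# Probability density at x = L/4, multiplied by L
density_L = sp.simplify((Psi*sp.conjugate(Psi)).subs(x, L/4) * L)
print(sp.simplify(density_L.rewrite(sp.cos)))
# expect: 3/2 + sqrt(2)*cos(3*E_1*t/hbar)
```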

Phase Velocity Proof

Using the relativistic energy equation E^{2} = p^{2}c^{2}+m^{2}c^{4} , I want to show that the resulting phase velocity for the de Broglie wave of an electron is greater than the speed of light.

I start by deriving an expression for energy in terms of the phase velocity, which is the rate at which the phase of the wave propagates in space, and the momentum of the electron. Using the equation v_{p}=f\lambda , I can make two substitutions: f =\frac{E}{h} , the Planck relation between a particle's energy and the frequency of its associated wave, and \lambda=\frac{h}{p} , the wavelength associated with a particle as postulated by Louis de Broglie. After canceling like terms, this gives me E=v_{p}p . Now I can plug this into my relativistic formula to obtain v_{p}^{2}p^{2} = p^{2}c^{2} + m^{2}c^{4} . Next, I can make a substitution using the equation p=mv_{p} and cancel out the masses. This leaves me with the bi-quadratic equation y^{2}-c^{2}y-c^{4} = 0 , where y=v_{p}^{2} . Only the positive root of this quadratic, y = \frac{c^{2}+\sqrt{5}c^{2}}{2} , gives a physical (positive) value for v_{p}^{2} . Taking the positive square root, I obtain v_{p} = c\sqrt{\frac{1+\sqrt{5}}{2}} \approx 3.8\times 10^{8} m/s, which is greater than the speed of light.
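
The arithmetic in that last step is easy to check numerically; the sketch below just evaluates the positive root of the quadratic in y = v_{p}^{2} and compares the resulting speed to c .

```python
# Numeric check of the positive root of y^2 - c^2*y - c^4 = 0, y = v_p^2.
import math

c = 2.998e8                           # speed of light, m/s
y = (c**2 + math.sqrt(5) * c**2) / 2  # positive root: c^2 (1 + sqrt(5)) / 2
v_p = math.sqrt(y)

print(v_p)      # ~3.8e8 m/s
print(v_p / c)  # ~1.27, i.e. faster than light
```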

Reduced Mass Proof

In the two-body system of hydrogen, an electron and a proton orbit each other about a shared center of mass. If I wanted to analyze the atomic motion of just the electron, the situation would become a one-body problem, and I would have to replace the mass of the electron with its corresponding reduced mass, which is expressed in terms of the mass of the nucleus and the mass of the electron. In this proof I want to suppose that this reduced mass changes by a small amount \Delta \mu when the electron jumps from the energy level with quantum number n_{i}=3 to the one with quantum number n_{f} = 2 . From this I want to show that the wavelength changes by a corresponding amount \Delta \lambda that approximately satisfies \frac{\Delta \lambda}{\lambda} = -\frac{\Delta \mu}{\mu} . In this situation, I am going to assume that \Delta \mu is small in comparison with \mu .

I begin by stating that \Delta \lambda \approx \frac{d\lambda}{d\mu} \Delta \mu , which represents the change in wavelength. This is because in calculus \Delta \mu and d\mu are practically the same for small \Delta \mu . I can now use the Rydberg formula \frac{1}{\lambda} = R(\frac{1}{n_{f}^{2}} - \frac{1}{n_{i}^{2}}) , where \lambda is the wavelength of the photon emitted during the electronic transition from E_{i} to E_{f} , R is the Rydberg constant, the physical constant relating to atomic spectra, and each n is the quantum number of a particular energy level. For the sake of simplification, I can call the factor \frac{1}{n_{f}^{2}} - \frac{1}{n_{i}^{2}} = b and write the formula as \lambda = R^{-1}b^{-1} , where b represents everything on the right side of the equation other than the Rydberg constant. Now, I can differentiate each side of the equation with respect to \mu , and because R is a function of \mu I must use the chain rule, which looks as follows: \frac{d\lambda}{d\mu} = -R^{-2}\frac{dR}{d\mu}b^{-1} . Because the Rydberg constant is proportional to the reduced mass, \frac{dR}{d\mu} = \frac{R}{\mu} . This causes certain terms to cancel, and I am left with the formula \frac{d\lambda}{d\mu} = -\frac{1}{R\mu b} . Next, I can use the formula derived above, \lambda = \frac{1}{Rb} , and substitute it in, which gives me \frac{d\lambda}{d\mu} = -\frac{\lambda}{\mu} . I can then write the left side of the equation as \frac{\Delta\lambda}{\Delta\mu} and rearrange terms to give me \frac{\Delta\lambda}{\lambda} = -\frac{\Delta\mu}{\mu} , which is what I wanted to show.
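
The chain-rule computation can be verified symbolically. In the sketch below, C is a hypothetical placeholder constant absorbing everything in R other than \mu , so R = C\mu expresses the proportionality used above, and b is evaluated for n_{i} = 3 , n_{f} = 2 .

```python
# Symbolic check that d(lambda)/d(mu) = -lambda/mu when R is proportional to mu.
import sympy as sp

mu, C = sp.symbols('mu C', positive=True)
b = sp.Rational(1, 4) - sp.Rational(1, 9)  # 1/n_f^2 - 1/n_i^2 for n_f=2, n_i=3

R = C * mu                                 # Rydberg constant proportional to mu
lam = 1 / (R * b)

dlam_dmu = sp.diff(lam, mu)
print(sp.simplify(dlam_dmu + lam / mu))    # expect: 0
```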

Proof of Summation Identity

Let g_{\mu\nu} = g^{\mu\nu} be defined by the following relations: g_{00} = 1 ; g_{kk} = -1 for k = 1,2,3 ; g_{\mu\nu} = 0 for \mu \neq \nu ; and let \gamma_{\nu} = \sum_{\mu} g_{\nu\mu}\gamma^{\mu} . I want to show that \sum_{\mu} \gamma_{\mu} \gamma^{\alpha} \gamma^{\mu} = -2 \gamma^{\alpha} for the case \alpha = 2 , where the summation runs over \mu = 0,1,2,3 .

I can start by writing the summation out as \gamma_{0} \gamma^{2} \gamma^{0} + \gamma_{1} \gamma^{2} \gamma^{1} + \gamma_{2} \gamma^{2} \gamma ^{2} + \gamma_{3} \gamma^{2} \gamma^{3} . Because \gamma_{0} = \sum_{\mu} g_{0\mu} \gamma^{\mu} = g_{00}\gamma^{0} = \gamma^{0} , and \gamma_{k} = \sum_{\mu} g_{k\mu} \gamma^{\mu} = g_{kk}\gamma^{k} = -\gamma^{k} for k = 1,2,3 , this can be rewritten as \gamma^{0} \gamma^{2} \gamma^{0} - \gamma^{1} \gamma^{2} \gamma^{1} - \gamma^{2} \gamma^{2} \gamma^{2} - \gamma^{3} \gamma^{2} \gamma^{3} . Now, I can use the anticommutation identity \gamma^{\mu} \gamma^{\nu} = -\gamma^{\nu}\gamma^{\mu} for \mu \neq \nu to permute some of the terms so that the expression looks like - \gamma^{0} \gamma^{0} \gamma^{2} + \gamma^{1} \gamma^{1} \gamma^{2} - \gamma^{2} \gamma^{2} \gamma^{2} + \gamma^{3} \gamma^{3} \gamma^{2} . Now, I can use two final identities, (\gamma^{0})^{2} = 1 and (\gamma^{k})^{2} = -1 for k = 1,2,3 , to write the expression as -(\gamma^{0})^{2} \gamma^{2} + (\gamma^{1})^{2}\gamma^{2} - (\gamma^{2})^{2} \gamma^{2} + (\gamma^{3})^{2} \gamma^{2} , which is equivalent to -\gamma^{2} - \gamma^{2} + \gamma^{2} - \gamma^{2} . After combining terms, I obtain -2 \gamma^{2} , which is what I wanted to show.
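
Because the gamma matrices have concrete matrix representations, the identity can also be checked numerically. The sketch below uses the standard Dirac representation, which is one possible choice satisfying the relations above; any other representation obeying them would give the same result.

```python
# Numerical check of sum_mu gamma_mu gamma^2 gamma^mu = -2 gamma^2
# in the Dirac representation of the gamma matrices.
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]

# Lowered index: gamma_0 = gamma^0, gamma_k = -gamma^k for k = 1, 2, 3
g = [1, -1, -1, -1]
alpha = 2
total = sum(g[mu] * gammas[mu] @ gammas[alpha] @ gammas[mu] for mu in range(4))

print(np.allclose(total, -2 * gammas[alpha]))  # expect: True
```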

Vector Identity Proof

I am going to show how to prove the following equality using summation notation, Kronecker deltas, and Levi-Civita notation: (A \times B) \cdot (C \times D) = (A \cdot C)(B \cdot D) - (B \cdot C)(A \cdot D) , where A , B , C , and D are three-dimensional vectors. Throughout the proof, I will explain what each of these symbols means and why I am using them. I am assuming that the reader is familiar with the idea of dot products and cross products.

I begin by letting (A \times B) = X and (C \times D) = Y . X and Y are still vectors, so I can write their dot product in summation notation. This will look like \displaystyle \sum_{i=1}^{3} X_{i} Y_{i} . What this notation means is that for each i from 1 to 3 (1 representing the x-component, 2 representing the y-component, etc.), I multiply the respective components of X and Y and add up the products. Since I am only dealing with vectors containing 3 components, I will drop the explicit bounds from here on and just write \displaystyle\sum_{i} . I can substitute the respective cross products back in for X and Y , which looks like \displaystyle \sum_{i}(A \times B)_{i}(C \times D)_{i} . I will now convert the cross products into similar summations using Levi-Civita notation. This will look like \displaystyle \sum_{i} \displaystyle \sum_{j} \displaystyle \sum_{k} \epsilon_{ijk} A_{j} B_{k} \displaystyle \sum_{l} \displaystyle \sum_{m} \epsilon_{ilm} C_{l} D_{m} . Here the outer summation \displaystyle \sum_{i} takes care of the dot product (A \times B) \cdot (C \times D) , while the remaining four summations evaluate each cross product. The epsilon symbol is the Levi-Civita symbol, which is essentially a piecewise function that assigns a 1, -1, or 0 to each term in the summation depending on the permutation of the indices of \epsilon . For example, the cross product of the two unit vectors e_{1} and e_{2} produces a new vector, namely e_{3} , which is perpendicular to both. In Levi-Civita and summation notation, this same cross product reads e_{i} \times e_{j} = \displaystyle \sum_{k=1}^{3} \epsilon_{ijk}e_{k} .

Two substitutions must now be made. The first is that \epsilon_{ijk} = \epsilon_{jki} , which comes from the fact that the Levi-Civita symbol is unchanged under cyclic permutations of its indices. The second relates a product of two epsilons to a difference of Kronecker deltas. A Kronecker delta \delta_{ij} is a symbol that equals 1 when its two indices are equal and 0 otherwise. The identity is \displaystyle\sum_{i} \epsilon_{jki} \epsilon_{ilm} = \delta_{jl} \delta_{km} - \delta_{jm} \delta_{kl} . After I make this substitution, the sum can be reorganized so that it looks as follows: \displaystyle\sum_{j} \sum_{k} \sum_{l} \sum_{m}(\delta_{jl} \delta_{km} - \delta_{jm} \delta_{kl})A_{j}B_{k}C_{l}D_{m} . After distributing the vector components across the difference of deltas, I can deal with the first sum of quantities, which looks as follows: \displaystyle \sum_{j} \sum_{k} \sum_{l} \sum_{m} \delta_{jl} \delta_{km} A_{j}B_{k}C_{l}D_{m} . The Kronecker deltas are nonzero only when their two indices are the same; otherwise they equal zero and contribute nothing to the summation. With this in mind, I can set m = k , which causes the second delta to equal 1 and the fourth summation to become a sum over k. Since that summation already exists, the fourth sum can just be dropped. I am then left with \displaystyle \sum_{j} \sum_{k} \sum_{l} \delta_{jl} A_{j} B_{k} C_{l} D_{k} .

Now, I can set l = j , which leads to the final delta equating to 1 and the third summation being dropped. Finally, I am left with \displaystyle \sum_{j} \sum_{k} A_{j} B_{k} C_{j} D_{k} , which can be reorganized to look as follows: \displaystyle \sum_{jk} (A_{j}C_{j})(B_{k}D_{k}) . I now recognize that each of the quantities in parentheses is a dot product, each summed over its own index, so the expression is equivalent to (A \cdot C)(B \cdot D) . Following the same steps for the second product of deltas from the substitution I made above eventually yields the full identity (A \cdot C)(B \cdot D) - (B \cdot C)(A \cdot D) .
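
Since the identity holds for arbitrary vectors, a numerical spot-check with random inputs is a quick way to confirm the index bookkeeping; the vectors below are random, not taken from the proof.

```python
# Spot-check (A x B).(C x D) = (A.C)(B.D) - (B.C)(A.D) with random vectors.
import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = rng.standard_normal((4, 3))

lhs = np.dot(np.cross(A, B), np.cross(C, D))
rhs = np.dot(A, C) * np.dot(B, D) - np.dot(B, C) * np.dot(A, D)

print(np.isclose(lhs, rhs))  # expect: True
```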

Vector Identity Proof

For this post, I want to show how to verify the following identity using Einstein notation, a shorthand for the summation notation of the previous post in which any index that appears twice is implicitly summed over: \bigtriangledown \times (A \times B) = (B \cdot \bigtriangledown)A - (A \cdot \bigtriangledown)B - B(\bigtriangledown \cdot A) + A(\bigtriangledown \cdot B) . In this context A and B are vector fields, and \bigtriangledown is the gradient operator, \bigtriangledown = \hat{e_{x}} \frac{\partial}{\partial x} + \hat{e_{y}} \frac{\partial}{\partial y} +\hat{e_{z}} \frac{\partial}{\partial z} .

I can start by writing the i-th component of the left side of the identity in Einstein notation. This will look as follows: [ \bigtriangledown \times (A \times B)]_{i} = \epsilon_{ijk}D_{j}(\epsilon_{kmn} A_{m}B_{n}) , where D_{j} denotes the partial derivative \frac{\partial}{\partial x_{j}} . This can be written as \epsilon_{ijk}\epsilon_{kmn}D_{j}(A_{m}B_{n}) . Next, I can make use of a useful identity, namely that \epsilon_{ijk}\epsilon_{kmn} = \delta_{im}\delta_{jn} - \delta_{in}\delta_{jm} . After making this substitution, and applying the product rule to A_{m}B_{n} , I obtain (\delta_{im}\delta_{jn} - \delta_{in}\delta_{jm})[(D_{j}A_{m})B_{n} + (D_{j}B_{n})A_{m}] . From this, I can expand and deal with each of the four resulting terms separately.

Working with the first term, I have \delta_{im}\delta_{jn}(D_{j}A_{m})B_{n} . Since a delta only contributes to the sum when its two indices are equal, \delta_{jn} forces n = j , so I can replace n with j and drop that delta, which gives me \delta_{im}(D_{j}A_{m})B_{j} . Likewise, \delta_{im} forces m = i , which gives me (D_{j}A_{i})B_{j} = B_{j}D_{j}A_{i} , the i-th component of (B \cdot \bigtriangledown)A . This is the first term on the right side of the initial identity. For the second term of the expanded expression, I start with \delta_{im}\delta_{jn}(D_{j}B_{n})A_{m} . Setting n = j and then m = i leaves (D_{j}B_{j})A_{i} , the i-th component of A(\bigtriangledown \cdot B) , which is the fourth term of the identity. For the third term, I start with -\delta_{in}\delta_{jm}(D_{j}A_{m})B_{n} . Setting m = j and then n = i leaves -(D_{j}A_{j})B_{i} , the i-th component of -B(\bigtriangledown \cdot A) . For the final term, -\delta_{in}\delta_{jm}(D_{j}B_{n})A_{m} , I again set m = j and n = i to obtain -A_{j}D_{j}B_{i} , the i-th component of -(A \cdot \bigtriangledown)B , which is the last term of the identity. After putting all of these terms together, I get the whole right side of the identity and the proof is finished.
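
As with the previous post, the identity can be verified symbolically. The sketch below builds A and B out of undefined sympy functions of x , y , and z and checks that the difference of the two sides simplifies to the zero vector.

```python
# Symbolic check of curl(A x B) = (B.grad)A - (A.grad)B - B(div A) + A(div B)
# for generic vector fields A and B.
import sympy as sp

x, y, z = sp.symbols('x y z')
r = (x, y, z)
A = sp.Matrix([sp.Function(f'A{i}')(x, y, z) for i in range(3)])
B = sp.Matrix([sp.Function(f'B{i}')(x, y, z) for i in range(3)])

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

def div(V):
    return sum(sp.diff(V[i], r[i]) for i in range(3))

def ddir(U, V):  # (U . grad) V, componentwise
    return sp.Matrix([sum(U[j] * sp.diff(V[i], r[j]) for j in range(3))
                      for i in range(3)])

lhs = curl(A.cross(B))
rhs = ddir(B, A) - ddir(A, B) - B * div(A) + A * div(B)

print(sp.simplify(lhs - rhs))  # expect: the zero vector
```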