Proof of Summation Identity

Let g_{\mu\nu} = g^{\mu\nu} be defined by the following relations: g_{00} = 1; g_{kk} = -1 for k = 1,2,3; g_{\mu\nu} = 0 for \mu \neq \nu; and \gamma_{\nu} = \sum_{\mu} g_{\nu\mu}\gamma^{\mu} . I want to show that \sum_{\mu} \gamma_{\mu} \gamma^{\alpha} \gamma^{\mu} = -2 \gamma^{\alpha} , where the summation runs over \mu = 0,1,2,3 . I will work through the case \alpha = 2 ; the other cases follow the same steps.

I can start by writing the summation out as \gamma_{0} \gamma^{2} \gamma^{0} + \gamma_{1} \gamma^{2} \gamma^{1} + \gamma_{2} \gamma^{2} \gamma^{2} + \gamma_{3} \gamma^{2} \gamma^{3} . This can be rewritten as \gamma^{0} \gamma^{2} \gamma^{0} - \gamma^{1} \gamma^{2} \gamma^{1} - \gamma^{2} \gamma^{2} \gamma^{2} - \gamma^{3} \gamma^{2} \gamma^{3} , because lowering an index with the metric gives \gamma_{0} = \sum_{\mu} g_{0\mu}\gamma^{\mu} = g_{00}\gamma^{0} = \gamma^{0} , and likewise \gamma_{k} = \sum_{\mu} g_{k\mu}\gamma^{\mu} = g_{kk}\gamma^{k} = -\gamma^{k} for k = 1,2,3 . Now, I can use the anticommutation relation \gamma^{\mu} \gamma^{\nu} = -\gamma^{\nu}\gamma^{\mu} for \mu \neq \nu to permute the first, second, and fourth terms, picking up one sign flip per swap, so that the expression looks like - \gamma^{0} \gamma^{0} \gamma^{2} + \gamma^{1} \gamma^{1} \gamma^{2} - \gamma^{2} \gamma^{2} \gamma^{2} + \gamma^{3} \gamma^{3} \gamma^{2} . Now, I can use two final identities, (\gamma^{0})^{2} = 1 and (\gamma^{i})^{2} = -1 for i = 1,2,3 , to write the expression as -(\gamma^{0})^{2} \gamma^{2} + (\gamma^{1})^{2}\gamma^{2} - (\gamma^{2})^{2} \gamma^{2} + (\gamma^{3})^{2} \gamma^{2} , which is equivalent to -\gamma^{2} - \gamma^{2} + \gamma^{2} - \gamma^{2} . After combining terms, I obtain -2 \gamma^{2} , which is what I wanted to show.
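As a sanity check, the identity can also be verified numerically for every \alpha , not just \alpha = 2 . Below is a minimal sketch in Python with NumPy, assuming the standard Dirac representation of the gamma matrices (the post itself never fixes a representation; any matrices satisfying the relations above would do):

```python
import numpy as np

# Pauli matrices, used as the 2x2 blocks of the Dirac representation
s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

# gamma^0 = diag(I, -I); gamma^k = [[0, sigma_k], [-sigma_k, 0]]
gamma = [np.block([[I2, Z2], [Z2, -I2]])]
gamma += [np.block([[Z2, s[k]], [-s[k], Z2]]) for k in range(3)]

g = np.diag([1, -1, -1, -1])  # the metric: g_00 = 1, g_kk = -1, zero off-diagonal

for alpha in range(4):
    # gamma_mu = sum_nu g_{mu nu} gamma^nu = g_{mu mu} gamma^mu (metric is diagonal)
    total = sum(g[mu, mu] * gamma[mu] @ gamma[alpha] @ gamma[mu] for mu in range(4))
    assert np.allclose(total, -2 * gamma[alpha])
print("sum_mu gamma_mu gamma^alpha gamma^mu = -2 gamma^alpha holds for alpha = 0, 1, 2, 3")
```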


Vector Identity Proof

I am going to show how to prove the following equality using summation notation, Kronecker deltas, and Levi-Civita notation: (A \times B) \cdot (C \times D) = (A \cdot C)(B \cdot D) - (B \cdot C)(A \cdot D) , where A , B , C , and D are three-dimensional vectors. Throughout the proof, I will explain what each of these symbols means and why I am using it. I am assuming that the reader is familiar with the ideas of dot products and cross products.

I begin by letting (A \times B) = X and (C \times D) = Y . X and Y are still vectors, so I can write their dot product in summation notation: \displaystyle \sum_{i=1}^{3} X_{i} Y_{i} . What this notation means is that for each i from 1 to 3 (1 representing the x-component, 2 the y-component, and so on), I multiply the corresponding components of X and Y and add up the three products. Since I am only dealing with vectors containing 3 components, I will drop the i = 1 and the upper limit 3 and just write \sum_{i} . Substituting the cross products back in for X and Y gives \displaystyle \sum_{i}(A \times B)_{i}(C \times D)_{i} . I will now convert the cross products into similar summations using Levi-Civita notation. This will look like \displaystyle \sum_{i} \displaystyle \sum_{j} \displaystyle \sum_{k} \epsilon_{ijk} A_{j} B_{k} \displaystyle \sum_{l} \displaystyle \sum_{m} \epsilon_{ilm} C_{l} D_{m} . Here, the outer summation \displaystyle \sum_{i} is taking care of the dot product between (A \times B) and (C \times D) , the remaining four summations expand the two cross products, and \epsilon is the Levi-Civita symbol: it equals 1 when its indices are an even permutation of (1,2,3), -1 when they are an odd permutation, and 0 whenever an index repeats. For example, the cross product of the two unit vectors e_{1} and e_{2} produces a new vector, namely e_{3} , which is perpendicular to both; in this notation, that computation reads e_{i} \times e_{j} = \displaystyle \sum_{k=1}^{3} \epsilon_{ijk}e_{k} . Two substitutions must now be made. The first is that \epsilon_{ijk} = \epsilon_{jki} , which holds because the Levi-Civita symbol is unchanged under cyclic permutations of its indices. The second relates a product of two epsilons to a difference of Kronecker deltas. A Kronecker delta \delta_{ij} equals 1 when i = j and 0 otherwise; it is exactly what the dot product of two orthonormal unit vectors produces, e_{i} \cdot e_{j} = \delta_{ij} . The identity is \displaystyle\sum_{i} \epsilon_{jki} \epsilon_{ilm} = \delta_{jl} \delta_{km} - \delta_{jm} \delta_{kl} . After I make this substitution, the sum can be reorganized so that it looks as follows: \displaystyle\sum_{j} \sum_{k} \sum_{l} \sum_{m}(\delta_{jl} \delta_{km} - \delta_{jm} \delta_{kl})A_{j}B_{k}C_{l}D_{m} . After distributing the vectors over the difference of deltas, I can deal with the first piece, which looks as follows: \displaystyle \sum_{j} \sum_{k} \sum_{l} \sum_{m} \delta_{jl} \delta_{km} A_{j}B_{k}C_{l}D_{m} . A Kronecker delta is nonzero only when its two indices are equal; in every other case it equals zero and contributes nothing to the summation. With this in mind, I can set m = k , which makes the second delta equal 1 and turns the fourth summation into a sum over k. Since a sum over k already exists, the fourth sum can just be dropped. I am then left with \displaystyle \sum_{j} \sum_{k} \sum_{l} \delta_{jl} A_{j} B_{k} C_{l} D_{k} .
Now, I can set l = j , which makes the final delta equal 1 and lets the third summation be dropped. Finally, I am left with \displaystyle \sum_{j} \sum_{k} (A_{j} B_{k} C_{j} D_{k}) , which can be reorganized to look as follows: \displaystyle \sum_{j} \sum_{k} (A_{j}C_{j})(B_{k}D_{k}) . I now recognize that each of the quantities in parentheses is a dot product, summed over two different indices, so this equals (A \cdot C)(B \cdot D) . Following the same steps for the second product of deltas from the substitution I made above, \delta_{jm} \delta_{kl} , yields (B \cdot C)(A \cdot D) , so the full expression is (A \cdot C)(B \cdot D) - (B \cdot C)(A \cdot D) .
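Both the epsilon-delta identity and the final result lend themselves to a quick check. The following is a minimal sketch in Python with NumPy (the eps array and the seeded random vectors are my own setup, not part of the original derivation): it builds the Levi-Civita symbol as a 3x3x3 array, confirms the contraction \sum_{i} \epsilon_{jki}\epsilon_{ilm} = \delta_{jl}\delta_{km} - \delta_{jm}\delta_{kl} , and then spot-checks the full identity on random vectors.

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array: +1 on even permutations of (0,1,2),
# -1 on odd permutations, 0 whenever an index repeats.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

delta = np.eye(3)

# Contraction identity: sum_i eps_{jki} eps_{ilm} = d_{jl} d_{km} - d_{jm} d_{kl}
lhs = np.einsum('jki,ilm->jklm', eps, eps)
rhs = np.einsum('jl,km->jklm', delta, delta) - np.einsum('jm,kl->jklm', delta, delta)
assert np.allclose(lhs, rhs)

# Spot-check (A x B) . (C x D) = (A.C)(B.D) - (B.C)(A.D) on random vectors
rng = np.random.default_rng(0)
A, B, C, D = rng.standard_normal((4, 3))
left = np.dot(np.cross(A, B), np.cross(C, D))
right = np.dot(A, C) * np.dot(B, D) - np.dot(B, C) * np.dot(A, D)
assert np.isclose(left, right)
print("epsilon-delta contraction and dot-of-crosses identity both verified")
```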

Vector Identity Proof

For this post, I wanted to show how to verify the following identity, \nabla \times (A \times B) = (B \cdot \nabla)A - (A \cdot \nabla)B - B(\nabla \cdot A) + A(\nabla \cdot B) , using Einstein notation, a shorthand for Levi-Civita manipulations in which any repeated index is implicitly summed over. In this context A and B are vectors, and \nabla is the gradient operator, \nabla = \hat{e}_{x} \frac{\partial}{\partial x} + \hat{e}_{y} \frac{\partial}{\partial y} + \hat{e}_{z} \frac{\partial}{\partial z} .

I can start by writing the i-th component of the left side in Einstein notation, writing \partial_{j} for \frac{\partial}{\partial x_{j}} . This will look as follows: [\nabla \times (A \times B)]_{i} = \epsilon_{ijk}\partial_{j}(\epsilon_{kmn} A_{m}B_{n}) , which can be written as \epsilon_{ijk}\epsilon_{kmn}\partial_{j}(A_{m}B_{n}) since the epsilons are constants. Next, I can make use of a useful identity, namely that \epsilon_{ijk}\epsilon_{kmn} = \delta_{im}\delta_{jn} - \delta_{in}\delta_{jm} . After making this substitution, and applying the product rule to A_{m}B_{n} , I obtain (\delta_{im}\delta_{jn} - \delta_{in}\delta_{jm})[(\partial_{j}A_{m})B_{n} + (\partial_{j}B_{n})A_{m}] . From this, I can expand into four terms and deal with each separately. Working with the first term, \delta_{im}\delta_{jn}(\partial_{j}A_{m})B_{n} : a Kronecker delta only contributes when its two indices are equal, so \delta_{jn} sets n = j and \delta_{im} sets m = i , leaving B_{j}\partial_{j}A_{i} , which is the i-th component of (B \cdot \nabla)A . This is the first term on the right side of the initial identity. For the second term, \delta_{im}\delta_{jn}(\partial_{j}B_{n})A_{m} , the same contractions ( n = j and m = i ) leave A_{i}\partial_{j}B_{j} , the i-th component of A(\nabla \cdot B) ; this is the fourth term of the identity. For the third term, -\delta_{in}\delta_{jm}(\partial_{j}A_{m})B_{n} , the contractions m = j and n = i leave -B_{i}\partial_{j}A_{j} , the i-th component of -B(\nabla \cdot A) . For the final term, -\delta_{in}\delta_{jm}(\partial_{j}B_{n})A_{m} , the contractions m = j and n = i leave -A_{j}\partial_{j}B_{i} , the i-th component of -(A \cdot \nabla)B , which is the last term of the identity. After putting all of these terms together I get the whole right side of the identity and the proof is finished.
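Because this identity involves derivatives, a purely numerical spot check is less direct, but it can be verified symbolically. Here is a minimal sketch in Python with SymPy, treating each component of A and B as an arbitrary smooth function of x, y, z (the helper functions curl, cross, div, and dir_deriv are names I chose for this check, not anything from the post):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# A and B as generic vector fields: each component is an undefined function of x, y, z
A = [sp.Function(f'A{i}')(x, y, z) for i in range(3)]
B = [sp.Function(f'B{i}')(x, y, z) for i in range(3)]

def cross(F, G):
    return [F[1]*G[2] - F[2]*G[1],
            F[2]*G[0] - F[0]*G[2],
            F[0]*G[1] - F[1]*G[0]]

def curl(F):
    return [sp.diff(F[2], y) - sp.diff(F[1], z),
            sp.diff(F[0], z) - sp.diff(F[2], x),
            sp.diff(F[1], x) - sp.diff(F[0], y)]

def div(F):
    return sum(sp.diff(F[i], coords[i]) for i in range(3))

def dir_deriv(F, G):
    # (F . nabla)G, taken componentwise
    return [sum(F[j] * sp.diff(G[i], coords[j]) for j in range(3)) for i in range(3)]

lhs = curl(cross(A, B))
rhs = [dir_deriv(B, A)[i] - dir_deriv(A, B)[i] - B[i]*div(A) + A[i]*div(B)
       for i in range(3)]

assert all(sp.expand(lhs[i] - rhs[i]) == 0 for i in range(3))
print("curl(A x B) = (B.grad)A - (A.grad)B - B(div A) + A(div B) verified componentwise")
```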

Proof of a Vector Identity

For this post, I wanted to show how to prove the following: (A \times B) \cdot [(B \times C) \times (C \times A)] = [A \cdot (B \times C)]^2 .

I can begin by using the identity X \times (Y \times Z) = Y(X \cdot Z) - Z(X \cdot Y) . Applying this identity to the bracketed part of the left side, with X = B \times C , Y = C , and Z = A , I obtain (A \times B) \cdot [C[(B \times C) \cdot A] - A[(B \times C) \cdot C]] . Since B \times C gives a vector that is perpendicular to C , its dot product with C is zero. This leaves me with (A \times B) \cdot C[(B \times C) \cdot A] . Because (B \times C) \cdot A is just a scalar, it can be pulled out front, giving [(B \times C) \cdot A][(A \times B) \cdot C] . Using the cyclic property of the scalar triple product, (A \times B) \cdot C = A \cdot (B \times C) , this becomes [A \cdot (B \times C)][A \cdot (B \times C)] , which is exactly [A \cdot (B \times C)]^2 , which is what I wanted to prove.
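As with the other identities in these posts, a quick numerical spot check is reassuring. Here is a minimal sketch in Python with NumPy (the random vectors and seed are arbitrary choices of mine); a handful of random draws is a sanity check, not a proof:

```python
import numpy as np

rng = np.random.default_rng(42)
for _ in range(5):
    A, B, C = rng.standard_normal((3, 3))
    # (A x B) . [(B x C) x (C x A)] versus [A . (B x C)]^2
    lhs = np.dot(np.cross(A, B), np.cross(np.cross(B, C), np.cross(C, A)))
    rhs = np.dot(A, np.cross(B, C)) ** 2
    assert np.isclose(lhs, rhs)
print("(A x B) . [(B x C) x (C x A)] = [A . (B x C)]^2 holds on random vectors")
```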