Introduction and definitions
In n-dimensional space V (called a "manifold" in mathematics), points are specified by assigning values to a set of n continuous real variables $x^1, x^2, \ldots, x^n$ called the coordinates.
In many cases these will run from $-\infty$ to $+\infty$, but the range of some or all of these can be finite.
Examples: In Euclidean space in three dimensions, we can use Cartesian coordinates x, y and z, each of which runs from $-\infty$ to $+\infty$. For a two dimensional Euclidean plane, Cartesians may again be employed, or we can use plane polar coordinates $r, \theta$, whose ranges are 0 to $\infty$ and 0 to $2\pi$ respectively.
Coordinate transformations. The coordinates of points in the manifold may be assigned in a
number of different ways. If we select two different sets of coordinates $x^1, x^2, \ldots, x^n$ and $x'^1, x'^2, \ldots, x'^n$, there will obviously be a connection between them of the form
$x'^r = f^r(x^1, x^2, \ldots, x^n)$ , r = 1, 2, ..., n, (1)
where the f's are assumed here to be well behaved functions. Another way of expressing the same relationship is
$x'^r = x'^r(x^1, x^2, \ldots, x^n)$ , r = 1, 2, ..., n, (2)
where $x'^r(x^1, x^2, \ldots, x^n)$ denotes the n functions $f^r(x^1, x^2, \ldots, x^n)$, r = 1, 2, ..., n.
Recall that if a variable z is a function of two variables x and y, i.e. z = f (x, y), then the
connection between the differentials dx, dy and dz is
$dz = \dfrac{\partial f}{\partial x}\,dx + \dfrac{\partial f}{\partial y}\,dy$ . (3)
Extending this to several variables therefore, for each one of the new coordinates we have
$dx'^r = \sum_{s=1}^{n} \dfrac{\partial x'^r}{\partial x^s}\,dx^s$ , r = 1, 2, ..., n. (4)
The transformation of the differentials of the coordinates is therefore linear and homogeneous, which is not necessarily the case for the transformation of the coordinates themselves.
Range and Summation Conventions. Equations such as (4) may be simplified by the use of two conventions:
Range Convention: When a suffix is unrepeated in a term, it is understood to take all values in the range 1, 2, 3.....n.
Summation Convention: When a suffix is repeated in a term, summation with respect to that suffix is understood, the range of summation being 1, 2, 3.....n.
With these two conventions applying, equation (4) may be written as
$dx'^r = \dfrac{\partial x'^r}{\partial x^s}\,dx^s$ . (5)
Note that a repeated suffix is a "dummy" suffix, and can be replaced by any convenient alternative. For example, equation (5) could have been written as
$dx'^r = \dfrac{\partial x'^r}{\partial x^m}\,dx^m$ , (6)
where the summation with respect to s has been replaced by the summation with respect to m.
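The linear, homogeneous transformation (5) of the differentials is easy to verify numerically. The following sketch (NumPy assumed; the function `jacobian` is ours, not from the text) takes the dashed coordinates to be Cartesians $(x, y) = (r\cos\theta, r\sin\theta)$ and compares the transformed differentials with the exact change in position:

```python
import numpy as np

# Equation (5): dx'^r = (∂x'^r/∂x^s) dx^s, for the transformation from
# plane polar coordinates (r, θ) to Cartesians x' = (x, y) = (r cosθ, r sinθ).

def jacobian(r, theta):
    """Matrix of partial derivatives ∂x'^r/∂x^s; rows = (x, y), columns = (r, θ)."""
    return np.array([
        [np.cos(theta), -r * np.sin(theta)],
        [np.sin(theta),  r * np.cos(theta)],
    ])

r, theta = 2.0, np.pi / 6
dr, dtheta = 1e-6, 2e-6                 # an infinitesimal displacement (dr, dθ)

# Linear, homogeneous transformation of the differentials, equation (5)
dx_dashed = jacobian(r, theta) @ np.array([dr, dtheta])

# Exact change in (x, y); agrees with the linear law to first order
exact = np.array([(r + dr) * np.cos(theta + dtheta) - r * np.cos(theta),
                  (r + dr) * np.sin(theta + dtheta) - r * np.sin(theta)])
print(np.allclose(dx_dashed, exact, atol=1e-10))
```

The two agree to first order in the displacement, which is all that equation (5) asserts.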
Contravariant vectors and tensors. Consider two neighbouring points P and Q in the
manifold whose coordinates are $x^r$ and $x^r + dx^r$ respectively. The vector $\overrightarrow{PQ}$ is then described by the quantities $dx^r$, which are the components of the vector in this coordinate system. In the dashed coordinates, the vector $\overrightarrow{PQ}$ is described by the components $dx'^r$, which are related to $dx^r$ by equation (5), the differential coefficients being evaluated at P. The infinitesimal displacement represented by $dx^r$ or $dx'^r$ is an example of a contravariant vector.
Defn. A set of n quantities $T^r$ associated with a point P are said to be the components of a contravariant vector if they transform, on change of coordinates, according to the equation
$T'^r = \dfrac{\partial x'^r}{\partial x^s}\,T^s$ , (7)
where the partial derivatives are evaluated at the point P. (Note that there is no requirement that the components of a contravariant vector should be infinitesimal.)
Defn. A set of $n^2$ quantities $T^{rs}$ associated with a point P are said to be the components of
a contravariant tensor of the second order if they transform, on change of coordinates, according to the equation
$T'^{rs} = \dfrac{\partial x'^r}{\partial x^m}\,\dfrac{\partial x'^s}{\partial x^n}\,T^{mn}$ . (8)
Obviously the definition can be extended to tensors of higher order. A contravariant vector is the same as a contravariant tensor of first order.
Defn. A contravariant tensor of zero order transforms, on change of coordinates, according to the equation
$T' = T$ , (9)
i.e. it is an invariant whose value is independent of the coordinate system used.
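As an aside, the transformation law (8) can be checked numerically for a linear change of coordinates $x'^r = L^r{}_s x^s$, for which $\partial x'^r/\partial x^s$ is just the constant matrix L. The matrix and the tensor below are arbitrary examples of ours, not from the text:

```python
import numpy as np

# Numerical check of equation (8) for a second order contravariant tensor
# under a linear coordinate change, where ∂x'^r/∂x^s = L[r, s] everywhere.

rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))            # ∂x'^r/∂x^s (arbitrary example)

u = rng.normal(size=3)                 # two contravariant vectors
v = rng.normal(size=3)
T = np.outer(u, v)                     # T^{rs} = u^r v^s, a contravariant tensor

u_dash = L @ u                         # equation (7) applied to each vector
v_dash = L @ v

# Equation (8): T'^{rs} = (∂x'^r/∂x^m)(∂x'^s/∂x^n) T^{mn}
T_dash = np.einsum('rm,sn,mn->rs', L, L, T)

# Consistency: the transformed tensor is the outer product of the
# transformed vectors
print(np.allclose(T_dash, np.outer(u_dash, v_dash)))
```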
Covariant vectors and tensors. Let $\phi$ be an invariant function of the coordinates, i.e. its value may depend on position P in the manifold but is independent of the coordinate system used. Then the partial derivatives of $\phi$ transform according to
$\dfrac{\partial \phi}{\partial x'^r} = \dfrac{\partial \phi}{\partial x^s}\,\dfrac{\partial x^s}{\partial x'^r}$ . (10)
Here the transformation is similar to equation (7) except that the partial derivative involving the two sets of coordinates is the other way up. The partial derivatives of an invariant function provide an example of the components of a covariant vector.
Defn. A set of n quantities $T_r$ associated with a point P are said to be the components of a
covariant vector if they transform, on change of coordinates, according to the equation
$T'_r = \dfrac{\partial x^s}{\partial x'^r}\,T_s$ . (11)
By convention, suffices indicating contravariant character are placed as superscripts, and those
indicating covariant character as subscripts. Hence the reason for writing the coordinates as $x^r$. (Note however that it is only the differentials of the coordinates, not the coordinates
themselves, that always have tensor character. The latter may be tensors, but this is not always the case.)
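Equation (10) supplies a concrete numerical check of the covariant law (11): the gradient of an invariant function, transformed with $\partial x^s/\partial x'^r$, must reproduce the gradient computed directly in the dashed coordinates. The function $\phi = xy$ and the coordinates below are our illustrative choices:

```python
import numpy as np

# The gradient of an invariant function transforms covariantly, equation (11):
# ∂φ/∂x'^r = (∂x^s/∂x'^r) ∂φ/∂x^s.  Unprimed coordinates: Cartesians (x, y);
# primed: plane polars (r, θ); φ = x y (arbitrary example function).

r, theta = 1.5, 0.7
x, y = r * np.cos(theta), r * np.sin(theta)

grad_cart = np.array([y, x])                     # (∂φ/∂x, ∂φ/∂y) for φ = x y

# J[s, r] = ∂x^s/∂x'^r: derivatives of (x, y) with respect to (r, θ)
J = np.array([[np.cos(theta), -r * np.sin(theta)],
              [np.sin(theta),  r * np.cos(theta)]])

grad_polar = J.T @ grad_cart                     # covariant transformation law

# Compare with differentiating φ = r² sinθ cosθ directly with respect to (r, θ)
expected = np.array([2 * r * np.sin(theta) * np.cos(theta),
                     r**2 * np.cos(2 * theta)])
print(np.allclose(grad_polar, expected))
```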
Extending the definition as before, a covariant tensor of the second order is defined by the equation
$T'_{rs} = \dfrac{\partial x^m}{\partial x'^r}\,\dfrac{\partial x^n}{\partial x'^s}\,T_{mn}$ , (12)
and similarly for higher orders.
Mixed tensors. These are tensors with at least one covariant suffix and one contravariant
suffix. An example is the third order tensor $T^r_{st}$, which transforms according to
$T'^r_{st} = \dfrac{\partial x'^r}{\partial x^m}\,\dfrac{\partial x^n}{\partial x'^s}\,\dfrac{\partial x^p}{\partial x'^t}\,T^m_{np}$ . (13)
Another example is the Kronecker delta, defined by
$\delta^r_s = 1$ if $r = s$, $\delta^r_s = 0$ if $r \neq s$. (14)
It is a tensor of the type indicated because (a) in an expression such as $B^{n \ldots m}_{pq \ldots}\,\delta^t_m$, which involves summation with respect to m, there is only one non-zero contribution from the Kronecker delta, that for which m = t, and so $B^{n \ldots m}_{pq \ldots}\,\delta^t_m = B^{n \ldots t}_{pq \ldots}$; (b) the coordinates in any coordinate system are necessarily independent of each other, so that $\dfrac{\partial x^r}{\partial x^s} = \delta^r_s$ and $\dfrac{\partial x'^r}{\partial x'^s} = \delta'^r_s$; these two properties taken together imply that
$\delta'^r_s = \dfrac{\partial x'^r}{\partial x^m}\,\dfrac{\partial x^n}{\partial x'^s}\,\delta^m_n$ . (15)
Notes. 1. The importance of tensors is that if a tensor equation is true in one set of
coordinates it is also true in any other coordinates, e.g. if $T_{mn} = 0$ (which, since m and n are unrepeated, implies that the equation is true for all m and n, not just for some particular choice of these suffices), then $T'_{rs} = 0$ also, from the
transformation law. This illustrates the fact that any tensor equation is covariant,
which means that it has the same form in all coordinate systems.
2. A tensor may be defined at a single point P within the manifold, or along a curve,
or throughout a subspace, or throughout the manifold itself. In the latter cases we
speak of a tensor field.
Addition of tensors. Two tensors of the same type may be added together to give another
tensor of the same type, e.g. if $A^r_{st}$ and $B^r_{st}$ are tensors of the type indicated, then we can define
$C^r_{st} = A^r_{st} + B^r_{st}$ . (16)
It is easy to show that the quantities $C^r_{st}$ form the components of a tensor.
Symmetric and antisymmetric tensors. $A^{rs}$ is a symmetric contravariant tensor if $A^{rs} = A^{sr}$, and antisymmetric if $A^{rs} = -A^{sr}$. Similarly for covariant tensors. Symmetry properties are conserved under transformation of coordinates, e.g. if $A^{rs} = A^{sr}$, then
$A'^{mn} = \dfrac{\partial x'^m}{\partial x^r}\,\dfrac{\partial x'^n}{\partial x^s}\,A^{rs} = \dfrac{\partial x'^m}{\partial x^r}\,\dfrac{\partial x'^n}{\partial x^s}\,A^{sr} = A'^{nm}$ . (17)
Note however that for a mixed tensor, a relation such as $A^r_s = A^s_r$ does not transform to give
the equivalent relation in the dashed coordinates. The concept of symmetry (with respect to a pair of suffices which are either both subscripts or both superscripts) can obviously be extended to tensors of higher order.
Any covariant or contravariant tensor of second order may be expressed as the sum of a symmetric tensor and an antisymmetric tensor, e.g.
$A^{rs} = \tfrac{1}{2}(A^{rs} + A^{sr}) + \tfrac{1}{2}(A^{rs} - A^{sr})$ . (18)
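The decomposition (18) is a one-liner in matrix form. A small NumPy sketch (the random tensor is an arbitrary example of ours):

```python
import numpy as np

# Decomposition (18) of a second order tensor into symmetric and
# antisymmetric parts: A^{rs} = ½(A^{rs}+A^{sr}) + ½(A^{rs}−A^{sr}).

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))

S = 0.5 * (A + A.T)      # symmetric part:      S^{rs} =  S^{sr}
P = 0.5 * (A - A.T)      # antisymmetric part:  P^{rs} = -P^{sr}

print(np.allclose(S, S.T), np.allclose(P, -P.T), np.allclose(S + P, A))
```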
Multiplication of tensors. In the addition of tensors we are restricted to tensors of a single type, with the same suffices (though they need not occur in the same order). In the multiplication of tensors there is no such restriction. The only condition is that we never
multiply two components with the same suffix at the same level in each. (This would imply summation with respect to the repeated suffix, but the resulting object would not have tensor character - see later.)
To multiply two tensors, e.g. $A^m_n$ and $B_{rs}$, we simply write
$C^m_{nrs} = A^m_n B_{rs}$ . (19)
It follows immediately from their transformation properties that the quantities $C^m_{nrs}$ form a tensor of the type indicated. This tensor, in which the symbols for the suffices are all different, is called the outer product of $A^m_n$ and $B_{rs}$.
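In index notation the outer product is exactly what `numpy.einsum` computes when no suffix is repeated. A minimal sketch with arbitrary example components:

```python
import numpy as np

# Outer product (19): C^m_{nrs} = A^m_n B_{rs}. With index order (m, n, r, s)
# no suffix is repeated in the einsum string, so no summation occurs.

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 3))         # A^m_n  (first axis up, second down)
B = rng.normal(size=(3, 3))         # B_{rs}

C = np.einsum('mn,rs->mnrs', A, B)  # C^m_{nrs}, a fourth order mixed tensor

print(C.shape, np.isclose(C[1, 2, 0, 1], A[1, 2] * B[0, 1]))
```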
Contraction of tensors. Given a tensor $T^m_{np}$, then
$T'^m_{np} = \dfrac{\partial x'^m}{\partial x^r}\,\dfrac{\partial x^s}{\partial x'^n}\,\dfrac{\partial x^t}{\partial x'^p}\,T^r_{st}$ . (20)
Hence, replacing n by m (and therefore implying summation with respect to m),
$T'^m_{mp} = \dfrac{\partial x'^m}{\partial x^r}\,\dfrac{\partial x^s}{\partial x'^m}\,\dfrac{\partial x^t}{\partial x'^p}\,T^r_{st} = \dfrac{\partial x^s}{\partial x^r}\,\dfrac{\partial x^t}{\partial x'^p}\,T^r_{st} = \delta^s_r\,\dfrac{\partial x^t}{\partial x'^p}\,T^r_{st} = \dfrac{\partial x^t}{\partial x'^p}\,T^s_{st}$ , (21)
so we see that $T^m_{mp}$ behaves like a tensor $A_p$. The upshot is that contraction of a tensor (i.e.
writing the same letter as a subscript and a superscript) reduces the order of the tensor by 2 and yields a tensor whose type is indicated by the remaining suffices.
Note that contraction can only be applied successfully to suffices at different levels. We may
of course construct, starting with a tensor $A^p_{qrs}$ say, a new set of quantities $A^p_{qrr}$; but these do not have tensor character (as one can easily check) so are of little interest.
Having constructed the outer product $C^m_{nrs} = A^m_n B_{rs}$ in the example above, we can form the corresponding inner products $C^m_{msn} = A^m_m B_{sn}$ and $C^m_{rmn} = A^m_r B_{mn}$. Each of these forms a covariant tensor of second order.
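Contraction likewise corresponds to repeating a suffix in an `einsum` string. A sketch (arbitrary example components) forming the outer product and then the inner product that contracts the superscript with the first subscript:

```python
import numpy as np

# Contraction: setting a superscript equal to a subscript and summing reduces
# the order by two. From the outer product C^m_{nrs} = A^m_n B_{rs}, the
# inner product C^m_{mrs} = A^m_m B_{rs} is the trace of A times B.

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3))                   # A^m_n
B = rng.normal(size=(3, 3))                   # B_{rs}

C = np.einsum('mn,rs->mnrs', A, B)            # outer product, 4th order
inner = np.einsum('mmrs->rs', C)              # contract m with n: 2nd order

print(np.allclose(inner, np.trace(A) * B))
```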
Tests for tensor character. The direct way of testing whether a set of quantities form the components of a tensor is to see whether they obey the appropriate tensor transformation law when the coordinates are changed. There is also an indirect method however, two examples of which will now be given:
Theorem 1. Let $X^r$ be the components of an arbitrary contravariant vector. Let $A_r$ be another set of quantities. If $A_r X^r$ is an invariant, then $A_r$ form the components of a covariant vector.
Proof: Since $X^r$ is a tensor, it obeys the tensor transformation law. Invariance of $A_r X^r$ gives
$A_r X^r = A'_s X'^s = A'_s\,\dfrac{\partial x'^s}{\partial x^r}\,X^r$ , (22)
and so
$\left(A_r - \dfrac{\partial x'^s}{\partial x^r}\,A'_s\right) X^r = 0$ . (23)
Hence, since $X^r$ is an arbitrary vector,
$A_r = \dfrac{\partial x'^s}{\partial x^r}\,A'_s$ . QED (24)
As an extension of this theorem, it is easy to show that any set of functions of the coordinates, whose inner product with an arbitrary covariant or contravariant vector is a tensor, are
themselves the components of a tensor. For example, if $A^{rs} X_s$ is a tensor $B^r$, then $A^{rs}$ is
a second order contravariant tensor.
Theorem 2. If $a_{rs} X^r X^s$ is invariant, $X^r$ being an arbitrary contravariant vector and $a_{rs}$ being symmetric in all coordinate systems, then $a_{rs}$ are the components of a covariant tensor
of second order.
Proof: From our assumption about the invariance of $a_{rs} X^r X^s$,
$a_{rs} X^r X^s = a'_{mn} X'^m X'^n = a'_{mn}\,\dfrac{\partial x'^m}{\partial x^r}\,\dfrac{\partial x'^n}{\partial x^s}\,X^r X^s$ . (25)
Hence
$b_{mn} X^m X^n \equiv \left(a_{mn} - \dfrac{\partial x'^r}{\partial x^m}\,\dfrac{\partial x'^s}{\partial x^n}\,a'_{rs}\right) X^m X^n = 0$ . (26)
Since $X^m$ is arbitrary and the total coefficient of $X^m X^n$ is $b_{mn} + b_{nm}$, we deduce that $b_{mn} + b_{nm} = 0$, i.e.
$a_{mn} + a_{nm} = \dfrac{\partial x'^r}{\partial x^m}\,\dfrac{\partial x'^s}{\partial x^n}\,a'_{rs} + \dfrac{\partial x'^r}{\partial x^n}\,\dfrac{\partial x'^s}{\partial x^m}\,a'_{rs} = (a'_{rs} + a'_{sr})\,\dfrac{\partial x'^r}{\partial x^m}\,\dfrac{\partial x'^s}{\partial x^n}$ (27)
on interchanging the summation variables r and s in the second term. But $a_{mn} = a_{nm}$ in all coordinate systems, hence
$a_{mn} = \dfrac{\partial x'^r}{\partial x^m}\,\dfrac{\partial x'^s}{\partial x^n}\,a'_{rs}$ . QED (28)
The metric tensor
The Euclidean space. Consider first the familiar Euclidean space in three dimensions, i.e. a space in which one can define Cartesian coordinates x, y and z so that the distance $d\ell$ between two neighbouring points $(x, y, z)$ and $(x + dx, y + dy, z + dz)$ is given by
$d\ell^2 = (dx)^2 + (dy)^2 + (dz)^2$ . (29)
If we choose any other coordinates $x^1, x^2, x^3$ to identify points in this space, the original
coordinates will be functions of these new coordinates, and their differentials will be linear
combinations of the differentials of the new coordinates. Thus in terms of the latter
$d\ell^2 = a_{mn}\,dx^m dx^n$ , (30)
where the $a_{mn}$ will be functions of the $x^m$. (For example in spherical polar coordinates $x^1 = r$, $x^2 = \theta$, $x^3 = \phi$ we have $a_{11} = 1$, $a_{22} = r^2$, $a_{33} = r^2\sin^2\theta$, and all other a's are zero.)
We now show that $a_{mn}$ is a covariant tensor of second order. The proof goes as follows:
(a) $a_{mn}$ may be taken to be symmetric, since each product $dx^p dx^q$ occurs only in the combination $a_{pq} + a_{qp}$ on the RHS of (30).
(b) $d\ell^2 = a_{mn}\,dx^m dx^n$ is invariant, since the distance between two points does not depend
on the coordinates used to evaluate it.
(c) By keeping one point fixed and letting the second point vary in the neighbourhood of the
first, $dx^r$ may be considered an arbitrary contravariant vector.
Hence, using the theorem above, $a_{mn}$ is a covariant tensor of second order. It is called the
metric tensor for the Euclidean 3-space. A similar tensor obviously exists in the case of a two
dimensional Euclidean space.
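The spherical polar values $a_{11} = 1$, $a_{22} = r^2$, $a_{33} = r^2\sin^2\theta$ quoted above can be recovered numerically from $a_{mn} = \sum_i (\partial X^i/\partial x^m)(\partial X^i/\partial x^n)$, with $X^i$ the Cartesians, which follows from substituting the differentials into (29). A sketch using finite-difference derivatives (NumPy assumed; function names are ours):

```python
import numpy as np

# Metric coefficients a_{mn} of equation (30) for spherical polars
# (x¹, x², x³) = (r, θ, φ), via a_{mn} = Σ_i (∂X^i/∂x^m)(∂X^i/∂x^n),
# with the Jacobian computed by central finite differences.

def cartesian(q):
    r, th, ph = q
    return np.array([r * np.sin(th) * np.cos(ph),
                     r * np.sin(th) * np.sin(ph),
                     r * np.cos(th)])

def metric(q, h=1e-6):
    J = np.empty((3, 3))
    for m in range(3):
        dq = np.zeros(3)
        dq[m] = h
        J[:, m] = (cartesian(q + dq) - cartesian(q - dq)) / (2 * h)
    return J.T @ J                     # a_{mn} = (∂X^i/∂x^m)(∂X^i/∂x^n)

q = np.array([2.0, 0.8, 1.1])          # an example point (r, θ, φ)
a = metric(q)
expected = np.diag([1.0, q[0]**2, (q[0] * np.sin(q[1]))**2])
print(np.allclose(a, expected, atol=1e-6))
```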
Riemannian space. A manifold is said to be Riemannian if there exists within it a covariant tensor of the second order which is symmetric. This tensor is called the metric tensor and
normally denoted by $g_{mn}$. Its significance is that it can be used to define the analogue of
"distance" between points, and the lengths of vectors. We will assume that all manifolds that
we will be dealing with from now on are Riemannian.
Defn. The interval ds between the neighbouring points $x^r$ and $x^r + dx^r$ is given by
$ds^2 = g_{mn}\,dx^m dx^n$ . (31)
This is of course invariant. In the familiar Euclidean space, where $g_{mn}$ is just the $a_{mn}$ above, $ds^2 = d\ell^2 \geq 0$, being zero only when the two points coincide. In other cases however, e.g. in spacetime in relativity theory, $ds^2$ may take on negative values, so that $ds$ itself is not necessarily real. If $ds = 0$ for $dx^r$ not all zero, the displacement $dx^r$ is called a null
displacement. Note that there is no requirement that ds should necessarily have the physical dimensions of length.
The conjugate metric tensor. From the covariant metric tensor $g_{mn}$ we can construct a contravariant tensor $g^{mn}$ defined by
$g^{mn} g_{np} = \delta^m_p$ . (32)
To show that $g^{mn}$ is a tensor, we note that, for any contravariant vector $V^p$, $g^{mn} g_{np} V^p = \delta^m_p V^p = V^m$. This means that the inner product of $g^{mn}$ with the arbitrary covariant vector $g_{np} V^p$ is a tensor, $V^m$, and so we deduce that $g^{mn}$ is indeed a tensor of the type indicated. It is said to be conjugate to $g_{mn}$. It is easily shown that when the metric tensor is diagonal, i.e. when $g_{mn} = 0$ for $m \neq n$, the conjugate tensor is also diagonal, with each diagonal element satisfying $g^{nn} = 1/g_{nn}$ (no summation).
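Viewing $g_{mn}$ as a matrix, equation (32) says that $g^{mn}$ is its matrix inverse, and the reciprocal rule for diagonal metrics follows at once. A sketch for the spherical polar metric:

```python
import numpy as np

# The conjugate metric tensor, equation (32): g^{mn} g_{np} = δ^m_p.
# As matrices, g^{mn} is the inverse of g_{mn}; for a diagonal metric
# each diagonal element of the conjugate is the reciprocal.

r, theta = 2.0, 0.8
g = np.diag([1.0, r**2, (r * np.sin(theta))**2])   # g_{mn}, spherical polars

g_conj = np.linalg.inv(g)                          # g^{mn}

print(np.allclose(g_conj @ g, np.eye(3)))          # δ^m_p
print(np.allclose(np.diag(g_conj), 1 / np.diag(g)))
```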
The following theorem can be proved, but will just be quoted here: if g is the determinant of
the matrix $g_{mn}$ (i.e. choosing to write the components of the tensor $g_{mn}$ in the form of a matrix array), then
$g^{mn}\,\dfrac{\partial g_{mn}}{\partial x^r} = \dfrac{\partial}{\partial x^r}\ln g$ . (33)
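Equation (33) can be spot-checked numerically, e.g. for the spherical polar metric $g_{mn} = \mathrm{diag}(1, r^2, r^2\sin^2\theta)$, where $g = r^4\sin^2\theta$ and so $\partial(\ln g)/\partial r = 4/r$. A sketch with finite-difference derivatives:

```python
import numpy as np

# Check of equation (33), g^{mn} ∂g_{mn}/∂x^r = ∂(ln g)/∂x^r, for the
# spherical polar metric with x^r = r. Here det g = r⁴ sin²θ, so the
# right hand side is 4/r.

def g_of_r(r, theta=0.9):
    return np.diag([1.0, r**2, (r * np.sin(theta))**2])

r, h = 1.7, 1e-6
g = g_of_r(r)
dg_dr = (g_of_r(r + h) - g_of_r(r - h)) / (2 * h)       # ∂g_{mn}/∂r

lhs = np.einsum('mn,mn->', np.linalg.inv(g), dg_dr)     # g^{mn} ∂g_{mn}/∂r
rhs = (np.log(np.linalg.det(g_of_r(r + h)))
       - np.log(np.linalg.det(g_of_r(r - h)))) / (2 * h)

print(np.isclose(lhs, rhs), np.isclose(lhs, 4 / r))
```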
Raising and lowering suffices. Given a tensor $T^m_{rs}$, we may form another tensor
$T_{nrs} = g_{nm}\,T^m_{rs}$ . (34)
Note that
$g^{tn}\,T_{nrs} = g^{tn} g_{nm}\,T^m_{rs} = \delta^t_m\,T^m_{rs} = T^t_{rs}$ . (35)
The tensor $T_{nrs}$ may therefore be regarded as possessing a special relationship with the original tensor $T^m_{rs}$, in that either of them may be found from the other by the operation of
forming the inner product of the first with the metric tensor or its conjugate. For this reason, the same symbol is used (T in this instance), and we describe the above processes by saying that in (34) we have "lowered the suffix m", and that in (35) we have "raised the suffix n". The process of raising or lowering suffices can be extended to cover all the indices of a tensor.
For example we can raise one or both of the suffices in the tensor $T_{mn}$, generating the corresponding tensors $T^m{}_n$, $T_n{}^m$ and $T^{mn}$. Notice the distinction between the two forms of the mixed tensor, effected by leaving appropriate gaps in the set of indices. When the tensor is symmetric however this distinction disappears and we simply write either of these as $T^m_n$.
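Raising and lowering are inner products with $g^{mn}$ and $g_{mn}$, and equations (34) and (35) say the two operations are mutually inverse. A sketch with an arbitrary diagonal example metric:

```python
import numpy as np

# Raising and lowering suffices, equations (34)-(35): lowering with g_{nm}
# and then raising with the conjugate g^{tn} returns the original tensor.

rng = np.random.default_rng(4)
g = np.diag([1.0, 4.0, 9.0])                 # an example diagonal metric g_{mn}
g_inv = np.linalg.inv(g)                     # g^{mn}

T_up = rng.normal(size=(3, 3, 3))            # T^m_{rs}

T_low = np.einsum('nm,mrs->nrs', g, T_up)         # (34): T_{nrs} = g_{nm} T^m_{rs}
T_back = np.einsum('tn,nrs->trs', g_inv, T_low)   # (35): raise the suffix again

print(np.allclose(T_back, T_up))
```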
Flat space. A space or manifold is said to be flat if it is possible to find a coordinate system
for which the metric tensor $g_{mn}$ is diagonal, with all diagonal elements equal to $\pm 1$;
otherwise the space is said to be curved.
The familiar Euclidean space in two or three dimensions is obviously flat, the diagonal elements then being all equal to + 1. We normally assume that the ordinary three