Degrees of Freedom

Tests on Categorical Data

ROBERT H. RIFFENBURGH, in Statistics in Medicine (Second Edition), 2006

DEGREES OF FREEDOM FOR CONTINGENCY TESTS

Contingency test degrees of freedom (df) are given by the minimum number of cells that must be filled in order to calculate the remainder of the cell entries using the totals at the side and bottom (often termed margins). For example, only one cell of a 2 × 2 table with the sums at the side and bottom needs to be filled, and the others can be found by subtraction; it has 1 df. A 2 × 3 table has 2 df. In general, an r × c table has df = (r − 1)(c − 1).
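As a quick sketch (the helper name is ours, not the chapter's), the rule df = (r − 1)(c − 1) is easy to encode:

```python
def contingency_df(r, c):
    """Degrees of freedom for an r x c contingency table.

    Once (r - 1)(c - 1) cells are filled, the margins determine the
    remaining cells by subtraction, so df = (r - 1) * (c - 1).
    """
    return (r - 1) * (c - 1)

print(contingency_df(2, 2))  # 1: one free cell in a 2 x 2 table
print(contingency_df(2, 3))  # 2
```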

URL: https://www.sciencedirect.com/science/article/pii/B9780120887705500551

Analysis of Variance

Sheldon M. Ross, in Introductory Statistics (Third Edition), 2010

A Remark on the Degrees of Freedom

The numerator degrees of freedom of the F random variable are determined by the numerator estimator nS̄². Since S̄² is the sample variance from a sample of size m, it follows that it has m − 1 degrees of freedom. Similarly, the denominator estimator is based on the statistic Σᵢ₌₁ᵐ Sᵢ². Since each of the sample variances Sᵢ² is based on a sample of size n, it follows that each has n − 1 degrees of freedom. Summing the m sample variances then results in a statistic with m(n − 1) degrees of freedom.
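The two df values can be sketched as a small helper (illustrative only; the name is ours, not Ross's):

```python
def f_degrees_of_freedom(m, n):
    """df of the one-way ANOVA F statistic with m samples of size n each.

    Numerator: the sample variance of the m sample means has m - 1 df.
    Denominator: summing m sample variances, each with n - 1 df,
    gives m * (n - 1) df.
    """
    return m - 1, m * (n - 1)

print(f_degrees_of_freedom(3, 10))  # (2, 27)
```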

URL: https://www.sciencedirect.com/science/article/pii/B9780123743886000119

Distributions

R.H. Riffenburgh, in Statistics in Medicine (Third Edition), 2012

Degrees of Freedom

Degrees of freedom, often abbreviated df, is a concept that may be thought of as that part of the sample size n not otherwise allocated. The concept relates to a large number of aspects of statistical methods; thus, df may be explained in a number of ways. Some of these aspects are more difficult than others, and even experienced users find some of them challenging. Do not expect to understand df fully at once. Comprehension usually starts at a rudimentary level and grows more sophisticated with use. df is related to the sample number, usually to the number of observations for continuous data methods and to the number of categories for categorical data methods. It will be enough for a start to conceive of df as a sample number adjusted for other sources of information, more specifically, the number of unrestricted and independent data entering into a calculated statistic.

In the t distribution, we might think informally of n "pieces" of information available. Consider the form of t in which we use the sample mean m. Once we know m, we have n − 1 pieces of information remaining that can be selected by the sampling procedure; when we have obtained n − 1 observations, the nth one may be found by subtraction. (This is, of course, not how the data are obtained, but rather a more abstract mathematical allocation of information.) Because df is tied to n, the sample values converge to the population values as both n and df increase. t converges on the normal as df increases. Figure 4.5 shows the standard normal curve with two t curves superposed, one with 10 df and the other with 5 df. The fewer the df, the less accurate an estimate s is of σ, and thus the greater is the standard deviation of t (the "fatter" is the t curve).

URL: https://www.sciencedirect.com/science/article/pii/B9780123848642000044

Phenomenological Cost and Penalty Interpretation of the Lagrange Formalism in Physics

Adam Moroz, in The Common Extremalities in Biology and Physics (Second Edition), 2012

5.1.2 Mechanical Degrees of Freedom

Let us first look at the differences and similarities between mechanical (physical) and biological degrees of freedom and describe the approaches to each.

The mechanical degrees of freedom of motion are the simplest degrees of freedom of physical motion, and they are formalized in a very simple way. First among them are the coordinates of space and time. It should also be noted that physics has quite a strict definition of degrees of freedom. However, one should bear in mind the conceptual aspect of this definition, which leads to a less rigid understanding of a degree of freedom, since not all degrees of motion in nature, particularly in biology, obey the strict definition.

Because it is interesting to bring the biological point of view to bear on physical phenomena, let us note that biological degrees of freedom are characterized by extreme variety and hierarchy. One can illustrate this with a large number of biological quantities, each representing some degree of freedom of motion of biological species, that are rather difficult or even impossible to formalize: for instance, the overall level of the immune response of the body to an infection; the quantities that describe the intensity of breathing; the composite factor spaces representing linear and nonlinear combinations of parameters measured in biochemistry; or the extent of enzyme isomerization. Many more examples could be given. Moreover, as mentioned above, biological and, particularly, biosocial species create new degrees of freedom of motion by which they overtake one another in competition. For example, manual labor is one such degree of freedom with a biological origin. A successfully developed degree of freedom (in labor) may determine total victory in terms of global competition.

In physics, a degree of freedom can be treated more broadly as any qualitatively distinct generalized direction of motion with some range of inherent changes that can be measured quantitatively (i.e., a direction that can, in principle, be scaled). Such a direction is qualitatively irreducible to other quantities and can be related to them only functionally.

In contrast to biological degrees of freedom, physical (and mechanical) degrees of freedom look more standard, and they are not fused; they can always be measured, and their values can, for example, be negative. Whereas the population of a biospecies, or the population density in biological kinetics, takes only positive (indeed integer) values, mechanical quantities cover the whole range of real values or can even be imaginary. This certainly contributes to the difference between the description of degrees of freedom in classical mechanics and that in biology.

The previous considerations have dealt with the parameters and variable values of biological degrees of freedom. The parameters measured for physical systems likewise mirror the character of changes in the physical world. For a better understanding of biological and physical phenomena, it is also important to compare the physical and biological descriptions and point out the fundamental differences between biological and physical degrees of freedom. Specifically, in this study we bring the notion of penalty/cost into the consideration. This section conceptually discusses penalty and cost in economics, as well as the cost-and-penalty perspective from biology to mechanics and physics.

In discussing the optimal control examples in biochemistry and biology in Sections 2-4, we pointed out that the cost of control or regulation is the metabolic cost for a biological system to achieve its goal. This metabolic cost can be linked to the metabolic expenses a biosystem incurs to keep its homeostasis at a certain level, e.g., of temperature, blood pressure, and so on. These costs, or losses, can be treated as the metabolic penalty the biosystem pays for staying in an optimal state at another level (homeostatic) of competition (social, cenotic). We thus have in mind a self-regulative concept: a penalty or cost for the system's deviation from an optimal state. The penalty/cost aspects of mechanical and physical motion will be discussed further below.

URL: https://www.sciencedirect.com/science/article/pii/B9780123851871000058

Stoichiometry of Oxide Crystals

Satoshi Uda, in Handbook of Crystal Growth (Second Edition), 2015

4.2.3.1 Degrees of Freedom in a Crystal Site

It is important to determine the element occupancy of each crystal site. These elements include constituent cations, impurity ions, antisite defects, and vacancies. The possible element occupancy at a site is examined by considering the associated degrees of freedom, and we can explain the degrees of freedom of a crystal site by employing LiNbO3 as an example. Here the vacancy is a defect that forms in order to compensate for the charge imbalance due to the difference between the valences of the impurity ions and that of the host ion present in a site. The quantity of vacancies will be on the order of 10⁻⁴ to 10⁻² mol or more, depending on the population of impurity ions or antisite defects. In the following discussion, we assume that the oxygen sites are saturated with oxygen and that no oxygen vacancies are present even when LiNbO3 is exposed to an oxygen-reduced atmosphere during the growth or annealing processes. Such a reduced atmosphere is known to generate a color center in the crystal, with a concurrent change from colorless to yellow or orange, but the accompanying extent of oxygen deficiency is smaller than the degree of oxygen vacancy required for charge compensation by at least two orders of magnitude. Thus, the possible presence of elements at each cation site will be discussed, assuming that oxygen saturation is maintained.

The degrees of freedom are obtained by subtracting the number of constraints from the number of parameters. In this case, the number of parameters at a site is the number of elements that can occupy it, and the following three constraints apply:

1. Mass conservation holds at each site. That is, the sum of the mole fractions of the constituent elements j (where j = 1 to C) is unity, as in Eqn (4.36):

(4.36) X1 + X2 + … + XC = 1

2. If an element is present at multiple sites in a crystal, its chemical potentials at those sites are equal; thus

(4.37) μ^j_site1 = μ^j_site2 = …

3. The vacancy population is calculated in such a way that overall charge neutrality is maintained in the bulk crystal.

These three constraints are necessary conditions, although additional restrictions may be added to decrease the degree of freedom at a given site.
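The counting rule above (degrees of freedom = parameters − constraints) can be sketched as follows; the function name and default are ours, not the chapter's:

```python
def site_degrees_of_freedom(n_elements, n_constraints=3):
    """Degrees of freedom of a crystal site: parameters minus constraints.

    n_elements: number of species that may occupy the site (host cation,
        impurity ions, antisite defects, vacancies) -- the parameters.
    n_constraints: mass conservation, equality of chemical potentials,
        and bulk charge neutrality give three constraints by default;
        additional restrictions would decrease the df further.
    """
    return n_elements - n_constraints

# e.g. a site open to 5 species under the 3 necessary constraints:
print(site_degrees_of_freedom(5))  # 2
```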

URL: https://www.sciencedirect.com/science/article/pii/B9780444563699000046

Quantum Entanglement and Information Processing

Michel H. Devoret, John M. Martinis, in Les Houches, 2004

2.2 Ultra-low noise: low temperature

The degrees of freedom of the quantum integrated circuit must be cooled to temperatures where the typical energy kT of thermal fluctuations is much less than the energy quantum ħω01 associated with the transition between the states |qubit = 0⟩ and |qubit = 1⟩. For reasons which will become clear in subsequent sections, this transition frequency for superconducting qubits is in the 5-20 GHz range, and therefore the operating temperature T must be around 20 mK (recall that 1 K corresponds to about 20 GHz). These temperatures may be readily obtained by cooling the chip with a dilution refrigerator. Perhaps more importantly, though, the "electromagnetic temperature" of the wires of the control and readout ports connected to the chip must also be cooled to these low temperatures, which requires careful electromagnetic filtering. Note that electromagnetic damping mechanisms are usually stronger at low temperatures than those originating from electron-phonon coupling. The techniques [3] and requirements [4] for ultra-low-noise filtering have been known for about 20 years. From the requirements kT ≪ ħω01 and ħω01 ≪ Δ, where Δ is the energy gap of the superconducting material, one must use superconducting materials with a transition temperature greater than about 1 K.
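As a quick numerical check of these orders of magnitude (our own back-of-envelope sketch, with CODATA constants):

```python
# CODATA values for the Planck and Boltzmann constants
h = 6.62607015e-34   # J s
k = 1.380649e-23     # J/K

# 1 K corresponds to k/h ~ 20.8 GHz, matching the rule of thumb above.
print(k / h / 1e9)   # GHz per kelvin

# For a 10 GHz qubit transition at an operating temperature of 20 mK:
f01, T = 10e9, 0.020
ratio = h * f01 / (k * T)
print(ratio)         # ~24, so the thermal energy is well below the quantum
```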

URL: https://www.sciencedirect.com/science/article/pii/S0924809903800367

Supersymmetric Quantum Mechanics

J.-W. van Holten, in Encyclopedia of Mathematical Physics, 2006

Anticommuting Variables

Fermionic degrees of freedom can be described in a pseudoclassical formulation by anticommuting variables ξ taking values in an infinite-dimensional Grassmann algebra: any two such variables ξ, ξ′ satisfy

[21] ξξ′ + ξ′ξ = 0

With an anticommuting variable ξ, we can associate a derivative operator ∂/∂ξ, which is an element of another Grassmann algebra such that

[22] {∂/∂ξ, ξ} = (∂/∂ξ)ξ + ξ(∂/∂ξ) = 1,  ∂²/∂ξ² = 0

This extends the original Grassmann algebra to a Clifford algebra. Integration with respect to an anticommuting variable is defined in the same way:

[23] ∫dξ ξ = 1,  ∫dξ 1 = 0

that is, integration is the same as differentiation for anticommuting variables. With these definitions, we can represent the fermionic raising and lowering operators in terms of anticommuting variables as

[24] f† → ξ,  f → ∂/∂ξ

and the states by

[25] |0⟩ → 1,  |1⟩ → ξ

Then an arbitrary state takes the form of a linear superposition

[26] |Ψ⟩ = ψ0|0⟩ + ψ1|1⟩  →  Ψ(ξ) = ψ0 + ψ1ξ

and the standard positive-semidefinite inner product on the state space is represented on the wave functions by the double integral

[27] ⟨Φ|Ψ⟩ = ∫dξ dξ̄ e^(−ξ̄ξ) Φ*(ξ̄) Ψ(ξ) = φ0*ψ0 + φ1*ψ1

By construction, f† = ξ and f = ∂/∂ξ are conjugates with respect to this inner product:

[28] ∫dξ dξ̄ e^(−ξ̄ξ) Φ*(ξ̄) ξΨ(ξ) = ∫dξ dξ̄ e^(−ξ̄ξ) (∂Φ/∂ξ)*(ξ̄) Ψ(ξ)

The real (self-conjugate) forms of the fermion operators are, therefore, defined by

[29] σ1 = ξ + ∂/∂ξ,  σ2 = −i(ξ − ∂/∂ξ)

which satisfy the Pauli–Dirac anticommutation relations

[30] σᵢσⱼ + σⱼσᵢ = 2δᵢⱼ

By taking the product, we obtain

[31] σ3 = iσ1σ2 = 1 − 2ξ∂/∂ξ = 1 − 2N_f,  N_f = ½(1 − σ3)

Thus, we may think of the wave functions as two-component spinors, the components being labeled either by the eigenvalues of the spin operator σ3, or equivalently by the fermion number N_f, which is a projection operator on the states with negative spin.
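In this two-dimensional representation the operators are just 2×2 matrices, which makes the algebra easy to verify numerically. A sketch (numpy assumed; the signs of σ2 and σ3 are chosen as one consistent convention satisfying [30] and [31]):

```python
import numpy as np

# Basis {1, xi}: the wave function psi0 + psi1*xi is the vector (psi0, psi1).
xi = np.array([[0, 0], [1, 0]], dtype=complex)  # multiplication by xi (f-dagger)
d  = np.array([[0, 1], [0, 0]], dtype=complex)  # derivative d/dxi (f)

# [22]: {d/dxi, xi} = 1 and (d/dxi)^2 = 0
assert np.allclose(d @ xi + xi @ d, np.eye(2))
assert np.allclose(d @ d, np.zeros((2, 2)))

sigma1 = xi + d
sigma2 = -1j * (xi - d)
sigma3 = 1j * sigma1 @ sigma2
Nf = xi @ d                      # fermion number, eigenvalues 0 and 1

# [30]: Pauli-Dirac anticommutation relations sigma_i sigma_j + sigma_j sigma_i = 2 delta_ij
for a in (sigma1, sigma2, sigma3):
    for b in (sigma1, sigma2, sigma3):
        expected = 2 * np.eye(2) if a is b else np.zeros((2, 2))
        assert np.allclose(a @ b + b @ a, expected)

# [31]: sigma3 = 1 - 2*Nf
assert np.allclose(sigma3, np.eye(2) - 2 * Nf)
print("all identities verified")
```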

The action of the Hamiltonian on a wave function Ψ(ξ) is represented by the integral

[32] [HΨ](ξ) = ∫dξ′ dξ̄ e^(−ξ̄(ξ′−ξ)) H(ξ, ξ̄) Ψ(ξ′)

where H(ξ,ξ̄) is the ordered symbol of the Hamiltonian:

[33] H(ξ, ξ̄) = ε_f + ωξξ̄

This expression is to be considered as the classical Hamiltonian of the system. In particular, the action

[34] S = ∫_{t1}^{t2} dt (iξ̄ξ̇ − H(ξ, ξ̄)) = ∫_{t1}^{t2} dt (iξ̄ξ̇ + ωξ̄ξ) − ε_f(t2 − t1)

provides the exponent of the integrand for the path-integral representation of the evolution operator in the quantum theory. The proof is not given here; the reader is referred to the literature. In passing, note that as the anticommuting variables (ξ, ξ̄) are taken to be dimensionless, one should actually identify the momentum conjugate to ξ with π = iħξ̄; in the quantum theory, this is replaced by the operator −iħ∂/∂ξ.

URL: https://www.sciencedirect.com/science/article/pii/B0125126662001206

Multifactor Tests on Means of Continuous Data

ROBERT H. RIFFENBURGH, in Statistics in Medicine (Second Edition), 2006

18.5. THREE- AND HIGHER-FACTOR ANALYSIS OF VARIANCE

ORIENTATION BY EXAMPLE: COOLING KIDNEYS EXTENDED TO ALL FACTORS

In the Additional Example of Section 18.2, the kidney-cooling comparison was addressed using a two-way ANOVA by treating readings on the factor of depth of measurement within the kidney as replications. Because one treatment cooled from the outside and the other from the inside, depth is very relevant. Now we will reexamine the data using the depth measure in a three-way ANOVA. Three-way ANOVA is conceptually a simple extension of two-way ANOVA. We have three main-effect tests rather than two, three two-factor interactions rather than one, and the addition of a three-factor interaction. The main-effect and two-factor interaction results are interpreted one by one in the same way as they are for two-way ANOVA. What is difficult about three-way ANOVA is the tabulation of means and the interpretation of the three-factor interaction. Let us provide the example and discuss these issues in context.

The SS, df, and mean squares for treatment, time, and their interaction are identical with those of a two-way analysis. We create a means table for treatment by depth, from which we calculate the SS for the depth main effect and the treatment-by-depth interaction following the patterns in Table 18.2. The number of df for depth is the number of depths less one, or 2 − 1 = 1; the number of df for the treatment-by-depth interaction is df for treatment × df for depth, or 1 × 1 = 1. We create a means table for time by depth, from which we calculate the SS for the time-by-depth interaction following the two-factor interaction pattern in Table 18.2; its df = 1 × 3 = 3. We create a more extended means table that gives two factors along one axis and the third factor along the other axis, for example, each depth-with-treatment pairing along the top and time down the side, from which we calculate the SS for the treatment-by-depth-by-time interaction as the product of the number of elements in each main factor (2 × 2 × 4) times the raw SS of the treatment × time × depth means, minus A, minus the SS for all main effects and second-order interactions. The degrees of freedom for the third-order interaction is the product of the df for the main effects, that is, 1 × 1 × 3 = 3.

The reader is not carried through the actual computations, because no one is likely to perform three-factor ANOVA manually. The issue is to understand how the calculations in a software-generated table came about. Such a table is as follows:

Source                     Sums of squares   df   Mean squares        F   Critical F        p
Treatments                      4063.8037     1      4063.8037   455.30         3.96   <0.001
Time                            4461.8704     3      1487.2901   166.63         2.72   <0.001
Depth                            145.0417     1       145.0417    16.25         3.96   <0.001
Treatment × time                1826.6405     3       608.8224    68.22         2.72   <0.001
Treatment × depth                201.8400     1       201.8400    22.61         3.96   <0.001
Time × depth                      43.0092     3        14.3364     1.61         2.72    0.194
Treatment × time × depth          75.1142     3        25.0381     2.81         2.72    0.045
Error (residual)                 714.0366    80         8.9255
Total                          11531.3562    95

We note that the MSE has been reduced to two-thirds of its two-factor size because of the removal of the variability due to the third factor. This, of course, increases all the F values, making the tests on the original factors more sensitive. Time, treatment, and the time-by-treatment interaction are all highly significant as before, with the same interpretations. Depth is highly significant, as we anticipated. The treatment-by-depth interaction is also highly significant; by examining a means table, we could see that the depth effect is smaller in the cooled-saline treatment than in the ice treatment. The time-by-depth interaction is not significant, indicating that the pattern of reduction in mean temperature over time is only slightly different for the two depths. Finally, the third-order interaction is just barely significant. The interpretation of third- and higher-order interactions is usually not obvious and takes thought and understanding of the processes. In this case, it might best be explained by saying that the treatment-by-depth interaction changes over time.
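The df bookkeeping behind such a table can be reproduced with a short sketch (the function and key names are ours). With 96 readings and 15 model df, the residual df is 95 − 15 = 80:

```python
from itertools import combinations
from math import prod

def factorial_anova_dfs(levels, n_obs):
    """df for all main effects and interactions of a fully crossed ANOVA.

    levels maps each factor name to its number of levels; a main effect
    has (levels - 1) df, and an interaction has the product of the
    main-effect df of its factors.
    """
    dfs = {}
    names = list(levels)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            dfs[" x ".join(combo)] = prod(levels[f] - 1 for f in combo)
    dfs["Error"] = n_obs - 1 - sum(dfs.values())
    dfs["Total"] = n_obs - 1
    return dfs

# Kidney-cooling example: 2 treatments, 4 times, 2 depths, 96 readings
print(factorial_anova_dfs({"treatment": 2, "time": 4, "depth": 2}, 96))
```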

URL: https://www.sciencedirect.com/science/article/pii/B9780120887705500587

Output

William R. Sherman, Alan B. Craig, in Understanding Virtual Reality (Second Edition), 2019

Degrees of Freedom

There are six degrees of freedom in unconstrained, free movement. Thus, ultimately, the number of DOFs in a haptic display can vary from one to six. (In engineering parlance, each movement actuator is referred to as 1 DOF, so devices can be considered to have seven or more DOFs, but when all the movement constraints are applied, the end result will be six or fewer.) A common method of force display provides 3 degrees of movement in the three spatial dimensions. A 3-DOF device allows users to probe a space as they would with a stick or a single fingertip (the "tool" through which they "touch" the world). 2-DOF movement restricted to a line can be useful in systems where a device is inserted into a tube and twisted. A mouse or joystick equipped with force display can provide two-dimensional movement. Movements in space combined with rotational feedback provide up to 6 degrees of movement, as seen with the JPL/Salisbury Force Reflecting Hand Controller in Fig. 5-63 [Bejczy and Salisbury 1983].

Figure 5-63. One example of what a multi-DOF force feedback device looks like is the JPL/Salisbury Force Reflecting Hand Controller (FRHC). It was designed by Kenneth Salisbury and John Hill in the mid-1970s at the Stanford Research Institute in Menlo Park, California, under contract from NASA JPL. It is a 6-DOF force-reflecting master device that has found use in telerobotic and VR research.

Photo courtesy of NASA and Ken Salisbury.

URL: https://www.sciencedirect.com/science/article/pii/B9780128009659000052

KINETIC THEORY OF GASES

E.M. LIFSHITZ, L.P. PITAEVSKII, in Physical Kinetics, 1981

§13 Transport phenomena in a gas in an external field

The rotational degrees of freedom of molecules provide the mechanism whereby an external magnetic or electric field can affect transport phenomena in a gas. The effect is of the same nature in the magnetic and electric cases; we shall first discuss a gas in a magnetic field.

A rotating molecule has in general a magnetic moment, whose average value (in the quantum-mechanical sense) will be denoted by μ. The magnetic field will be assumed so weak that μB is small in comparison with the intervals in the fine structure of molecular levels. We can then neglect the influence of the field on the state of the molecule, so that the magnetic moment is calculated for the unperturbed state. For fairly high temperatures, the case we shall consider, μB is small in comparison with T also; this enables us to neglect the influence of the field on the equilibrium distribution function of the gas molecules.

The magnetic moment is parallel to the rotational angular momentum M of the molecule, and may be written

(13.1) μ = γ M .

Classical rotation of the molecule corresponds to large rotational quantum numbers; we can then neglect in M the difference between the total angular momentum (including spin) and the rotational angular momentum. The value of the constant coefficient γ depends on the nature of the molecule and the nature of its magnetic moment. For example, with a diatomic molecule having non-zero spin S,

(13.2) γ ≈ (2σ/M)μ_B,

where μ_B is the Bohr magneton, and the number σ = J − K is the difference between the quantum numbers J of the total angular momentum and K of the rotational angular momentum (σ takes the values S, S − 1, …, −S); in the denominator, the difference between J and K is not significant: M ≈ ħJ ≈ ħK. In (13.2) it is assumed that the spin-axis interaction in the molecule is small in comparison with the intervals in the rotational structure of the levels (Hund's case b).

In a magnetic field B, the molecule is subjected to a torque μ × B. The vector M is then no longer constant during the "free" motion of the molecule, but varies according to

(13.3) dM/dt = μ × B = −γB × M;

the vector M precesses about the direction of the field with angular velocity −γB. The left-hand side of the transport equation thus has an added term Ṁ·∂f/∂M, and the equation becomes

(13.4) ∂f/∂t + v·∂f/∂r + γ(M × B)·∂f/∂M = C(f).

The variables Γ on which the distribution function depends must also include the discrete variable σ, which determines the value of the magnetic moment, if there is such a variable, as in (13.2).

In problems of thermal conduction and viscosity, we again take a distribution close to the equilibrium one, and express it as

(13.5) f = f0(1 + χ/T).

We shall first show that a term in ∂f0/∂M does not occur in the transport equation. Since f0 depends only on the energy ∈(Γ) of the molecule, and ∂∈/∂M is equal to the angular velocity Ω, we have

(13.6) γ(M × B)·∂f0/∂M = γ(M × B·Ω) ∂f0/∂∈.

For molecules of the rotator and spherical-top types, M and Ω are parallel, and the expression (13.6) is zero identically. In other cases, it becomes zero after averaging over the rapidly varying phases, the necessity for which has been explained in §1. When molecules of the symmetrical-top or asymmetrical-top type rotate, there is a rapid variation both of the direction of the axes of the molecule itself and of that of its angular velocity Ω. After the averaging mentioned, Ω can retain only the component Ω_M along the constant vector M, and for this component the product M × B·Ω_M = 0.

The remaining terms in the transport equation are transformed in the same way as in §7 or §8. For instance, in the thermal conduction problem we find the equation

(13.7) [(∈(Γ) − c_pT)/T] v·∇T = −γ(M × B)·∂χ/∂M + I(χ).

The solution of this equation is again to be sought in the form χ = g·∇T, but there are now three vectors v, M, B, not two, available to construct the vector function g(Γ). The external field creates a distinctive direction in the gas. The process of thermal conduction therefore becomes anisotropic, and the scalar coefficient κ has to be replaced by a thermal conductivity tensor κ_αβ, which determines the heat flux by

(13.8) q_α = −κ_αβ ∂T/∂x_β.

The tensor κ_αβ is calculated from the distribution function as the integral

(13.9) κ_αβ = (1/T) ∫ f0 v_α g_β dΓ;

cf. (7.5).

The general form of a tensor of rank two depending on the vector B is

(13.10) κ_αβ = κδ_αβ + κ1 b_α b_β + κ2 e_αβγ b_γ,

where b = B/B, e_αβγ is the antisymmetric unit tensor, and κ, κ1, κ2 are scalars depending on the field strength B. The tensor (13.10) obviously has the property

(13.11) κ_αβ(B) = κ_βα(−B).

The expression (13.10) corresponds to the heat flux

(13.12) q = −κ∇T − κ1 b(b·∇T) − κ2 ∇T × b.

The last term is what is called an odd effect, changing sign with the field.
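A small numerical sketch (our own, numpy assumed) confirms that the form (13.10) satisfies the Onsager property (13.11) for any direction b and any coefficients κ, κ1, κ2:

```python
import numpy as np

def levi_civita():
    """Antisymmetric unit tensor e_{abg}."""
    e = np.zeros((3, 3, 3))
    for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        e[a, b, c], e[a, c, b] = 1.0, -1.0
    return e

def kappa_tensor(b, k, k1, k2):
    """Thermal conductivity tensor (13.10): k d_ab + k1 b_a b_b + k2 e_abg b_g."""
    return (k * np.eye(3)
            + k1 * np.outer(b, b)
            + k2 * np.einsum("abg,g->ab", levi_civita(), b))

b = np.array([0.36, 0.48, 0.80])          # unit vector B/B
K = kappa_tensor(b, 1.0, 0.5, 0.3)

# Property (13.11): kappa_ab(B) = kappa_ba(-B)
assert np.allclose(K.T, kappa_tensor(-b, 1.0, 0.5, 0.3))

# The k2 part is the odd effect: antisymmetric in (a, b) and odd in the
# field, giving the term -k2 grad(T) x b in the heat flux (13.12).
print("Onsager symmetry verified")
```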

The integral term I(χ) on the right of (13.7) is given by (6.5). The integrand contains the function f0, which is proportional to the gas density N. Separating this factor and dividing both sides of the equation by it, we find that N appears only in the combinations B/N with the field and ∇T/N with the temperature gradient. It is therefore clear that the function g (and hence χ = g·∇T) will depend on the parameters N and B only through the ratio B/N; the integrals (13.9) will also depend only on this quantity, and therefore so will the coefficients κ, κ1, κ2 in (13.12). The density N is proportional (at a given temperature) to the gas pressure P. Thus the thermal conductivity of a gas in a magnetic field depends on the field and the pressure only through the ratio B/P.

When B increases, the first term on the right of (13.7) increases, but the second term is unchanged. It is therefore clear that as B → ∞ the solution of the equation must tend to a function depending only on the direction (not the magnitude) of the field, and this function must make the term M × B·∂χ/∂M in the equation identically zero; accordingly, the coefficients κ, κ1, κ2 tend to constant limits independent of B as B → ∞.

The treatment of the viscosity of a gas in a magnetic field is similar. The corresponding transport equation is

(13.13) (m v_α v_β − [∈(Γ)/c_v] δ_αβ) V_αβ = I(χ) − γ(M × B)·∂χ/∂M;

cf. (6.19). The solution is to be sought in the form χ = g_αβ V_αβ. Instead of the two viscosity coefficients η and ζ, we must now use a tensor η_αβγδ of rank four which determines the viscous stress tensor

(13.14) σ′_αβ = η_αβγδ V_γδ;

by definition, the tensor η_αβγδ is symmetric in the pairs of suffixes α, β and γ, δ. With the known function χ, its components are calculated as

(13.15) η_αβγδ = m ∫ v_α v_β f0 g_γδ dΓ.

The viscosity tensor thus found will necessarily satisfy the condition

(13.16) η_αβγδ(B) = η_γδαβ(−B),

which expresses the symmetry of the kinetic coefficients.

With the vector b = B/B (and the unit tensors δ_αβ and e_αβγ), we can construct the following independent tensor combinations having the symmetry properties of η_αβγδ:

(13.17)
(1) δ_αγ δ_βδ + δ_αδ δ_βγ,
(2) δ_αβ δ_γδ,
(3) δ_αγ b_β b_δ + δ_βγ b_α b_δ + δ_αδ b_β b_γ + δ_βδ b_α b_γ,
(4) δ_αβ b_γ b_δ + δ_γδ b_α b_β,
(5) b_α b_β b_γ b_δ,
(6) b_αγ δ_βδ + b_βγ δ_αδ + b_αδ δ_βγ + b_βδ δ_αγ,
(7) b_αγ b_β b_δ + b_βγ b_α b_δ + b_αδ b_β b_γ + b_βδ b_α b_γ,

where b_αβ = −b_βα = e_αβγ b_γ. In all these combinations except (4), the property (13.16) follows automatically from the symmetry with respect to the pairs of suffixes α, β and γ, δ; in (4), the two terms are combined in order to satisfy the condition (13.16).

In accordance with the number of tensors (13.17), a gas in a magnetic field in general has seven independent viscosity coefficients. These may be defined as the coefficients in the following expression for the viscous stress tensor:

(13.18) σ′_αβ = 2η(V_αβ − ⅓δ_αβ div V) + ζδ_αβ div V
  + η1(2V_αβ − δ_αβ div V + δ_αβ V_γδ b_γ b_δ − 2V_αγ b_γ b_β − 2V_βγ b_γ b_α + b_α b_β div V + b_α b_β V_γδ b_γ b_δ)
  + 2η2(V_αγ b_γ b_β + V_βγ b_γ b_α − 2b_α b_β V_γδ b_γ b_δ)
  + η3(V_αγ b_βγ + V_βγ b_αγ − V_γδ b_αγ b_β b_δ − V_γδ b_βγ b_α b_δ)
  + 2η4(V_γδ b_αγ b_β b_δ + V_γδ b_βγ b_α b_δ)
  + ζ1(δ_αβ V_γδ b_γ b_δ + b_α b_β div V);

V_αβ is defined in (6.12). This expression is so constructed that η, η1, …, η4 are coefficients of tensors which give zero on contraction with respect to the suffixes α, β; ζ and ζ1 are coefficients of tensors with non-zero trace, and may be called second viscosity coefficients. Note that they contain not only the scalar div V but also V_γδ b_γ b_δ. The first two terms in (13.18) correspond to the usual expression for the stress tensor, so that η and ζ are the ordinary viscosity coefficients.

The tensors καβ and ηαβγδ must be true tensors, since they satisfy the condition of symmetry under inversion. The abandonment of this condition (for a gas of stereoisomeric material) would therefore not lead to the presence of any new terms.

Such abandonment would, however, bring about new effects, with a heat flux q (V) due to the velocity gradients and viscous stresses σ′(T) due to the temperature gradient. These cross-effects are described by the formulae

(13.19) q_γ^(V) = c_γ,αβ V_αβ,  σ′_αβ^(T) = a_αβ,γ ∂T/∂x_γ,

where c_γ,αβ and a_αβ,γ are tensors of rank three symmetric in the pair of suffixes separated by the comma. With ẋ_a and X_a chosen as in §9, the kinetic coefficients γ_ab and γ_ba are Tc_γ,αβ and T²a_αβ,γ. Onsager's principle thus shows that in the presence of a magnetic field we must have

(13.20) T a_αβ,γ(B) = c_γ,αβ(−B).

The general form of such tensors is

(13.21) a_αβ,γ = a1 b_α b_β b_γ + a2 b_γ δ_αβ + a3(b_α δ_βγ + b_β δ_αγ) + a4(b_αγ b_β + b_βγ b_α).

All the terms here are pseudotensors, and so the relations (13.19) with these coefficients are not invariant under inversion.

Let us now briefly consider transport phenomena in a gas in an electric field. We take a gas consisting of polar molecules (i.e. having a dipole moment d) of the symmetrical-top type. In an electric field, a polar molecule is acted on by a torque d × E, so that the transport equation contains a term

Ṁ·∂f/∂M = (d × E)·∂f/∂M.

The direction of d is along the axis of the molecule and is unrelated to that of the rotational angular momentum M. However, as a result of averaging with respect to the rapid precession of the top's axis about the direction of the constant vector M, there remains in the above term only the component of d along M, and it becomes

(13.22) γ(M × E)·∂f/∂M,

where γ = σd/M; the variable σ (the cosine of the angle between d and M) now takes a continuous series of values from −1 to +1. The expression (13.22) differs from the corresponding term in the magnetic case only in that B is replaced by E. Thus all the above transport equations and the conclusions drawn from them remain valid.

There is, however, a difference arising from the fact that the electric field E is a true vector, not a pseudovector, and is unaffected by time reversal. For this reason, Onsager's principle for the thermal conductivity and viscosity tensors is here expressed by

(13.23) κ_αβ(E) = κ_βα(E),  η_αβγδ(E) = η_γδαβ(E),

instead of (13.11) and (13.16). Correspondingly, κ2 ≡ 0 and η3 = η4 ≡ 0 in (13.10) and (13.18) (where now b = E/E). On the other hand, cross-effects are possible not only in a stereoisomeric gas, for which (13.21) is fully valid, but also in a gas of non-stereoisomeric molecules: the expression (13.21) with a4 ≡ 0 is then a true tensor.

URL: https://www.sciencedirect.com/science/article/pii/B9780080570495500060