Singular value

Square roots of the eigenvalues of the self-adjoint operator $T^{*}T$

In mathematics, in particular functional analysis, the singular values (or s-numbers) of a compact operator $T : X \to Y$ acting between Hilbert spaces $X$ and $Y$ are the square roots of the (necessarily non-negative) eigenvalues of the self-adjoint operator $T^{*}T$ (where $T^{*}$ denotes the adjoint of $T$).

The singular values are non-negative real numbers, usually listed in decreasing order ($\sigma_1(T), \sigma_2(T), \ldots$). The largest singular value $\sigma_1(T)$ is equal to the operator norm of $T$ (see Min-max theorem).
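
As a concrete illustration, consider the example matrix

    A = \begin{pmatrix} 3 & 0 \\ 4 & 5 \end{pmatrix}, \qquad A^{*}A = \begin{pmatrix} 25 & 20 \\ 20 & 25 \end{pmatrix}.

The eigenvalues of $A^{*}A$ are $45$ and $5$, so $\sigma_1(A) = \sqrt{45} = 3\sqrt{5}$ and $\sigma_2(A) = \sqrt{5}$; the operator norm of $A$ is $\sigma_1(A) = 3\sqrt{5} \approx 6.7$, and $\sigma_1(A)\,\sigma_2(A) = 15 = |\det A|$.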

If $T$ acts on Euclidean space $\mathbb{R}^{n}$, there is a simple geometric interpretation of the singular values: consider the image under $T$ of the unit sphere; this image is an ellipsoid, and the lengths of its semi-axes are the singular values of $T$. For example, in $\mathbb{R}^{2}$ the unit circle is mapped to an ellipse whose semi-axes have lengths $\sigma_1(T)$ and $\sigma_2(T)$.
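
This interpretation can be checked numerically; the following is a minimal NumPy sketch with an example matrix chosen purely for illustration:

    import numpy as np

    # An arbitrary 2x2 example matrix.
    A = np.array([[3.0, 0.0],
                  [4.0, 5.0]])

    # Sample unit vectors on the unit circle and measure how much A stretches them.
    theta = np.linspace(0.0, 2.0 * np.pi, 100_000)
    unit_vectors = np.stack([np.cos(theta), np.sin(theta)])   # shape (2, N)
    stretch = np.linalg.norm(A @ unit_vectors, axis=0)        # ||A x|| for each unit vector x

    # The longest and shortest semi-axes of the image ellipse
    # match the singular values of A.
    print(stretch.max(), stretch.min())          # approx. 6.7082 and 2.2361
    print(np.linalg.svd(A, compute_uv=False))    # [6.7082..., 2.2360...]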

The singular values are the absolute values of the eigenvalues of a normal matrix $A$, because the spectral theorem can be applied to obtain a unitary diagonalization of $A$ as $A = U \Lambda U^{*}$. Therefore, $\sqrt{A^{*}A} = \sqrt{U \Lambda^{*} \Lambda U^{*}} = U \left|\Lambda\right| U^{*}$.
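
For example (a small NumPy check; the normal matrix below is an illustrative choice): the matrix has eigenvalues $\pm 2i$, and its singular values are both $|\pm 2i| = 2$.

    import numpy as np

    # A real normal matrix (it commutes with its transpose) with eigenvalues +/- 2i.
    A = np.array([[0.0, -2.0],
                  [2.0,  0.0]])

    eigenvalues = np.linalg.eigvals(A)                    # [0.+2.j, 0.-2.j]
    singular_values = np.linalg.svd(A, compute_uv=False)  # [2., 2.]

    # For a normal matrix the singular values are the moduli of the eigenvalues.
    print(np.sort(np.abs(eigenvalues))[::-1])             # [2., 2.]
    print(singular_values)                                # [2., 2.]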

Most norms on Hilbert space operators studied are defined using s-numbers. For example, the Ky Fan k-norm is the sum of the first k singular values, the trace norm is the sum of all singular values, and the Schatten p-norm is the pth root of the sum of the pth powers of the singular values. Note that each norm is defined only on a special class of operators, hence s-numbers are useful in classifying different operators.
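
A minimal sketch of how these norms can be computed from the singular values in the finite-dimensional case (illustrative NumPy code; the helper names ky_fan_norm, trace_norm and schatten_norm are chosen here and are not standard library functions):

    import numpy as np

    def ky_fan_norm(a, k):
        """Ky Fan k-norm: sum of the k largest singular values of a."""
        s = np.linalg.svd(a, compute_uv=False)    # singular values, in decreasing order
        return float(s[:k].sum())

    def trace_norm(a):
        """Trace (nuclear) norm: sum of all singular values of a."""
        return float(np.linalg.svd(a, compute_uv=False).sum())

    def schatten_norm(a, p):
        """Schatten p-norm: p-th root of the sum of p-th powers of the singular values."""
        s = np.linalg.svd(a, compute_uv=False)
        return float((s ** p).sum() ** (1.0 / p))

    # For p = 2 the Schatten norm coincides with the Frobenius norm.
    A = np.array([[3.0, 0.0], [4.0, 5.0]])
    print(schatten_norm(A, 2), np.linalg.norm(A, "fro"))   # both approx. 7.0711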

In the finite-dimensional case, a matrix can always be decomposed in the form $\mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{*}$, where $\mathbf{U}$ and $\mathbf{V}^{*}$ are unitary matrices and $\boldsymbol{\Sigma}$ is a rectangular diagonal matrix with the singular values lying on the diagonal. This is the singular value decomposition.
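
A minimal NumPy sketch of this decomposition (the matrix is an arbitrary example):

    import numpy as np

    A = np.array([[3.0, 0.0],
                  [4.0, 5.0]])

    # Singular value decomposition A = U @ diag(s) @ Vh, where Vh denotes V*.
    U, s, Vh = np.linalg.svd(A)

    print(s)                                       # singular values in decreasing order: [6.7082..., 2.2360...]
    print(np.allclose(A, U @ np.diag(s) @ Vh))     # True: A is recovered from the factors
    print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: U is unitary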

Basic properties

For $A \in \mathbb{C}^{m \times n}$ and $i = 1, 2, \ldots, \min\{m, n\}$ (a numerical check of several of these properties follows the list):

  1. Min-max theorem for singular values: $\sigma_i(A) = \max_{\dim(U) = i} \, \min_{x \in U,\ \|x\|_2 = 1} \|Ax\|_2$, where $U$ ranges over the subspaces of $\mathbb{C}^{n}$ of dimension $i$.
  2. Matrix transpose and conjugate do not alter singular values: $\sigma_i(A) = \sigma_i(A^{\mathsf T}) = \sigma_i(A^{*})$.
  3. Unitary invariance: $\sigma_i(UAV) = \sigma_i(A)$ for any unitary $U \in \mathbb{C}^{m \times m}$, $V \in \mathbb{C}^{n \times n}$.
  4. Relation to eigenvalues: $\sigma_i^{2}(A) = \lambda_i(A^{*}A) = \lambda_i(AA^{*})$.
  5. Relation to trace: $\sum_{i=1}^{\min\{m,n\}} \sigma_i^{2}(A) = \operatorname{tr}(A^{*}A)$.
  6. If $A^{\mathsf T}A$ is full rank, the product of the singular values is $\sqrt{\det(A^{\mathsf T}A)}$.
  7. If $AA^{\mathsf T}$ is full rank, the product of the singular values is $\sqrt{\det(AA^{\mathsf T})}$.
  8. If $A$ is full rank, the product of the singular values is $|\det A|$.

Inequalities about singular values

See also.[1]

Singular values of sub-matrices

For $A \in \mathbb{C}^{m \times n}$ (a numerical check of the first inequality follows the list):

  1. Let $B$ denote $A$ with one of its rows or columns deleted. Then $\sigma_{i+1}(A) \le \sigma_i(B) \le \sigma_i(A)$.
  2. Let $B$ denote $A$ with one of its rows and one of its columns deleted. Then $\sigma_{i+2}(A) \le \sigma_i(B) \le \sigma_i(A)$.
  3. Let $B$ denote a $p \times q$ submatrix of $A$. Then $\sigma_{i+(m-p)+(n-q)}(A) \le \sigma_i(B) \le \sigma_i(A)$.
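
A quick numerical illustration of the first interlacing inequality (a sketch with a random example matrix):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((5, 4))
    B = np.delete(A, 0, axis=0)               # A with its first row deleted

    sA = np.linalg.svd(A, compute_uv=False)   # sigma_1(A) >= ... >= sigma_4(A)
    sB = np.linalg.svd(B, compute_uv=False)   # sigma_1(B) >= ... >= sigma_4(B)

    # sigma_{i+1}(A) <= sigma_i(B) <= sigma_i(A)
    print(np.all(sA[1:] <= sB[:-1]), np.all(sB <= sA))   # True True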

Singular values of A + B

For $A, B \in \mathbb{C}^{m \times n}$ (a numerical check of the first inequality follows the list):

  1. $\sum_{i=1}^{k} \sigma_i(A+B) \le \sum_{i=1}^{k} \bigl(\sigma_i(A) + \sigma_i(B)\bigr)$ for $k = 1, \ldots, \min\{m, n\}$.
  2. $\sigma_{i+j-1}(A+B) \le \sigma_i(A) + \sigma_j(B)$ whenever $i + j - 1 \le \min\{m, n\}$.
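
An illustrative check of the first inequality for all $k$ at once (random example matrices):

    import numpy as np

    rng = np.random.default_rng(2)
    A = rng.standard_normal((3, 5))
    B = rng.standard_normal((3, 5))

    sA = np.linalg.svd(A, compute_uv=False)
    sB = np.linalg.svd(B, compute_uv=False)
    sAB = np.linalg.svd(A + B, compute_uv=False)

    # Partial sums of the singular values of A + B are dominated
    # by the partial sums of sigma_i(A) + sigma_i(B).
    print(np.all(np.cumsum(sAB) <= np.cumsum(sA + sB) + 1e-9))   # True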

Singular values of AB

For $A, B \in \mathbb{C}^{n \times n}$ (a numerical check of the first inequality follows the list):

  1. $\prod_{i=1}^{k} \sigma_i(AB) \le \prod_{i=1}^{k} \sigma_i(A)\,\sigma_i(B)$ for $k = 1, \ldots, n$.
  2. $\sigma_n(A)\,\sigma_i(B) \le \sigma_i(AB) \le \sigma_1(A)\,\sigma_i(B)$.

For $A, B \in \mathbb{C}^{m \times n}$, see [2].
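
A numerical check of the first product inequality (an illustrative sketch with random matrices):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((4, 4))
    B = rng.standard_normal((4, 4))

    sA = np.linalg.svd(A, compute_uv=False)
    sB = np.linalg.svd(B, compute_uv=False)
    sAB = np.linalg.svd(A @ B, compute_uv=False)

    # Partial products of sigma_i(AB) are dominated by partial products of sigma_i(A) * sigma_i(B);
    # for k = n the two sides agree, since |det(AB)| = |det A| |det B|.
    print(np.all(np.cumprod(sAB) <= np.cumprod(sA * sB) + 1e-9))   # True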

Singular values and eigenvalues

For $A \in \mathbb{C}^{n \times n}$ (a numerical check of Weyl's inequality follows the list):

  1. See [3].
  2. Assume $|\lambda_1(A)| \ge \cdots \ge |\lambda_n(A)|$. Then, for $k = 1, \ldots, n$, $\prod_{i=1}^{k} |\lambda_i(A)| \le \prod_{i=1}^{k} \sigma_i(A)$ (Weyl's theorem) and, for $p > 0$, $\sum_{i=1}^{k} |\lambda_i(A)|^{p} \le \sum_{i=1}^{k} \sigma_i^{p}(A)$.
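
A quick check of Weyl's inequality above (an illustrative sketch with a random matrix):

    import numpy as np

    rng = np.random.default_rng(4)
    A = rng.standard_normal((5, 5))

    # Eigenvalue moduli and singular values, both in decreasing order.
    abs_eigs = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
    sigmas = np.linalg.svd(A, compute_uv=False)

    # Weyl: partial products of |lambda_i| are dominated by partial products of sigma_i;
    # for k = n they coincide, since both products equal |det A|.
    print(np.all(np.cumprod(abs_eigs) <= np.cumprod(sigmas) + 1e-9))   # True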

History

This concept was introduced by Erhard Schmidt in 1907. Schmidt called singular values "eigenvalues" at that time. The name "singular value" was first used by Smithies in 1937. In 1957, Allahverdiev proved the following characterization of the nth s-number:[4]

$s_n(T) = \inf\bigl\{\, \|T - L\| : \operatorname{rank}(L) < n \,\bigr\}.$

This formulation made it possible to extend the notion of s-numbers to operators in Banach spaces.


References

  1. R. A. Horn and C. R. Johnson. Topics in Matrix Analysis. Cambridge University Press, Cambridge, 1991. Chap. 3.
  2. X. Zhan. Matrix Inequalities. Springer-Verlag, Berlin, Heidelberg, 2002. p. 28.
  3. R. Bhatia. Matrix Analysis. Springer-Verlag, New York, 1997. Prop. III.5.1.
  4. I. C. Gohberg and M. G. Krein. Introduction to the Theory of Linear Non-selfadjoint Operators. American Mathematical Society, Providence, R.I., 1969. Translated from the Russian by A. Feinstein. Translations of Mathematical Monographs, Vol. 18.