[{"@context":"http:\/\/schema.org\/","@type":"BlogPosting","@id":"https:\/\/wiki.edu.vn\/en\/wiki21\/analytic-function-of-a-matrix\/#BlogPosting","mainEntityOfPage":"https:\/\/wiki.edu.vn\/en\/wiki21\/analytic-function-of-a-matrix\/","headline":"Analytic function of a matrix","name":"Analytic function of a matrix","description":"before-content-x4 Function that maps matrices to matrices after-content-x4 In mathematics, every analytic function can be used for defining a matrix","datePublished":"2020-06-25","dateModified":"2020-06-25","author":{"@type":"Person","@id":"https:\/\/wiki.edu.vn\/en\/wiki21\/author\/lordneo\/#Person","name":"lordneo","url":"https:\/\/wiki.edu.vn\/en\/wiki21\/author\/lordneo\/","image":{"@type":"ImageObject","@id":"https:\/\/secure.gravatar.com\/avatar\/c9645c498c9701c88b89b8537773dd7c?s=96&d=mm&r=g","url":"https:\/\/secure.gravatar.com\/avatar\/c9645c498c9701c88b89b8537773dd7c?s=96&d=mm&r=g","height":96,"width":96}},"publisher":{"@type":"Organization","name":"Enzyklop\u00e4die","logo":{"@type":"ImageObject","@id":"https:\/\/wiki.edu.vn\/wiki4\/wp-content\/uploads\/2023\/08\/download.jpg","url":"https:\/\/wiki.edu.vn\/wiki4\/wp-content\/uploads\/2023\/08\/download.jpg","width":600,"height":60}},"image":{"@type":"ImageObject","@id":"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/3805501c700163bd2810c82a3c9cfa740fed0bf8","url":"https:\/\/wikimedia.org\/api\/rest_v1\/media\/math\/render\/svg\/3805501c700163bd2810c82a3c9cfa740fed0bf8","height":"","width":""},"url":"https:\/\/wiki.edu.vn\/en\/wiki21\/analytic-function-of-a-matrix\/","wordCount":11245,"articleBody":" (adsbygoogle = window.adsbygoogle || []).push({});before-content-x4Function that maps matrices to matrices (adsbygoogle = window.adsbygoogle || []).push({});after-content-x4In mathematics, every analytic function can be used for defining a matrix function that maps square matrices with complex entries to square matrices of the same size.This is used for defining the 
exponential of a matrix, which is involved in the closed-form solution of systems of linear differential equations. (adsbygoogle = window.adsbygoogle || []).push({});after-content-x4Table of ContentsExtending scalar function to matrix functions[edit]Power series[edit]Diagonalizable matrices[edit]Jordan decomposition[edit]Hermitian matrices[edit]Cauchy integral[edit]Matrix perturbations[edit]Arbitrary function of a 2\u00d72 matrix[edit]Examples[edit]Classes of matrix functions[edit]Operator monotone[edit]Operator concave\/convex[edit]Examples[edit]See also[edit]References[edit]Extending scalar function to matrix functions[edit]There are several techniques for lifting a real function to a square matrix function such that interesting properties are maintained. All of the following techniques yield the same matrix function, but the domains on which the function is defined may differ.Power series[edit]If the analytic function f has the Taylor expansionf(x)=c0+c1x+c2x2+\u22ef{displaystyle f(x)=c_{0}+c_{1}x+c_{2}x^{2}+cdots }then a matrix function (adsbygoogle = window.adsbygoogle || []).push({});after-content-x4A\u21a6f(A){displaystyle Amapsto f(A)} can be defined by substituting x by a square matrix: powers become matrix powers, additions become matrix sums and multiplications by coefficients become scalar multiplications. 
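As a minimal numerical sketch (added for illustration; the helper name `f_power_series` and the choice $f = \exp$ are choices made here, not part of the original article), the substitution rule above can be exercised by truncating the Taylor series of the exponential and comparing against the known closed form:

```python
import numpy as np
from math import factorial

def f_power_series(A, coeffs):
    """Evaluate f(A) = sum_k coeffs[k] * A^k term by term."""
    result = np.zeros_like(A)
    power = np.eye(A.shape[0])          # A^0
    for c in coeffs:
        result = result + c * power
        power = power @ A               # next matrix power
    return result

# Taylor coefficients of exp(x): c_k = 1/k!
coeffs = [1.0 / factorial(k) for k in range(30)]

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])            # exp(A) is a rotation by 1 radian
approx = f_power_series(A, coeffs)
expected = np.array([[np.cos(1.0), np.sin(1.0)],
                     [-np.sin(1.0), np.cos(1.0)]])
print(np.allclose(approx, expected))   # True
```

Thirty terms suffice here because the factorials in the denominators shrink the tail far below machine precision; for a general $f$, the truncation length is governed by the convergence discussion that follows.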
If the power series converges for $|x| < r$, then the matrix function is defined for every square matrix $A$ with $\|A\| < r$, where $\|\cdot\|$ is any submultiplicative matrix norm, i.e. one satisfying $\|AB\| \leq \|A\|\,\|B\|$.

Diagonalizable matrices

A square matrix $A$ is diagonalizable if there is an invertible matrix $P$ such that $D = P^{-1} A P$ is a diagonal matrix, that is, $D$ has the shape

$$D = \begin{bmatrix} d_1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & d_n \end{bmatrix}.$$

As $A = P D P^{-1}$, it is natural to set

$$f(A) = P \begin{bmatrix} f(d_1) & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & f(d_n) \end{bmatrix} P^{-1}.$$

It can be verified that the matrix $f(A)$ does not depend on a particular choice of $P$.

For example, suppose one is seeking $\Gamma(A) = (A-1)!$ for

$$A = \begin{bmatrix} 1 & 3 \\ 2 & 1 \end{bmatrix}.$$

One has

$$A = P \begin{bmatrix} 1-\sqrt{6} & 0 \\ 0 & 1+\sqrt{6} \end{bmatrix} P^{-1},$$

for

$$P = \begin{bmatrix} 1/2 & 1/2 \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \end{bmatrix}.$$

Application of the formula then simply yields

$$\Gamma(A) = \begin{bmatrix} 1/2 & 1/2 \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \end{bmatrix} \cdot \begin{bmatrix} \Gamma(1-\sqrt{6}) & 0 \\ 0 & \Gamma(1+\sqrt{6}) \end{bmatrix} \cdot \begin{bmatrix} 1 & -\sqrt{6}/2 \\ 1 & \sqrt{6}/2 \end{bmatrix} \approx \begin{bmatrix} 2.8114 & 0.4080 \\ 0.2720 & 2.8114 \end{bmatrix}.$$

Likewise,

$$A^4 = \begin{bmatrix} 1/2 & 1/2 \\ -\frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} \end{bmatrix} \cdot \begin{bmatrix} (1-\sqrt{6})^4 & 0 \\ 0 & (1+\sqrt{6})^4 \end{bmatrix} \cdot \begin{bmatrix} 1 & -\sqrt{6}/2 \\ 1 & \sqrt{6}/2 \end{bmatrix} = \begin{bmatrix} 73 & 84 \\ 56 & 73 \end{bmatrix}.$$

Jordan decomposition

All complex matrices, whether they are diagonalizable or not, have a Jordan normal form $A = P J P^{-1}$, where the matrix $J$ consists of Jordan blocks. Consider these blocks separately and apply the power series to each Jordan block:

$$f\left(\begin{bmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \ddots & \vdots \\ \vdots & & \ddots & \ddots & 0 \\ \vdots & & & \lambda & 1 \\ 0 & \cdots & \cdots & 0 & \lambda \end{bmatrix}\right) = \begin{bmatrix} \frac{f(\lambda)}{0!} & \frac{f'(\lambda)}{1!} & \frac{f''(\lambda)}{2!} & \cdots & \frac{f^{(n)}(\lambda)}{n!} \\ 0 & \frac{f(\lambda)}{0!} & \frac{f'(\lambda)}{1!} & \ddots & \frac{f^{(n-1)}(\lambda)}{(n-1)!} \\ 0 & 0 & \ddots & \ddots & \vdots \\ \vdots & & \ddots & \frac{f(\lambda)}{0!} & \frac{f'(\lambda)}{1!} \\ 0 & \cdots & \cdots & 0 & \frac{f(\lambda)}{0!} \end{bmatrix}.$$

This definition can be used to extend the domain of the matrix function beyond the set of matrices with spectral radius smaller than the radius of convergence of the power series. Note that there is also a connection to divided differences. A related notion is the Jordan–Chevalley decomposition, which expresses a matrix as a sum of a diagonalizable and a nilpotent part.

Hermitian matrices

A Hermitian matrix has all real eigenvalues and can always be diagonalized by a unitary matrix $P$, according to the spectral theorem. In this case, the Jordan definition is natural.
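The diagonalization recipe above (which the Hermitian case specializes) can be checked numerically. The following sketch, added here for illustration (the helper name `matrix_function` is a choice made for this example), reproduces the $A^4$ computation via numpy's eigendecomposition:

```python
import numpy as np

def matrix_function(A, f):
    """f(A) = P f(D) P^{-1} for a diagonalizable matrix A."""
    eigvals, P = np.linalg.eig(A)              # A = P diag(eigvals) P^{-1}
    return P @ np.diag(f(eigvals)) @ np.linalg.inv(P)

A = np.array([[1.0, 3.0],
              [2.0, 1.0]])                     # eigenvalues 1 +- sqrt(6)

A4 = matrix_function(A, lambda x: x**4)
print(np.allclose(A4, [[73, 84], [56, 73]]))   # True, matching the worked example
```

Any other scalar function can be substituted for `lambda x: x**4`, e.g. `np.exp`, as long as it is defined on the eigenvalues of $A$.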
Moreover, this definition allows one to extend standard inequalities for real functions: if $f(a) \leq g(a)$ for all eigenvalues $a$ of $A$, then $f(A) \preceq g(A)$. (As a convention, $X \preceq Y \Leftrightarrow Y - X$ is a positive-semidefinite matrix.) The proof follows directly from the definition.

Cauchy integral

Cauchy's integral formula from complex analysis can also be used to generalize scalar functions to matrix functions. Cauchy's integral formula states that for any analytic function $f$ defined on a set $D \subset \mathbb{C}$, one has

$$f(x) = \frac{1}{2\pi i} \oint_C \frac{f(z)}{z-x}\,\mathrm{d}z,$$

where $C$ is a simple closed curve inside the domain $D$ enclosing $x$.

Now, replace $x$ by a matrix $A$ and consider a path $C$ inside $D$ that encloses all eigenvalues of $A$. One possibility to achieve this is to let $C$ be a circle around the origin with radius larger than $\|A\|$ for an arbitrary matrix norm $\|\cdot\|$. Then $f(A)$ is definable by

$$f(A) = \frac{1}{2\pi i} \oint_C f(z)\,(zI - A)^{-1}\,\mathrm{d}z.$$

This integral can readily be evaluated numerically using the trapezium rule, which converges exponentially in this case: the precision of the result doubles when the number of nodes is doubled. In routine cases, this is bypassed by Sylvester's formula.

This idea applied to bounded linear operators on a Banach space, which can be seen as infinite matrices, leads to the holomorphic functional calculus.

Matrix perturbations

The above Taylor power series allows the scalar $x$ to be replaced by a matrix. This is not true in general when expanding in terms of $A(\eta) = A + \eta B$ about $\eta = 0$ unless $[A, B] = 0$.
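Before working through the algebra, a quick numerical sketch (added here; the specific matrices $A$ and $B$ are arbitrary choices for illustration) shows the failure for $f(x) = x^3$: the naive scalar-style expansion disagrees with the true matrix power whenever the commutator is nonzero.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
eta = 0.5

exact = np.linalg.matrix_power(A + eta * B, 3)           # (A + eta B)^3
naive = (np.linalg.matrix_power(A, 3)                    # scalar-style expansion
         + 3 * np.linalg.matrix_power(A, 2) @ (eta * B)
         + 3 * A @ np.linalg.matrix_power(eta * B, 2)
         + np.linalg.matrix_power(eta * B, 3))

print(np.allclose(A @ B, B @ A))     # False: the commutator [A, B] is nonzero
print(np.allclose(exact, naive))     # False: the two expansions disagree
```

Replacing $B$ by any matrix that commutes with $A$ (for example a polynomial in $A$) makes the two results agree, consistent with the $[A, B] = 0$ condition above.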
A counterexample is $f(x) = x^3$, which has a finite-length Taylor series. We compute this in two ways.

Distributive law:

$$f(A+\eta B) = (A+\eta B)^3 = A^3 + \eta\,(A^2 B + ABA + BA^2) + \eta^2\,(AB^2 + BAB + B^2 A) + \eta^3 B^3$$

Using the scalar Taylor expansion for $f(a+\eta b)$ and replacing scalars with matrices at the end:

$$\begin{aligned} f(a+\eta b) &= f(a) + f'(a)\frac{\eta b}{1!} + f''(a)\frac{(\eta b)^2}{2!} + f'''(a)\frac{(\eta b)^3}{3!} \\ &= a^3 + 3a^2(\eta b) + 3a(\eta b)^2 + (\eta b)^3 \\ &\to A^3 + 3A^2(\eta B) + 3A(\eta B)^2 + (\eta B)^3 \end{aligned}$$

The scalar expression assumes commutativity while the matrix expression does not, and thus they cannot be equated directly unless $[A, B] = 0$. For some $f(x)$ this can be dealt with using the same method as scalar Taylor series. For example, $f(x) = \frac{1}{x}$: if $A^{-1}$ exists, then $f(A+\eta B) = f(I + \eta A^{-1}B)\,f(A)$. The expansion of the first term then follows the power series given above,

$$f(I+\eta A^{-1}B) = I - \eta A^{-1}B + (-\eta A^{-1}B)^2 + \cdots = \sum_{n=0}^{\infty} (-\eta A^{-1}B)^n,$$

The convergence criteria of the power series then apply, requiring $\|\eta A^{-1}B\|$ to be sufficiently small under the appropriate matrix norm.
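The $1/x$ expansion above can be sketched numerically (an illustration added here, with arbitrarily chosen $A$, $B$, and $\eta$): truncate the Neumann series and compare with the directly computed inverse. Note the order of the factors, which matters because the matrices do not commute.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])
eta = 0.1                              # ||eta A^{-1} B|| must be < 1

Ainv = np.linalg.inv(A)
M = -eta * Ainv @ B                    # expansion parameter
# (A + eta B)^{-1} = (I + eta A^{-1} B)^{-1} A^{-1} = (sum_n M^n) A^{-1}
series = np.zeros_like(A)
term = np.eye(2)
for _ in range(50):
    series += term
    term = term @ M
approx = series @ Ainv

print(np.allclose(approx, np.linalg.inv(A + eta * B)))   # True
```

With $\|\eta A^{-1}B\|$ of order $0.1$, fifty terms drive the truncation error far below machine precision; for larger perturbations the series converges more slowly or not at all, as stated above.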
For more general problems, which cannot be rewritten in such a way that the two matrices commute, the ordering of matrix products produced by repeated application of the Leibniz rule must be tracked.

Arbitrary function of a 2×2 matrix

An arbitrary function $f(A)$ of a 2×2 matrix $A$ has its Sylvester's formula simplify to

$$f(A) = \frac{f(\lambda_+)+f(\lambda_-)}{2}\,I + \frac{A - \left(\frac{\operatorname{tr}(A)}{2}\right)I}{\sqrt{\left(\frac{\operatorname{tr}(A)}{2}\right)^2 - |A|}}\;\frac{f(\lambda_+)-f(\lambda_-)}{2},$$

where $\lambda_\pm$ are the eigenvalues of $A$, i.e. the roots of its characteristic equation $|A - \lambda I| = 0$, given by

$$\lambda_\pm = \frac{\operatorname{tr}(A)}{2} \pm \sqrt{\left(\frac{\operatorname{tr}(A)}{2}\right)^2 - |A|}.$$

Classes of matrix functions

Using the semidefinite ordering ($X \preceq Y \Leftrightarrow Y-X$ is positive semidefinite and $X \prec Y \Leftrightarrow Y-X$ is positive definite), some of the classes of scalar functions can be extended to matrix functions of Hermitian matrices.[2]

Operator monotone

A function $f$ is called operator monotone if and only if $0 \prec A \preceq H \Rightarrow f(A) \preceq f(H)$ for all self-adjoint matrices $A, H$ with spectra in the domain of $f$. This is analogous to a monotone function in the scalar case.

Operator concave/convex

A function $f$ is called operator concave if and only if

$$\tau f(A) + (1-\tau)f(H) \preceq f\left(\tau A + (1-\tau)H\right)$$

for all self-adjoint matrices $A, H$ with spectra in the domain of $f$ and $\tau \in [0,1]$. This definition is analogous to a concave scalar function.
An operator convex function can be defined by switching $\preceq$ to $\succeq$ in the definition above.

Examples

The matrix log is both operator monotone and operator concave. The matrix square is operator convex. The matrix exponential is none of these. Loewner's theorem states that a function on an open interval is operator monotone if and only if it has an analytic extension to the upper and lower complex half-planes such that the upper half-plane is mapped to itself.[2]

References

[1] Higham, Nick (2020-12-15). "What Is the Matrix Sign Function?". Retrieved 2020-12-27.
[2] Bhatia, R. (1997). Matrix Analysis. Graduate Texts in Mathematics, Vol. 169. Springer.
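As a closing numerical illustration (added here, not part of the original article), the operator monotonicity of the logarithm can be spot-checked: for $0 \prec A \preceq H$, the difference $\log(H) - \log(A)$ should be positive semidefinite. The helper `sym_log` below computes the matrix logarithm of a symmetric positive-definite matrix via the spectral theorem; the particular $A$ and $H$ are arbitrary non-commuting choices.

```python
import numpy as np

def sym_log(A):
    """Matrix logarithm of a symmetric positive-definite matrix via eigh."""
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.log(w)) @ Q.T

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # eigenvalues 1, 3: positive definite
H = np.array([[3.0, 1.0],
              [1.0, 4.0]])            # H - A = diag(1, 2) is positive definite

diff = sym_log(H) - sym_log(A)
print(np.linalg.eigvalsh(diff).min() > 0)   # True: log(H) - log(A) is PSD
```

Repeating the check with `sym_log` replaced by squaring would fail for suitable pairs, consistent with the fact that the square is operator convex but not operator monotone.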