Basu's theorem – Wikipedia

From Wikipedia, the free encyclopedia

In statistics, Basu's theorem states that any boundedly complete minimal sufficient statistic is independent of any ancillary statistic.
This is a 1955 result of Debabrata Basu.[1] It is often used in statistics as a tool to prove independence of two statistics, by first demonstrating that one is complete sufficient and the other is ancillary, and then appealing to the theorem.[2] An example is the proof that the sample mean and sample variance of a normal distribution are independent statistics, given in the Example section below. This property (independence of sample mean and sample variance) characterizes normal distributions.

Statement

Let $(P_\theta;\ \theta \in \Theta)$ be a family of distributions on a measurable space $(X, \mathcal{A})$, and let $T$ and $A$ be measurable maps from $(X, \mathcal{A})$ to some measurable space $(Y, \mathcal{B})$. (Such maps are called statistics.) If $T$ is a boundedly complete sufficient statistic for $\theta$, and $A$ is ancillary to $\theta$, then, conditional on $\theta$, $T$ is independent of $A$. That is, $T \perp A \mid \theta$.

Proof

Let $P_\theta^T$ and $P_\theta^A$ be the marginal distributions of $T$ and $A$ respectively, and denote by $A^{-1}(B)$ the preimage of a set $B$ under the map $A$.
For any measurable set $B \in \mathcal{B}$ we have

$$P_\theta^A(B) = P_\theta(A^{-1}(B)) = \int_Y P_\theta(A^{-1}(B) \mid T = t)\ P_\theta^T(dt).$$

The distribution $P_\theta^A$ does not depend on $\theta$, because $A$ is ancillary. Likewise, $P_\theta(\cdot \mid T = t)$ does not depend on $\theta$, because $T$ is sufficient. Therefore

$$\int_Y \big[ P(A^{-1}(B) \mid T = t) - P^A(B) \big]\ P_\theta^T(dt) = 0.$$

Note that the integrand (the function inside the integral) is a bounded function of $t$, not of $\theta$. Therefore, since $T$ is boundedly complete, the function

$$g(t) = P(A^{-1}(B) \mid T = t) - P^A(B)$$

is zero for $P_\theta^T$-almost all values of $t$, and thus

$$P(A^{-1}(B) \mid T = t) = P^A(B)$$

for almost all $t$.
Therefore, $A$ is independent of $T$.

Example

Independence of sample mean and sample variance of a normal distribution

Let $X_1, X_2, \ldots, X_n$ be independent, identically distributed normal random variables with mean $\mu$ and variance $\sigma^2$.

Then, with respect to the parameter $\mu$, one can show that

$$\widehat{\mu} = \frac{\sum X_i}{n},$$

the sample mean, is a complete and sufficient statistic (it carries all the information about $\mu$ that can be derived from the sample, and no more), and that

$$\widehat{\sigma}^2 = \frac{\sum \left( X_i - \bar{X} \right)^2}{n-1},$$

the sample variance, is an ancillary statistic: its distribution does not depend on $\mu$.

Therefore, from Basu's theorem it follows that these statistics are independent conditional on $\sigma^2$, for every value of $\mu$. This independence result can also be proven by Cochran's theorem.

Further, this property (that the sample mean and sample variance of the normal distribution are independent) characterizes the normal distribution; no other distribution has this property.[3]
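The independence in this example can also be checked numerically. The following sketch (assuming NumPy is available; the parameter values, sample size, and repetition count are arbitrary choices for illustration) simulates many normal samples and confirms that the empirical correlation between the sample mean and the sample variance is near zero, while for a skewed family such as the exponential, where the sample variance is not ancillary, the two statistics are visibly correlated.

```python
import numpy as np

# Empirical check of Basu's theorem for the normal example: across many
# simulated samples, the sample mean (complete sufficient for mu) and the
# sample variance (ancillary for mu) should be uncorrelated, since they
# are in fact independent.
rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000

samples = rng.normal(mu, sigma, size=(reps, n))
means = samples.mean(axis=1)             # sample mean of each simulated sample
variances = samples.var(axis=1, ddof=1)  # unbiased sample variance (n - 1 divisor)

corr_normal = np.corrcoef(means, variances)[0, 1]

# Contrast: for exponential data the sample mean and sample variance are
# dependent (the distribution is skewed), so their correlation is far from 0.
exp_samples = rng.exponential(2.0, size=(reps, n))
corr_exp = np.corrcoef(exp_samples.mean(axis=1),
                       exp_samples.var(axis=1, ddof=1))[0, 1]

print(f"normal:      corr(mean, var) = {corr_normal:.4f}")
print(f"exponential: corr(mean, var) = {corr_exp:.4f}")
```

With 100,000 replications the normal-case correlation is within a few thousandths of zero, while the exponential-case correlation is strongly positive; zero correlation does not by itself prove independence, but the simulation is consistent with the theorem.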
References

1. Basu, D. (1955). "On Statistics Independent of a Complete Sufficient Statistic". Sankhyā. 15 (4): 377–380.
2. Ghosh, Malay; Mukhopadhyay, Nitis; Sen, Pranab Kumar (2011). Sequential Estimation. Wiley Series in Probability and Statistics. Vol. 904. John Wiley & Sons. p. 80. ISBN 9781118165911. "The following theorem, due to Basu … helps us in proving independence between certain types of statistics, without actually deriving the joint and marginal distributions of the statistics involved. This is a very powerful tool and it is often used …"
3. Geary, R. C. (1936). "The Distribution of 'Student's' Ratio for Non-Normal Samples". Supplement to the Journal of the Royal Statistical Society. 3 (2): 178–184. doi:10.2307/2983669. JFM 63.1090.03. JSTOR 2983669.