Basu's theorem


In statistics, Basu's theorem states that any boundedly complete minimal sufficient statistic is independent of any ancillary statistic. This is a 1955 result of Debabrata Basu.[1]

It is often used in statistics as a tool to prove independence of two statistics, by first demonstrating one is complete sufficient and the other is ancillary, then appealing to the theorem.[2] An example of this is to show that the sample mean and sample variance of a normal distribution are independent statistics, which is done in the Example section below. This property (independence of sample mean and sample variance) characterizes normal distributions.

Statement

Let $(P_\theta;\theta\in\Theta)$ be a family of distributions on a measurable space $(X,{\mathcal {A}})$, and let $T$ be a statistic mapping $(X,{\mathcal {A}})$ into some measurable space $(Y,{\mathcal {B}})$. If $T$ is a boundedly complete sufficient statistic for $\theta$, and $A$ is ancillary to $\theta$, then, conditional on $\theta$, $T$ is independent of $A$. That is, $T\perp \!\!\!\perp A\mid\theta$.

Proof

Let $P_\theta^T$ and $P_\theta^A$ be the marginal distributions of $T$ and $A$ respectively.

Denote by $A^{-1}(B)$ the preimage of a set $B$ under the map $A$. For any measurable set $B\in{\mathcal {B}}$ we have

$$P_\theta^A(B)=P_\theta(A^{-1}(B))=\int_Y P_\theta(A^{-1}(B)\mid T=t)\ P_\theta^T(dt).$$

The distribution $P_\theta^A$ does not depend on $\theta$, because $A$ is ancillary; likewise, $P_\theta(\cdot\mid T=t)$ does not depend on $\theta$, because $T$ is sufficient. We may therefore drop the subscript and write $P^A(B)$ and $P(A^{-1}(B)\mid T=t)$. Subtracting the constant $P^A(B)$ inside the integral gives, for every $\theta\in\Theta$,

$$\int_Y{\big [}P(A^{-1}(B)\mid T=t)-P^A(B){\big ]}\ P_\theta^T(dt)=0.$$

The integrand is a function of $t$ alone, not of $\theta$, and it is bounded, being a difference of two probabilities. Since this identity holds for every $\theta$ and $T$ is boundedly complete, the function

$$g(t)=P(A^{-1}(B)\mid T=t)-P^A(B)$$

is zero for $P_\theta^T$-almost all values of $t$, and thus

$$P(A^{-1}(B)\mid T=t)=P^A(B)$$

for almost all $t$. Since $B$ was arbitrary, the conditional distribution of $A$ given $T$ coincides with the marginal distribution of $A$; therefore, $A$ is independent of $T$.
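
Spelled out, this yields the product form that defines independence: for every measurable set $C$ of values of $T$ and every $\theta$,

$$P_\theta(T\in C,\ A\in B)=\int_C P(A^{-1}(B)\mid T=t)\ P_\theta^T(dt)=P^A(B)\,P_\theta^T(C).$$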

Example

Independence of sample mean and sample variance of a normal distribution

Let $X_1, X_2, \ldots, X_n$ be independent, identically distributed normal random variables with mean $\mu$ and variance $\sigma^2$.

Then, with respect to the parameter $\mu$ (with $\sigma^2$ treated as fixed), one can show that

$$\widehat{\mu}={\frac {\sum X_i}{n}},$$

the sample mean, is a complete and sufficient statistic – it carries all of the information about $\mu$ contained in the sample, and no more – and

$$\widehat{\sigma}^2={\frac {\sum \left(X_i-{\bar {X}}\right)^2}{n-1}},$$

the sample variance, is an ancillary statistic – its distribution does not depend on $\mu$.
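
Both claims can be verified directly; the following is a sketch of the standard calculations. Writing $X_i=\mu+\sigma Z_i$ with $Z_1,\ldots,Z_n$ standard normal, the identity $\sum_i (x_i-\mu)^2=\sum_i (x_i-{\bar {x}})^2+n({\bar {x}}-\mu)^2$ gives the Fisher–Neyman factorization

$$\prod_{i=1}^{n}f(x_i;\mu)=(2\pi \sigma^2)^{-n/2}\,\exp \!{\Big (}-{\frac {1}{2\sigma^2}}\sum_{i=1}^{n}(x_i-{\bar {x}})^2{\Big )}\,\exp \!{\Big (}-{\frac {n}{2\sigma^2}}({\bar {x}}-\mu)^2{\Big )},$$

which depends on $\mu$ only through ${\bar {x}}$, establishing sufficiency (completeness follows because this is a full-rank exponential family in $\mu$). For ancillarity, $X_i-{\bar {X}}=\sigma (Z_i-{\bar {Z}})$, so

$$\widehat{\sigma}^2={\frac {\sigma^2}{n-1}}\sum_{i=1}^{n}\left(Z_i-{\bar {Z}}\right)^2,$$

whose distribution does not involve $\mu$.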

Therefore, from Basu's theorem it follows that these statistics are independent, for every fixed value of $\mu$ and $\sigma^2$.

This independence result can also be proven by Cochran's theorem.
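
The independence is also easy to probe numerically. The following is a minimal Monte Carlo sketch in Python (assuming NumPy; the sample size, replication count, and parameter values are arbitrary choices). It estimates the correlation between the sample mean and the sample variance, which independence forces to zero – keeping in mind that zero correlation is a necessary, not a sufficient, consequence of independence. A skewed distribution is included for contrast.

    import numpy as np

    rng = np.random.default_rng(0)
    n, reps = 10, 100_000        # sample size and number of replications (arbitrary)
    mu, sigma = 3.0, 2.0         # arbitrary normal parameters

    # Draw `reps` normal samples of size n; compute both statistics per sample.
    x = rng.normal(mu, sigma, size=(reps, n))
    means = x.mean(axis=1)               # sample mean
    variances = x.var(axis=1, ddof=1)    # sample variance with divisor n - 1

    # Basu's theorem: for normal data these are independent,
    # so the sample correlation should be ~0 up to Monte Carlo error.
    print("normal:      corr =", np.corrcoef(means, variances)[0, 1])

    # Contrast: for a skewed (exponential) distribution, the sample mean and
    # sample variance are dependent, and the correlation is visibly positive.
    y = rng.exponential(scale=1.0, size=(reps, n))
    print("exponential: corr =", np.corrcoef(y.mean(axis=1), y.var(axis=1, ddof=1))[0, 1])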

Further, this property (that the sample mean and sample variance of the normal distribution are independent) characterizes the normal distribution – no other distribution has this property.[3]

Notes

  1. ^ Basu (1955)
  2. ^ Ghosh, Malay; Mukhopadhyay, Nitis; Sen, Pranab Kumar (2011), Sequential Estimation, Wiley Series in Probability and Statistics, vol. 904, John Wiley & Sons, p. 80, ISBN 9781118165911, The following theorem, due to Basu ... helps us in proving independence between certain types of statistics, without actually deriving the joint and marginal distributions of the statistics involved. This is a very powerful tool and it is often used ...
  3. ^ Geary, R.C. (1936). "The Distribution of "Student's" Ratio for Non-Normal Samples". Supplement to the Journal of the Royal Statistical Society. 3 (2): 178–184. doi:10.2307/2983669. JFM 63.1090.03. JSTOR 2983669.

References

  • Basu, D. (1955). "On Statistics Independent of a Complete Sufficient Statistic". Sankhyā. 15 (4): 377–380. JSTOR 25048259. MR 0074745. Zbl 0068.13401.
  • Mukhopadhyay, Nitis (2000). Probability and Statistical Inference. Statistics: A Series of Textbooks and Monographs, vol. 162. Florida: CRC Press USA. ISBN 0-8247-0379-0.
  • Boos, Dennis D.; Oliver, Jacqueline M. Hughes (Aug 1998). "Applications of Basu's Theorem". The American Statistician. 52 (3): 218–221. doi:10.2307/2685927. JSTOR 2685927. MR 1650407.
  • Ghosh, Malay (October 2002). "Basu's Theorem with Applications: A Personalistic Review". Sankhyā: The Indian Journal of Statistics, Series A. 64 (3): 509–531. JSTOR 25051412. MR 1985397.