# Robert & Casella. Chpt 2, Ex. 2.21

The non-central chi-squared distribution $\chi^{2}(\lambda)$ can be defined by:

i. a mixture representation (2.2) where $g(x|y)$ is the density of $\chi^{2}_{p+2y}$ and $p(y)$ is the density of $\mathcal{P}(\lambda/2)$ and
ii. the sum of a $\chi^{2}_{p-1}$ random variable and the square of a $N(\parallel\theta\parallel,1)$ random variable

(a) Show that both representations hold

By definition of the noncentral chi-squared distribution $\chi_p^2(\lambda)$, it has density function:

$f_{\chi^2_p(\lambda)}(x)=\sum_{y=0}^\infty \frac{e^{-\lambda/2}(\lambda/2)^y}{y!}f_{\chi^2_{p+2y}}(x)$

where $f_{\chi^2_{p+2y}}(x)$ is the density of the central chi-squared random variable $\chi^2_{p+2y}$. Also note that $\frac{e^{-\lambda/2}(\lambda/2)^y}{y!}$, $y=0,1,2,\ldots$, is the pmf of a Poisson $\mathcal{P}(\lambda/2)$ random variable. Thus, the noncentral chi-squared distribution has the mixture representation (2.2).
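The mixture identity can be checked numerically by truncating the Poisson sum and comparing it with R's built-in noncentral density `dchisq(x, df, ncp)`. The helper name `mixture_density`, the truncation point `ymax`, and the values of $p$ and $\lambda$ below are illustrative choices, not part of the exercise:

```r
# Truncated version of the mixture sum defining the noncentral chi-squared density
mixture_density <- function(x, p, lambda, ymax = 200) {
  y <- 0:ymax
  w <- dpois(y, lambda / 2)  # Poisson(lambda/2) mixture weights
  sapply(x, function(xi) sum(w * dchisq(xi, df = p + 2 * y)))
}

x <- seq(0.1, 30, by = 0.1)
p <- 2; lambda <- 5
# Difference from R's built-in noncentral chi-squared density:
max(abs(mixture_density(x, p, lambda) - dchisq(x, df = p, ncp = lambda)))
```

With the truncation at 200 Poisson terms, the discrepancy is at the level of floating-point rounding.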

On the other hand, for $p-1$ i.i.d. standard normal random variables $X_1,X_2,\cdots,X_{p-1}\sim N(0,1)$, the sum of squares $\sum_{i=1}^{p-1}X_i^2\sim\chi^2_{p-1}$. Let $Y\sim N(\parallel\theta\parallel,1)$; then its square $Y^2\sim\chi_1^2(\parallel\theta\parallel^2)$. Since the two terms are independent, $\sum_{i=1}^{p-1}X_i^2+Y^2\sim \chi_p^2(\parallel\theta\parallel^2)$, which completes the proof of (a).
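A quick moment check supports representation (ii): a $\chi_p^2(\lambda)$ random variable has mean $p+\lambda$ and variance $2p+4\lambda$, so the simulated sum of squares should reproduce these with $\lambda=\theta^2$. The values of $p$, $\theta$, and $n$ below are arbitrary illustrative choices:

```r
# Simulate representation (ii) directly: chi^2_{p-1} part plus a squared normal
set.seed(1)
p <- 5; theta <- 2; n <- 1e6
x <- colSums(matrix(rnorm((p - 1) * n), nrow = p - 1)^2) +  # sum of (p-1) squared N(0,1)
     rnorm(n, theta, 1)^2                                   # squared N(theta, 1)

c(mean(x), p + theta^2)          # sample vs. theoretical mean (p + lambda)
c(var(x), 2 * p + 4 * theta^2)   # sample vs. theoretical variance (2p + 4*lambda)
```

Both pairs agree to within Monte Carlo error, consistent with $\sum X_i^2+Y^2\sim\chi_p^2(\theta^2)$.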

(b) Show that the representations are equivalent if $\lambda=\theta^{2}/2$

From (a), representation (ii) yields a $\chi_p^2(\parallel\theta\parallel^2)$ distribution, so the two representations in (i) and (ii) are equivalent when $\lambda=\theta^2$, not $\lambda=\theta^2/2$; the statement in (b) appears to contain a typo. This can also be verified by simulation.
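The claimed equivalence under $\lambda=\theta^2$ can be checked by simulation, e.g. with a two-sample Kolmogorov–Smirnov test comparing draws from the two representations. The sample size and parameter values below are ad hoc choices for illustration:

```r
# Compare samples from representation (i) and (ii) with lambda = theta^2
set.seed(42)
n <- 1e4; p <- 3; theta <- 2; lambda <- theta^2

x1 <- rchisq(n, df = p + 2 * rpois(n, lambda / 2))   # representation (i): Poisson mixture
x2 <- rchisq(n, df = p - 1) + rnorm(n, theta, 1)^2   # representation (ii): sum of squares

kres <- ks.test(x1, x2)
kres$p.value  # expected to be large, since the two distributions coincide
```

A large p-value is consistent with the two samples coming from the same distribution; repeating with $\lambda=\theta^2/2$ instead gives a clear rejection.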

(c) Compare the algorithms derived from these representations, among themselves and with `rchisq`, for small and large values of $\lambda$

The figure below compares the algorithms defined by (i) and (ii) with the `rchisq` function in R. $10^5$ samples were generated with each algorithm, with the degrees of freedom fixed at 2 and two non-centrality parameters (ncp), one small (0.1) and one large (100). The algorithms based on representations (i) and (ii) behaved differently in the tails: both were closer to `rchisq` for the large ncp, while for the small ncp both departed from `rchisq` in the tails.

```r
# ex 2.21
# rep (i): Poisson mixture of central chi-squared distributions
algorithm1 <- function(n, df, ncp) {
  y <- rpois(n, ncp / 2)
  rchisq(n, df + 2 * y)
}

# rep (ii): sum of a central chi-squared r.v. and a squared normal
algorithm2 <- function(n, df, ncp) {
  x2 <- rchisq(n, df - 1)
  y <- rnorm(n, sqrt(ncp), 1)
  x2 + y^2
}

par(mfrow = c(2, 3))

set.seed(100)
n <- 1e5
df <- 2

# Q-Q plots of the three non-central chi-squared generators,
# for one small and one large ncp
for (ncp in c(0.1, 100)) {
  rv.rchisq <- rchisq(n, df, ncp)
  rv.alg1 <- algorithm1(n, df, ncp)
  rv.alg2 <- algorithm2(n, df, ncp)

  qqplot(rv.alg1, rv.alg2, pch = ".",
         main = paste0("Q-Q Plot: algorithm (i) vs algorithm (ii)\nncp = ", ncp),
         xlab = "Algorithm (i)", ylab = "Algorithm (ii)")
  abline(0, 1, col = "red")
  qqplot(rv.alg1, rv.rchisq, pch = ".",
         main = paste0("Q-Q Plot: algorithm (i) vs rchisq\nncp = ", ncp),
         xlab = "Algorithm (i)", ylab = "rchisq")
  abline(0, 1, col = "red")
  qqplot(rv.alg2, rv.rchisq, pch = ".",
         main = paste0("Q-Q Plot: algorithm (ii) vs rchisq\nncp = ", ncp),
         xlab = "Algorithm (ii)", ylab = "rchisq")
  abline(0, 1, col = "red")
}
```