Srivastava et al. (2013)'s test for equality of two high-dimensional mean vectors, without assuming that the two covariance matrices are equal.
Arguments
- y1
The data matrix (p by n1) from the first population. Each column represents a \(p\)-dimensional observation.
- y2
The data matrix (p by n2) from the second population. Each column represents a \(p\)-dimensional observation.
Value
A (list) object of S3 class htest containing the following elements, which can be extracted by name as sketched below the list:
- statistic
the test statistic proposed by Srivastava et al. (2013)
- p.value
the \(p\)-value of the test proposed by Srivastava et al. (2013)
- cpn
the adjustment coefficient proposed by Srivastava et al. (2013)
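As with other htest objects, the components listed above can be accessed by name once the test has been run; a minimal sketch, assuming y1 and y2 are data matrices as described under Arguments (res is just an illustrative name):

res <- tsbf_skk2013(y1, y2)
res$statistic  # value of the test statistic T_SKK
res$p.value    # p-value of the test
res$cpn        # adjustment coefficient c_{p,n}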
Details
Suppose we have two independent high-dimensional samples: $$ \boldsymbol{y}_{i1},\ldots,\boldsymbol{y}_{in_i} \;\text{ are i.i.d. with }\; \operatorname{E}(\boldsymbol{y}_{i1})=\boldsymbol{\mu}_i,\; \operatorname{Cov}(\boldsymbol{y}_{i1})=\boldsymbol{\Sigma}_i,\; i=1,2. $$ The primary objective is to test $$H_{0}: \boldsymbol{\mu}_1 = \boldsymbol{\mu}_2\; \text{ versus }\; H_{1}: \boldsymbol{\mu}_1 \neq \boldsymbol{\mu}_2.$$ Srivastava et al. (2013) proposed the following test statistic: $$T_{SKK} = \frac{(\bar{\boldsymbol{y}}_1 - \bar{\boldsymbol{y}}_2)^\top \hat{\boldsymbol{D}}^{-1}(\bar{\boldsymbol{y}}_1 - \bar{\boldsymbol{y}}_2) - p}{\sqrt{2 \widehat{\operatorname{Var}}(\hat{q}_n) c_{p,n}}},$$ where \(\bar{\boldsymbol{y}}_{i},i=1,2\) are the sample mean vectors and \(\hat{\boldsymbol{D}}=\hat{\boldsymbol{D}}_1/n_1+\hat{\boldsymbol{D}}_2/n_2\), with \(\hat{\boldsymbol{D}}_i,i=1,2\) being the diagonal matrices formed from the diagonal elements of the sample covariance matrices. \(\widehat{\operatorname{Var}}(\hat{q}_n)\) is given by equation (1.18) in Srivastava et al. (2013), and \(c_{p, n}\) is the adjustment coefficient proposed by Srivastava et al. (2013). They showed that under the null hypothesis, \(T_{SKK}\) is asymptotically normally distributed.
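To make the numerator of \(T_{SKK}\) concrete, the sketch below computes \((\bar{\boldsymbol{y}}_1 - \bar{\boldsymbol{y}}_2)^\top \hat{\boldsymbol{D}}^{-1}(\bar{\boldsymbol{y}}_1 - \bar{\boldsymbol{y}}_2) - p\) directly from two p by n data matrices; the scaling by \(\sqrt{2 \widehat{\operatorname{Var}}(\hat{q}_n) c_{p,n}}\) (equation (1.18) of Srivastava et al. (2013)) is omitted and left to tsbf_skk2013() itself. The helper name skk_numerator is illustrative only.

skk_numerator <- function(y1, y2) {
  p  <- nrow(y1)
  n1 <- ncol(y1)
  n2 <- ncol(y2)
  ybar1 <- rowMeans(y1)
  ybar2 <- rowMeans(y2)
  # diagonal elements of the two sample covariance matrices
  d1 <- apply(y1, 1, var)
  d2 <- apply(y2, 1, var)
  d  <- d1 / n1 + d2 / n2   # diagonal of D-hat
  diff <- ybar1 - ybar2
  sum(diff^2 / d) - p       # (ybar1 - ybar2)' D-hat^{-1} (ybar1 - ybar2) - p
}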
References
Srivastava MS, Katayama S, Kano Y (2013). “A two sample test in high dimensional data.” Journal of Multivariate Analysis, 114, 349--358. doi:10.1016/j.jmva.2012.08.014.
Examples
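# Simulate two p-dimensional samples of sizes n1 and n2 under the null (mu1 = mu2)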
set.seed(1234)
n1 <- 20
n2 <- 30
p <- 50
mu1 <- t(t(rep(0, p)))
mu2 <- mu1
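# Build Gamma1 and Gamma2 so that Sigma_i = Gamma_i %*% t(Gamma_i) has the
# compound-symmetry form a_i * ((1 - rho_i) * diag(p) + rho_i * matrix(1, p, p))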
rho1 <- 0.1
rho2 <- 0.2
a1 <- 1
a2 <- 2
w1 <- (-2 * sqrt(a1 * (1 - rho1)) + sqrt(4 * a1 * (1 - rho1) + 4 * p * a1 * rho1)) / (2 * p)
x1 <- w1 + sqrt(a1 * (1 - rho1))
Gamma1 <- matrix(rep(w1, p * p), nrow = p)
diag(Gamma1) <- rep(x1, p)
w2 <- (-2 * sqrt(a2 * (1 - rho2)) + sqrt(4 * a2 * (1 - rho2) + 4 * p * a2 * rho2)) / (2 * p)
x2 <- w2 + sqrt(a2 * (1 - rho2))
Gamma2 <- matrix(rep(w2, p * p), nrow = p)
diag(Gamma2) <- rep(x2, p)
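# Generate the samples: each column of y_i has mean mu_i and covariance Sigma_i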
Z1 <- matrix(rnorm(n1 * p, mean = 0, sd = 1), p, n1)
Z2 <- matrix(rnorm(n2 * p, mean = 0, sd = 1), p, n2)
y1 <- Gamma1 %*% Z1 + mu1 %*% t(rep(1, n1))
y2 <- Gamma2 %*% Z2 + mu2 %*% t(rep(1, n2))
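# Apply the test of Srivastava et al. (2013)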
tsbf_skk2013(y1, y2)
#>
#>
#>
#> data:
#> statistic = -0.10736, cpn = 1.4402, p-value = 0.5427
#>