Some Continuous Edgeworth Expansions for Markov Chains With Applications to Bootstrap
Edgeworth Expansion
An Edgeworth expansion of the distribution of Wn modifies the standard normal approximation such that the first r cumulants (typically 3 or 4) of the approximating distribution match those of Wn.
From: Philosophy of Statistics, 2011
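For orientation, consider a standardized statistic W_n with third standardized cumulant κ₃. A standard one-term version of the expansion (stated here generically, not quoted from the excerpts below) is

P(W_n ≤ w) = Φ(w) − φ(w) (κ₃/(6√n)) (w² − 1) + O(1/n),

where Φ and φ denote the standard normal cdf and pdf. The correction term adjusts the normal approximation so that the third cumulant of the approximating distribution matches that of W_n.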
Normal Approximations
Robert J. Boik, in Philosophy of Statistics, 2011
B.5 Theorem 11: Edgeworth Expansion
The Edgeworth expansion is based on properties of Hermite polynomials, defined in §8, and properties of characteristic and cumulant generating functions, defined in §B.1. Severini [2005, Theorem 3.8] verified that if Y is a random variable whose characteristic function, CF_Y(t), satisfies ∫ |CF_Y(t)| dt < ∞, then Y has a probability density function given by the inversion formula

f_Y(y) = (1/2π) ∫ exp{−ity} CF_Y(t) dt,  (17)

with both integrals taken over the real line.
Suppose that Z ∼ N(0,1). Then the characteristic function of Z is CF_Z(t) = exp{−t²/2}. It follows from (17) that

φ(z) = (1/2π) ∫ exp{−itz} exp{−t²/2} dt,

where φ(z) = φ(z,0,1). Differentiating both sides of the above equality r times with respect to z and using the definition of Hermite polynomials (see §8) reveals that

(1/2π) ∫ (it)^r exp{−itz} exp{−t²/2} dt = H_r(z) φ(z).  (18)
Also, if r ≥ 1, then

∫_{−∞}^{w} H_r(z) φ(z) dz = −H_{r−1}(w) φ(w).  (19)
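The identity (19) is easy to check numerically. The following short sketch (ours, not part of the chapter) verifies it for a few orders using the probabilists' Hermite polynomials:

import numpy as np
from numpy.polynomial.hermite_e import hermeval   # probabilists' Hermite polynomials
from scipy.integrate import quad
from scipy.stats import norm

def H(r, z):
    # Evaluate the degree-r probabilists' Hermite polynomial at z.
    return hermeval(z, [0.0] * r + [1.0])

# Check (19): integral_{-inf}^{w} H_r(z) phi(z) dz = -H_{r-1}(w) phi(w), for r >= 1.
for r in (1, 2, 3, 4):
    for w in (-1.3, 0.0, 0.8, 2.1):
        lhs, _ = quad(lambda z: H(r, z) * norm.pdf(z), -np.inf, w)
        rhs = -H(r - 1, w) * norm.pdf(w)
        assert abs(lhs - rhs) < 1e-6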
To justify Theorem 11, first use (11) and (13) to expand the cumulant generating function of W_n. The result is
Second, use the inversion formula (17) and the above expansion to obtain
Using (it)² = −t² and expanding the exponential function yields
Lastly, use (18) to integrate term by term. The result is as follows:
The Edgeworth expansion for the cdf, F_n(w), is obtained by using (19) to integrate the pdf expansion term by term.
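For reference, with κ₃ and κ₄ denoting the third and fourth standardized cumulants of W_n, a generic version of this chain of displays (standard in the Edgeworth literature, and not reproduced from the chapter's own equations (11) and (13)) reads

log CF_{W_n}(t) = −t²/2 + (κ₃/(6√n)) (it)³ + (κ₄/(24n)) (it)⁴ + O(n^{−3/2}),

CF_{W_n}(t) = exp{−t²/2} [ 1 + (κ₃/(6√n)) (it)³ + (κ₄/(24n)) (it)⁴ + (κ₃²/(72n)) (it)⁶ ] + O(n^{−3/2}),

f_n(w) = φ(w) [ 1 + (κ₃/(6√n)) H₃(w) + (κ₄/(24n)) H₄(w) + (κ₃²/(72n)) H₆(w) ] + O(n^{−3/2}),

F_n(w) = Φ(w) − φ(w) [ (κ₃/(6√n)) H₂(w) + (κ₄/(24n)) H₃(w) + (κ₃²/(72n)) H₅(w) ] + O(n^{−3/2}),

where the third display follows by applying (18) term by term and the fourth by applying (19) term by term.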
URL:
https://www.sciencedirect.com/science/article/pii/B9780444518620500320
Asymptotic Expansions of the Distributions of the Least Squares Estimators in Factor Analysis and Structural Equation Modeling
Haruhiko Ogasawara, in Handbook of Statistics, 2012
Abstract
Asymptotic distributions of the least squares estimators in factor analysis and structural equation modeling are derived using Edgeworth expansions up to order O(1/n) under nonnormality. The estimators dealt with in this chapter are those for unstandardized variables by normal theory generalized least squares, simple or scale-free least squares, least squares with powers of diagonals and unweighted least squares, and those by unweighted least squares for standardized variables. It is shown that the formulas also hold for the corresponding estimators by maximum likelihood. Simulations are performed to assess the accuracy of the formulas in factor analysis. The case of normal theory Studentized statistics under nonnormality is discussed.
URL:
https://www.sciencedirect.com/science/article/pii/B9780444518750000075
Handbook of Statistics
P.K. Pathak, C.R. Rao, in Handbook of Statistics, 2013
5 Second-order correctness of the sequential bootstrap
The proof of the second-order correctness of the sequential bootstrap requires the Edgeworth expansion for dependent random variables. Along the lines of the work of Hall and Mammen (1994), we first outline an approach based on cumulants. This approach assumes that a formal Edgeworth expansion is valid for the pivot under the sequential bootstrap.
Let denote the number of times the ith observation from the original sample appears in the sequential bootstrap sample, . Then
(59)
in which are exchangeable random variables.
The probability distribution of is given by
(60)
for , and in which is the difference operator with unit increment. The moment generating function of is given by
(61)
The second-order correctness of the sequential bootstrap for linear statistics such as the sample sum is closely related to the behavior of the moments of the random variables . Among other things, the asymptotic distribution of each is Poisson with mean 1. In fact, it can be shown that
(62)
It follows from (62) that to order , the random variables are asymptotically independent. This implies that the Hall-Mammen-type (1994) conditions for the second-order correctness of the sequential bootstrap hold. This approach rests on the tacit assumption that formal Edgeworth-type expansions go through for the sequential bootstrap; a rigorous justification of such an approach is not yet available in the literature. Another approach, which bypasses this difficulty altogether, entails a slight modification of the sequential bootstrap. It is based on the observation that each in Eq. (59) is approximately a Poisson variate subject to the constraint:
(63)
i.e., there are exactly non-zero . This observation enables us to modify the sequential bootstrap so that existing techniques on the Edgeworth expansion, such as those of Babu and Bai (1996), Bai and Rao (1991,1992), Babu and Singh (1989), and others, can be employed. We refer to this modified version as the Poisson bootstrap.
The Poisson Bootstrap: The original sample is assumed to be from for greater flexibility. Let denote n independent observations from , the Poisson distribution with unit mean. If there are exactly non-zero values among , take
(64)
otherwise reject the s and repeat the procedure. This is the conceptual definition. The sample size of the Poisson bootstrap admits the representation:
(65)
in which are IID Poisson variates with mean and with the added restriction that exactly of the s are non-zero, i.e., .
A simple way to implement the Poisson bootstrap in practice is to first draw a simple random sample without replacement (SRSWOR) of size from the set of unit-indices , say . Then assign respectively to these values independently drawn from the truncated Poisson distribution with and left-truncated at (R-syntax: qpois(runif(m, dpois(0, 1), 1), 1)) and set for the remaining .
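To make the recipe concrete, here is a minimal Python sketch of the same construction; the function and variable names are ours, the number m of non-zero counts is treated as a given input, and the truncated Poisson draws mirror the quoted R call qpois(runif(m, dpois(0, 1), 1), 1):

import numpy as np
from scipy.stats import poisson

def poisson_bootstrap_counts(n, m, rng):
    # Counts for the n original observations: exactly m are non-zero, and each
    # non-zero count is Poisson(1) left-truncated at 0, drawn by inverse-cdf sampling.
    counts = np.zeros(n, dtype=int)
    nonzero = rng.choice(n, size=m, replace=False)    # SRSWOR of m unit-indices
    u = rng.uniform(poisson.pmf(0, 1), 1.0, size=m)   # uniform on (P(X = 0), 1) = (1/e, 1)
    counts[nonzero] = poisson.ppf(u, 1).astype(int)   # truncated Poisson(1) counts
    return counts

rng = np.random.default_rng(0)
x = rng.normal(size=25)                               # original sample
m = int(np.ceil(len(x) * (1 - np.exp(-1))))           # one plausible choice for m (an assumption)
counts = poisson_bootstrap_counts(len(x), m, rng)
boot_sample = np.repeat(x, counts)                    # bootstrap sample; its size counts.sum() is random

The resulting counts are exchangeable and each is approximately Poisson with mean 1, in line with the discussion around Eqs. (59) and (62).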
It can be shown that the moment generating function of is (Theorem 2.1 in Babu et al. (1999)):
(66)
so that the distribution of can be viewed as that of IID random variables with a common moment generating function:
(67)
It is clear that is the moment generating function of the Poisson distribution with location parameter and truncated at .
This modification of the sequential bootstrap enables us to develop a rigorous proof of the second-order correctness in the sequential case. Now let be IID random variables with mean and variance . We assume that is strongly non-lattice, i.e., it satisfies Cramér's condition:
(68)
Let be a sequence of IID Poisson random variables with mean 1. We now state three main results, furnishing a rigorous justification for the second-order correctness of the sequential bootstrap. These results follow from conditional Edgeworth expansions for weighted means of multivariate random vectors (cf Babu et al., 1999).
Theorem 4
Suppose that and that the characteristic function of satisfies Cramér's condition (68). If is bounded, then
(69)
uniformly in x, given .
Smooth Functional Model: Theorem 4 extends to the multivariate case and to statistics that can be expressed as smooth functions of multivariate means. Now let be a sequence of IID random vectors with mean and dispersion matrix . Let denote the corresponding sample dispersion matrix. Then the following results hold.
Theorem 5
Suppose that is strongly non-lattice and . Let be a three-times continuously differentiable function in a neighborhood of . Let denote the vector of first-order partial derivatives at and suppose that . If is bounded, then for almost all sample sequences , we have
(70)
in which denotes the sup-norm over .
The following result is well suited for applications to studentized statistics.
Theorem 6
Let satisfy the conditions of Theorem 5. Suppose that the function is three-times continuously differentiable in the neighborhood of the origin and . If is bounded, then for almost all sample sequences , we have
(71)
For example, an immediate consequence of Theorem 6 is the second-order correctness of the following sequential bootstrap pivot:
(72)
given that .
URL:
https://www.sciencedirect.com/science/article/pii/B9780444538598000011
Blind Source Separation: The Sparsity Revolution
Jerome Bobin, ... Mohamed Jalal Fadili, in Advances in Imaging and Electron Physics, 2008
C The Algorithmic Viewpoint
a. Approximating Independence. In the ICA setting, the mixing matrix is square and invertible. Solving a BSS problem is equivalent to looking for a demixing matrix B that maximizes the independence of the estimated sources: . In that setting, maximizing the independence of the sources (with respect to the KL divergence) is equivalent to maximizing the non-Gaussianity of the sources. Since the seminal article by Comon (1994), a variety of ICA algorithms have been proposed. They all merely differ in the way they devise assessable quantitative measures of independence. Some popular approaches that have given "measures" of independence are presented below:
• Information maximization (see Bell and Sejnowski, 1995; Nadal and Parga, 1994): Bell and Sejnowski showed that maximizing the information of the sources is equivalent to minimizing the measure of independence based on the KL divergence in Eq. (5).
• Maximum likelihood: Maximum likelihood (ML) has also been proposed to solve the BSS issue. The ML approach (Cardoso, 1997; Parra and Pearlmutter, 1997; Pham et al., 1992) has been shown to be equivalent to information maximization (InfoMax) in the ICA framework.
• Higher-order statistics: As noted previously, maximizing the independence of the sources is equivalent to maximizing their non-Gaussianity under a strict decorrelation constraint. Because Gaussian random variables have vanishing higher-order cumulants, devising a separation algorithm based on higher-order cumulants should provide a way of accounting for the non-Gaussianity of the sources. A wide range of algorithms based on higher-order statistics have been proposed (Hyvarinen et al., 2001; Belouchrani et al., 1997; Cardoso, 1999, and references therein). Historical papers (see Comon, 1994) proposed ICA algorithms that use approximations of the KL divergence based on truncated Edgeworth expansions. Interestingly, those approximations explicitly involve higher-order statistics; a small numerical illustration of such higher-order measures follows this list.
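The sketch below (ours, not taken from the chapter) shows the empirical excess kurtosis, a simple fourth-order cumulant-based measure of non-Gaussianity, for Gaussian, uniform (platykurtic), and Laplacian (leptokurtic) samples:

import numpy as np
from scipy.stats import kurtosis   # Fisher definition: excess kurtosis, 0 for a Gaussian

rng = np.random.default_rng(0)
n = 100_000
samples = {
    "gaussian": rng.normal(size=n),           # excess kurtosis ~ 0
    "uniform": rng.uniform(-1, 1, size=n),    # excess kurtosis ~ -1.2 (sub-Gaussian)
    "laplace": rng.laplace(size=n),           # excess kurtosis ~ 3 (super-Gaussian)
}
for name, s in samples.items():
    print(name, round(kurtosis(s), 2))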
Lee et al. (1998) showed that most ICA-based algorithms are similar in theory and in practice.
b. Limits of ICA. Despite its theoretical strength and elegance, ICA has several limitations:
• Probability density assumption: Even if implicitly, ICA algorithms require information on the distribution of the sources. As stated in Lee et al. (1998), whatever the contrast function to minimize (mutual information, ML, higher-order statistics), most ICA algorithms can be equivalently restated in a natural gradient form (Amari, 1999; Amari and Cardoso, 1996). In such a setting, the "demixing" matrix B is estimated iteratively: B ← B + μΔB, where the natural gradient of B is given by:
ΔB = [I − E{h(Ŝ) Ŝ^T}] B,  (8)
where the function h is applied elementwise and Ŝ = BX is the current estimate of S. Interestingly, the so-called score function h in Eq. (8) is closely related to the assumed pdf of the sources (see Amari and Cardoso, 1996; Amari and Cichocki, 2002). Assuming that all the sources are generated from the same probability density function f_S, the score function h is defined as follows:
h(u) = − d log f_S(u)/du = − f_S′(u)/f_S(u).  (9)
As expected, the way the "demixing" matrix (and thus the sources) is estimated depends closely on the way the sources are modeled from a statistical point of view. For instance, separating platykurtic (distribution with negative kurtosis) or leptokurtic (distribution with positive kurtosis) sources requires completely different score functions. Even if ICA is shown in Amari and Cardoso (1996) to be quite robust to "mismodeling," the choice of the score function is crucial with respect to the convergence (and rate of convergence) of ICA algorithms. Some ICA-based techniques (see Koldovsky and Oja, 2006) emphasized adapting the popular FastICA algorithm to adjust the score function to the distribution of the sources. They particularly emphasize modeling sources whose distribution belongs to specific parametric classes of distributions such as the generalized Gaussian: f_S(S) ∝ ∏_ij exp(−μ|s_ij|^θ). A minimal natural-gradient sketch is given after this list.
• Noisy ICA: Only a few works have investigated the problem of noisy ICA (see Davies, 2004; Koldovsky and Tichavsky, 2006). As pointed out by Davies (2004), noise clearly degrades the ICA model: it is not fully identifiable. In the case of additive Gaussian noise as stated in Eq. (2), using higher-order statistics yields an efficient estimate of the mixing matrix A = B^{−1} (higher-order statistics are blind to additive Gaussian noise; this property does not hold for non-Gaussian noise). Further, in the noisy ICA setting, applying the demixing matrix to the data does not yield an efficient estimate of the sources. Furthermore, most ICA algorithms assume the mixing matrix A to be square. When there are more observations than sources (m > n), a dimension-reduction step is applied as preprocessing. When noise perturbs the data, this subspace projection step can dramatically deteriorate the performance of the separation stage.
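The following toy sketch (our own minimal example, with a fixed tanh score function, which is appropriate only for leptokurtic sources) illustrates the natural-gradient update B ← B + μΔB on a synthetic two-source mixture:

import numpy as np

rng = np.random.default_rng(1)
T = 20_000
S = rng.laplace(size=(2, T))                  # two leptokurtic (super-Gaussian) sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # square, invertible mixing matrix
X = A @ S                                     # observed mixtures

B = np.eye(2)                                 # demixing matrix, initialized at the identity
mu = 0.05                                     # step size
for _ in range(1000):
    Y = B @ X                                 # current source estimates
    h = np.tanh(Y)                            # score function suited to heavy-tailed sources
    grad = (np.eye(2) - (h @ Y.T) / T) @ B    # natural-gradient direction, in the spirit of Eq. (8)
    B = B + mu * grad

print(B @ A)    # roughly diagonal (up to scaling and permutation) if separation has succeeded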
The next section introduces a new way of modeling the data to avoid most of the aforementioned limitations of ICA.
1 Sparsity in Blind Source Separation
In the above paragraph, we pointed out that BSS is overwhelmingly a question of contrast and diversity. Indeed, devising a source separation technique consists of finding an effective way of disentangling the sources. From this viewpoint, statistical independence is a kind of "measure" of diversity between signals. Within this paradigm, one may wonder whether independence is a natural way of differentiating between signals.
As a statistical property, independence is not meaningful in a non-asymptotic setting. In practice, one must deal with finite-length signals, sometimes with only a few samples. Furthermore, most real-world data are modeled as if they were stationary stochastic processes. Let us consider the images in Figure 1.
Natural pictures are clearly nonstationary. As these pictures are slightly correlated, independence fails to differentiate between them. Fortunately, the human eye (more precisely, the different levels of the human visual cortex) is able to distinguish between those two images. What, then, makes the eye so effective at discerning visual "signals"?
The answer may come from neuroscience. Indeed, for decades, many researchers in this field (Barlow, 1961; Field, 1999; Hubel and Wiesel, 1981; Olshausen and Field, 2006; Simoncelli and Olshausen, 2001, and references therein) have endeavored to provide some exciting answers: the mammalian visual cortex seems to have learned, via natural selection, an effective way of coding the information in natural scenes. Indeed, the first level of the mammalian visual cortex (termed V1) seems to verify several interesting properties: (1) it tends to "decorrelate" the responses of visual receptive fields (following Simoncelli and Olshausen, 2001, efficient coding should not duplicate information in more than one neuron), and (2) owing to a kind of "economy/compression principle" that saves neural activity, a given stimulus yields a sparse activation of neurons (this property can be considered a way of compressing information).
Furthermore, the primary visual cortex is sensitive to particular stimuli (visual features) that, surprisingly, look like oriented Gabor-like wavelets (see Field, 1999). This supports the crucial role played by contours in natural scenes. Moreover, each stimulus tends to be coded by only a few neurons. Such a way of coding information is often referred to as sparse coding. These few elements of neuroscience motivate the use of sparsity as an effective way of compressing a signal's information, thus extracting its very essence.
Inspired by the behavior of our visual cortex, seeking a sparse code may provide an effective way of differentiating between "different" signals. Here, "different" signals are signals with different sparse representations.
a. A Pioneering Work in Sparse BSS. The seminal paper of Zibulevsky and Pearlmutter (2001) introduced sparsity as an alternative to standard contrast functions in ICA. In this work, the authors proposed to estimate the mixing matrix A and the sources S in a fully Bayesian framework. Each source {s_i}_{i = 1,…,n} is assumed to be sparsely represented in the basis Φ:
s_i = Σ_k α_i[k] φ_k.  (10)
As the sources are assumed to be sparse, the distribution of their coefficients in Φ is a "sparse" (i.e., leptokurtic) prior distribution:
P(α_i[k]) ∝ exp{−μ g_γ(α_i[k])},  (11)
where g_γ(α_i[k]) = |α_i[k]|^γ with γ ≤ 1. Zibulevsky proposed to estimate A and S via a maximum a posteriori (MAP) estimator. The optimization task is then run using a Newton-like algorithm: the relative Newton algorithm (RNA; see Zibulevsky, 2003 for more details). This new sparsity-based method paved the way for the use of sparsity in BSS. Note that several other works emphasized the use of sparsity in a parametric Bayesian approach (Hyvarinen et al., 2001 and references therein). Recently, sparsity has emerged as an effective tool for solving underdetermined source separation issues (Bronstein et al., 2005; Georgiev et al., 2005; Li et al., 2006; Vincent, 2007 and references therein). This chapter concentrates on overdetermined BSS (m ≥ n). Inspired by the work of Zibulevsky, we present a novel sparsity-based source separation framework providing new insights into BSS.
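As a toy illustration of the sparse-coding idea behind such approaches (a sketch of ours using plain iterative soft-thresholding with an ℓ1 penalty, i.e., γ = 1, rather than the relative Newton algorithm), one can recover sparse coefficients from a noisy observation in a known dictionary:

import numpy as np

rng = np.random.default_rng(2)
n_samples, n_atoms, k = 60, 120, 5
D = rng.normal(size=(n_samples, n_atoms))
D /= np.linalg.norm(D, axis=0)              # dictionary with unit-norm columns (stand-in for the basis)
alpha_true = np.zeros(n_atoms)
alpha_true[rng.choice(n_atoms, k, replace=False)] = rng.normal(size=k)   # k-sparse coefficients
x = D @ alpha_true + 0.01 * rng.normal(size=n_samples)                   # noisy observation

# ISTA for min_alpha 0.5 * ||x - D alpha||^2 + lam * ||alpha||_1
lam = 0.05
L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the smooth part's gradient
alpha = np.zeros(n_atoms)
for _ in range(500):
    z = alpha - D.T @ (D @ alpha - x) / L   # gradient step
    alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-thresholding step

print(np.flatnonzero(np.abs(alpha) > 1e-3))  # should contain (most of) the true support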
URL:
https://www.sciencedirect.com/science/article/pii/S1076567008006058
Higher-order approximations for interval estimation in binomial settings
Ana-Maria Staicu, in Journal of Statistical Planning and Inference, 2009
Obtaining the Edgeworth expansion for the coverage probability of the one-sided confidence intervals requires a two-step approach. First, we write the stochastic expansion of in powers of the score statistic, . Then we expand the coverage probability of the corresponding upper limit confidence intervals, by using the expansion for the cumulative distribution function of ; see the Appendix for more details. Alternatively, one could use the expansion of with respect to found in Section 2.1, followed by the corresponding Edgeworth expansion of the distribution of . Building on the theorems presented in Brown et al. (2002) and Cai (2005), we state the following proposition.
Proposition 2.1
The coverage probability of the upper limit confidence interval satisfies
where , is defined by (A.3) in the Appendix, and . Here denotes the fractional part of and we assume that is not an integer.
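To see the oscillation that the proposition quantifies, the coverage probability of a one-sided interval can be computed exactly by summing binomial probabilities. The short sketch below (our illustration, using the simple Wald upper limit rather than the article's interval) does this for one choice of n and p:

import numpy as np
from scipy.stats import binom, norm

n, alpha, p = 30, 0.05, 0.37
z = norm.ppf(1 - alpha)
x = np.arange(n + 1)
phat = x / n
upper = phat + z * np.sqrt(phat * (1 - phat) / n)   # upper-limit Wald bound for each outcome x
covered = upper >= p                                # outcomes whose interval contains p
coverage = binom.pmf(x, n, p)[covered].sum()        # exact coverage probability
print(coverage)   # compare with the nominal 1 - alpha = 0.95; it oscillates as n or p varies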
URL:
https://www.sciencedirect.com/science/article/pii/S037837580900086X
Source: https://www.sciencedirect.com/topics/mathematics/edgeworth-expansion