Monthly Archives: August 2012

Robins and Wasserman Respond to a Nobel Prize Winner

James Robins and Larry Wasserman

Note: This blog post is written by two people and it is cross-posted at Normal Deviate and Three Toed Sloth.

Chris Sims is a Nobel prize winning economist who is well known for his work on macroeconomics, Bayesian statistics, vector autoregressions among other things. One of us (LW) had the good fortune to meet Chris at a conference and can attest that he is also a very nice guy.

Chris has a paper called On An Example of Larry Wasserman. This post is a response to Chris’ paper.

The example in question is actually due to Robins and Ritov (1997). A simplified version appeared in Wasserman (2004) and Robins and Wasserman (2000). The example is related to ideas from the foundations of survey sampling (Basu 1969, Godambe and Thompson 1976) and also to ancillarity paradoxes (Brown 1990, Foster and George 1996).

1. The Model

Here is (a version of) the example. Consider iid random variables

\displaystyle  (X_1,Y_1,R_1),\ldots, (X_n,Y_n,R_n).

The random variables take values as follows:

\displaystyle  X_i \in [0,1]^d,\ \ \ Y_i \in\{0,1\},\ \ \ R_i \in\{0,1\}.

Think of {d} as being very, very large. For example, {d=100,000} and {n=1,000}.

The idea is this: we observe {X_i}. Then we flip a biased coin {R_i}. If {R_i=1} then you get to see {Y_i}. If {R_i=0} then you don’t get to see {Y_i}. The goal is to estimate

\displaystyle  \psi = P(Y_i=1).

Here are the details. The distribution takes the form

\displaystyle  p(x,y,r) = p_X(x) p_{Y|X}(y|x)p_{R|X}(r|x).

Note that {Y} and {R} are independent, given {X}. For simplicity, we will take {p(x)} to be uniform on {[0,1]^d}. Next, let

\displaystyle  \theta(x) \equiv p_{Y|X}(1|x) = P(Y=1|X=x)

where {\theta(x)} is a function. That is, {\theta:[0,1]^d \rightarrow [0,1]}. Of course,

\displaystyle  p_{Y|X}(0|x)= P(Y=0|X=x) = 1-\theta(x).

Similarly, let

\displaystyle  \pi(x)\equiv p_{R|X}(1|x) = P(R=1|X=x)

where {\pi(x)} is a function. That is, {\pi:[0,1]^d \rightarrow [0,1]}. Of course,

\displaystyle  p_{R|X}(0|x)= P(R=0|X=x) = 1-\pi(x).

The function {\pi} is known. We construct it. Remember that {\pi(x) = P(R=1|X=x)} is the probability that we get to observe {Y} given that {X=x}. Think of {Y} as something that is expensive to measure. We don’t always want to measure it. So we make a random decision about whether to measure it. And we let the probability of measuring {Y} be a function {\pi(x)} of {x}. And we get to construct this function.

Let {\delta>0} be a known, small, positive number. We will assume that

\displaystyle  \pi(x)\geq \delta

for all {x}.

The only thing in the model we don’t know is the function {\theta(x)}. Again, we will assume that

\displaystyle  \delta \leq \theta(x) \leq 1-\delta.

Let {\Theta} denote all measurable functions on {[0,1]^d} that satisfy the above conditions. The parameter space is the set of functions {\Theta}.

Let {{\cal P}} be the set of joint distributions of the form

\displaystyle  p(x) \, \pi(x)^r (1-\pi(x))^{1-r}\, \theta(x)^y (1-\theta(x))^{1-y}

where {p(x)=1}, and {\pi(\cdot)} and {\theta(\cdot)} satisfy the conditions above. So far, we are considering the sub-model {{\cal P}_\pi} in which {\pi} is known.

The parameter of interest is {\psi = P(Y=1)}. We can write this as

\displaystyle  \psi = P(Y=1)= \int_{[0,1]^d} P(Y=1|X=x) p(x) dx = \int_{[0,1]^d} \theta(x) dx.

Hence, {\psi} is a function of {\theta}. If we know {\theta(\cdot )} then we can compute {\psi}.
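To make the setup concrete, here is a small simulation of this data-generating process. The particular {\theta(x)} and {\pi(x)} below are our own illustrative choices (the example only requires the stated bounds), with {d} kept modest so the simulation runs quickly:

```python
import numpy as np

# Simulate the Robins-Ritov data-generating process.
# theta(x) and pi(x) are illustrative choices satisfying
# delta <= theta(x) <= 1 - delta and pi(x) >= delta.
rng = np.random.default_rng(0)
n, d, delta = 100_000, 10, 0.1

def theta(x):
    return delta + (1 - 2 * delta) * x[:, 0]        # P(Y=1 | X=x)

def pi(x):
    return delta + (1 - delta) * x[:, 1]            # P(R=1 | X=x), known to us

X = rng.uniform(size=(n, d))                        # X_i ~ Uniform[0,1]^d
Y = rng.binomial(1, theta(X))                       # Y_i | X_i ~ Bernoulli(theta(X_i))
R = rng.binomial(1, pi(X))                          # Y_i is observed iff R_i = 1

# psi = integral of theta(x) dx = delta + (1 - 2*delta)/2 = 0.5 here,
# and the sample mean of Y should be close to it.
print(Y.mean())
```

Of course, in the estimation problem only the triples with {R_i=1} reveal {Y_i}.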

2. Frequentist Analysis

The usual frequentist estimator is the Horvitz-Thompson estimator

\displaystyle  \hat\psi = \frac{1}{n}\sum_{i=1}^n \frac{ Y_i R_i}{\pi(X_i)}.

It is easy to verify that {\hat\psi} is unbiased and consistent. Furthermore, {\hat\psi - \psi = O_P(n^{-\frac{1}{2}})}. In fact, let us define

\displaystyle  I_n = [\hat\psi - \epsilon_n,\ \hat\psi + \epsilon_n]

where

\displaystyle  \epsilon_n = \sqrt{\frac{1}{2n\delta^2}\log\left(\frac{2}{\alpha}\right)}.

It follows from Hoeffding’s inequality that

\displaystyle  \inf_{P\in{\cal P}_\pi} P(\psi \in I_n)\geq 1-\alpha.

Thus we have a finite-sample {1-\alpha} confidence interval with length {O(1/\sqrt{n})}.
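The coverage claim is easy to check by simulation. Below, {\theta} and {\pi} are again our own illustrative choices satisfying the stated bounds; the interval uses the Hoeffding half-width from the display above:

```python
import numpy as np

# Coverage check for the Horvitz-Thompson estimator with the Hoeffding
# interval from the text. theta(x) and pi(x) are illustrative choices.
rng = np.random.default_rng(1)
n, d, delta, alpha = 1000, 100, 0.1, 0.05

def theta(x):
    return delta + (1 - 2 * delta) * np.sin(np.pi * x[:, 0]) ** 2

def pi(x):
    return delta + (1 - delta) * x[:, 1] ** 2       # known, >= delta

psi_true = delta + (1 - 2 * delta) * 0.5            # integral of theta over [0,1]^d
eps = np.sqrt(np.log(2 / alpha) / (2 * n * delta**2))

trials, covered = 200, 0
for _ in range(trials):
    X = rng.uniform(size=(n, d))
    Y = rng.binomial(1, theta(X))
    R = rng.binomial(1, pi(X))
    psi_hat = np.mean(R * Y / pi(X))                # Horvitz-Thompson estimator
    covered += abs(psi_hat - psi_true) <= eps

print(covered / trials)                             # should be at least 1 - alpha
```

The interval is conservative: Hoeffding only uses the bound {0 \le R_iY_i/\pi(X_i) \le 1/\delta}, so empirical coverage is typically well above {1-\alpha}.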

Remark: We are mentioning the Horvitz-Thompson estimator because it is simple. In practice, it has three deficiencies:

  1. It may exceed 1.
  2. It ignores data on the multivariate vector {X} except for the one dimensional summary {\pi(X)}.
  3. It can be very inefficient.

These problems are remedied by using an improved version of the Horvitz-Thompson estimator. One choice is the so-called locally semiparametric efficient regression estimator (Scharfstein et al., 1999):

\displaystyle  \hat\psi = \int {\rm expit}\left(\sum_{m=1}^k \hat\eta_m \phi_m(x) + \frac{\hat\omega}{\pi(x)}\right)dx

where {{\rm expit}(a) = e^a/(1+e^a)}, {\phi_{m}\left( x\right)} are basis functions, and {\hat\eta_1,\ldots,\hat\eta_k,\hat\omega} are the mle’s (among subjects with {R_i=1}) in the model

\displaystyle  \log\left( \frac{P(Y=1|X=x)}{P(Y=0|X=x)}\right) = \sum_{m=1}^k \eta_m \phi_m(x) + \frac{\omega}{\pi(x)}.

Here {k} can increase slowly with {n}. Recently, even more efficient estimators have been derived; Rotnitzky et al. (2012) provide a review. In the rest of this post, when we refer to the Horvitz-Thompson estimator, the reader should think “improved Horvitz-Thompson estimator.” End Remark.

3. Bayesian Analysis

To do a Bayesian analysis, we put some prior {W} on {\Theta}. Next we compute the likelihood function. The likelihood for one observation takes the form {p(x) p(r|x) p(y|x)^r}. The reason for having {r} in the exponent is that, if {r=0}, then {y} is not observed so the {p(y|x)} gets left out. The likelihood for {n} observations is

\displaystyle  \prod_{i=1}^n p(X_i) p(R_i|X_i) p(Y_i|X_i)^{R_i} = \prod_i \pi(X_i)^{R_i} (1-\pi(X_i))^{1-R_i}\, \theta(X_i)^{Y_i R_i} (1-\theta(X_i))^{(1-Y_i)R_i}.

where we used the fact that {p(x)=1}. But remember, {\pi(x)} is known. In other words, {\pi(X_i)^{R_i} (1-\pi(X_i))^{1-R_i}} is known. So, the likelihood is

\displaystyle  {\cal L} (\theta) \propto \prod_i \theta(X_i)^{Y_i R_i} (1-\theta(X_i))^{(1-Y_i)R_i}.

Combining this likelihood with the prior {W} creates a posterior distribution on {\Theta} which we will denote by {W_n}. Since the parameter of interest {\psi} is a function of {\theta}, the posterior {W_n} for {\theta} defines a posterior distribution for {\psi}.

Now comes the interesting part. The likelihood has essentially no information in it.

To see that the likelihood has no information, consider a simpler case where {\theta(x)} is a function on {[0,1]}. Now discretize the interval into many small bins. Let {B} be the number of bins. We can then replace the function {\theta} with a high-dimensional vector {\theta = (\theta_1,\ldots, \theta_B)}. With {n < B}, most bins are empty. The data contain no information for most of the {\theta_j}‘s. (You might wonder about the effect of putting a smoothness assumption on {\theta(\cdot )}. We’ll discuss this in Section 4.)
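The empty-bins point is easy to verify numerically. With {n=1{,}000} draws spread over {B=100{,}000} bins, roughly 99% of the bins contain no data at all, so the likelihood says nothing about the corresponding coordinates of {\theta}:

```python
import numpy as np

# The binning argument: with n observations and B >> n bins, the vast
# majority of bins are empty, so the likelihood carries no information
# about the corresponding coordinates of the discretized theta.
rng = np.random.default_rng(2)
n, B = 1_000, 100_000

X = rng.uniform(size=n)
counts = np.bincount((X * B).astype(int), minlength=B)
print((counts == 0).mean())                 # fraction of empty bins, about 0.99
```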

We should point out that if {\pi(x) = 1/2} for all {x}, then Ericson (1969) showed that a certain exchangeable prior gives a posterior that, like the Horvitz-Thompson estimator, converges at rate {O(n^{-1/2})}. However, we are interested in the case where {\pi(x)} is a complex function of {x}; then the posterior will fail to concentrate around the true value of {\psi}. On the other hand, a flexible nonparametric prior will have a posterior essentially equal to the prior and, thus, not concentrate around {\psi}, whenever the prior {W} does not depend on the known function {\pi(\cdot)}. Indeed, we have the following theorem from Robins and Ritov (1997):

Theorem. (Robins and Ritov 1997). Any estimator that is not a function of {\pi(\cdot)} cannot be uniformly consistent.

This means that, at no finite sample size, will an estimator {\hat\psi} that is not a function of {\pi} be close to {\psi} for all distributions in {{\cal P}}. In fact, the theorem holds for a neighborhood around every pair {(\pi,\theta)}. Uniformity is important because it links asymptotic behavior to finite sample behavior. But when {\pi} is known and is used in the estimator (as in the Horvitz-Thompson estimator and its improved versions) we can have uniform consistency.

Note that a Bayesian will ignore {\pi} since the {\pi(X_i)'s} are just constants in the likelihood. There is an exception: the Bayesian can make the posterior be a function of {\pi} by choosing a prior {W} that makes {\theta(\cdot)} depend on {\pi(\cdot)}. But this seems very forced. Indeed, Robins and Ritov showed that, under certain conditions, any true subjective Bayesian prior {W} must be independent of {\pi(\cdot)}. Specifically, they showed that once a subjective Bayesian queries the randomizer (who selected {\pi}) about the randomizer’s reasoned opinions concerning {\theta(\cdot)} (but not {\pi(\cdot)}), the Bayesian will have independent priors. We note that a Bayesian can have independent priors even when he believes with probability 1 that {\pi(\cdot)} and {\theta(\cdot)} are positively correlated as functions of {x}, i.e., {\int \theta(x)\pi(x)\,dx > \int \theta(x)\,dx \int \pi(x)\,dx}. Having independent priors only means that learning {\pi(\cdot)} will not change one’s beliefs about {\theta(\cdot)}. So far, so good. As far as we know, Chris agrees with everything up to this point.

4. Some Bayesian Responses

Chris goes on to raise alternative Bayesian approaches.

The first is to define

\displaystyle  Z_i = \frac{R_i Y_i}{\pi(X_i)}.

Note that {Z_i \in \{0\} \cup [1,\infty)}. Now we ignore (throw away) the original data. Chris shows that we can then construct a model for {Z_i} which results in a posterior for {\psi} that mimics the Horvitz-Thompson estimator. We’ll comment on this below, but note two strange things. First, it is odd for a Bayesian to throw away data. Second, the new data are a function of {\pi(X_i)} which forces the posterior to be a function of {\pi}. But as we noted earlier, when {\theta} and {\pi} are a priori independent, the {\pi(X_i)'s} do not appear in the posterior since they are known constants that drop out of the likelihood.

A second approach (not mentioned explicitly by Chris) which is related to the above idea, is to construct a prior {W} that depends on the known function {\pi}. It can be shown that if the prior is chosen just right then again the posterior for {\psi} mimics the (improved) Horvitz-Thompson estimator.

Lastly, Chris notes that the posterior contains no information because we have not enforced any smoothness on {\theta(x)}. Without smoothness, knowing {\theta(x)} does not tell you anything about {\theta(x+\epsilon)} (assuming the prior {W} does not depend on {\pi}).

This is true and better inferences would obtain if we used a prior that enforced smoothness. But this argument falls apart when {d} is large. (In fairness to Chris, he was referring to the version from Wasserman (2004) which did not invoke high dimensions.) When {d} is large, forcing {\theta(x)} to be smooth does not help unless you make it very, very, very smooth. The larger {d} is, the more smoothness you need to get borrowing of information across different values of {\theta(x)}. But this introduces a huge bias which precludes uniform consistency.

5. Response to the Response

We have seen that response 3 (add smoothness conditions in the prior) doesn’t work. What about responses 1 and 2? We agree that these work, in the sense that the Bayes answer has good frequentist behavior by mimicking the (improved) Horvitz-Thompson estimator.

But this is a Pyrrhic victory. If we manipulate the data to get a posterior that mimics the frequentist answer, is this really a success for Bayesian inference? Is it really Bayesian inference at all? Similarly, if we choose a carefully constructed prior just to mimic a frequentist answer, is it really Bayesian inference?

We call Bayesian inference that is carefully manipulated to force an answer with good frequentist behavior “frequentist pursuit.” There is nothing wrong with it, but why bother?

If you want good frequentist properties just use the frequentist estimator. If you want to be a Bayesian, be a Bayesian but accept the fact that, in this example, your posterior will fail to concentrate around the true value.

6. Summary

In summary, we agree with Chris’ analysis. But his fix is just frequentist pursuit; it is Bayesian analysis with unnatural manipulations aimed only at forcing the Bayesian answer to be the frequentist answer. This seems to us to be an admission that Bayes fails in this example.

7. References

Basu, D. (1969). Role of the Sufficiency and Likelihood Principles in Sample Survey Theory. Sankhya, 31, 441-454.

Brown, L.D. (1990). An ancillarity paradox which appears in multiple linear regression. The Annals of Statistics, 18, 471-493.

Ericson, W.A. (1969). Subjective Bayesian models in sampling finite populations. Journal of the Royal Statistical Society. Series B, 195-233.

Foster, D.P. and George, E.I. (1996). A simple ancillarity paradox. Scandinavian journal of statistics, 233-242.

Godambe, V. P., and Thompson, M. E. (1976), Philosophy of Survey-Sampling Practice. In Foundations of Probability Theory, Statistical Inference and Statistical Theories of Science, eds. W.L.Harper and A.Hooker, Dordrecht: Reidel.

Robins, J.M. and Ritov, Y. (1997). Toward a Curse of Dimensionality Appropriate (CODA) Asymptotic Theory for Semi-parametric Models. Statistics in Medicine, 16, 285–319.

Robins, J. and Wasserman, L. (2000). Conditioning, likelihood, and coherence: a review of some foundational concepts. Journal of the American Statistical Association, 95, 1340-1346.

Rotnitzky, A., Lei, Q., Sued, M. and Robins, J.M. (2012). Improved double-robust estimation in missing data and causal inference models. Biometrika, 99, 439-456.

Scharfstein, D.O., Rotnitzky, A. and Robins, J.M. (1999). Adjusting for nonignorable drop-out using semiparametric nonresponse models. Journal of the American Statistical Association, 1096-1120.

Sims, Christopher. On An Example of Larry Wasserman.

Wasserman, L. (2004). All of Statistics: a Concise Course in Statistical Inference. Springer Verlag.


What Every Statistician Should Know About Computer Science (and Vice-Versa)

Statisticians are woefully ignorant about computer science (CS).

And computer scientists are woefully ignorant about statistics.

O.k. I am exaggerating. Nonetheless, it is worth asking: what important concepts should every statistician know from computer science?

I have asked several friends from CS for a list of the top three things from CS that statisticians should know. While there wasn’t complete agreement, here are the three that came up:

  1. Computational complexity classes. In particular, every statistician should understand what P and NP mean (and why you get $1,000,000 from the Clay Mathematics Institute if you prove that {P\neq NP}). Understanding the fact that searching through all submodels in a variable selection problem is NP-hard will convince you that solving a convex relaxation (a.k.a. the lasso) is a really good idea.
  2. Estimating computing time. In CS and machine learning, it is expected that one will estimate the number of operations needed to carry out an algorithm. You write a paper with an algorithm and then say something like: this will take {O(N d^3)} computing time. In statistics, we are pretty loose about this. Some do it, some don’t.

  3. Hashing. One colleague assured me that hashing is critical if statisticians really want to be part of the “big data” world. (Last week, Siva was kind enough to give me a quick tutorial on hashing.)
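For readers who haven’t seen it, here is a toy version of one hashing idea that comes up constantly in large-scale learning, the “hashing trick”: map arbitrary string features into a fixed number of buckets so memory stays bounded no matter how many distinct features appear. The function below is our own illustration, not any particular library’s API:

```python
import hashlib

# Toy "hashing trick": hash each (possibly never-before-seen) feature name
# into one of n_buckets slots. Collisions are tolerated by design; the
# payoff is a fixed-size feature vector for an unbounded feature space.
def hashed_features(tokens, n_buckets=32):
    vec = [0] * n_buckets
    for tok in tokens:
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % n_buckets] += 1
    return vec

v = hashed_features(["age", "income", "zip=15213", "zip=15213"])
print(sum(v))   # 4: every token lands in some bucket
```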

Is there anything I am missing? What would you put in the top three?

And how about the other way around? For a computer scientist with an interest in statistical learning, what are the three most important concepts that they should know from statistics?

Here are my nominations:

  1. Concentration of measure.
  2. Maximum likelihood.

  3. Minimax theory.

I think this list is not what most statisticians would come up with.

What would you put on the list?

A Workshop on Topology and Machine Learning

Dear Readers:

You may recall my previous post on Topological Data Analysis. I am happy to report that, in the interest of helping to advance this area of research, we (= Sivaraman Balakrishnan, Alessandro Rinaldo, Don Sheehy, Aarti Singh and me) are organizing a workshop on the topic. (To be honest, Sivaraman is doing all the hard work.)

Here are the details:

Call for Papers: Algebraic Topology and Machine Learning
In conjunction with Neural Information Processing Systems (NIPS 2012)

December 7 or 8 (TBD), 2012, Lake Tahoe, Nevada, USA.


Topological methods and machine learning have long enjoyed fruitful interactions as evidenced by popular algorithms like ISOMAP, LLE and Laplacian Eigenmaps which have been borne out of studying point cloud data through the lens of topology/geometry. More recently several researchers have been attempting to understand the algebraic topological properties of data. Algebraic topology is a branch of mathematics which uses tools from abstract algebra to study and classify topological spaces. The machine learning community thus far has focused almost exclusively on clustering as the main tool for unsupervised data analysis. Clustering however only scratches the surface, and algebraic topological methods aim at extracting much richer topological information from data.

The goals of this workshop are:

  1. To draw the attention of machine learning researchers to a rich and emerging source of interesting and challenging problems.
  2. To identify problems of interest to both topologists and machine learning researchers and areas of potential collaboration.
  3. To discuss practical methods for implementing topological data analysis methods.
  4. To discuss applications of topological data analysis to scientific problems.

We also invite submissions in a variety of areas, at the intersection of algebraic topology and learning, that have witnessed recent activity. Areas of focus for submissions include but are not limited to:

  1. Statistical approaches to robust topological inference.
  2. Novel applications of topological data analysis to problems in machine learning.
  3. Scalable methods for topological data analysis.

Submission. Submissions should be an extended abstract not exceeding 4 pages (excluding references) in NIPS format. Style files and other instructions are available at:

Please email all submissions to:

The deadline for submissions is September 16, 2012. The organizers will review and select submissions, and will notify authors by October 7, 2012. Accepted submissions will be presented as a talk (20 minutes) or a poster.

Important Dates:

Sep 16 – Deadline for submissions
Oct 7 – Notification of acceptance
Dec 8 (Tentative) – Workshop

Registration: Participants should refer to the NIPS 2012 website for information on how to register for the workshop. Please direct any questions or comments to the organizers.


Sivaraman Balakrishnan, Carnegie Mellon University
Alessandro Rinaldo, Carnegie Mellon University
Don Sheehy, INRIA Saclay – Ile-de-France
Aarti Singh, Carnegie Mellon University
Larry Wasserman, Carnegie Mellon University

P-values Gone Wild and Multiscale Madness

I came across a very interesting paper by E. J. Masicampo and Daniel Lalande called A peculiar prevalence of p values just below .05. (I saw it referenced at Marginal Revolution.)

1. Peculiar Prevalence

I recommend reading the paper to get the full details. The quick summary is that they collected 3,627 p-values from journals. They found a statistical anomaly in the form of an excess just below 0.05. (They fit a parametric distribution to the p-values and demonstrated a significant departure right below 0.05).

There are some obvious explanations. First, selection bias. Studies with p-values above 0.05 are less likely to be published. Second, the tweaking effect. If you do a study and get a p-value just above .05 you might be tempted to tweak the analysis a bit until you get the p-value just below 0.05. I am not suggesting malfeasance. This could be done quite subconsciously.

You might say: well I would have predicted this would happen anyway. Perhaps. But it is quite interesting to see a carefully done study document and quantify the effect.

E.J. Masicampo was kind enough to share the data with me. Here is a histogram of the p-values.

The jump just below 0.05 is quite noticeable.

2. Multiscale Madness

Now I want to raise a meta-statistical issue. Fitting a parametric model and looking for a big residual is certainly a reasonable approach. But can we do this nonparametrically?

The density of p-values is a mixture of the form

\displaystyle  h(p) = (1-\pi) u(p) + \pi g(p)

where {\pi} is the fraction of non-null effects, {u(p)=1} is the uniform density (the density of {p} under the null) and {g(p)} is the density of p-values for the non-null effects. The unknowns are {\pi} and {g(p)}. It is not unrealistic to assume that {g(p)} is decreasing. This implies that {h} is some decreasing density.

So, statistically, we might want to look for suspicious non-decreasing parts of the density. A good tool for doing this is the distribution free method developed in Dumbgen and Walther (2008). Briefly, the idea is this.

Consider data {X_1,\ldots, X_n} from a density {h}. Let

\displaystyle  X_{(1)} \leq \cdots \leq X_{(n)}

be the order statistics. Now construct all the local order statistics

\displaystyle  \frac{X_{(i)}-X_{(j)}}{X_{(k)}-X_{(j)}}

for {j < i < k}. These local order statistics are combined into a multiscale test statistic {T_{jk}}. Then we find the set of intervals

\displaystyle  {\cal D}^+ = \{ (X_{(j)},X_{(k)}):\ T_{jk} > c_{jk}(\alpha)\}

which are the intervals where the density apparently increases. It is possible to explicitly calculate the critical values {c_{jk}(\alpha)}. One can then claim that all the intervals in {{\cal D}^+} must have an increase in the density {h}. The probability that this claim is wrong is at most {\alpha}. This procedure is exact: it is finite sample and distribution free. (Of course, one can construct intervals of decrease as well but for our problem, {{\cal D}^+} is the set of interest.)
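Here is an uncalibrated sketch of the local order-statistic idea. This is not the exact Dumbgen-Walther procedure (their critical values {c_{jk}(\alpha)} require the multiscale calibration in the paper); it only shows the kind of standardized local statistic involved:

```python
import numpy as np

# Uncalibrated sketch of the local order-statistic idea. If the density is
# flat on (X_(j), X_(k)), the rescaled inner order statistics behave like
# Uniform(0,1) order statistics, so T below has mean 0 and variance about 1;
# large positive values suggest the density increases on that interval.
def local_stat(x_sorted, j, k):
    inner = (x_sorted[j + 1:k] - x_sorted[j]) / (x_sorted[k] - x_sorted[j])
    m = len(inner)
    return np.sum(2 * inner - 1) / np.sqrt(m / 3)   # Var(2U - 1) = 1/3

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(size=500))       # flat density on [0,1]
print(local_stat(x, 0, 499))             # roughly N(0,1): no increase flagged
```

A real analysis computes {T_{jk}} for all pairs and compares each to its calibrated {c_{jk}(\alpha)}.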

The procedure is implemented in the R package modehunt, written by Kaspar Rufibach and Guenther Walther. When applied to the p-values (I am ignoring a small technical issue, namely, that the data are rounded) we get the following:

The horizontal lines show the intervals with apparent increases in the density. The result confirms the findings of Masicampo and Lalande. There seem to be suspicious increases just below 0.05.

To quote Arte Johnson,
Verrry Interesting!

3. References

Dumbgen, L. and Walther, G. (2008). Multiscale Inference about a density. The Annals of Statistics, 36, 1758–1785.

Masicampo, E.J. and Lalande, D. (2012). A peculiar prevalence of p values just below .05. The Quarterly Journal of Experimental Psychology.

—Larry Wasserman

What To Teach?

The new term is almost here. This year I am (once again) teaching Intermediate Statistical Theory which is a first year graduate course in mathematical statistics.

This course used to be populated mainly by students in the first year of the Statistics Ph.D. program (about 5-10 students). Last year I had over 100 students! Most were from Computer Science and related fields. The world has changed. Statistics is sexy.

In the olden days I followed Casella and Berger. I covered the traditional topics like: basic probability, convergence, sufficiency, estimation, testing, confidence intervals, large sample theory.

In response to changes in our field, I have been gradually changing the topics. I threw out: unbiased estimation, most of sufficiency, ancillarity, completeness, the Rao-Blackwell theorem, most powerful tests, etc. I added: Hoeffding’s inequality, VC theory, minimax theory, nonparametric smoothing, the bootstrap, prediction and classification, model selection and causation.

The field of statistics is changing quickly. If we don’t change our courses accordingly, we run the risk of becoming irrelevant. On the other hand, we don’t want to abandon teaching core statistical principles and just teach the latest fads.

I am wondering what other people do. Have you changed your courses? Did I throw out too much? Are there other things I should include?

Confidence Intervals for Misbehaved Functionals

Larry Wasserman

Suppose you want to estimate some quantity {\theta} which is a function of some unknown distribution {P}. In other words, {\theta = T(P)} where {T} maps distributions to real numbers. For example, {T(P)} could denote the median of {P}.

Ideally, a {1-\alpha} confidence interval {C_n} based on an iid sample {Y_1,\ldots, Y_n} should satisfy

\displaystyle  P^n(\theta\in C_n) \geq 1-\alpha

for every distribution {P}. The hard part is finding a non-trivial confidence interval that actually satisfies this condition for every {P}. You can always set {C_n=(-\infty,\infty)} but that’s an example of a trivial confidence interval.

In some cases (such as the median) it is possible to find a non-trivial confidence interval. In some cases (such as the mean, when the sample space is the whole real line) it is impossible. Today I want to discuss a paper by David Donoho (Donoho 1988) that discusses some in-between cases. In these problems, there exist non-trivial, one-sided confidence intervals.

1. Difficult Functionals

Let {{\cal P}} denote all distributions on the real line. A functional is a map {T:{\cal P}\rightarrow \mathbb{R}}. Consider the following functional: {T(P)} is the number of modes of the density {p} of {P}. (If {P} does not have a density we can define {T(P) = \lim_{h\rightarrow 0} T(P\star \Phi_h)} where {P\star \Phi_h} denotes {P} convolved with a Normal with mean 0 and standard deviation {h}.)

Now look at these two plots:

The density {p} on the left has 2 modes so {T(P)=2}. The density {q} on the right has 1,000 modes so {T(Q)=1,000}. Wait! You don’t see 1,000 modes on the second density? The reason is that they are very, very tiny. It’s possible to increase the number of modes drastically without changing the distribution very much. That is, we can find {Q} arbitrarily close to {P} but such that {T(Q) > T(P)}. Functionals with this property are very difficult to estimate. In fact, as Donoho proves, no non-trivial two-sided confidence interval exists for such functionals.
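The perturbation argument is easy to see numerically: add tiny high-frequency bumps to a flat density and the mode count explodes while the total variation distance stays negligible. The particular wiggle below is our own illustration:

```python
import numpy as np

# Add 1,000 tiny wiggles to the uniform density on [0,1]: the mode count
# jumps to about 1,000 while the total variation distance stays near 3e-4.
grid = np.linspace(0, 1, 200_001)
p = np.ones_like(grid)                           # flat: no strict local maxima
q = 1 + 1e-3 * np.cos(2 * np.pi * 1000 * grid)   # 1,000 tiny bumps

def n_modes(f):
    # count strict interior local maxima on the grid
    return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

tv = 0.5 * np.abs(p - q).mean()                  # approx total variation on [0,1]
print(n_modes(p), n_modes(q), tv)
```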

More formally, suppose that {T} takes values in {{\cal T}\subset \mathbb{R}}. Then we’ll say that {T} is difficult if

\displaystyle  {\rm graph}(T)\ {\rm is\ dense\ in\ }{\rm epigraph}(T)

where

\displaystyle  {\rm graph}(T) = \{ (P,T(P)):\ P\in {\cal P}\}

and

\displaystyle  {\rm epigraph}(T) = \{ (P,t):\ P\in {\cal P}\ {\rm and}\ t \geq T(P)\}.

Donoho calls this the dense graph condition. Figure 2 of Donoho’s paper explains everything in a nice picture. Basically, if {T} is difficult, you can increase {T(P)} by changing {P} slightly but you can’t decrease it. (Think of the mode example.)

He then proves the following theorem:

Theorem: [Donoho, 1988] Let {T} be a difficult functional. Let {B= \sup \{t: t\in {\cal T}\}}. If

\displaystyle  \sup_P P(B\notin C_n)=1

then

\displaystyle  \inf_P P(T(P)\in C_n) =0.

In words, if {C_n} has a non-trivial upper bound for some {P}, then it has coverage probability 0.

Digression: Rob Tibshirani and I independently proved a similar result. (Tibshirani and Wasserman 1988). We called these bad functionals, sensitive parameters. At the time I was a graduate student at the University of Toronto and Rob was a brand new faculty member there. Just before our paper went to press, David’s paper came out. We managed to add some reference to him when we got the galley proofs. For some reason, the typesetter decided to take every mention of “Donoho” and change it to “Donohue” without consulting us. Thus, our paper has several references to a mysterious person named Donohue. End Digression.

Other examples of difficult functionals are {L_q} norms {T(P)=\left(\int |p^{(k)}|^q \right)^{1/q}}, the entropy {T(P) = \int p \log p} and the Fisher information {T(P) = \int (p')^2/p}.

2. One Sided Intervals

The good news is that we can still say something about the functional {T(P)}. Construct a {1-\alpha} confidence set {A_n} for the distribution function. Let {c_n} be the smallest value of {T(P)} as {P} varies over {A_n}. We then have that

\displaystyle  \inf_P P( T(P) \in [c_n,\infty)) \geq 1-\alpha.

That is, we get a non-trivial one-sided confidence interval. So we can’t upper bound the number of modes but we can say things like: the 95 percent confidence interval rules out 4 or fewer modes.

What makes this work is that you can’t decrease {T(P)} without changing {P} so much that it becomes statistically distinguishable from the original distribution.

Pretty, pretty cool.
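As a concrete sketch of the construction: the confidence set {A_n} can be taken to be a Dvoretzky-Kiefer-Wolfowitz band around the empirical CDF. Minimizing {T} over the band (not attempted here) would give the lower bound {c_n}:

```python
import numpy as np

# A_n as a Dvoretzky-Kiefer-Wolfowitz (DKW) band: all distributions whose
# CDF stays within eps of the empirical CDF. With probability >= 1 - alpha
# the true P lies in the band; minimizing T over the band yields c_n.
rng = np.random.default_rng(5)
n, alpha = 500, 0.05
y = np.sort(rng.normal(size=n))                 # illustrative sample

eps = np.sqrt(np.log(2 / alpha) / (2 * n))      # DKW half-width
F_hat = np.arange(1, n + 1) / n                 # empirical CDF at the order stats
lower = np.clip(F_hat - eps, 0, 1)
upper = np.clip(F_hat + eps, 0, 1)
print(round(eps, 4))                            # 0.0607 for n = 500
```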

3. Higher Dimensions

Things get rougher in high dimensions. Even in two dimensions there are problems. Think of a two-dimensional distribution {P} with two well separated modes. So {T(P)=2}. Now let {Q} be identical to {P} except that we add a very thin ridge connecting the two modes. This turns 2 modes into 1 mode. Then {Q\approx P} but we have decreased {T(P)}. So in this case, even one-sided inference is not possible.

It might be possible to modify {T} so that we can get non-trivial confidence intervals. For example, perhaps we can define {T} in such a way so that, in this last example, {Q} is still considered to have 2 modes. This would be a nice project for a graduate student.

4. References

Donoho, D.L. (1988). One-sided inference about functionals of a density. The Annals of Statistics, 16, 1390-1420.

Tibshirani, R. and Wasserman, L.A. (1988). Sensitive parameters. Canadian Journal of Statistics, 16, 185-192.

RIP RIP (Restricted Isometry Property, Rest In Peace)

Larry Wasserman

The restricted isometry property (RIP) is a critical property in the area of compressed sensing. (See Candes and Tao 2005; Candes, Romberg, and Tao 2006; Donoho 2006.) But the RIP assumption has leaked into regression. In this post, I’ll argue that RIP is a theoretical distraction for research in regression.

1. The Birth of RIP

The typical linear regression problem goes like this. We observe {(X_1,Y_1),\ldots, (X_n,Y_n)} where

\displaystyle  Y_i = \beta^T X_i + \epsilon_i

and {\epsilon_i} has mean 0. Let {d} be the dimension of the covariate {X_i=(X_{i1},\ldots, X_{id})}.

First suppose that {d<n}. Recall that the least squares estimator {\hat\beta} minimizes

\displaystyle  \sum_i (Y_i - \beta^T X_i)^2.

Equivalently, it minimizes

\displaystyle  ||\mathbb{Y} - \mathbb{X}\beta||^2

where {\mathbb{Y} =(Y_1,\ldots,Y_n)} and {\mathbb{X}} is the {n\times d} design matrix where each row consists of one {X_i}.

If {d < n} (and if some conditions hold), then {\mathbb{X}^T\mathbb{X}} is invertible, and the least squares estimator {\hat\beta = (\mathbb{X}^T\mathbb{X})^{-1}\mathbb{X}^T\mathbb{Y}} is a consistent estimator of {\beta}. It is also minimax. And if we appropriately threshold {\hat\beta}, it is sparsistent (a term invented by Pradeep Ravikumar), which means that

\displaystyle  P({\rm support}(\hat\beta)= {\rm support}(\beta)) \rightarrow 1

as {n\rightarrow \infty}, where {{\rm support}(\beta) = \{j:\ \beta_j \neq 0\}}.

When {d>n}, {\mathbb{X}^T\mathbb{X}} is not invertible. In this case, the now standard estimator is the lasso estimator, (Tibshirani 1996) which is the value {\hat\beta} that minimizes

\displaystyle  \sum_i (Y_i - \beta^T X_i)^2 + \lambda ||\beta||_1 = ||\mathbb{Y} - \mathbb{X}\beta||^2 + \lambda ||\beta||_1

where {||\beta||_1 = \sum_{j=1}^d |\beta_j|}.

Is {\hat\beta} consistent or sparsistent? Yes, if we assume the restricted isometry property. Let {s < d} and let {||\beta||_0} denote the number of nonzero coordinates of {\beta}. We say that {\mathbb{X}} satisfies the {\delta_s} RIP if

\displaystyle  (1-\delta_s) ||\beta||^2 \leq ||\mathbb{X}\beta ||^2 \leq (1+\delta_s) ||\beta||^2

for every vector {\beta} for which {||\beta||_0 \leq s}.
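For toy sizes, the RIP constant can be computed exactly by enumerating every support of size {s} (the helper below is my own illustrative device and is exponential in {d}); the example also shows how a single duplicated covariate makes RIP fail outright:

```python
import itertools
import numpy as np

def rip_constant(X, s):
    """Smallest delta with (1-delta)||b||^2 <= ||Xb||^2 <= (1+delta)||b||^2
    for all s-sparse b, found by checking every support of size s."""
    n, d = X.shape
    delta = 0.0
    for S in itertools.combinations(range(d), s):
        # Over a fixed support S, the extremes of ||Xb||^2 / ||b||^2 are the
        # smallest and largest eigenvalues of X_S^T X_S.
        eig = np.linalg.eigvalsh(X[:, S].T @ X[:, S])
        delta = max(delta, abs(eig[0] - 1.0), abs(eig[-1] - 1.0))
    return delta

rng = np.random.default_rng(0)
n, d, s = 100, 8, 2
X = rng.normal(size=(n, d)) / np.sqrt(n)   # i.i.d. N(0, 1/n): near-isometric
delta_good = rip_constant(X, s)
print(delta_good)                          # comfortably below 1

X_bad = X.copy()
X_bad[:, 1] = X_bad[:, 0]                  # two perfectly correlated covariates
delta_bad = rip_constant(X_bad, s)
print(delta_bad)                           # at least 1: RIP fails
```

The duplicated column kills RIP because {b = (1,-1,0,\ldots,0)} has {\mathbb{X}b = 0}, forcing {\delta_2 \geq 1}.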

There are various other conditions besides RIP. Van De Geer and Buhlmann (2009) discuss the relationships between these conditions. But they are all very similar and they all roughly say the same thing: the covariates are nearly independent. Zhao and Yu (2007) and Meinshausen and Yu (2009) show that such conditions are necessary to estimate {\beta}. This is not surprising. If two covariates are highly correlated, you will never be able to distinguish them.

The important point is, RIP is a very strong assumption.

2. Compressed Sensing Versus Regression

But is RIP a reasonable assumption? It depends on the context. In regression problems, no. In compressed sensing, yes.

In a regression problem, nature hands you random data {(X_1,Y_1),\ldots, (X_n,Y_n)}. For example, {X_i} is a vector of gene expression levels on subject {i} and {Y_i} is some disease outcome. As anyone who analyzes real data can attest, RIP will not come close to holding. Highly correlated covariates (collinearity) are the rule, not the exception. It makes no sense to assume something like RIP. For that matter, we should not assume that the linear model {Y_i = \beta^T X_i + \epsilon_i} is correct. That would be crazy. So {\beta} does not even have a clear meaning.

Compressed sensing is different. In these applications, the regression {Y_i = \beta^T X_i + \epsilon_i} refers to data coming from some device. We actually design and build the device in such a way that the linear assumption is true. Moreover, part of the device design is that we, not nature, get to make up the {X_i}‘s. In particular, we can choose the design matrix. Thus, we can force the RIP to hold. In these engineering applications, linearity and RIP are not just convenient mathematical assumptions. They are design features. They are real.
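A quick numpy illustration of the contrast (the "nature" design below, with covariates loading on one common factor, is an invented stand-in for something like co-regulated genes): a matrix we design from i.i.d. Gaussian entries has nearly orthogonal columns, while the factor design has coherence near its population correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50

def coherence(X):
    """Maximum absolute correlation between distinct columns."""
    Xc = (X - X.mean(axis=0)) / X.std(axis=0)
    C = np.abs(Xc.T @ Xc / n)
    np.fill_diagonal(C, 0.0)
    return C.max()

# Compressed sensing: WE choose the matrix, e.g. i.i.d. Gaussian entries.
X_designed = rng.normal(size=(n, d))

# Regression: nature hands us covariates sharing a common factor Z.
Z = rng.normal(size=(n, 1))
X_nature = 0.9 * Z + np.sqrt(1 - 0.9 ** 2) * rng.normal(size=(n, d))

print(coherence(X_designed))   # small: RIP-type conditions are plausible
print(coherence(X_nature))     # around 0.8: heavy collinearity, RIP hopeless
```

Coherence is a cruder summary than {\delta_s}, but it makes the point: one of these matrices was built to be near-isometric and the other was not.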

3. The Lasso Without RIP

We can, however, still make sense of the lasso thanks to a result by Juditsky and Nemirovski (2000) which was later extended by Greenshtein and Ritov (2004). To avoid technical complications, let us assume all the random variables are bounded. Thus, {|Y_i| \leq B} and {|X_{ij}| \leq B}. We do not assume that {\mathbb{E}(Y|X=x)} is linear. Nonetheless, we can still define the best, sparse linear predictor {\beta_*} as follows. For any {\beta}, define the predictive risk

\displaystyle  R(\beta) = \mathbb{E}(Y - \beta^T X)^2

where {(X,Y)} is a new pair.

Let {\beta_*} minimize {R(\beta)} subject to {||\beta||_1 \leq L}. The lasso estimator {\hat\beta} minimizes {\hat R(\beta)} subject to {||\beta||_1 \leq L} where

\displaystyle  \hat R(\beta) = \frac{1}{n}\sum_{i=1}^n (Y_i - X_i^T\beta)^2.

Then the lasso succeeds, in the sense that

\displaystyle  P(R(\hat\beta) > R(\beta_*) + \epsilon) \leq \exp\left( -\left[ \frac{n\epsilon^2}{2B^4(1+L)^4} - \log 2 - 2 \log d\right]\right).

That is, with high probability, the lasso predicts almost as well as the best sparse, linear predictor.

Digression: Note that this is an oracle inequality (or a regret bound). That is, we restrict the set of procedures (sparse linear predictors) but put no restriction on the distribution. A minimax bound here would be useless. The class of all distributions is too large. This is a common situation. You either restrict the class of procedures and seek a regret bound or restrict the class of distributions and get a minimax bound. End Digression.

We get all of this without assuming RIP. We don’t even assume the true regression function is linear. In fact, the result is distribution free. We are basically assuming nothing.
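To see the theorem in action, here is a numpy sketch of the constrained lasso ({||\beta||_1 \leq L}) fit by projected gradient descent; the data-generating process (a nonlinear truth with strongly correlated covariates) and all constants are my own illustrative choices:

```python
import numpy as np

def project_l1(v, L):
    """Euclidean projection of v onto the L1 ball {||b||_1 <= L}."""
    if np.abs(v).sum() <= L:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - L)[0][-1]
    theta = (css[rho] - L) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def lasso_constrained(X, Y, L, iters=2000):
    """Minimize (1/n)||Y - Xb||^2 subject to ||b||_1 <= L."""
    n, d = X.shape
    step = n / (2.0 * np.linalg.eigvalsh(X.T @ X)[-1])  # 1 / Lipschitz constant
    b = np.zeros(d)
    for _ in range(iters):
        b = project_l1(b - step * (2.0 / n) * X.T @ (X @ b - Y), L)
    return b

rng = np.random.default_rng(0)
n, d = 500, 20

def draw(m):
    # Correlated covariates (a shared factor) and a NONLINEAR truth:
    # no linearity, no RIP, nothing like independent covariates.
    Z = rng.normal(size=(m, 1))
    X = 0.8 * Z + 0.2 * rng.normal(size=(m, d))
    Y = np.sin(2 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.5, size=m)
    return X, Y

X, Y = draw(n)
b_hat = lasso_constrained(X, Y, L=5.0)

Xt, Yt = draw(10_000)                      # fresh data: predictive risk
risk_lasso = np.mean((Yt - Xt @ b_hat) ** 2)
risk_zero = np.mean(Yt ** 2)               # trivial benchmark predictor
print(risk_lasso, risk_zero)
```

Even though the truth is nonlinear and the covariates are collinear, the lasso lands near the best sparse linear predictor and beats the trivial predictor out of sample, exactly in the spirit of the oracle inequality.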

4. Conclusion

My concern is that there are many statistics and machine learning papers discussing regression (not compressed sensing) that make the RIP assumption. (For the record, I am guilty.) Similar assumptions pop up in other problems, such as estimating undirected graphs.

I feel that we (as a field) have gone off in a bad theoretical direction. In regression research we should let the RIP rest in peace.

5. References

E. J. Candes and T. Tao. (2005). Decoding by Linear Programming, IEEE Trans. Inf. Th., 51, 4203-4215.

Candes, E.,Romberg, J. and Tao, T. (2006). Stable signal recovery from incomplete and inaccurate measurements. Comm. Pure Appl. Math., 59, 1207-1223.

Donoho, D. L. (2006). Compressed Sensing. IEEE Transactions on Information Theory, 52, 1289-1306.

Greenshtein, E. and Ritov, Y.A. (2004). Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli, 10, 971-988.

Juditsky, A. and Nemirovski, A. (2000). Functional aggregation for nonparametric regression. Ann. Statist., 28, 681-712.

Meinshausen, N. and Yu, B. (2009). Lasso-type recovery of sparse representations for high-dimensional data. Ann. Statist., 37, 246-270.

Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Royal. Statist. Soc B., 58, 267-288.

Van De Geer, S.A. and Buhlmann, P. (2009). On the conditions used to prove oracle results for the Lasso. Electronic Journal of Statistics, 3, 1360–1392.

Zhao, P. and Yu, B. (2007). On model selection consistency of Lasso. Journal of Machine Learning Research, 7, 2541-2563.

Mixture Models: The Twilight Zone of Statistics

Larry Wasserman

Mixture models come up all the time and they are obviously very useful. Yet they are strange beasts.

1. The Gaussian Mixture

One of the simplest mixture models is a finite mixture of Gaussians:

\displaystyle  p(x;\psi) = \sum_{j=1}^k w_j\, \phi(x; \mu_j,\Sigma_j).

Here, {\phi(x; \mu_j,\Sigma_j)} denotes a Gaussian density with mean vector {\mu_j} and covariance matrix {\Sigma_j}. The weights {w_1,\ldots,w_k} are non-negative and sum to 1. The entire list of parameters is

\displaystyle  \psi = (\mu_1,\ldots, \mu_k,\Sigma_1,\ldots,\Sigma_k,w_1, \ldots, w_k).

One can also consider {k}, the number of components, to be another parameter.
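In one dimension, the mixture density and its latent-variable sampling scheme (draw a component label, then draw from that component) look like this; the particular weights, means, and standard deviations are arbitrary illustrative values:

```python
import numpy as np

def phi(x, m, s):
    """Gaussian density with mean m and standard deviation s."""
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

w  = np.array([0.3, 0.7])     # weights: non-negative, sum to 1
mu = np.array([-2.0, 1.0])
sd = np.array([1.0, 0.5])

def mixture_pdf(x):
    return sum(wj * phi(x, mj, sj) for wj, mj, sj in zip(w, mu, sd))

def sample(n, rng):
    z = rng.choice(len(w), size=n, p=w)   # latent component labels
    return rng.normal(mu[z], sd[z])

rng = np.random.default_rng(0)
x = sample(10_000, rng)
grid = np.linspace(-10.0, 10.0, 20_001)
mass = (mixture_pdf(grid) * (grid[1] - grid[0])).sum()
print(mass)        # ~ 1: a valid density
print(x.mean())    # ~ 0.3*(-2) + 0.7*1 = 0.1
```

The latent label {z} is exactly the "component membership" that clustering applications of mixtures try to recover.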

2. The Weird Things That Happen With Mixtures

Now let’s consider some of the weird things that can happen.

Infinite Likelihood. The likelihood function (for the Gaussian mixture) is infinite at some points in the parameter space. This is not necessarily deadly; the infinities are at the boundary and you can use the largest (finite) maximum in the interior as an estimator. But the infinities can cause numerical problems.
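The divergence is easy to exhibit numerically: center one component on a data point and let its standard deviation shrink (a minimal sketch with invented parameter values; a two-component 1-D mixture stands in for the general model):

```python
import numpy as np

def phi(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
x = rng.normal(size=50)   # any data set will do

def loglik(mu1, s1, data, mu2=0.0, s2=1.0, w=0.5):
    return np.log(w * phi(data, mu1, s1) + (1 - w) * phi(data, mu2, s2)).sum()

# Put one component's mean exactly on a data point and shrink its standard
# deviation: the density spike at that point makes the likelihood blow up.
for s1 in [1e-10, 1e-30, 1e-50]:
    print(s1, loglik(mu1=x[0], s1=s1, data=x))   # grows without bound
```

The second component keeps every other data point's density bounded away from zero, so the spike at {x_1} drives the log-likelihood to {+\infty}.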

Multimodality of the Likelihood. In fact, the likelihood has many modes. Finding the global (but not infinite) mode is a nightmare. The EM algorithm only finds local modes. In a sense, the MLE is not really a well-defined estimator, because we can’t actually find it. In machine learning, there have been a number of papers on estimators for mixture models that can be computed in polynomial time. For example, see Kalai, Moitra and Valiant (2012).

Multimodality of the Density. You might think that a mixture of {k} Gaussians would have {k} modes. But, in fact, it can have fewer than {k} or more than {k}. See Carreira-Perpinan and Williams (2003) and Edelsbrunner, Fasy and Rote (2012).
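Here is a small numpy check of the fewer-than-{k} case in one dimension; the grid-based mode counting is my own quick-and-dirty device, and (per the cited papers) the more-than-{k} phenomenon shows up in dimension two or higher rather than in simple 1-D examples like this one.

```python
import numpy as np

def phi(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def n_modes(f, grid):
    """Count strict local maxima of f on a fine grid (fine for smooth 1-D f)."""
    y = f(grid)
    return int(np.sum((y[1:-1] > y[:-2]) & (y[1:-1] > y[2:])))

grid = np.linspace(-8.0, 12.0, 8001)

# Two well-separated components: two modes, as intuition suggests.
two = lambda x: 0.5 * phi(x, 0.0, 1.0) + 0.5 * phi(x, 4.0, 1.0)
print(n_modes(two, grid))   # 2

# Means closer than about two standard deviations merge into a single bump:
# fewer modes than components.
one = lambda x: 0.5 * phi(x, 0.0, 1.0) + 0.5 * phi(x, 1.0, 1.0)
print(n_modes(one, grid))   # 1
```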

Nonidentifiability. A model {\{ p(x;\theta):\ \theta\in\Theta\}} is identifiable if

\displaystyle  \theta_1 \neq \theta_2\ \ \ {\rm implies}\ \ \ p(x;\theta_1) \neq p(x;\theta_2). \ \ \ \ \ (1)

Mixture models are nonidentifiable in two different ways. First, there is nonidentifiability due to permutation of labels. This is a nuisance but not a big deal. A bigger issue is local nonidentifiability. Suppose that

\displaystyle  p(x; \eta,\mu_1,\mu_2) = (1-\eta)\phi(x;\mu_1,1) + \eta\phi(x;\mu_2,1).

When {\mu_1=\mu_2=\mu}, we have that {p(x; \eta,\mu_1,\mu_2) = \phi(x;\mu,1)}. The parameter {\eta} has disappeared. Similarly, when {\eta=1}, the parameter {\mu_1} disappears. This means that there are subspaces of the parameter space where the family is not identifiable. The result is that all the usual theory about the distribution of the MLE, the distribution of the likelihood ratio statistic, the properties of BIC, etc., becomes very, very complicated.
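The local nonidentifiability is visible numerically: on the subspace {\mu_1 = \mu_2}, changing {\eta} changes the parameter but not the density (a small numpy check with arbitrary values):

```python
import numpy as np

def phi(x, m):
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

def mixture(x, eta, mu1, mu2):
    return (1 - eta) * phi(x, mu1) + eta * phi(x, mu2)

grid = np.linspace(-5.0, 5.0, 1001)

# On the subspace mu1 == mu2, eta has no effect: distinct parameter values,
# identical densities -- local nonidentifiability.
p_a = mixture(grid, eta=0.2, mu1=1.0, mu2=1.0)
p_b = mixture(grid, eta=0.9, mu1=1.0, mu2=1.0)
print(np.max(np.abs(p_a - p_b)))        # ~ 0 (equal up to rounding)

# Away from that subspace, the parameters are distinguishable.
p_c = mixture(grid, eta=0.2, mu1=1.0, mu2=2.0)
print(np.max(np.abs(p_a - p_c)) > 0.0)  # True
```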

Irregularity. Mixture models do not satisfy the usual regularity conditions that make parametric models so easy to deal with. Consider the following example from Chen (1995). Let

\displaystyle  p(x;\theta) = \frac{2}{3} \phi(x;-\theta,1) + \frac{1}{3} \phi(x;2\theta,1).

Then {I(0) = 0}, where {I(\theta)} is the Fisher information. Moreover, no estimator of {\theta} can converge faster than {n^{-1/4}}. Compare this to a Normal family {\phi(x;\theta,1)}, where the Fisher information from {n} observations is {I(\theta)=n} and the maximum likelihood estimator converges at rate {n^{-1/2}}.
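Chen's example can be checked numerically: the score {\partial_\theta \log p(x;\theta)} vanishes identically at {\theta = 0}, so the Fisher information {I(0) = \mathbb{E}[{\rm score}^2]} is zero (a finite-difference sketch; the step size and grid are arbitrary choices):

```python
import numpy as np

def phi(x, m):
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

def p(x, theta):
    # Chen's (1995) mixture: the -theta and 2theta shifts are rigged so the
    # two components' score contributions cancel exactly at theta = 0.
    return (2.0 / 3.0) * phi(x, -theta) + (1.0 / 3.0) * phi(x, 2.0 * theta)

# Numerical score d/dtheta log p(x; theta) at theta = 0, by central differences.
x = np.linspace(-5.0, 5.0, 1001)
h = 1e-4
score = (np.log(p(x, h)) - np.log(p(x, -h))) / (2 * h)

print(np.max(np.abs(score)))   # ~ 0: the score vanishes for every x
# Hence I(0) = E[score^2] = 0, and no estimator can attain the usual
# n^{-1/2} rate near theta = 0.
```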

Nonintuitive Group Membership. Mixtures are often used for finding clusters. Suppose that

\displaystyle  p(x) = (1-\eta)\phi(x;\mu_1,\sigma^{2}_1) + \eta\phi(x;\mu_2,\sigma^{2}_2)

with {\mu_1 < \mu_2}. Let {Z\in\{1,2\}} denote the two components. We can compute {P(Z=1|X=x)} and {P(Z=2|X=x)} explicitly. We can then assign an {x} to the first component if {P(Z=1|X=x) > P(Z=2|X=x)}. It is easy to check that, for certain choices of {\sigma_1,\sigma_2}, all large values of {x} get assigned to component 1 (i.e., the leftmost component). Technically this is correct, yet it seems to be an unintended consequence of the model.
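A numerical instance of this effect (the parameter values are invented for illustration: the left component gets the much larger variance, so its tail dominates far to the right):

```python
import numpy as np

def phi(x, m, s):
    return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

# Component 1 is the LEFT component (mu1 < mu2) but has the larger variance.
eta, mu1, s1, mu2, s2 = 0.5, 0.0, 3.0, 2.0, 0.5

def post_z1(x):
    # P(Z = 1 | X = x) by Bayes' rule.
    a = (1 - eta) * phi(x, mu1, s1)
    b = eta * phi(x, mu2, s2)
    return a / (a + b)

print(post_z1(2.0))    # near the right component's mean: component 2 wins
print(post_z1(10.0))   # far to the right: the LEFT component wins
```

At {x=10}, the right component's light {\sigma_2=0.5} tail has essentially vanished, while the left component's heavy {\sigma_1=3} tail has not, so the "leftmost" component claims the largest observations.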

Improper Posteriors. Suppose we have a sample from the simple mixture

\displaystyle  p(x;\mu) = \frac{1}{2} \phi(x;0,1) + \frac{1}{2} \phi(x;\mu,1).

Then any improper prior on {\mu} yields an improper posterior for {\mu}, regardless of how large the sample size is. Also, in Wasserman (2000) I showed that the only priors that yield posteriors in close agreement with frequentist methods are data-dependent priors.
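The reason: as {|\mu|\rightarrow\infty} the likelihood tends to the positive constant {\prod_i \frac{1}{2}\phi(x_i;0,1)}, so under a flat prior the posterior has infinite total mass. A quick numpy check (sample size and values are arbitrary):

```python
import numpy as np

def phi(x, m=0.0):
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

rng = np.random.default_rng(0)
x = rng.normal(size=20)          # a sample; the true mu doesn't matter here

def loglik(mu):
    return np.log(0.5 * phi(x) + 0.5 * phi(x, mu)).sum()

# As |mu| -> infinity the likelihood tends to a POSITIVE constant c, so a
# flat (improper) prior gives a posterior with infinite total mass.
log_c = np.log(0.5 * phi(x)).sum()
for mu in [10.0, 100.0, 1000.0]:
    print(mu, loglik(mu) - log_c)   # -> 0: the likelihood never dies off
```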

3. So What?

So what should we make of all of this? I find it interesting that such a simple model can have such complicated behavior. I wonder if many people use mixture models without realizing all the potential complications.

I have decided that mixtures, like tequila, are inherently evil and should be avoided at all costs.

4. References

Carreira-Perpinan, M.A. and Williams, C.K.I. (2003). On the number of modes of a Gaussian mixture. Scale Space Methods in Computer Vision, 625–640.

Chen, J. (1995) Optimal rate of convergence for finite mixture models. The Annals of Statistics, 23, 221–233.

Edelsbrunner, H., Fasy, B.T. and Rote, G. (2012). Add Isotropic Gaussian Kernels at Own Risk: More and More Resilient Modes in Higher Dimensions. ACM Symposium on Computational Geometry (SoCG 2012).

Kalai, Adam, Moitra, Ankur and Valiant, Gregory. (2012). Disentangling Gaussians. Commun. ACM, 55, 113-120.

Wasserman, L. (2000). Asymptotic inference for mixture models by using data-dependent priors. Journal of the Royal Statistical Society: Series B, 62, 159–180.