**TO CONDITION, OR NOT TO CONDITION, THAT IS THE QUESTION**

Between the completely conditional world of Bayesian inference and the completely unconditional world of frequentist inference lies the hazy world of conditional inference.

The extremes are easy. In Bayesian-land you condition on all of the data. In Frequentist-land, you condition on nothing. If your feet are firmly planted in either of these idyllic places, read no further! Because conditional inference is:

*The undiscovered Country, from whose bourn
No Traveller returns, Puzzles the will,
And makes us rather bear those ills we have,
Than fly to others that we know not of.*

**1. The Extremes**

As I said above, the extremes are easy. Let’s start with a concrete example. Let $Y_1, \dots, Y_n$ be a sample from $P \in \mathcal{P}$. Suppose we want to estimate $\theta = T(P)$; for example, $T(P)$ could be the mean of $P$.

**Bayesian Approach:** Put a prior $\pi$ on $P$. After observing the data $Y_1, \dots, Y_n$, compute the posterior for $P$. This induces a posterior for $\theta$ given $Y_1, \dots, Y_n$. We can then make statements like

$$\pi(\theta \in A \mid Y_1, \dots, Y_n) = 0.95.$$

The statements are conditional on $Y_1, \dots, Y_n$. There is no question about what to condition on; we condition on all the data.
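For concreteness, here is a minimal conjugate sketch of the Bayesian computation. The Normal sampling model, the prior, and all the numbers are my own choices for illustration, not from the post:

```python
import random

# Assumed toy setup: Y_i ~ N(theta, 1) with prior theta ~ N(0, tau^2).
# The posterior for theta is then Normal in closed form.
rng = random.Random(4)
theta_true, tau2, n = 1.5, 10.0, 40
data = [rng.gauss(theta_true, 1) for _ in range(n)]

post_var = 1 / (n + 1 / tau2)        # posterior variance: 1/(n + 1/tau^2)
post_mean = post_var * sum(data)     # shrinks the sample mean slightly toward 0
lo = post_mean - 1.96 * post_var ** 0.5
hi = post_mean + 1.96 * post_var ** 0.5
print(lo, hi)  # a 95% credible interval for theta, conditional on the data
```

The interval is a probability statement about $\theta$ given the observed data, which is exactly the kind of conditional statement described above.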

**Frequentist Approach:** Construct a set $C = C(Y_1, \dots, Y_n)$. We require that

$$\inf_{P \in \mathcal{P}} P^n(T(P) \in C) \ge 1 - \alpha,$$

where $P^n$ is the distribution corresponding to taking $n$ samples from $P$. We then call $C$ a $1-\alpha$ confidence set. No conditioning takes place. (Of course, we might want more than just the guarantee in the above equation, like some sort of optimality; but let’s not worry about that here.)
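The guarantee is easy to check by simulation in a simple case. This is my own sketch, not from the post: a Normal sampling model with the standard 95 percent interval, evaluated at a few fixed values of the parameter.

```python
import random

# Assumed toy setup: Y_1,...,Y_n ~ N(theta, 1), interval ybar +/- 1.96/sqrt(n).
# Coverage is a property of the procedure: the same for every fixed theta.
rng = random.Random(5)
n, reps = 25, 20_000

def covers(theta):
    """One replication: does the interval trap this fixed theta?"""
    ybar = sum(rng.gauss(theta, 1) for _ in range(n)) / n
    return abs(ybar - theta) <= 1.96 / n ** 0.5

rates = {t: sum(covers(t) for _ in range(reps)) / reps for t in (-3.0, 0.0, 7.0)}
print(rates)  # each rate ≈ 0.95
```

No conditioning on the realized data happens anywhere: the 0.95 refers to the long run of replications, not to any particular data set.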

(I notice that Andrew often says that frequentists “condition on $\theta$”. I think he means they do calculations for each fixed $\theta$. At the risk of being pedantic, this is not conditioning. To condition on $\theta$ requires that $\theta$ be a random variable, which it is in the Bayesian framework, but it is not a random variable in the frequentist framework. But I am probably just nitpicking here.)

**2. So Why Condition?**

Suppose you are taking the frequentist route. Why would you be enticed to condition? Consider the following example from Berger and Wolpert (1988).

I write down a real number $\theta$. I then generate two random variables $Y_1, Y_2$ as follows:

$$Y_i = \theta + \epsilon_i, \quad i = 1, 2,$$

where $\epsilon_1$ and $\epsilon_2$ are iid and

$$P(\epsilon_i = 1) = P(\epsilon_i = -1) = \frac{1}{2}.$$

Let $P_\theta$ denote the distribution of $(Y_1, Y_2)$. The set of distributions is $\mathcal{P} = \{P_\theta : \theta \in \mathbb{R}\}$.

I show Fred the frequentist $Y_1$ and $Y_2$ and he has to infer $\theta$. Fred comes up with the following confidence set:

$$C = \begin{cases} \{(Y_1 + Y_2)/2\} & \text{if } Y_1 \neq Y_2\\ \{Y_1 - 1\} & \text{if } Y_1 = Y_2.\end{cases}$$

Now it is easy to check that, no matter what value $\theta$ takes, we have that

$$P_\theta(\theta \in C) = \frac{3}{4}.$$

Fred is happy: $C$ is a 75 percent confidence interval.

To be clear: if I play this game with Fred every day, and I use a different value of $\theta$ every day, we will find that Fred traps the true value 75 percent of the time.
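This long-run claim is easy to verify by simulation. Here is a short Monte Carlo sketch of the game (my own illustration; the daily $\theta$ values are arbitrary integers so the arithmetic stays exact):

```python
import random

def play_round(theta, rng):
    """One day of the game: draw Y1, Y2 = theta +/- 1 and apply Fred's rule."""
    y1 = theta + rng.choice([-1, 1])
    y2 = theta + rng.choice([-1, 1])
    if y1 != y2:
        guess = (y1 + y2) / 2   # the noises cancel: this is exactly theta
    else:
        guess = y1 - 1          # correct only when the shared noise was +1
    return guess == theta

rng = random.Random(0)
# A different theta every day, as in the game described above.
hits = sum(play_round(rng.randrange(-100, 100), rng) for _ in range(100_000))
print(hits / 100_000)  # ≈ 0.75
```

Half the time the two noise terms differ and the midpoint nails $\theta$; the other half they agree and Fred is right with probability one half, giving $1/2 + 1/4 = 3/4$.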

Now suppose the data are $Y_1 = 17$ and $Y_2 = 19$. Fred reports that his 75 percent confidence interval is $C = \{18\}$. Fred is correct that his procedure has 75 percent coverage. But in this case, many people are troubled by reporting that $\{18\}$ is a 75 percent confidence interval, because with these data we know that $\theta$ must be 18. Indeed, if we did a Bayesian analysis with a prior that puts positive density on each $\theta$, we would find that $P(\theta = 18 \mid Y_1, Y_2) = 1$.

So we are 100 percent certain that $\theta = 18$, and yet we are reporting $\{18\}$ as a 75 percent confidence interval.

There is nothing wrong with the confidence interval. It is a procedure, and the procedure comes with a frequency guarantee: it will trap the truth 75 percent of the time. It does not agree with our degrees of belief but no one said it should.

And yet Fred thinks he can retain his frequentist credentials and still do something which intuitively feels better. This is where conditioning comes in.

Let

$$A = \begin{cases} 0 & \text{if } Y_1 \neq Y_2\\ 1 & \text{if } Y_1 = Y_2.\end{cases}$$

The statistic $A$ is an ancillary: it has a distribution that does not depend on $\theta$. In particular, $P_\theta(A = 0) = P_\theta(A = 1) = 1/2$ for every $\theta$. The idea now is to report confidence, conditional on $A$. Our new procedure is:

If $A = 0$, report $C$ with confidence level 1.

If $A = 1$, report $C$ with confidence level 1/2.

This is indeed a valid conditional confidence interval. Again, imagine we play the game over a long sequence of trials. On the subsequence for which $A = 0$, our interval contains the true value 100 percent of the time. On the subsequence for which $A = 1$, our interval contains the true value 50 percent of the time.
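The subsequence claim can also be checked by simulation. A sketch of my own, splitting the rounds by the realized value of the ancillary:

```python
import random

# Split the rounds by the ancillary A = 1{Y1 == Y2} and compute coverage
# within each subsequence.  Theta is fixed; A's distribution is free of theta.
rng = random.Random(1)
theta = 18
hits = {0: [], 1: []}
for _ in range(100_000):
    y1 = theta + rng.choice([-1, 1])
    y2 = theta + rng.choice([-1, 1])
    a = int(y1 == y2)
    guess = (y1 + y2) / 2 if a == 0 else y1 - 1   # Fred's rule
    hits[a].append(guess == theta)

print(sum(hits[0]) / len(hits[0]))  # 1.0 on the Y1 != Y2 subsequence
print(sum(hits[1]) / len(hits[1]))  # ≈ 0.5 on the Y1 == Y2 subsequence
```

The unconditional 75 percent is just the average of the two conditional coverages, each weighted by 1/2.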

We still have valid coverage and a more intuitive confidence interval. Our result is identical to the Bayesian answer if the Bayesian uses a flat prior. It is nearly equal to the Bayesian answer if the Bayesian uses a proper but very flat prior.

(This is an example where the Bayesian has the upper hand. I’ve had other examples on this blog where the frequentist does better than the Bayesian. To readers who attach themselves to either camp: remember, there is plenty of ammunition in terms of counterexamples on BOTH sides.)

Another famous example is from Cox (1958). Here is a modified version of that example. I flip a coin. If the coin is HEADS I give Fred $Y \sim N(\theta, 1)$. If the coin is TAILS I give Fred $Y \sim N(\theta, \sigma^2)$ where $\sigma \gg 1$. What should Fred’s confidence interval for $\theta$ be?

We can condition on the coin and report the usual confidence interval corresponding to the appropriate Normal distribution. But if we look unconditionally, over replications of the whole experiment, and minimize the expected length of the interval, we get an interval that has coverage less than $1 - \alpha$ for HEADS and greater than $1 - \alpha$ for TAILS. So optimizing unconditionally pulls us away from what seems to be the correct conditional answer.
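A sketch of the conditional answer in this setup (the values of $\theta$, $\sigma$, and the 95 percent level are my own choices): conditioning on the coin just means using the standard deviation of whichever arm actually generated $Y$, which gives the nominal coverage within each arm.

```python
import random

# Toy version of the Cox example: flip a coin, draw Y from N(theta, 1)
# on HEADS or N(theta, sigma_tails^2) on TAILS, and build the interval
# conditionally, i.e. using the sd of the arm we are actually in.
rng = random.Random(2)
theta, sigma_tails, z = 0.0, 10.0, 1.96
cover = {"H": [], "T": []}
for _ in range(100_000):
    coin = rng.choice("HT")
    sd = 1.0 if coin == "H" else sigma_tails
    y = rng.gauss(theta, sd)
    cover[coin].append(abs(y - theta) <= z * sd)  # conditional interval

print(sum(cover["H"]) / len(cover["H"]))  # ≈ 0.95 given HEADS
print(sum(cover["T"]) / len(cover["T"]))  # ≈ 0.95 given TAILS
```

The unconditional length-optimized interval described above would instead trade coverage between the two arms while keeping the overall average at the nominal level.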

**3. The Problem With Conditioning**

There are lots of simple examples like the ones above where, psychologically, it just feels right to condition on something. But simple intuition is misleading. We would still be using Newtonian physics if we went by our gut feelings.

In complex situations, it is far from obvious if we should condition or what we should condition on. Let me review a simplified version of Larry Brown’s (1990) example that I discussed here. You observe $(X_1, Y_1), \dots, (X_n, Y_n)$ where

$$Y_i = \beta^T X_i + \epsilon_i,$$

$\epsilon_i \sim N(0, 1)$, and each $X_i$ is a vector of length $d$, with $d > n$. Suppose further that the covariates are independent. We want to estimate $\beta_1$.

The “best” estimator (the maximum likelihood estimator) is obtained by conditioning on all the data. This means we should estimate the vector $\beta$ by least squares. But the least squares estimator is useless when $d > n$.

From the Bayesian point of view we compute the posterior $\pi(\beta \mid (X_1, Y_1), \dots, (X_n, Y_n))$ which, for such a large $d$, will be useless (completely dominated by the prior).

These estimators have terrible behavior compared to the following “anti-conditioning” estimator. Throw away all the covariates except the first one. Now do linear regression using only $Y$ and the first covariate. The resulting estimator is then tightly concentrated around $\beta_1$ with high probability. In this example, throwing away data is much better than conditioning on the data. There are some papers on “forgetful Bayesian inference” where one conditions on only part of the data. This is fine, but then we are back to the original question: what do we condition on?
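A sketch of the anti-conditioning idea, with all dimensions and coefficient values chosen by me for illustration (in particular, I make the remaining coefficients small so the signal sits mostly in the first covariate):

```python
import random

# Assumed toy setup: d > n, independent N(0,1) covariates, target beta1,
# the other coefficients scaled down to have bounded total size.
rng = random.Random(3)
n, d = 200, 500
beta1 = 2.0
other = [rng.gauss(0, 1) / d ** 0.5 for _ in range(d - 1)]

xs, ys = [], []
for _ in range(n):
    x = [rng.gauss(0, 1) for _ in range(d)]
    y = beta1 * x[0] + sum(b * xj for b, xj in zip(other, x[1:])) + rng.gauss(0, 1)
    xs.append(x[0])                 # throw away all covariates but the first
    ys.append(y)

# Simple regression of Y on X_1 alone: the discarded terms act as extra noise.
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys))
         / sum((xi - xbar) ** 2 for xi in xs))
print(slope)  # close to beta1, even though full least squares is hopeless here
```

Because the covariates are independent, omitting them does not bias the slope; it only inflates the noise, and that cost is small next to the impossibility of fitting $d > n$ coefficients.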

There are many other examples such as this one.

**4. The Answer**

It would be nice if there were a clear answer such as “you should always condition” or “you should never condition.” But there isn’t. Do a Google Scholar search on conditional inference and you will find an enormous literature. What started as a simple, compelling idea has evolved into a complex research area. Many of these conditional methods are very sophisticated and rely on second order asymptotics. But it is rare to see anyone use conditional inference in complex problems, with the exception of Bayesian inference, which some will argue goes for a definite, psychologically satisfying answer at the expense of thinking hard about the properties of the resulting procedures.

Unconditional inference is simple and avoids disasters. The cost is that we can sometimes get psychologically unsatisfying answers. Conditional inference yields more psychologically satisfying answers but can lead to procedures with disastrous behavior.

There is no substitute for thinking. Be skeptical of easy answers.

*Thus Conscience does make Cowards of us all,
And thus the Native hue of Resolution
Is sicklied o’er, with the pale cast of Thought,*

**References**

Berger, J.O. and Wolpert, R.L. (1988). *The Likelihood Principle*, IMS.

Brown, L.D. (1990). An ancillarity paradox which appears in multiple linear regression. *The Annals of Statistics*, 18, 471-493.

Cox, D.R. (1958). Some problems connected with statistical inference. *The Annals of Mathematical Statistics*, 29, 357-372.

## 7 Comments

Thanx for the nice post. Two comments:

1. The Cox example seems different from the first example by Berger and Wolpert.

In the first example, you see A, so you can condition on it. In the second example, you don’t observe the event you need to condition on. Or did you mean that Fred tells you the result of the coin flip? (in that case it seems obvious, at least to me, that you should condition on the result).

2. In the second case, the ML estimator is known to be ‘only’ asymptotically optimal, but since the d > n setting is so far from the asymptotic regime, no wonder that another estimator would perform better.

I’m a bit more confused about the Bayesian estimator. It seems to me that the issue is not Bayesian vs. not Bayesian, but whether or not to look at one variable or all. You could put a prior only on beta_1 and get a Bayesian estimator which will converge rapidly to the true value.

Also, suppose that the set of beta is indeed generated from the prior which you assume. In this case, will the simple estimator still beat the Bayesian estimator (the latter should be optimal in this case, no?)

In the Cox example you do see the coin flip.

In the regression case, you refer to drawing beta from a prior. The beta is not drawn from any prior. Note that if you do a Bayes analysis with a flat prior you get the least squares estimator.

That’s why some kind of long run coverage alone may not suffice to answer questions of relevance in particular cases. In the kind of example of your #2, the statistic is incomplete. In Cox and Mayo (2010), we try to identify a rationale: http://www.phil.vt.edu/dmayo/personal_website/ch%207%20cox%20&%20mayo.pdf

Aris Spanos has a different treatment. Dashing, so I may be missing something.

Another example would be the case of CIs after selection (say, using a testing approach).

This is more of a question for which I hope you could write a post. If you have a population prior, with an infinite number of individuals in the population, how can a few new measurements via Bayes’ rule change such a reliable prior? Or are frequentists wrong in saying there is such population information? I studied frequentist (test) statistics in psychology for 4 years and Bayesian statistics in AI for 5, but this is a practicality I can’t really get my head around. I know that in the end Bayesian and frequentist statistics reconcile about this but it would be nice to get some insights from an expert, if you know what I mean and if you’re interested of course, thanks.

I am not sure I understand your question.

Can you give a bit more detail?

The confidence set should be C(Y_1,Y_2), na?