STEIN’S PARADOX

Something that is well known in the statistics world but perhaps less well known in the machine learning world is Stein’s paradox.

When I was growing up, people used to say: do you remember where you were when you heard that JFK died? (I was three, so I don’t remember. My first memory is watching the Beatles on Ed Sullivan.)

Similarly, statisticians used to say: do you remember where you were when you heard about Stein’s paradox? That’s how surprising it was. (I don’t remember since I wasn’t born yet.)

Here is the paradox. Let $X \sim N(\theta, 1)$. Define the risk of an estimator $\hat{\theta} = \hat{\theta}(X)$ to be

$R(\hat{\theta}, \theta) = E_\theta (\hat{\theta} - \theta)^2.$

An estimator $\hat{\theta}$ is *inadmissible* if there is another estimator $\theta^*$ with smaller risk. In other words, if

$R(\theta^*, \theta) \leq R(\hat{\theta}, \theta) \ \text{ for all } \theta,$

with strict inequality at at least one $\theta$.

Question: Is $\hat{\theta} = X$ admissible?

Answer: Yes.

Now suppose that $X \sim N(\theta, I)$ where now $X = (X_1, X_2)$, $\theta = (\theta_1, \theta_2)$, and

$R(\hat{\theta}, \theta) = E_\theta \| \hat{\theta} - \theta \|^2.$

Question: Is $\hat{\theta} = X$ admissible?

Answer: Yes.

Now suppose that $X \sim N(\theta, I)$ where now $X = (X_1, X_2, X_3)$, $\theta = (\theta_1, \theta_2, \theta_3)$, and

$R(\hat{\theta}, \theta) = E_\theta \| \hat{\theta} - \theta \|^2.$

Question: Is $\hat{\theta} = X$ admissible?

Answer: No!

If you don’t find this surprising then either you’ve heard this before or you’re not thinking hard enough. Keep in mind that the coordinates of the vector $X$ are independent. And the $\theta_j$’s could have nothing to do with each other. For example: the mass of the moon, the price of coffee and the temperature in Rome.

In general, $\hat{\theta} = X$ is inadmissible if the dimension $k$ of $\theta$ satisfies $k \geq 3$.

The proof that $X$ is inadmissible is based on defining an explicit estimator that has smaller risk than $X$. For example, the James-Stein estimator is

$\hat{\theta}_{JS} = \left(1 - \frac{k-2}{\|X\|^2}\right) X.$

It can be shown that the risk of this estimator is strictly smaller than the risk of $X$, for all $\theta$. This implies that $X$ is inadmissible. If you want to see the detailed calculations, have a look at Iain Johnstone’s book, which he makes freely available on his website.
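To see the paradox numerically, here is a small simulation sketch (my addition; the dimension $k = 10$, the particular $\theta$, and the replication count are arbitrary choices) comparing the Monte Carlo risk of $X$ with that of the James-Stein estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
k = 10           # dimension; the paradox needs k >= 3
n_sims = 20000   # Monte Carlo replications

# Totally unrelated "true" means -- they need not have anything in common.
theta = np.array([1.0, -2.0, 0.5, 3.0, 0.0, -1.5, 2.2, 0.7, -0.3, 1.1])

X = rng.normal(loc=theta, scale=1.0, size=(n_sims, k))   # X ~ N(theta, I)

# Risk of the MLE (just X itself): should be close to k.
mle_risk = np.mean(np.sum((X - theta) ** 2, axis=1))

# James-Stein: shrink X towards the origin.
shrink = 1.0 - (k - 2) / np.sum(X ** 2, axis=1)
js = shrink[:, None] * X
js_risk = np.mean(np.sum((js - theta) ** 2, axis=1))

print("MLE risk:", round(mle_risk, 3))   # close to k = 10
print("JS  risk:", round(js_risk, 3))    # strictly smaller
```

Whatever $\theta$ you plug in, the James-Stein risk comes out below the MLE’s constant risk of $k$.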

Note that the James-Stein estimator shrinks $X$ towards the origin. (In fact, you can shrink towards any point; there is nothing special about the origin.) This can be viewed as an empirical Bayes estimator where $\theta$ has a prior of the form $N(0, \tau^2 I)$ and $\tau$ is estimated from the data. The Bayes explanation gives some nice intuition. But it’s also a bit misleading. The Bayes explanation suggests we are shrinking the means together because we expect them *a priori* to be similar. But the paradox holds even when the means are not related in any way.

Some intuition can be gained by thinking about function estimation. Consider a smooth function $f$. Suppose we have data

$Y_i = f(x_i) + \epsilon_i, \qquad i = 1, \ldots, n,$

where $x_i = i/n$ and $\epsilon_i \sim N(0,1)$. Let us expand $f$ in an orthonormal basis: $f(x) = \sum_j \theta_j \psi_j(x)$. To estimate $f$ we need only estimate the coefficients $\theta_j = \int f(x) \psi_j(x)\, dx$. Note that $Z_j \equiv \frac{1}{n} \sum_i Y_i \psi_j(x_i) \approx N(\theta_j, 1/n)$. This suggests the estimator

$\hat{f}(x) = \sum_j Z_j \psi_j(x).$

But the resulting function estimator is useless because it is too wiggly. The solution is to smooth the estimator; this corresponds to shrinking the raw estimates towards 0. This adds bias but reduces variance. In other words, the familiar process of smoothing, which we use all the time for function estimation, can be seen as “shrinking estimates towards 0” as with the James-Stein estimator.
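To make this concrete, here is a rough sketch (my own illustration; the true function, the cosine basis, and the cutoff of ten coefficients are all arbitrary choices, and the basis is only approximately orthonormal on the grid) showing that shrinking most of the raw coefficient estimates to zero beats keeping them all:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x = np.arange(1, n + 1) / n
f_true = np.sin(2 * np.pi * x)          # a smooth "true" f (my arbitrary choice)
Y = f_true + rng.normal(size=n)         # Y_i = f(x_i) + eps_i, eps_i ~ N(0, 1)

# Cosine basis on [0, 1]: psi_0(x) = 1, psi_j(x) = sqrt(2) cos(pi j x).
# (Only approximately orthonormal on this grid, which is fine for a sketch.)
psi = np.ones((n, n))
for j in range(1, n):
    psi[:, j] = np.sqrt(2) * np.cos(np.pi * j * x)

Z = psi.T @ Y / n                        # Z_j ~ N(theta_j, 1/n), roughly

f_raw = psi @ Z                          # keep all coefficients: too wiggly
f_smooth = psi[:, :10] @ Z[:10]          # shrink all but the first 10 to zero

mse_raw = np.mean((f_raw - f_true) ** 2)
mse_smooth = np.mean((f_smooth - f_true) ** 2)
print(mse_raw, mse_smooth)               # smoothing wins by a wide margin
```

Killing the high-frequency coefficients adds a little bias but removes most of the variance, which is exactly the trade described above.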

If you are familiar with minimax theory, you might find the Stein paradox a bit confusing. The estimator $\hat{\theta} = X$ is minimax; that is, its risk achieves the minimax bound

$\sup_\theta R(\hat{\theta}, \theta) = \inf_{\tilde{\theta}} \sup_\theta R(\tilde{\theta}, \theta).$

This suggests that $X$ is a good estimator. But Stein’s paradox tells us that $X$ is inadmissible, which suggests that it is a bad estimator.

Is there a contradiction here?

No. The risk of $\hat{\theta} = X$ is a constant: $R(X, \theta) = k$ for all $\theta$, where $k$ is the dimension of $\theta$. The risk of the James-Stein estimator is less than the risk of $X$, but $R(\hat{\theta}_{JS}, \theta) \to k$ as $\|\theta\| \to \infty$. So they have the same *maximum risk*.
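A quick simulation sketch (again with arbitrary choices of dimension and replication count on my part) shows both facts at once: the James-Stein risk sits well below $k$ near the origin but climbs back towards $k$ as $\|\theta\|$ grows:

```python
import numpy as np

rng = np.random.default_rng(2)
k, n_sims = 10, 20000

def js_risk(norm_theta):
    """Monte Carlo risk of the James-Stein estimator at a theta with the
    given norm (by spherical symmetry, only ||theta|| matters)."""
    theta = np.zeros(k)
    theta[0] = norm_theta
    X = rng.normal(theta, 1.0, size=(n_sims, k))
    shrink = 1.0 - (k - 2) / np.sum(X ** 2, axis=1)
    return np.mean(np.sum((shrink[:, None] * X - theta) ** 2, axis=1))

for r in [0.0, 2.0, 5.0, 20.0]:
    print(r, js_risk(r))   # risk climbs towards (but stays below) k = 10
```

So the supremum over $\theta$ is the same for both estimators, even though James-Stein is far better near the shrinkage point.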

On the one hand, this tells us that a minimax estimator can be inadmissible. On the other hand, in some sense it can’t be “too far” from admissible since they have the same maximum risk.

Stein first reported the paradox in 1956. I suspect that fewer and fewer people include the Stein paradox in their teaching. (I’m guilty.) This is a shame. Paradoxes really grab students’ attention. And, in this case, the paradox is really fundamental to many things including shrinkage estimators, hierarchical Bayes, and function estimation.

## 31 Comments

Thanks Larry for bringing more attention to this. If your hunch that fewer people are teaching Stein’s paradox is correct, I think that’s awful! I’m lucky to have learned a lot about Stein’s paradox from my colleague Carl Morris, who was a student of Stein and pioneered the empirical Bayes approach/interpretation with Brad Efron. We teach it every year in our first year graduate inference course, relating it to shrinkage estimation, hierarchical models, regression toward the mean, and Stein’s Unbiased Risk Estimate (SURE).

The empirical Bayes explanation gives good intuition, as you mentioned, but I think it’s more than that. The Efron-Morris paper “Stein’s Estimation Rule and Its Competitors — An Empirical Bayes Approach” (JASA 1973, http://faculty.chicagobooth.edu/nicholas.polson/teaching/41900/efron-morris2.pdf ), has a proof that I consider stunningly beautiful. They give a rigorous proof of Stein’s result for the one-level model (no prior imposed on the theta_j), by first assuming the two-level model where the theta_j are themselves Normal with a common mean and variance. That sounds like a paradox in its own right: how can one assume such nice additional structure and then have any hope of obtaining the fully general result that assumes nothing about the theta_j? But they do exactly that, by using the notion of completeness of a statistic to reduce from the Bayes risk to the frequentist risk.

Then their classic “baseball paper” (JASA 1975, http://faculty.chicagobooth.edu/nicholas.polson/teaching/41900/efron-morris1.pdf ) showed that the gains from shrinkage estimation can be very substantial. The hierarchical model perspective, together with thinking carefully about the loss function, help clarify when it would make sense in practice to combine very different problems into a shrinkage estimator.

Also, I think it’s a bit misleading to suggest that the minimax estimator (which is also the MLE and has various other nice properties) is not “too far” from admissible just based on the supremum of the risk. The risk function increases in the squared norm of theta, starting at 2 and asymptotically approaching (but never reaching) k. If k is even moderately large, there will be a wide range of parameter values where the improvement in risk is dramatic.

This comment is getting long, but I also wanted to mention that Stigler’s paper (Stat Sci 1990, http://projecteuclid.org/DPubS?service=UI&version=1.0&verb=Display&handle=euclid.ss/1177012274 ) gives a neat connection between Stein’s paradox and regression.

Hopefully my hunch is wrong!

Thanks for the references

Larry

Brad Efron and Carl Morris’s 1977 Scientific American paper is an awesome intro on Stein Paradox for anyone who is uninitiated in statistics like me. http://www-stat.stanford.edu/~ckirby/brad/other/Article1977.pdf

Indeed, this is where I first learned about Stein’s paradox. To this day I can recall being outraged by the non-intuitive fact that, by doing this sort of thing, you can decrease the risk of estimating several things even if they have nothing to do with each other. (I should say that at the time I was mostly doing astronomy…I knew only the statistics that astronomers learned in graduate school, and it was a decade before I became interested in Bayesian ideas and in fact it was before I knew that they existed! It was later that Carl Morris came to Texas and I had the opportunity to learn about it from him, and even later that I learned about shrinkage estimators, hierarchical Bayes models and so on).

I do introduce Stein’s paradox to my grad students, and I have a friend over in the medical school who has been using shrinkage estimators in his work on hospital outcomes and who has given guest lectures using the Efron-Morris article in Scientific American to my sophomore honors students in my course on Bayesian inference and decision theory (I talked about this course at Jim Berger’s 60th birthday party in San Antonio). He is particularly enamored of Brad and Carl’s toxoplasmosis example, also in the SA article.

I didn’t learn about it in school, but we learned about something that is reminiscent of this paradox: the fact that the MLE for the variance of a normal distribution is not the one with least squared error (the minimum-MSE estimator having denominator N+1 instead of N). When I asked why people would not use that one instead of the unbiased estimator (denominator N−1), my professor kind of waved the question away with “Not everything that shines is gold.”

So I guess the reason people still use the MLE instead of Stein’s or other lower-MSE estimators is that the latter might produce big errors occasionally despite having less error on average; in other words, people prefer to be safe rather than sorry, and small advantages on average might not be worth being horribly wrong sometimes… Nobody wants to be a victim of Murphy’s law.

Fran: I think that the squared loss (and in fact any symmetric loss) is inappropriate for variance estimation. If you use 1/(N+1), you favour small negative errors over slightly larger positive errors, but the former are more problematic in most situations because the variance is bounded by zero from below.

I didn’t come from a traditional statistics training, so I personally stumbled across it on my own through Efron’s popular science article in scientific american. It certainly blew my mind when I first tried to wrap my head around it.

If we’re going to start teaching data analysts (whether they’re machine learners, computer scientists, or statisticians) how to work with high dimensional data (which seems to be paying the bills these days), Stein’s paradox should really be foundational and not obscure. Perhaps then we’d stop seeing entire fields doing analyses with 100,000 independent MLE estimates / hypothesis tests.

I’d be curious to hear your views on the relevance (or lack thereof) of Wald’s complete class theorem to statistical inference.

I don’t think about complete class theorems at all

Huh. Stein was working on necessary and sufficient conditions for admissibility (i.e., generic conditions for complete classes) around the same time that he found his inadmissibility result…

Larry, you wrote: “The proof that $X$ is inadmissible is based on defining an explicit estimator that has smaller risk than $X$.”

I had thought that Stein’s original proof was nonconstructive, and only later did he and James come up with the James-Stein estimator that you discuss here. (I admit that I haven’t read Stein’s original paper so my impression may be incorrect).

I should have said “a proof” rather than “the proof”.

Another interesting thing to mention about the paradox is that the James-Stein estimator is also inadmissible, since it’s dominated by the positive-part James-Stein estimator, which is also inadmissible — as far as I know, no-one has come up with an admissible estimator that dominates the sample average.

Have you considered viewing the Stein result as a criticism of the quadratic loss, or of admissibility?

In your example with the mass of the moon etc, is it obvious we should be using the specified loss, if it rewards shrinking together totally unrelated quantities? Similarly, does insisting on admissibility, instead of not “too far” from admissible, really reflect the class of estimators we’re prepared to use in practice?

Of course, these concerns don’t rule out using shrinkage estimators sometimes.

True. That loss assumes you are interested in the overall error. You might end up estimating some components poorly.

It is, nonetheless, still a rather surprising phenomenon.

True, but somehow averaging over the errors, though not assuming similarity, does assume they are not terribly different (or exchangeable?).

“Similarly, statisticians used to say: do you remember where you were when you heard about Stein’s paradox? That’s how surprising it was. (I don’t remember since I wasn’t born yet.)”

Um, you weren’t born yet when you first heard about Stein’s Paradox?

Are there alternative loss functions that take care of Stein’s paradox and don’t introduce new paradoxes of their own?

Lamentably in (statistical) signal processing applications, we do not teach this at all. This is all the more surprising given that shrinkage estimators are used routinely.

I would add that if $||X||^2 < k - 2$, then it “shrinks” past the origin.

True. In practice, one uses the “positive part” shrinkage estimator, which avoids this problem.

Just one more quick comment: if we use a prior $N_n(0,\tau^2 I)$ for $\theta$, a simple computation shows that the Bayes estimator with quadratic loss is $X \frac{\tau^2}{\tau^2 + 1}$. The complete class theorem of Wald tells us that this Bayes estimator is admissible. Now, since we can take $\tau$ to be a huge number (say a googol), and that makes this admissible estimator almost surely as close to $X$ as we may want, should we consider the James-Stein estimator as a real improvement over just $X$?

I’m not sure what you mean by saying they are “almost surely close.” I guess you mean that $X$ is not Bayes but is the limit of Bayes rules (which is how one proves it is minimax). Nonetheless, the risk functions of $X$ and the James-Stein estimator are very different near the origin. But, with a very flat prior, we would put low prior probability near the origin, so we would not be impressed by improvement in that region. In other words, `shrinking towards the origin’ and `using a very flat prior’ are at odds with each other. If we’re interested in shrinkage estimators, one might say that we are implicitly interested in a prior with mass near the origin.

“one might say that we are implicitly interested in a prior with mass near the origin”

Not really. Such a prior is one way, but not the only way, to justify shrinkage estimators. We may use e.g. lasso estimators, or their close relatives, not because we have prior belief that several coefficients are truly zero or close to zero, but because we have a loss that rewards estimates that contain several zero or near-zero terms.

Thank you for your reply, Larry. What I meant by “almost surely close” is that we would be certain that for any given realization of the experiment, the numbers $X$ and $X \frac{\tau^2}{\tau^2 + 1}$ are not much different. I mean, in practice, both the admissible $X \frac{\tau^2}{\tau^2 + 1}$ and the inadmissible $X$ give us essentially the same estimate. I’m thinking about a huge $\tau$, and I won’t consider the limit, because that would take us “outside of the complete class”, in a sense. Just to be completely clear, the estimator $ 0.9999999999999999999999 X$ is admissible. Your contrast between “shrinking towards the origin” and “using a very flat prior” is interesting.

Sorry to bother you again. You know that the James-Stein estimator can be constructed shrinking towards an arbitrary point, and not just the origin. If you wear an applied statistician’s hat, how do you interpret the particular chosen point? How should we choose it? Thanks.

By empirical Bayes usually

I’m not so sure about shrinking together estimates of the moon’s mass, Rome’s temperature, etc. For Stein’s result the means need not be related in any way, but the centered distributions must be equal (and hence the variances homogeneous). Personally, besides needing to be interested in the overall error, I would not shrink estimates if there is no clue that they have a similar distribution (up to location).

Anyway, as you say, regularized regression like the lasso and every smoothing technique can be thought of as shrinking. So the Stein paradox remains useful, surprising and even slightly unbelievable. (Sometimes I still battle with it before being convinced again of its truth; for that, the Stigler link in the first comment is really very clear.)

Thanks for the post

My intuition is that the first thing to think very carefully about here has to be the squared loss function. Obviously robustniks don’t like it because it is too much dominated by large deviations, or in other words by “how bad an estimator exactly is given that it’s useless anyway” instead of focussing on where the estimator can be of some use. Don’t know how strongly the robustness (or overweighting large deviations) issue is related to the issue here.

I mean, this works for shrinking toward *any* point, so the X is basically biased at random to get its variance down (which has nothing to do with the true value we want to estimate). So the variance seems to be overrated by this loss function.

Does anything like this happen for L1-loss?

Not that I think that shrinking never helps, but…

What’s special about 1- and 2-dimensional space that prevents this technique from working there? I know Brown has a giant paper about it which I never read…

How true! I was in college, sitting in my room on 11th St in Boulder, CO and read a Sci Am piece on Stein’s Paradox!

(I am also old enough to remember where I was when JFK was shot.) 😦

## 2 Trackbacks

[…] On Stein’s paradox https://normaldeviate.wordpress.com/… […]

[…] total sense. And once you get it, you’ll have a much deeper understanding of everything from nonparametric smoothing to empirical Bayes methods. Check out this wonderful, totally non-technical paper on Stein’s […]