Peter Grunwald gave a talk in the statistics department on Monday. Peter does very interesting work and the material he spoke about is no exception. Here are my recollections from the talk.

The summary is this: Peter and John Langford have a very cool example of Bayesian inconsistency, quite different from the usual examples. In the talk, Peter explained the inconsistency and then described a way to fix it.

All previous examples of inconsistency in Bayesian inference that I know of have two things in common: the parameter space is complicated and the prior does not put enough mass around the true distribution. The Grunwald-Langford example is quite different.

Let Θ be a countable parameter space. We start with the very realistic assumption that the model is wrong. That is, the true distribution P is not in Θ. It is generally believed in this case that the posterior concentrates near θ*, the distribution in Θ closest (in Kullback-Leibler distance) to P. In Peter and John's example, the posterior does not concentrate around θ*. What is surprising is that this inconsistency holds even though the space Θ is countable and even though the prior puts positive mass on θ*. If this doesn't surprise you, it should.

On the other hand, there are papers like Kleijn and van der Vaart (The Annals of Statistics, 2006, pages 837–877) that show that the posterior does indeed concentrate around θ*. So what is going on?

The key is that in the Grunwald-Langford example, the space Θ is not convex. (More precisely, the Kullback-Leibler projection of P onto Θ does not equal the projection of P onto the convex hull of Θ.)
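To see why convexity matters, here is a toy numeric illustration of my own (not from the talk): take the true distribution to be Bernoulli(0.5) and let Θ contain only Bernoulli(0.2) and Bernoulli(0.8). Neither model point is close to the truth, but their even mixture is exactly Bernoulli(0.5), so the projection onto the convex hull lands on the truth while the projection onto Θ itself does not.

```python
from math import log

def kl_bern(p, q):
    """Kullback-Leibler divergence KL(Bernoulli(p) || Bernoulli(q))."""
    return p * log(p / q) + (1 - p) * log((1 - p) / (1 - q))

p_true = 0.5
model = [0.2, 0.8]  # a non-convex, two-point model

# Distance from the truth to each model point: both about 0.2231.
dists = [kl_bern(p_true, q) for q in model]
print([round(d, 4) for d in dists])

# The even mixture of the two model points is itself Bernoulli(0.5),
# so the projection of the truth onto the convex hull is the truth itself.
mix = 0.5 * model[0] + 0.5 * model[1]
print(kl_bern(p_true, mix))  # 0.0
```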

You can fix the problem by replacing Θ with its convex hull. But this is not a good fix. To see why, suppose that each θ ∈ Θ corresponds to some classifier in some set C. If θ₁ and θ₂ correspond to two different classifiers c₁ and c₂, the mixture (θ₁ + θ₂)/2 might not correspond to any classifier in C. Forming mixtures might take you out of the class you are interested in.

Instead, Peter has a better fix. Rather than using the ordinary posterior, use the generalized posterior, which is proportional to π(θ) L(θ)^η, where π is the prior, L is the likelihood, and η is a learning rate that can change with sample size n. It turns out that there is a critical value of η such that the generalized posterior is consistent as long as η stays below that value.
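A minimal sketch of what such a tempered posterior looks like on a finite parameter grid (my own toy example; η = 1 recovers the ordinary posterior):

```python
import numpy as np

def generalized_posterior(prior, loglik, eta):
    """Tempered posterior: weight each theta by prior(theta) * L(theta)^eta.
    Computed in log space for numerical stability."""
    logw = np.log(prior) + eta * loglik
    logw -= logw.max()  # stabilize before exponentiating
    w = np.exp(logw)
    return w / w.sum()

# Toy model: three Bernoulli parameters; data = 10 heads in 20 tosses.
thetas = np.array([0.2, 0.5, 0.8])
prior = np.full(3, 1 / 3)
loglik = 10 * np.log(thetas) + 10 * np.log(1 - thetas)

print(generalized_posterior(prior, loglik, eta=1.0))   # ordinary posterior
print(generalized_posterior(prior, loglik, eta=0.25))  # flatter posterior
```

A smaller η down-weights the likelihood, so the generalized posterior stays closer to the prior; that is the sense in which shrinking the learning rate makes the procedure more cautious.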

The problem is that we don't know this critical value. Now comes a key observation. Suppose you want to predict a new observation. In Bayesian inference, one usually uses the predictive distribution, which is a mixture of the model densities with weights based on the posterior. Consider instead predicting using p_θ, where θ is randomly chosen from the posterior. Peter calls these "mixture prediction" and "randomized prediction." If the posterior is concentrated, then these two types of prediction are similar. He uses the difference between the mixture prediction and the randomized prediction to estimate η. (More precisely, he builds a procedure that mimics a generalized posterior based on a good η.)
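The gap between the two kinds of prediction can be computed in a few lines. This is my own illustrative sketch of the quantity involved (the "mixability gap" of the paper's title), not Peter's actual procedure: by Jensen's inequality the mixture log-loss is never larger than the expected randomized log-loss, and the two coincide when the posterior puts all its mass on one point.

```python
import numpy as np

thetas = np.array([0.2, 0.5, 0.8])   # Bernoulli model points
post = np.array([0.1, 0.7, 0.2])     # a posterior over the model

x = 1                                            # the next observation
p_theta = np.where(x == 1, thetas, 1 - thetas)   # p_theta(x) for each theta

# Mixture prediction: log-loss of the posterior predictive distribution.
mixture_loss = -np.log(post @ p_theta)
# Randomized prediction: expected log-loss with theta drawn from the posterior.
randomized_loss = -(post @ np.log(p_theta))

gap = randomized_loss - mixture_loss
print(gap)  # nonnegative; shrinks toward 0 as the posterior concentrates
```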

The result is a “fixed-up” Bayesian procedure that automatically repairs itself to avoid inconsistency. This is quite remarkable.

Unfortunately, Peter has not finished the paper yet, so we will have to wait a while for all the details. (Peter says he might start his own blog, so perhaps we'll be able to read about the paper there.)

Postscript: Congratulations to Cosma for winning second place in the best science blog contest.

— Larry Wasserman

## 5 Comments

Larry, I have thought about this for a long time – but this comment is from the hip.

I think it was called "Looking for the Jabberwocky" but the idea I encountered before entering biostatistics was that a model (or a representation of something) may well imply a Jabberwocky (no cognisant being should or even could doubt this if they understood the representation) but they should not be disappointed if they could not find the Jabberwocky in what was being represented.

Assuming the universe is finite, implications of non-finite models need not apply to anything that will happen in the particular universe I happen to inhabit. This could be put as: "if it can't be simulated (a necessarily finite approximation of a probability model) one need not worry about it in any brute realities they may need to address."

If I was convinced I was not in some sense wrong, I would not post this – but I am yet to be convinced this sort of thing must concern me.

Cheers

Keith

Dear Keith,

This is of course an important issue.

In the example Larry refers to, things actually go terribly wrong in small samples too (in fact I did some simulations): you have a reasonable but not perfect approximation to the true distribution with a very high prior (say, 1/2), and you have many much worse approximations with much smaller priors. Yet these bad approximations keep getting almost all posterior mass.

The extension to countably infinite models is only there to state the result in a way that says: ‘no matter how many data you observe, the phenomenon will never go away’. But it’s certainly relevant for small samples as well (other, non-Bayesian methods do pick up the best approximation to the ‘truth’ very fast).

The point of my new paper is to have the best of both worlds – perform as well as the Bayesian methods when the model is correct, and as well as these other methods if the model is wrong.

Best wishes,

Peter

Thanks for the clarification and the especially clear picture.

For those who may be interested, I also found a paper that elaborates concerns related to those I was raising:

“Asymptotics of Maximum Likelihood without the LLN or CLT or Sample Size Going to Infinity” Charles J. Geyer

Thanks to arxiv: http://arxiv.org/abs/1206.4762

Hi Larry and others,

Thanks for this 100% accurate (!) recollection of my talk.

A preliminary version of part of this work was just accepted for the ALT (Algorithmic Learning Theory) Conference 2012.

I just put the paper, called

The Safe Bayesian: learning the learning rate via the mixability gap

on my webpage: http://homepages.cwi.nl/~pdg/ftp/alt12longer.pdf

The paper includes the ‘picture that says it all’.

I’m still struggling with writing a longer version explaining all the connections to Tsybakov exponents etc.

Thanks Peter

—LW

## One Trackback

[...] Wasserman, on his new blog, Normal Deviate [here], which also has a nice précis of Peter Grunwald's talk on "Self-repairing Bayesian Statistics". [...]