
Thanks

I was not able to get the paper.

LW

The way I see it, Birnbaum’s results are about equivalences of realizations of experiments; when expressed with the right set-theoretical tools (equivalence relations over a well-defined space of realizations), it seems to me that the tiny letters saying “Hey, this theorem only applies to one person (one prior) at a time” really are there. Please take a look at our short revision paper

http://proceedings.aip.org/resource/2/apcpcs/1073/1/96_1

especially Example 3. With two different priors, how would you come up with a (necessarily reflexive) equivalence relation over the space of realizations?

Best regards,

Paulo.

P.S. I enjoy reading the blog. Please, keep posting.

Look forward to such examples, but I agree with Brian: I don’t see a problem here with the likelihood being the minimal sufficient statistic, but rather with how to work with it. It might make it clearer to write down a likelihood for each of the one hundred observations (the full likelihood being the product of these). Each one is a well-defined function of the 100,001 unknown parameters, and if you had 100,001+ of these, what to do would be fairly straightforward.
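A toy sketch of that bookkeeping, assuming for illustration a simple model where each observation y_i is Normal(theta_i, 1) (a stand-in for the actual model under discussion, not the one in the example):

```python
import math

# Hypothetical toy model: y_i ~ Normal(theta_i, 1).
# Each observation contributes its own well-defined likelihood term.

def log_lik_one(y_i, theta_i):
    """Log-likelihood contribution of a single observation."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * (y_i - theta_i) ** 2

def log_lik_full(y, theta):
    """Full log-likelihood: the sum of the per-observation terms
    (equivalently, the product of the individual likelihoods)."""
    return sum(log_lik_one(y_i, t_i) for y_i, t_i in zip(y, theta))

y = [0.1, -0.3, 1.2]      # three observations, for brevity
theta = [0.0, 0.0, 1.0]   # a candidate parameter value for each
print(log_lik_full(y, theta))
```

The point is only structural: the full likelihood factors into one piece per observation, each a function of the unknown parameters.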

This brings me to comment on why I believe sufficiency itself is bogus.

Fisher’s original motivation was to summarize, say, two studies so that with just the summaries a combined analysis could be done that was as good as having the raw data from both studies. Likelihood does that for _estimation_ but not for testing the fit of the model. A check of the model’s fit against the joint raw data might easily lead one to reject the model, and the likelihoods computed under the rejected model will not necessarily be sufficient for the new model.
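A minimal sketch of the estimation half of that claim, assuming a normal model for two hypothetical studies: the per-study summaries (mean, n) recover the pooled estimate exactly, but once the raw observations are discarded there is nothing left to check the model against.

```python
import statistics

# Two hypothetical studies (illustrative numbers only).
study1 = [1.0, 2.0, 3.0]
study2 = [2.0, 4.0]

# Pooled estimate of the mean from the raw data:
mu_raw = statistics.mean(study1 + study2)

# Pooled estimate from per-study summaries alone:
summaries = [(statistics.mean(s), len(s)) for s in (study1, study2)]
mu_sum = sum(m * n for m, n in summaries) / sum(n for _, n in summaries)

assert mu_raw == mu_sum  # summaries suffice for *estimation*

# But a check of the model itself (outliers, skewness, non-normality in
# the pooled raw data) cannot be run on (mean, n): the individual
# observations are gone once only the summaries are kept.
```

The design point is exactly the one above: sufficiency is sufficiency *under the assumed model*, and model criticism lives outside that assumption.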

And today we can just archive the data for later re-use, so summaries serve no purpose. (David Cox once corrected me on that, saying they are useful for splitting up information, for instance into that for estimation and that for testing fit.)

But *which* Bayes estimator? Estimators are just ways to summarize the posterior, so there are many one could use. Some estimators may end up giving sparse estimates even when the prior and/or posterior gives little (or zero) posterior support to sparseness in the true underlying parameters.

Just like there’s no “hidden label” in Birnbaum (which is a great point!) there’s nothing in Bayes that says one has to use the posterior mean/median/mode.
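A toy numerical sketch of how much the choice of summary matters, assuming (hypothetically) y ~ Normal(theta, 1) with a Laplace prior on theta: the posterior *mode* (the MAP, lasso-style estimate) can be exactly zero, while the posterior *mean* of the very same posterior is not.

```python
import math

# Hypothetical setup: y ~ Normal(theta, 1), theta ~ Laplace(0, b).
y, b = 0.5, 0.5  # illustrative values; 1/b is the soft-threshold level

def log_post(theta):
    # Log posterior up to an additive constant.
    return -0.5 * (y - theta) ** 2 - abs(theta) / b

# Crude grid approximation of the posterior on [-5, 5].
grid = [i / 1000.0 for i in range(-5000, 5001)]
w = [math.exp(log_post(t)) for t in grid]
z = sum(w)

post_mean = sum(t * wi for t, wi in zip(grid, w)) / z
post_mode = grid[max(range(len(grid)), key=lambda i: w[i])]

print(post_mode)  # exactly 0 here, since |y| <= 1/b (soft thresholding)
print(post_mean)  # strictly positive: the mean does not land on 0
```

Same posterior, two "Bayes estimates": one sparse, one not, even though the continuous posterior itself puts zero probability on theta being exactly zero.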

Fair enough. But there are examples where

(i) the likelihood function contains no information

(ii) yet there exist good estimators.

In fact, I am preparing a post on this right now.

—LW