Stephen Ziliak Rejects Significance Testing

In an opinion piece in the Financial Post, Stephen Ziliak goes into the land of hyperbole, declaring that all significance testing is junk science. It starts like this:

I want to believe as much as the next person that particle physicists have discovered a Higgs boson, the so-called “God particle,” one with a mass of 125 gigaelectronic volts (GeV). But so far I do not buy the statistical claims being made about the discovery. Since the claims about the evidence are based on “statistical significance” – that is, on the number of standard deviations by which the observed signal departs from a null hypothesis of “no difference” – the physicists’ claims are not believable. Statistical significance is junk science, and its big piles of nonsense are spoiling the research of more than particle physicists.

He goes on to say:

Statistical significance stinks. In statistical sciences from economics to medicine, including some parts of physics and chemistry, the ubiquitous “test” for “statistical significance” cannot, and will not, prove that a Higgs boson exists, any more than it can prove the reality of God, the existence of a good pain pill, or the validity of loose monetary policy.

While I have said many times in this blog that I, too, think significance testing is misused, it is ridiculous to jump to the conclusion that “Statistical significance is junk science.” Ironically, Mr. Ziliak is engaging in exactly the same all-or-nothing thinking that he is criticizing.

Name any statistical method, confidence intervals, Bayesian inference, whatever you like, and it is easy to find people misusing it. The fact that people misuse or misunderstand a statistical method does not render it dangerous. The blind and misinformed use of any statistical method is dangerous. Statistical ignorance is the enemy. Mr. Ziliak’s singular focus on the evils of testing seems more cultish than scientific.
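For readers unfamiliar with the jargon in Ziliak’s quote: the “number of standard deviations” (sigmas) maps directly to a tail probability under the null hypothesis. A minimal Python sketch (not from the post; it simply illustrates the convention, assuming the usual one-sided Gaussian tail used in particle physics) shows why “5 sigma” is such a demanding threshold:

```python
from math import erfc, sqrt

def p_value(z: float) -> float:
    """One-sided tail probability of a standard normal beyond z sigmas."""
    return 0.5 * erfc(z / sqrt(2))

# The 5-sigma convention corresponds to roughly a 1-in-3.5-million chance
# of seeing so large a signal if the null ("no particle") were true.
for z in (2, 3, 5):
    print(f"{z} sigma -> one-sided p = {p_value(z):.3g}")
```

Of course, as the discussion below makes clear, a small p-value quantifies surprise under the null; it does not by itself settle whether the null or the alternative is true.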


11 Comments

  1. Ken
    Posted June 14, 2013 at 6:18 am | Permalink

    He is, after all, an economist.

    I’m not certain what he is planning on replacing statistical significance with. Judging by a court case he has been involved in, his belief is that you should be able to just look at the numbers and decide for yourself whether it is an important difference. The problem is that in most drug trials there will be lots of important-looking differences that aren’t statistically significant, and in reality aren’t different.

  2. Keith O'Rourke
    Posted June 14, 2013 at 7:48 am | Permalink

    True, but like drugs, some techniques may be more prone to misuse than others.

  3. bayesrules
    Posted June 14, 2013 at 9:04 am | Permalink

    Physicists are well aware (from experience) of the problems that John Ioannidis points out in his well-known paper, which is why they routinely require 5 sigmas in these experiments. As I understand it, the Higgs result is confirmed by two independent experiments, each of which achieved 5 sigmas. I’m pretty sure that a reasonable Bayesian analysis of the same data would also convincingly show that some sort of Higgs has been discovered.

    • george
      Posted June 14, 2013 at 4:03 pm | Permalink

      Requiring 5 sigmas doesn’t address Ziliak’s concerns – that statistical significance is the wrong thing to assess.

      NB I do agree that Ziliak misses the bigger point, that even if one views statistical significance as the wrong thing, it can still be useful.

    • rj444
      Posted June 14, 2013 at 4:29 pm | Permalink

      Increasing the stringency of the statistical significance filter is the wrong way to go about addressing the problems Ioannidis raises, because it worsens Ioannidis’s _other_ complaint, about effect overestimation.

      I don’t like giving Ioannidis credit for these criticisms because they’re not novel, but unfortunately his are the ones everyone is most familiar with. Put out papers with provocative titles and everyone reads them, I guess …

      • bayesrules
        Posted June 16, 2013 at 11:18 am | Permalink

        Not sure which of Ioannidis’ issues you are thinking of. If it’s sampling to a foregone conclusion, that’s out, because in the Higgs experiments the amount of data to be collected was set in advance.

        I don’t like statistical significance either.

  4. R Kramer
    Posted June 14, 2013 at 9:53 am | Permalink

    As someone working with statistics daily, I sympathize with Ziliak’s disenchantment, but I agree with Dr. W: don’t blame the tool, blame the artless wielders of the tool.

  5. Posted June 14, 2013 at 10:27 am | Permalink

    As Nate Silver would put it: he is a hedgehog.

  6. Posted June 14, 2013 at 6:11 pm | Permalink

    So glad you’ve posted this. He sent this to me a couple of days ago, but I think I was trying to ignore it. We need to get the FP to have us write “On the Pseudoscience of Stephen Ziliak”. Anyway, I footnote his article on my blog today:

    http://errorstatistics.com/2013/06/14/p-values-cant-be-trusted-except-when-used-to-argue-that-p-values-cant-be-trusted/

    I think this is over the edge, even for him. Must be needing to stir the pot of p-value hysteria.
    http://errorstatistics.com/2012/08/25/did-higgs-physicists-miss-an-opportunity-by-not-consulting-more-with-statisticians/

  7. Emre
    Posted June 14, 2013 at 9:51 pm | Permalink

    Dear Larry, please don’t feed the troll. From what I can tell from a five-minute Google search and his 2009 JSM talk, Stephen Ziliak has repeatedly made similar comments and has even written a book about the idea. I have not yet read his book, but it doesn’t look like he offers any solutions to the problems.

One Trackback

  1. […] Stephen Ziliak Rejects Significance Testing (normaldeviate.wordpress.com) […]
