They are similar but not the same.

The bootstrap samples n observations from the empirical distribution.

(Actually, in a testing problem, the empirical distribution has to be corrected to be consistent with the null hypothesis.)

The type I error goes to 0 as the sample size goes to infinity.

In the permutation test, the type I error is less than alpha.

No large sample approximation needed.

–LW

http://arxiv.org/abs/1207.6076
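The contrast drawn above can be sketched in code. This is a minimal illustration assuming a two-sample difference-of-means test; the data and function names are illustrative, not from the comment.

```python
import numpy as np

rng = np.random.default_rng(0)

def perm_pvalue(x, y, n_perm=10_000):
    """Permutation test for a difference in means: reshuffle the
    pooled data, so no large-sample approximation is needed."""
    pooled = np.concatenate([x, y])
    t_obs = abs(x.mean() - y.mean())
    hits = 0
    for _ in range(n_perm):
        z = rng.permutation(pooled)
        hits += abs(z[:len(x)].mean() - z[len(x):].mean()) >= t_obs
    # The +1 counts the observed statistic itself, which keeps the
    # test valid even with a random subset of permutations.
    return (1 + hits) / (1 + n_perm)

def boot_pvalue(x, y, n_boot=10_000):
    """Bootstrap test: resample n observations from the empirical
    distribution, after re-centring each sample so the empirical
    satisfies the null of equal means."""
    grand = np.concatenate([x, y]).mean()
    x0 = x - x.mean() + grand
    y0 = y - y.mean() + grand
    t_obs = abs(x.mean() - y.mean())
    hits = 0
    for _ in range(n_boot):
        xb = rng.choice(x0, size=len(x), replace=True)
        yb = rng.choice(y0, size=len(y), replace=True)
        hits += abs(xb.mean() - yb.mean()) >= t_obs
    return hits / n_boot

x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.5, 1.0, size=30)
print(perm_pvalue(x, y), boot_pvalue(x, y))
```

The permutation p-value is finite-sample valid, while the bootstrap one is only asymptotically so, which is the distinction the comment draws.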

It depends on the setting.

Typically, that corresponds to testing

if a coefficient is 0 in a regression

but a “truly” distribution-free version is not

obvious.
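For what it's worth, the regression route mentioned above can be sketched with a Freedman-Lane-style permutation of reduced-model residuals. This is only asymptotically valid, echoing the remark that a truly distribution-free version is not obvious; all names and data below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def freedman_lane(y, g, Z, n_perm=5000):
    """Permutation p-value for the group coefficient in
    y = a + b*g + Z @ c + noise (Freedman-Lane scheme):
    permute the residuals of the covariate-only fit."""
    n = len(y)
    X_full = np.column_stack([np.ones(n), g, Z])
    X_red = np.column_stack([np.ones(n), Z])

    def coef_g(yv):
        # Absolute fitted coefficient on the group indicator.
        beta, *_ = np.linalg.lstsq(X_full, yv, rcond=None)
        return abs(beta[1])

    # Reduced model: covariates only (the null of no group effect).
    beta_r, *_ = np.linalg.lstsq(X_red, y, rcond=None)
    fitted = X_red @ beta_r
    resid = y - fitted

    t_obs = coef_g(y)
    hits = 0
    for _ in range(n_perm):
        hits += coef_g(fitted + rng.permutation(resid)) >= t_obs
    return (1 + hits) / (1 + n_perm)

n = 80
Z = rng.normal(size=(n, 2))        # covariates
g = rng.integers(0, 2, size=n)     # group label
y = 0.5 * g + Z @ np.array([1.0, -0.5]) + rng.normal(size=n)
print(freedman_lane(y, g, Z))
```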

I am just curious how to do a “two-sample t-test” when there are covariates. Are there some modern methods?

It is exact no matter how many random permutations you use.

Exact means: Pr(type I error) <= alpha.

The only assumption is that the observations are i.i.d.

No stronger than usual
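The validity claim can be checked by simulation. A minimal sketch follows, with i.i.d. normal data under the null as an illustrative choice and a simple difference of means as the statistic:

```python
import numpy as np

rng = np.random.default_rng(2)

def perm_pvalue(x, y, n_perm=200):
    """Random-permutation p-value. Counting the observed statistic
    in both numerator and denominator gives Pr(p <= alpha) <= alpha
    under the null for ANY n_perm, not just full enumeration."""
    pooled = np.concatenate([x, y])
    t_obs = abs(x.mean() - y.mean())
    hits = 0
    for _ in range(n_perm):
        z = rng.permutation(pooled)
        hits += abs(z[:len(x)].mean() - z[len(x):].mean()) >= t_obs
    return (1 + hits) / (1 + n_perm)

# Under the null (both samples from the same distribution) the
# rejection rate at level alpha should stay at or below alpha,
# up to Monte Carlo error.
alpha, n_sim = 0.05, 1000
rej = 0
for _ in range(n_sim):
    rej += perm_pvalue(rng.normal(size=15), rng.normal(size=15)) <= alpha
print(rej / n_sim)
```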

Shouldn't it be made clear what you mean by exact and approximate here?

Above, for the permutation test you used a random sample of just 10,000 permutations, while for the naive bootstrap, if the sample size were small enough, one could enumerate all possible sample paths of i.i.d. draws from the empirical distribution (taken as the true unknown distribution) and be exact.

I would suggest the assumptions of the permutation test are strong (i.e. that the labels were random and nothing else was), but they are often ensured not to be too wrong under the (strict Fisher) null.

The combining idea is interesting, given uniform but dependent p-values and all the functions that could be entertained to combine them.
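To make the combining point concrete, here is a minimal sketch of two combining functions (the names are my own). The "twice the average" rule is valid for uniform but arbitrarily dependent p-values (Rüschendorf's bound), while Fisher's classic combiner needs independence.

```python
import math

def twice_mean(pvals):
    """Twice the average p-value, capped at 1: a valid combined
    p-value under ARBITRARY dependence among the p-values."""
    return min(1.0, 2.0 * sum(pvals) / len(pvals))

def fisher_combine(pvals):
    """Fisher's method: T = -2 * sum(log p) is chi-squared with 2k
    degrees of freedom, but only if the p-values are independent;
    under dependence it can be anti-conservative. For 2k df the
    survival function has a closed form (Erlang tail)."""
    k = len(pvals)
    half_t = -sum(math.log(p) for p in pvals)  # = T / 2
    return math.exp(-half_t) * sum(half_t**i / math.factorial(i)
                                   for i in range(k))

ps = [0.01, 0.04, 0.20]
print(twice_mean(ps))      # ~0.167, safe even if the p-values are dependent
print(fisher_combine(ps))  # smaller, but assumes independence
```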
