The jackknife estimator and simulation studies of estimators
Dawe, Thomas C.
Many important estimators in statistics have the property that the bias of the estimator can be written as a Taylor series expansion in powers of 1/n, where n is the sample size. This thesis investigates a bias reduction procedure that involves the reuse of a random sample of independent and identically distributed random variables. This procedure is called the jackknife estimator. If the parameter to be estimated is Θ, then the jackknife requires n+1 estimates of Θ: one estimate based on the entire sample, and n estimates based on subsamples of size n-1, with the i-th observation successively deleted. The jackknife estimator is then taken to be a linear combination of these estimates and has the property that it removes the first-order term of the bias in its Taylor series expansion.

This thesis contains a review of the literature on the jackknife estimator, including a discussion of the various generalizations and extensions of the jackknife, large and small sample properties of the jackknife, and a survey of its applications.

The thesis concludes with an extensive simulation study comparing the bias and mean squared error of several competing estimators of the scale parameter of the extreme value Type I distribution for maximum values. In particular, the sampling distributions of the maximum likelihood estimator and the jackknife estimator of this scale parameter are tabulated; it is shown that the sampling distributions are approximately normal for sample sizes of fifty or more. The construction of confidence intervals for Θ using these estimators is also investigated.
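The linear combination described above can be sketched in Python; this is a minimal illustration assumed from the standard jackknife formula (n·θ̂ minus (n-1) times the mean of the leave-one-out estimates), not code from the thesis. The variance example is chosen because the jackknife removes the bias of the plug-in variance estimator exactly, recovering the unbiased sample variance.

```python
import numpy as np

def jackknife_estimate(data, estimator):
    """Bias-corrected jackknife estimate of a scalar statistic.

    data: 1-D array of i.i.d. observations
    estimator: function mapping a sample to a scalar estimate of Theta
    """
    n = len(data)
    theta_full = estimator(data)  # the one estimate that uses the entire sample
    # n leave-one-out estimates, deleting the i-th observation in turn
    theta_loo = np.array([estimator(np.delete(data, i)) for i in range(n)])
    # linear combination that cancels the first-order (1/n) bias term
    return n * theta_full - (n - 1) * theta_loo.mean()

# Example: the plug-in variance estimator has bias of order 1/n;
# jackknifing it yields the unbiased sample variance exactly.
rng = np.random.default_rng(0)
x = rng.normal(size=30)
biased_var = lambda s: np.mean((s - s.mean()) ** 2)
print(jackknife_estimate(x, biased_var))  # equals np.var(x, ddof=1)
```

The same function applies unchanged to any scalar estimator, such as a maximum likelihood estimate of a scale parameter, at the cost of n+1 evaluations of the estimator.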