Design Effects and Effective Sample Size

The specific way in which data is collected determines how statistical tests should be computed. Most statistical tests have been developed under the assumption that the data has been collected by Simple Random Sampling with a 100% Response Rate. This is rarely true. When this assumption does not hold, there are two corrections that may make statistical inference more accurate:

  1. Weighting can be used to adjust the estimates.
  2. The statistical tests and other methods of inference can be adjusted to take into account the extent to which the sampling error is likely to have been affected. Most commonly, the variances computed using formulas and methods that assume simple random sampling are still used, but each variance is multiplied by a constant that reflects the extent of the departure from simple random sampling. This constant is called the design effect, or just deff for short.

The design effect (deff)

The design effect, often called just deff, quantifies the extent to which the expected sampling error in a survey departs from the sampling error that can be expected under simple random sampling.

Consider a survey of 300 people which concludes that 50% of people agree with some statement. The variance of the estimated proportion [math]\displaystyle{ p }[/math] of 50%, given the sample size [math]\displaystyle{ n }[/math] of 300 and assuming simple random sampling, is given by:

[math]\displaystyle{ s^2_{srs} = \frac{p(1-p)}{n}=\frac{.5(1-.5)}{300}=0.000833333 }[/math]

However, if the data was not collected by simple random sampling, then this formula produces an incorrect estimate of the variance. Many different formulas exist for estimating the variance of proportions where the data is not collected from a simple random sample (see [1]). Consider the case where one of these formulas is appropriate and the variance of the estimated proportion is computed as 0.002083333. The design effect is then computed as:

[math]\displaystyle{ D_{eff}=\frac{s^2_{correct}}{s^2_{srs}}= \frac{0.002083333}{0.000833333} = 2.5 }[/math]
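
As a minimal sketch, the arithmetic above can be reproduced in Python (the "correct" variance of 0.002083333 is simply taken as given from the example):

  # Worked example: design effect for p = 0.5, n = 300
  p = 0.5
  n = 300
  var_srs = p * (1 - p) / n        # 0.000833333 under simple random sampling
  var_correct = 0.002083333        # taken as given from the example above
  deff = var_correct / var_srs     # approximately 2.5
  print(var_srs, deff)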

Thus, the design effect is a constant that can be used to correct estimated sampling variance.

The design effect can be equivalently defined as the actual sample size divided by the effective sample size. Thus, where the true sampling variance is twice that computed under the assumption of simple random sampling, the design effect is 2.0.

In some situations the design effect of a study may be known (e.g., it may have been computed by a statistician). In such situations, the simple formulas assuming simple random sampling can be used, with the resulting variance multiplied by the design effect.
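
For example, if the design effect is known to be 2.5, the variance from the simple random sampling formula can be scaled directly; a minimal sketch in Python, continuing the numbers used above:

  import math

  # Scaling an SRS variance by a known design effect (numbers from the example)
  p, n, deff = 0.5, 300, 2.5
  var_srs = p * (1 - p) / n
  var_adjusted = deff * var_srs            # approximately 0.002083333
  se_adjusted = math.sqrt(var_adjusted)    # standard error allowing for the design
  print(var_adjusted, se_adjusted)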

Effective sample size

Many researchers use sample size as a guide to understanding the likely sampling error in surveys. Opinion polls, textbooks and the like often present tables showing how standard errors, margins of error or confidence intervals vary by sample size. For example, with a sample size of 300, an estimate of 50% will often be presented as having a margin of error of 5.7% (which is computed as [math]\displaystyle{ 1.96\times\sqrt{s^2_{srs}}\times 100 }[/math]).
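
For instance, the 5.7% figure for a sample of 300 follows directly from the simple random sampling formula:

  import math

  # Margin of error under simple random sampling for p = 0.5, n = 300
  p, n = 0.5, 300
  var_srs = p * (1 - p) / n
  margin_of_error = 1.96 * math.sqrt(var_srs) * 100   # in percentage points
  print(round(margin_of_error, 1))                    # 5.7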

Where the design effect is other than 1, both the tables and the intuitive understanding that most researchers have about the effect of sample size become incorrect. The effective sample size is an estimate of the sample size that a survey conducted using simple random sampling would have required to achieve the same sampling error as computed in the study that did not employ simple random sampling. The effective sample size in this example is computed as:

Effective size [math]\displaystyle{ = \frac{n}{D_{eff}} = \frac{300}{2.5} = 120 }[/math]
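
A short sketch, continuing the example, shows the effective sample size and the wider margin of error it implies (the design effect of 2.5 is taken from the example above):

  import math

  # Effective sample size and the margin of error it implies
  n, deff, p = 300, 2.5, 0.5
  n_effective = n / deff                                 # 120.0
  margin_of_error = 1.96 * math.sqrt(p * (1 - p) / n_effective) * 100
  print(n_effective, round(margin_of_error, 1))          # 120.0 8.9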

Kish's approximate formula for computing effective sample size

In many situations the correct design effect is not computed, either because it is too complicated or too computationally expensive, or because there is insufficient information for it to be computed. A formula developed by Kish [2] is widely used for computing the effective sample size in such instances:

Effective size [math]\displaystyle{ = \frac{(\sum^n_{i=1} w_i)^2}{\sum^n_{i=1}w^2_i} }[/math]

where [math]\displaystyle{ w_i }[/math] is the weight of the [math]\displaystyle{ i }[/math]th respondent.
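
A minimal sketch of Kish's approximation in Python (the weights shown are purely illustrative):

  # Kish's approximate effective sample size from a vector of weights
  def kish_effective_sample_size(weights):
      return sum(weights) ** 2 / sum(w ** 2 for w in weights)

  # Equal weights reproduce the actual sample size; unequal weights give a smaller value
  print(kish_effective_sample_size([1.0] * 300))                  # 300.0
  print(kish_effective_sample_size([0.5] * 150 + [1.5] * 150))    # 240.0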

Although it is better to use this approximation than to use the actual sample size, this formula is only a rough approximation and can be highly inaccurate. In particular:

  • This formula is very inaccurate in situations where the weights are correlated with the variables being used in an analysis. In such situations, it under-estimates the effective sample size.
  • This formula ignores clustering. Where clustering exists in the data, it over-estimates the effective sample size.
  • This formula computes a single effective sample size. In reality, the effective sample size differs from statistic to statistic within a study.

References
