What do you do when analyzing data is fun, but you don't have any new data? You make it up.

This simulation tests the type I error rate of the two-sample t-test in R and SAS. It demonstrates efficient methods for simulation, and it reminds the reader not to take the result of any single hypothesis test as gospel truth: there is always a risk of a false positive (or false negative), so establishing truth requires more than one research study.

A type I error is a false positive. That is, it happens when a hypothesis test rejects the null hypothesis when in fact the null hypothesis is true. In this simulation the null hypothesis is true by design, though in the real world we cannot be sure whether the null hypothesis is true. This is why we write that we "fail to reject the null hypothesis" rather than that "we accept it." If hypothesis tests made no errors, this simulation would never reject the null hypothesis; by design, however, we expect it to be rejected at a rate of *alpha*, the significance level. The de facto standard for alpha is 0.05.
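As a quick sanity check before running the full simulations (a Python sketch, separate from the article's R and SAS code): when the null hypothesis is true, the p-value is uniformly distributed, so each test rejects with probability alpha, and the number of rejections across independent tests follows a Binomial(n, alpha) distribution.

```python
import math

alpha = 0.05     # significance level
n_tests = 1000   # number of independent tests of a true null hypothesis

# Expected number of false positives across all tests
expected = alpha * n_tests

# Standard deviation of a Binomial(n, alpha) count of rejections
sd = math.sqrt(n_tests * alpha * (1 - alpha))

print(expected)        # 50.0
print(round(sd, 1))    # 6.9
```

So in 1000 simulations we should see roughly 50 rejections, give or take about 7, which is why the observed error rate hovers near but rarely lands exactly on 0.05.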

## R

First, we run a simulation in R by repeatedly comparing randomly generated sets of normally distributed values using the two-sample t-test. Notice the simulation is vectorized: there are no "for" loops to clutter the code and slow the simulation.

```r
# type I error
alpha.p <- 0.05

# number of simulations
n.simulations <- 1000

# number of observations in each simulation
n.obs <- 100

# a vector of test results
type.one.error <- replicate(n.simulations,
  t.test(rnorm(n.obs), rnorm(n.obs), var.equal = TRUE)$p.value) < alpha.p

# type I error for the whole simulation
mean(type.one.error)

# Store cumulative results in data frame for plotting
sim <- data.frame(
  n.simulations = 1:n.simulations,
  type.one.error.rate = cumsum(type.one.error) / seq_along(type.one.error))

# plot using ggplot2
require(ggplot2)
ggplot(sim, aes(x = n.simulations, y = type.one.error.rate)) +
  geom_line() +
  xlab('Number of simulations') +
  ylab('Cumulative type I error rate') +
  ggtitle('Simulation of type I error in t-test') +
  geom_abline(intercept = alpha.p, slope = 0, col = 'red') +
  theme_bw()
```

## SAS

Likewise, here is the equivalent code to do the same in SAS. Notice the simulation is implemented not as a slow SAS macro loop; instead, it uses the BY statement in PROC TTEST, which runs the t-test once per simulation in a single procedure call.

```sas
/* Create a data set with 1000 simulations.
   Each simulation has 100 observations in each of two groups. */
data normal;
   length simulation 4 i 3; /* save space and time */
   do simulation = 1 to 1000;
      do i = 1 to 100;
         group = 'A';
         /* The values are normally distributed */
         x = rand('normal');
         output;
         group = 'B';
         x = rand('normal');
         output;
      end;
   end;
run;

/* Run two-sample t-test once for each simulation,
   and output to a data set called ttests. */
ods _all_ close;
ods output ttests=ttests;
proc ttest plots=none data=normal;
   by simulation;
   class group;
   var x;
run;

data ttests;
   set ttests;
   /* Limit the rows */
   if variances = 'Equal';
   /* Define the error as a boolean */
   type_one_error = probt < 0.05;
   /* cumulative error */
   retain cumulative_error_count;
   format cumulative_error_rate percent10.2;
   label cumulative_error_rate = 'Cumulative error rate';
   if simulation eq 1 then cumulative_error_count = 0;
   cumulative_error_count + type_one_error;
   cumulative_error_rate = cumulative_error_count / simulation;
run;

/* Summarize the type I error rates for this simulation */
ods html;
proc freq data=ttests;
   table type_one_error / nocum;
run;

/* Draw a line plot */
proc sgplot data=ttests;
   series x=simulation y=cumulative_error_rate;
   refline 0.05 / axis=y lineattrs=(color=red);
run;
```

## Sawtooth

Did you notice the sawtooth pattern in the error rate? False positives are relatively rare, and each one produces a spike in the cumulative error rate. Then, for each subsequent simulation without a false positive, the rate decays gradually, because the numerator (the count of false positives) stays fixed while the denominator (the count of simulations) grows by one each time.
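The mechanics of the sawtooth can be shown in a few lines (a Python sketch with a made-up rejection sequence, not output from the simulations above):

```python
# Cumulative type I error rate for a hypothetical sequence of test results,
# where True marks a false-positive rejection of a true null hypothesis.
rejections = [False, False, True, False, False, False, True, False]

cumulative_rate = []
count = 0
for n, rejected in enumerate(rejections, start=1):
    count += rejected                   # numerator: rejections so far
    cumulative_rate.append(count / n)   # denominator: simulations so far

print([round(r, 3) for r in cumulative_rate])
# Spikes at the 3rd and 7th tests, decaying in between:
# [0.0, 0.0, 0.333, 0.25, 0.2, 0.167, 0.286, 0.25]
```

Each rejection bumps the numerator by one, creating the vertical jump; between rejections the curve follows k/n for fixed k, giving the slow downhill slope of each tooth.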

## Conclusion

This article was developed on Ubuntu 16.04 with R 3.4 and Windows 7 with SAS 9.4.

See also the article: Type I error rates in test of normality by simulation.
