Professional Poster

Use of Nonparametric Statistical Methods for Reliable IPE Program Evaluation with Small Sample Sizes

Thursday, August 6, 2020, 10:00 am - 10:00 am EDT

Background

Development of interprofessional education (IPE) at an institution may occur gradually, and the size of participant cohorts may naturally fluctuate. Fewer participants in a given period translate to small sample sizes, which impact program evaluation and the ability to detect an effect of an IPE intervention. More flexible statistical methods are required when IPE data fail to meet the assumptions underlying reliable results from the standard asymptotic method. To demonstrate this, we conducted applied robust parametric and nonparametric analyses and compared results from three methods (bootstrap, Exact, and Monte Carlo) that can better evaluate data generated from a small-sample IPE initiative.

 

Methods

Program evaluation data from the Leadership Legacy fall 2019 cohort (N = 10) were analyzed using the parametric bootstrapped dependent t-test and the nonparametric Wilcoxon signed-rank test with Exact and Monte Carlo methods. Tests were conducted on summed assessment scores measuring participants' knowledge of the educational training requirements and scopes of practice of colleagues from other professions, compared pre- and post-program participation.
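The three approaches can be sketched in Python with SciPy. The scores below are hypothetical stand-ins for illustration only (the actual cohort data are not reproduced here); the exact Wilcoxon p-value, a Monte Carlo sign-flip approximation with 10,000 resamples, and a percentile bootstrap for the paired mean difference are computed side by side.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical pre/post summed assessment scores for a cohort of 10
# (chosen with distinct, nonzero paired differences so the exact
# Wilcoxon distribution is well defined).
pre = np.array([28, 30, 31, 29, 33, 27, 32, 30, 31, 26])
post = np.array([33, 34, 38, 31, 39, 30, 40, 29, 40, 36])
diffs = post - pre

# 1) Exact Wilcoxon signed-rank test: SciPy enumerates the full null
#    distribution of sign patterns, which is feasible only for small N.
exact = stats.wilcoxon(pre, post, method="exact")

# 2) Monte Carlo approximation: build the null by randomly flipping the
#    sign of each paired difference, here 10,000 times.
def signed_rank_T(d):
    d = d[d != 0]  # Wilcoxon drops zero differences
    ranks = stats.rankdata(np.abs(d))
    return min(ranks[d > 0].sum(), ranks[d < 0].sum())

obs_T = signed_rank_T(diffs)
null_T = np.array([
    signed_rank_T(diffs * rng.choice([-1, 1], size=diffs.size))
    for _ in range(10_000)
])
mc_p = np.mean(null_T <= obs_T)  # smaller T is more extreme

# 3) Bootstrapped dependent t-test analogue: percentile bootstrap
#    confidence interval for the mean pre/post gain.
boot_means = np.array([
    rng.choice(diffs, size=diffs.size, replace=True).mean()
    for _ in range(10_000)
])
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])

print(f"Exact Wilcoxon p = {exact.pvalue:.4f}")
print(f"Monte Carlo p    = {mc_p:.4f}")
print(f"Bootstrap 95% CI for mean gain: [{ci_low:.2f}, {ci_high:.2f}]")
```

With small samples the exact and Monte Carlo p-values agree closely, while the bootstrap interval conveys the same pre/post effect on the scale of the mean difference.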

 

Results

Nonparametric Exact and Monte Carlo (10,000 random samples) tests obtained more reliable estimates from our small-sample data set than did parametric bootstrap tests. Wilcoxon signed-rank tests using Exact and Monte Carlo p-values revealed a statistically significant increase in knowledge of educational training requirements following participation in the IPE program, Z = -1.66, p < .05, N = 20, with a medium effect size (r = .37); the median score on educational training requirements increased from pre-program (Mdn = 31) to post-program (Mdn = 35). Similarly, Wilcoxon signed-rank tests using Exact and Monte Carlo p-values revealed a statistically significant increase in knowledge of scopes of practice, Z = -2.136, p < .05, N = 20, with a medium-to-large effect size (r = .48); the median score on scopes of practice increased from pre-program (Mdn = 29) to post-program (Mdn = 33).
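The effect sizes above follow the common convention r = Z / √N, where N here counts the total observations (10 participants × 2 measurement occasions = 20). A minimal check reproduces the reported values:

```python
import math

def effect_size_r(z, n):
    """Effect size r for a Wilcoxon signed-rank Z statistic."""
    return abs(z) / math.sqrt(n)

print(round(effect_size_r(-1.66, 20), 2))   # training requirements → 0.37
print(round(effect_size_r(-2.136, 20), 2))  # scopes of practice → 0.48
```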

 

Conclusion

IPE evaluators intending to more reliably characterize programmatic effects should use nonparametric Exact and Monte Carlo tests for small sample sizes (N ≤ 50). For relatively larger sample sizes, the bootstrapped parametric dependent t-test may be appropriate.

 

Implications

Programs earlier in their IPE implementation, or those with small sample sizes during a particular time frame, may benefit from using nonparametric Exact and Monte Carlo methods and, with relatively larger data sets, bootstrapping procedures.