Effects of Collapsing Data from Crossover Designs

  • John W. Cotton
Chapter
Part of the Recent Research in Psychology book series (PSYCHOLOGY)

Abstract

Much behavioral research using within-subject designs employs special balancing methods to control for carryover effects and for period effects due to practice and fatigue. Unfortunately, analysis of data from these studies often does not make full use of the information available. Rather, data are analyzed after collapsing scores for a given treatment into a total for each subject. This method of data analysis is shown to produce bias in estimation of treatment effects and of the standard errors of such estimates. Therefore, both the nominal significance levels and the nominal power values for these analyses may be in error. A numerical example illustrates these difficulties. Recommendations are given for changes in experimental design and analysis methods that can obviate these difficulties.
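To make the bias concrete, here is a minimal simulation sketch (not from the chapter; all parameter values and names are illustrative) of a two-treatment AB/BA crossover with a first-order carryover effect. Pooling each treatment's scores while ignoring period and carryover structure — the "collapsed" analysis criticized above — estimates not the true treatment effect τ_B − τ_A but τ_B − τ_A + (λ_A − λ_B)/2, where λ is the carryover of the preceding treatment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a 2x2 (AB/BA) crossover simulation.
mu = 10.0                                   # grand mean
pi = np.array([0.0, 1.0])                   # period effects
tau = {"A": 0.0, "B": 2.0}                  # true treatment effects
lam = {"A": 1.5, "B": 0.0}                  # first-order carryover effects
n = 20000                                   # subjects per sequence

def simulate(seq):
    """Simulate one sequence (e.g. 'AB'); return (score_A, score_B)."""
    subj = rng.normal(0.0, 1.0, n)          # random subject effects
    scores = {}
    for p, t in enumerate(seq):
        carry = lam[seq[p - 1]] if p > 0 else 0.0
        scores[t] = (mu + pi[p] + tau[t] + carry + subj
                     + rng.normal(0.0, 1.0, n))
    return scores["A"], scores["B"]

a1, b1 = simulate("AB")
a2, b2 = simulate("BA")

# "Collapsed" analysis: pool each treatment's scores across periods and
# sequences, ignoring the period and carryover structure entirely.
naive = np.concatenate([b1, b2]).mean() - np.concatenate([a1, a2]).mean()

true_effect = tau["B"] - tau["A"]           # 2.0
expected_bias = (lam["A"] - lam["B"]) / 2.0 # 0.75
print(f"naive estimate {naive:.2f} vs true effect {true_effect:.2f}")
```

With these illustrative values the pooled estimate converges to about 2.75 rather than the true 2.0, matching the analytic bias (λ_A − λ_B)/2 = 0.75; an analysis that models period and carryover terms explicitly would remove it.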

Keywords

Carryover effect, Period effect, Crossover design, Generalized additive model, Uniform design



Copyright information

© Springer-Verlag New York, Inc. 1994

Authors and Affiliations

  • John W. Cotton
    1. Departments of Education and Psychology, University of California, Santa Barbara, USA
