The “Dependent Variable Problem”: How Do We Know What Caused Desired Change?

  • Ann Nevile
  • Nicholas Biddle
Living reference work entry


Governments typically introduce new programs because they want to achieve some sort of positive change. While policy makers may have a clear idea of what they want to achieve, there is often less certainty about the most effective way of achieving desired policy goals or even whether achieving a policy goal is possible given current financial constraints and/or institutional arrangements. Even when desired policy goals are achieved, it may be difficult to isolate which factor (or combination of factors) had the most significant impact. This difficulty is known as the dependent variable problem.

After introducing and defining the dependent variable problem, the chapter identifies the research and policy questions that are explicitly concerned with causality, and the methods and techniques policy makers can use to better understand which factor (or combination of factors) is facilitating, or impeding, desired change. The chapter discusses the strengths and limitations of randomized controlled trials, in which outcomes are compared across two groups, one of which receives the intervention; quasi-experimental approaches; and cross-case comparisons of sequential events. The chapter makes the point that, in the real world, causality is often complex, nonlinear, and/or influenced by macro-level factors beyond the control of policy makers responsible for a particular policy or program. For this reason, policy makers need to think carefully about the particular combination of techniques capable of generating the sort of information that will help them understand causal linkages.
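The logic of the randomized controlled trial described above can be illustrated with a minimal simulation. The sketch below is not drawn from the chapter itself; it uses entirely hypothetical, simulated data to show why random assignment lets a simple difference in group means estimate an intervention's average effect.

```python
import random
import statistics

random.seed(42)

# Hypothetical trial: each participant is assigned to the treatment
# or control group by a coin flip. Because assignment is random, the
# two groups are comparable on average, so the difference in mean
# outcomes estimates the average effect of the intervention.
TRUE_EFFECT = 2.0   # assumed effect of the (hypothetical) intervention
participants = 1000

treated, control = [], []
for _ in range(participants):
    baseline = random.gauss(10.0, 3.0)  # outcome absent any intervention
    if random.random() < 0.5:           # random assignment
        treated.append(baseline + TRUE_EFFECT)
    else:
        control.append(baseline)

estimate = statistics.mean(treated) - statistics.mean(control)
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

Note that this comparison identifies the effect only because assignment is random; with self-selected groups, the difference in means would also capture pre-existing differences between the groups, which is exactly the dependent variable problem the chapter addresses.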


Keywords: Causal inference · Randomized controlled trials · Cross-case comparison · Sequential change · Regulatory interventions



Copyright information

© The Author(s), under exclusive licence to Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Centre for Social Research and Methods, The Australian National University, Canberra, Australia

Section editors and affiliations

  • Helen Dickinson
  1. University of New South Wales, Canberra, Australia
