Is there evidence for CPD?

Responding to a new review that finds no evidence of impact of CPD

David Weston is Chief Executive of the Teacher Development Trust and Chair of the Department for Education (England)’s CPD Expert Group.

In a notable recent systematic review, Filges et al. (2019) summarise their findings as showing “little evidence of the effectiveness of continuing professional development (CPD)”. What does this mean? In this blog, I’ll reflect on this statement and look a little deeper at this interesting new review and its implications.

Exploring the study

The ‘no effect’ finding comes from combining the only 26 experimental studies of CPD that could be found that met the authors’ quality criteria – the criteria set by the Campbell Collaboration. These studies were filtered to look at the impact of CPD on

  • social and emotional development (9 studies), and 
  • language and literacy (17 studies).

The chosen studies had to be experimental – i.e. studies in which a large number of teachers and schools sign up and are allocated randomly to either a ‘treatment’ group, who receive the intervention, or a control group, who do not. The authors only included studies carried out to the highest methodological standard, free of any potential biases (e.g. they excluded studies where an evaluator was also involved in delivery or worked for the same organisation as the providers).

The authors defined CPD ‘treatment’ as any formal intervention focused on enhancing a specific area of professionals’ knowledge and skills, led by a ‘training entity’ which could include formal training inputs, coaching/mentoring and professional learning communities. Most programmes included a combination of workshops with opportunities to be observed and receive feedback, sometimes using video. Most programmes lasted a year.

The impact of this is compared with the control group, most of whom are described as receiving ‘business as usual’ – i.e. whatever training was already being provided to the teachers and whatever informal support and personal learning was already occurring. In some cases the control group instead received a more limited version of the intervention, or had access to coaches, video and support materials but without formal training.

With only 9 and 17 studies included respectively, the authors were not able to explore whether any particular feature of CPD made it more or less effective. There is therefore no attempt to examine the quality of programme design and implementation, nor do the studies provide information about the professional environment or leadership within the participating schools.

What does this mean?

My view is that the headline finding of this report could perhaps be re-written rather less dramatically as:

In the 26 highest-quality studies of formal CPD programmes in social and emotional development and language & literacy, there was, on average, no significant difference in outcomes for students compared to what teachers were already receiving.

For me, there are two important messages to reflect on here:

  1. There are far too few studies of CPD which meet the highest possible quality criteria. We really do need more and better research in this area. There’s a strong case for conducting these sorts of reviews with extremely strict inclusion criteria and comparing the findings critically against reviews with lower bars – Filges et al. explore these comparisons systematically and carefully. On the other hand, we shouldn’t dismiss systematic reviews that take different approaches, such as Kraft et al (2018), Maandag et al (2017), Kennedy (2016) and meta-reviews like Cordingley et al (2015), and we need to weigh up findings from all these approaches.
  2. Studies tend to focus on whether a whole intervention works, or not. There are too few studies that try several different CPD designs and compare them. This is an important point – we’re trying to infer what works, whether it works and how it works from a too-limited range of evidence.

I strongly welcome this new study, which raises important questions. It also takes us to the question of how to interpret the headline about a lack of evidence of effectiveness.

What’s the evidence for CPD?

Is there evidence that CPD works? Is it worthwhile for school leaders to invest effort in this area? I would say yes – albeit acknowledging my own inevitable biases as someone who has previously spoken and written on the topic, and who is employed by an organisation that provides services based on the assumption that it does. That said, I think we can reasonably draw evidence from four sources:

  1. Broader systematic reviews: there have been a number of other reviews which exclude fewer papers and have concluded that well-designed CPD can lead to a positive impact on outcomes. As mentioned above, I’d include, for example, Kraft et al (2018)’s review and Cordingley et al (2015)’s meta-review. Clearly, there are dangers in reviews that include studies with more flaws and potential biases, but there are also dangers in trying to draw a conclusion about efficacy while excluding most of the literature.
  2. Environmental studies: mainly drawn from teacher survey data, studies such as Johnson et al (2012), Kraft & Papay (2014) and Helal & Coelli (2016) find a correlation between the amount of CPD and improvement in teacher impact on pupils. Some of these find tentative evidence that it is the CPD that caused the improvement, not the other way around.
  3. School improvement literature: reviews that explore school turnaround by looking at the actions of school leaders, such as Meyers & Hitt (2017), find that successful turnaround (including improvement in student attainment) is associated with a focus on professional development:

    “Not only do turnaround principals ensure that professional-development opportunities are available (Jacobson et al., 2007), they strategically ensure them through establishing common planning periods, providing professional-development or additional release time, and disseminating research materials to staff, as necessary (Aladjem et al., 2010)” [Meyers & Hitt (2017)]
  4. School leadership literature: systematic reviews such as Robinson et al (2009) and Liebowitz & Porter (2019) find a statistically significant impact from leaders’ focus on instructional leadership, including a focus on teacher professional development.

Returning to the Filges et al review, it must be noted that a lack of high-quality evidence of the impact of CPD is not the same as high-quality evidence of a lack of impact of CPD. It may be that we have to assemble a case for CPD from a variety of sources, but there is clearly a highly plausible case to be made. I think it would also be hard to argue a plausible case that leaders shouldn’t focus on professional development.

At the Teacher Development Trust, our charitable mission is to make schools places where teachers get better, faster. While it’s great to have a real debate about the evidence around professional learning, we want to move away from the idea that development is something that is done to teachers, in the same way we no longer think of education as something done to students. The debate about evidence gets us quickly wrapped up in packages and interventions, seeing schools and teachers as mere passive recipients. But as Kraft and Papay (2014) demonstrated, the improvement of teaching is as much about the professional environment as it is about the content of training. If we’re going to have more schools where teachers keep improving, we need to think both more deeply and more broadly, making staff learning just as much of a priority as student learning. Only then can we achieve powerful professional learning that helps students succeed and teachers thrive.

References

Filges, T., Torgerson, C., Gascoine, L., Dietrichson, J., Nielsen, C., & Viinholt, B. A. (2019). Effectiveness of continuing professional development training of welfare professionals on outcomes for children and young people: A systematic review. Campbell Systematic Reviews, 15, e1060. https://doi.org/10.1002/cl2.1060

Kraft, M. A., Blazar, D., & Hogan, D. (2018). The effect of teacher coaching on instruction and achievement: A meta-analysis of the causal evidence. Review of Educational Research, 88(4), 547–588.

Maandag, D., Helms-Lorenz, M., Lugthart, E., Verkade, A., & van Veen, K. (2017). Features of effective professional development interventions in different stages of teachers’ careers: NRO report. Teacher Education Department of the University of Groningen.

Kennedy, M. M. (2016). How does professional development improve teaching? Review of Educational Research, 86(4), 945–980. https://doi.org/10.3102/0034654315626800

Cordingley, P., Higgins, S., Greany, T., Buckler, N., Coles-Jordan, D., Crisp, B., Saunders, L., & Coe, R. (2015). Developing great teaching: Lessons from the international reviews into effective professional development. Teacher Development Trust.

Johnson, S. M., Kraft, M. A., & Papay, J. P. (2012). How context matters in high-need schools: The effects of teachers’ working conditions on their professional satisfaction and their students’ achievement. Teachers College Record, 114(10), 1–39.

Kraft, M. A., & Papay, J. P. (2014). Can professional environments in schools promote teacher development? Explaining heterogeneity in returns to teaching experience. Educational Evaluation and Policy Analysis, 36(4), 476–500.

Helal, M., & Coelli, M. (2016). How principals affect schools. Melbourne Institute Working Paper No. 18/16.

Meyers, C. V., & Hitt, D. H. (2017). School turnaround principals: What does initial research literature suggest they are doing to be successful? Journal of Education for Students Placed at Risk (JESPAR), 22(1), 38–56. https://doi.org/10.1080/10824669.2016.1242070

Robinson, V., Hohepa, M., & Lloyd, C. (2009). School leadership and student outcomes: Identifying what works and why. Best Evidence Synthesis Iteration (BES).

Liebowitz, D. D., & Porter, L. (2019). The effect of principal behaviors on student, teacher, and school outcomes: A systematic review and meta-analysis of the empirical literature. Review of Educational Research, 89(5), 785–827. https://doi.org/10.3102/0034654319866133
