Making a difference: a practical approach to evaluating the impact of professional development

This blog was written by Vivienne Porritt (@LCLL_Director), Assistant Director for School Partnerships at the Institute of Education (IOE). It is one of the articles in our National Teacher Enquiry Network December end of term newsletter (sign up here).

Evaluating the impact of continuing professional development (CPD) is something many educational leaders struggle with, and they are looking for practical, simple ways to achieve it. The Logical Chain (Ofsted, 2006) noted that ‘Few schools evaluated the impact of CPD on teaching and learning successfully’, a situation which appears not to have improved much according to more recent inspection evidence (Ofsted, 2010) and research. The latest Ofsted inspection schedule places increasing importance not on what school leaders and teachers do but on the difference their actions and improvement strategies make – and this is seen as raising the bar in terms of a tough approach.

So why is impact evaluation seen as hard?

In a recent webinar for the Teacher Development Trust, Russell Hobby, General Secretary of the National Association of Headteachers, suggested:

There’s a gap between practitioners and researchers … in the sense that I suspect that not all school leaders do know what effective CPD is, or are confident that they know which aspects of CPD will help them deliver their priorities.

If teachers and leaders have yet to apply fully the wealth of research evidence on what effective professional learning and development look like, it is not surprising that existing knowledge about how to evaluate the impact of CPD goes unused, in spite of significant evidence based on research in schools.

An overview of approaches to impact evaluation is given by Bubb and Earley (2007), who cite Kirkpatrick’s (1959) work on impact evaluation, which identified impact at four levels: reactions, learning, behaviour and outcomes. Thomas Guskey (2000) introduced a significant focus on evaluating CPD through the impact it has on learning outcomes for young people. Guskey sees impact as being achieved at five potential levels:

  1. participants’ reactions
  2. participants’ learning
  3. organisation support and change
  4. participants’ use of new knowledge and skills
  5. student learning outcomes

Crucially, he argues that we need to pay attention to all five levels of impact, especially levels 2–5, if the goal of improving classroom learning is to be achieved. And this, to me – and I’m sure to you – is the central goal of impact evaluation. I am much less concerned about whether the sandwiches were good, how many people turned up to the session, or even what people think they might do as a result of engaging in professional development. I think we have to see these kinds of measures for what they are – a summary of people’s reactions to what they are engaging with (Level 1) – and that information is much more useful to those leading or facilitating a session so they can improve it for the future. This has little to do with the evaluation of impact, and I think it is time to get tough with ourselves as to what this concept really means.

Following Guskey, Goodall et al. (2005) investigated the range of evaluative practices for CPD. Using Guskey’s levels as a framework, they found that schools lacked the experience, skills and tools to evaluate the impact of CPD. Not surprisingly, then:

“The impact of CPD on student learning was rarely evaluated by schools in the study and if done so, was rarely executed very effectively or well.”

In 2007, the Training and Development Agency for Schools (TDA) published eight principles for impact evaluation which build on Kirkpatrick’s and Guskey’s frameworks.

So, on the one hand we have a range of models and frameworks that are very well known in academic circles and, on the other, we have school and CPD leaders yet to employ such tools effectively.  Why is this?

Evaluation of CPD seems to have got stuck at the first level of participant reactions (‘happy sheets’), and organisations and leaders are unsure how to move past this. I’m also concerned that impact evaluation still focuses on CPD activity (what has happened) rather than on the difference the development activity makes for the participants and the young people with whom teachers work (the change that has been brought about).

Over the last five years a team of colleagues at the London Centre for Leadership in Learning at the IOE has developed a practical approach to impact evaluation that is simple in concept yet rigorous in the difference it can make. The initial thinking behind this approach was first highlighted in London’s Learning (Porritt, 2005).   This resource for CPD leaders in London explored Guskey’s key concept that:

“Good evaluation does not need to be complex; what is necessary is good planning and paying attention to evaluation at the outset of the professional development program, not at the end.”

Traditional impact evaluation tends to take place at the end of a development activity. Such summative evaluation may not help you to know what works and what doesn’t for the children and young people you support every day. We believe that all initial planning as to the potential impact of CPD should be undertaken before the CPD activity starts, not after. And by impact we mean, for example, stating specific changes in a teacher’s classroom strategies, or clarity about a changed approach by a middle leader to addressing variation in teaching quality in her team. In terms of learning outcomes, we want to agree at the outset the differences in how children will learn as a result of the proposed CPD activity – for example, pupils will move from using closed questions to using higher-order questioning. This is a simple concept to agree, yet it requires a significant change in the CPD practice of many organisations.

Such initial planning leads naturally to some rigorous questions, the answers to which help an organisation, team or individual design an approach that offers practical solutions. We have built on Guskey’s framework and developed it further by asking about the type and size of the difference teachers and leaders want to achieve through CPD. In particular, we believe that establishing current practice – the baseline – is vital in helping colleagues articulate the quality and depth of the subsequent impact on adult practice and young people’s learning.

Working in such a way with schools and CPD leaders has led to the evolution of a simple yet rigorous approach to impact evaluation.  Much more significantly, our approach enables colleagues to design more effective improvement and development processes that have greater potential to ensure impact is achieved.  As a CPD leader stated,

“This will focus my planning and give me much greater clarity on what I want to achieve. It will sharpen my attention to specific steps in my support for colleagues.”

This means that impact evaluation has the potential to be, first and foremost, a powerful method for raising the quality of learning and standards. Accountability requirements and value for money then become helpful bonus balls rather than the reason we evaluate the impact of CPD. This is an important point for me. I believe many teachers tend to view impact evaluation as being about demonstrating external accountability, and so have not looked to innovate in this field. If we see impact evaluation as a high-quality learning tool for professionals as well as for young people, we can bring about a step change in its application: fewer ‘happy sheets’ and more evidence of real difference.

We have tested this approach in two projects for the TDA (as it then was):

  • Effective Practices in CPD – more than 600 schools, local authorities and organisations
  • The Leadership of CPD – local authorities and schools in all UK regions

In particular, we have explored this way of working within a Master’s module for CPD leaders at the Institute of Education, Innovative Leadership of CPD, through tailored programmes for Aspiring Leaders at all levels, and through workshops and seminars in schools and at the IOE. Colleagues have found the approach both challenging and stimulating:

“I now understand that CPD has to have an impact on learning and bring about change.”  Participant, Effective Practices in CPD

“This really brought home the importance of impact evaluation as a tool for focusing school development.”  Participant, Innovative Leadership of CPD programme

We find that colleagues who use this approach rigorously then question the range and purpose of traditional CPD approaches and focus on the impact to be achieved for individuals, the organisation and young people. This means that more schools know what effective CPD looks like and know how to evidence and articulate the difference it is making. If the principles and approaches of impact evaluation are better understood and established from the outset of a development activity, rather than as an afterthought or an accountability measure, then impact evaluation becomes a powerful tool for making a difference to children’s learning. That is the exciting potential – and the continued challenge – of looking again at the purpose of evaluating the impact of CPD.

If you are interested in exploring this approach, contact me at v.porritt@ioe.ac.uk or take a look at the case studies from the project for the TDA published in Effective Practices in Continuing Professional Development: lessons from schools (Earley and Porritt, 2009).

References

Bubb, S. and Earley, P. (2007) Leading and Managing Continuing Professional Development (2nd edition), London: Sage.

Goodall, J. et al. (2005) Evaluating the Impact of Continuing Professional Development (CPD), Department for Education and Skills.

Guskey, T. R. (2000) Evaluating Professional Development, Thousand Oaks, CA: Corwin Press.

Ofsted (2006) The Logical Chain.

Porritt, V. (2005) London’s Learning: developing the leadership of CPD, Department for Education and Skills.