https://innovateuk.blog.gov.uk/2018/02/15/introducing-our-evaluation-framework-how-we-evaluate-impact/

Introducing our evaluation framework – how we evaluate impact

Evaluating the impact of what we do is of great importance to Innovate UK. It is vital to demonstrate and understand our impact, and to ensure we are achieving the greatest impact possible with the funds we are entrusted with.

Evaluation isn’t just about measuring the impact of a programme. It is just as important as a tool for understanding why and how programmes have an impact, the pathways to that impact, and which aspects of a programme are more or less effective.

Evaluating our impact

Evaluation of innovation support is, however, a difficult task, with challenges in identifying, measuring, and attributing impact for any specific programme.

As part of our journey to improve our evaluation programme and overcome some of those difficulties, I am proud to introduce Innovate UK’s first ever published evaluation framework, which sets out our approach to the evaluation challenges that we, and many other parts of the public sector, face.

What are the challenges?

All of Innovate UK’s activities have one overarching goal in mind: to increase UK economic growth through connecting and funding business-led innovation. The impact of our activities is, as a result, difficult to capture. The returns to innovation occur over several years, and not always in the companies we directly support.

The main output of our programmes is new knowledge. Knowledge is intangible, and whilst it can be protected with a patent or kept as a trade secret, it flows with the people involved in innovating, to the customers buying a new product, and to the suppliers involved in producing it. Through these channels, impacts spill over to the wider economy in ways we’re not able to predict or capture.

Innovation programmes only directly support a relatively small number of companies – around 10,000 in Innovate UK’s history. Compare this with the millions supported in other areas of government policy, such as education and welfare, and we are faced with a small sample in any one programme with which to conduct statistical analysis of impact. This can cause significant problems for many of the traditional evaluation methods we would seek to employ.

Overcoming challenges

To help us overcome the problems we face in understanding and demonstrating our impact, we have implemented a number of approaches to mitigate the challenges at hand and to continuously improve the rigour and reliability of our evaluations. These approaches include:

  • Introducing an improved monitoring process, to capture more consistent and comprehensive data on the projects we fund
  • Conducting evaluations over longer time-periods, with larger cohorts and a logic model approach to demonstrate outputs and outcomes before full impacts are realised
  • Utilising third-party data to improve our understanding of impact, drawing on verified sources to build a more complete picture of the companies we work with

Through these measures, we have been able to implement some of the most robust evaluation methods available, including randomised controlled trials and regression discontinuity designs.
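To give a flavour of the second of these methods: in a funding competition, applicants scoring just above and just below the quality threshold are very similar, so the jump in outcomes at the threshold estimates the effect of funding. Below is a minimal sketch of a sharp regression discontinuity design in Python, using simulated data; the column names, cutoff, bandwidth and effect size are all illustrative assumptions, not Innovate UK data or code.

```python
# Minimal sketch of a sharp regression discontinuity design (RDD) around a
# funding threshold. All values and names here are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
score = rng.uniform(0, 100, n)            # hypothetical assessor score
cutoff = 70.0                             # hypothetical funding threshold
funded = (score >= cutoff).astype(int)    # sharp design: funding tracks the score exactly
# Simulated outcome: an underlying trend in score plus a true funding effect of 5
growth = 0.1 * score + 5.0 * funded + rng.normal(0, 3, n)
df = pd.DataFrame({"score": score, "funded": funded, "growth": growth})

# Local linear regression within a bandwidth of the cutoff, with the running
# variable centred so the 'funded' coefficient is the effect at the threshold.
bw = 15.0
local = df[(df["score"] - cutoff).abs() <= bw].assign(centred=lambda d: d["score"] - cutoff)
model = smf.ols("growth ~ funded + centred + funded:centred", data=local).fit()
print(model.params["funded"])  # estimated jump in outcomes at the funding threshold
```

In practice, bandwidth selection and robustness checks matter a great deal; this sketch only illustrates the core idea of comparing firms either side of the threshold.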

Where challenges remain which limit the use of these data-heavy methods, we have introduced more theory-based approaches, combining a wide range of methods to build the most complete picture of impact possible.

Lessons learned

The approach we have taken to improving our evaluation framework provides many lessons that are applicable across the public sector. Applying these lessons supports a consistent and transparent approach to evaluation, which in turn enables more robust evidence, a greater understanding of what works and why, and, therefore, better programme design and delivery.

  • Design evaluation into the programme: it’s never too soon to start evaluating!
  • Data is key: it is vital to ensure you’re collecting the right data at the right time.
  • Evaluation is often very difficult: but this doesn’t mean it should not be attempted – instead, ensure the most robust methods practical are applied.
  • Sample size is fundamental: try to design an evaluation which covers at least 200-300 beneficiaries (a rough power calculation, sketched after this list, shows where a figure of that order comes from).
  • Do not get too preoccupied with a single number: the headline return-on-investment figure is important, but the narrative and lessons around it will inform decision-making just as much, if not more.
  • Be innovative when evaluating: ensure a robust core method is in place, but then look to push the boundaries a little – learn something new about how to evaluate each time you do.
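The 200-300 figure is consistent with a standard power calculation. The sketch below is illustrative only and is not taken from the framework: the effect size (Cohen’s d of 0.25, a small effect on a firm-level outcome) and the two-group design are assumptions chosen for the example.

```python
# Rough power calculation: how many firms per group are needed to detect a
# small effect (Cohen's d = 0.25) at 5% significance with 80% power?
# The effect size is an illustrative assumption, not an Innovate UK figure.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.25, alpha=0.05, power=0.8)
print(round(n_per_group))  # ~250 supported firms (plus a comparison group of similar size)
```

With only a few dozen beneficiaries, by contrast, only implausibly large effects would be statistically detectable, which is why small programmes so often struggle to evidence their impact.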

There’s much more on these lessons in the framework – have a read and hopefully it will help others improve their own approaches. We’re always looking for ways to improve our own evaluation, and we’re keen to lead a wider debate about the best way to do this – so please do get in touch with any ideas!

Contact us:

Follow Dan Hodges on Twitter @EconDanH
