Evaluation5's Blog

February 9, 2011

On Complexity & Evaluation

Filed under: evaluation5.0 — evaluation5 @ 2:34 pm

Marlen Arkesteijn/CapturingDevelopment

Before reading this blog, please go to “Anecdote. Putting stories to work” for a simple and charming animated video on the Cynefin framework for making sense of complexity. Unfortunately I could not embed the video here!

Michael Quinn Patton, 2008

These days ‘complexity’ is a real buzzword in the world of monitoring and evaluation. In May 2010 we had the CDI conference on ‘Evaluation Revisited: Improving the quality of evaluative practice by embracing complexity’. Not long before that, a similar event took place in Australia; various blogs are dedicated to complexity (e.g. Rick Davies), Patton is writing about it, and a few months ago I had an assignment on ‘The suitability of the MSC method for the evaluation of complex programmes’ for Wageningen University as well.

Yesterday another seminar took place on M&E and Complexity: “Planning, monitoring and evaluation in Complex Social Situations”, organised by DPRN.

During this seminar complexity was not further defined, and the discussion focussed on what types of tools could be useful for complex programmes. This resulted in a debate on LogFrame approaches versus ‘alternative’ methods like Outcome Mapping and the Most Significant Change method. With around 100 participants in the venue, this inevitably caused a lot of confusion and black-and-white opinions, with LogFrame approaches ending up at the least popular end of the spectrum.

I think most of us agree that the discussion should not really be about tools, but about the various aspects of programmes: exploring how we can best monitor and evaluate progress and learn to do better (if needed), and only then discussing what types of tools could be useful.

I particularly like the work by Patricia Rogers (‘Using programme theory to evaluate complicated and complex aspects of interventions’, Evaluation, 2008, 14(1): 29) and Michael Quinn Patton (Getting to Maybe, 2009). They unravel the ‘complexity’ issue, building on the work of Kurtz and Snowden (Cynefin framework, 2003) and Glouberman and Zimmerman (2002), and show how programmes can have various simple, complicated and complex aspects. Rogers also shows how LogFrame-minded models can still help to unravel, analyse and thus understand complex aspects of programmes.

So complex programmes typically have simple aspects (usually at the level of inputs and outputs, and to a lesser extent outcomes) that can be dealt with using conventional results-based monitoring and evaluation tools.

My credo is: use your common sense, and count what CAN be counted and what is USEFUL to count. For complicated and complex aspects, use methods that pay heed to non-linearity and emergent outcomes/impact, such as Most Significant Change, Responsive Evaluation, Fourth Generation Evaluation, Reflexive Monitoring, etc. Or, in the words of Bob van der Winden, one of my colleagues: ‘Do not beat a drum with an axe’.

And explore, investigate, study how your programme works, and do not be easily satisfied!

