Evaluation5's Blog

November 15, 2011

Systems thinking in evaluation once more

Filed under: Evaluation theory, evaluation5.0, interactivity, learning — capturingdevelopment @ 3:07 pm

Posted by Marlen Arkesteijn/CapturingDevelopment

Yesterday I attended the live webinar on 'Systems Thinking for Equity-focused Evaluation', organized by UNICEF, UN Women, the Rockefeller Foundation and a bunch of other organisations and institutes that together formed MY M&E (a really awesome and informative website with all you ever wanted to know about monitoring and evaluation).

Yesterday's seminar is just one of the very many webinars they organize on evaluation (there are series of webinars on equity-focused evaluations, on emerging practices in development evaluation, on developing capacities for country-led M&E systems, on country-led M&E systems, etc.). Every other week or so you can attend -for free- lectures delivered by top-notch evaluators and methodologists like Michael Quinn Patton, Patricia Rogers, Bob Williams, Martin Reynolds, etc., and theoretically debate with them! Yes, we live in a world of wonders!

Yesterday Bob Williams and Martin Reynolds, both renowned system thinkers/evaluators, gave short introductions on 'Systems thinking for Equity-focused Evaluations' for a global interactive classroom of nearly 100 participants. Bob Williams briefly explained the key principles of 'thinking systematically': inter-relationships, perspectives and boundaries. These are the three principles many methods from the 'system field' have in common (Williams claimed there were about 1200-1300 methods in the system field!). Martin Reynolds dived into the crossroads between equity-focused evaluation and one of the system methods: Critical Systems Heuristics (CSH).

Although Martin Reynolds' presentation looked rather impressive, the complexity of his story, combined with some technical disturbances, made his lecture hard to follow and understand. There was one topic, though, that really made me prick up my ears! He was talking about the steps in CSH, starting with making an 'Ideal Mapping of the Ought' (sounds like a fairytale), followed by a descriptive mapping comparing 'Is' with 'Ought', and other steps. This 'Ideal Mapping of the Ought' is placed at the beginning of the whole exercise to provoke 'blue sky thinking' and to let people realize that reality is constructed, and can be re-constructed if we really want to.
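To make the 'Is'/'Ought' comparison a bit more tangible, here is a minimal sketch of how it could look in practice; this is my own illustration, not material from Reynolds' webinar, and the example boundary questions and answers are purely hypothetical.

# Minimal sketch of a CSH-style 'Is' versus 'Ought' comparison, as I understood it
# from the webinar. The boundary questions and the answers are my own hypothetical
# filler, not Reynolds' slides; the point is only the structure of the exercise.
boundary_judgements = {
    "Who ought to be / is the beneficiary?": {
        "ought": "the most marginalised households",
        "is": "the households that are easiest to reach",
    },
    "What ought to be / is the measure of improvement?": {
        "ought": "reduced inequity between groups",
        "is": "average improvement across all groups",
    },
}

for question, answers in boundary_judgements.items():
    gap = answers["ought"] != answers["is"]
    print(question)
    print("  ought:", answers["ought"])
    print("  is:   ", answers["is"])
    print("  gap to discuss:", gap)

The interesting conversations, of course, start where 'ought' and 'is' diverge.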

Why did this remark raise my interest? Well, if you have followed my earlier blogs, my quest is very much 'How can evaluation contribute to re-construction?' Or in other words, how could evaluation contribute to 'system change'? Bob Williams commented on Reynolds' point, saying that it made him think very much of organisational development and 'vision' building, and that is certainly true as well.

And all that brings me again to my eternal question: 'How does system thinking contribute to evaluation practice?' Is it the emperor's new clothes, or can it really contribute something solid? Again, I come to the conclusion that it is not so much about the tools and instruments from the systems field itself, but about the way of thinking. Think big, act small, see our world as one big construction site, take nothing for granted, and challenge the existing rules of the game. Let evaluation (either with or without system thinking) help us contribute to the transformation of this world!

Next week, on 22 November, Patricia Rogers will give a lecture, and on 6 December 2011 it is Michael Quinn Patton's turn! You are strongly advised to join!


July 28, 2011

Maria Joao Pires & Evaluation Practice

Filed under: behaviour, evaluation5.0 — capturingdevelopment @ 2:39 pm

Posted by Marlen Arkesteijn/CapturingDevelopment

What has Maria Joao Pires -the renowned pianist- to do with evaluation practice? Well, at first glance, for most people likely nothing, but in my reality, or better, in my brain, Maria makes a great connection with evaluation.

It was already quite some years ago that I came across a documentary on Maria Joao Pires. In this documentary you see her students struggle with some of the most complex piano pieces, intertwined with shots of the gorgeous surroundings of her farm in Portugal. Although I am not a connoisseur, I guess the students played -technically- superbly and showed great virtuosity!

Despite their virtuosity, Maria was -most of the time- not impressed. I do not recall exactly what she said, but it was very much along the lines of 'Yes, technically you played the piece very well, but tell me, why should you play this piece? What did you add to this piece? How did you interpret it? I want you to put your soul into this piece! Otherwise the piece could be played by anybody else. What makes your piece different from the (same) piece student X is playing?' (After writing this I found some clips on YouTube; aughh, memory is a feeble thing. Anyway, for the point of this blog it does not make much difference ;-)).

I am not saying that evaluators are piano players, but Maria has a point here, also for evaluators. As evaluators we need to have expertise (knowledge, technical, procedural and intellectual) as a ground rule. Without this expertise we are nowhere and not worth hiring anyway. The question here is: is that enough? If we are virtuosic in our expertise, does that suffice to be a 'good' evaluator?

During a dinner gathering with other evaluators (organised by Evaluation 5.0), we discussed -part of- this topic. A first additional qualification that good evaluators (in our view) should have, we concluded, is proper behaviour. The outcomes of an evaluation are influenced by many different factors, but one we have a certain control over is our own behaviour. When we are directive, the evaluated will very likely be defensive or timid. When we are open and truly listening, the evaluated may be open too and speak his or her mind.

But still, does this qualify us as good evaluators? Not necessarily. So do we need to put our soul into our work, just like the piano players should according to Maria? I am not quite sure about that. But what we do need to do is be aware of our vision and motivation. What is it we are actually doing? Are we mainly earning money? Or do we want to contribute to a more just and sustainable world through our practice? Shouldn't we first clarify our vision, and then use our expertise and behaviour to contribute to that vision?

Not that I have my vision ready, but I could start trying and ask myself 'Why should I do this evaluation and not somebody else?'.

June 24, 2011

Reading Michael Quinn Patton

Filed under: Evaluation theory, learning — evaluation5 @ 1:04 pm

By Marlen Arkesteijn, Capturing Development 24 June 2011

Since I am writing an article on development cooperation and its M&E approaches -and naturally to keep myself updated- I read Michael Quinn Patton's latest book (2011), 'Developmental Evaluation. Applying Complexity Concepts to Enhance Innovation and Use', published by The Guilford Press, New York.

To avoid any confusion: Developmental evaluation has nothing in particular to do with development cooperation. 'Developmental' refers to the approach that Patton follows. He writes (building on a quote from Pagels): "Evaluation has explored merit and worth, processes and outcomes, formative and summative evaluation; we have a good sense of the lay of the land. The great unexplored frontier is evaluation under conditions of complexity. Developmental evaluation explores that frontier." (pg 1)

So Developmental evaluation is an evaluation approach for dealing with complexity. However, various practitioners and evaluation professionals have started using Developmental evaluation within development cooperation.

The book is a very good read, with illustrative examples and hilarious anecdotes. Patton describes, for example, how he came up with Developmental evaluation. He was working for a programme, using formative and summative evaluations as his repertoire, while the team did not want to arrive at a fixed model that could be tested in a summative evaluation. "We want to keep developing and changing", they stated... "Formative evaluation! Summative evaluation! Is that all you evaluators have to offer?", one of the team members exclaimed. 'Frustration, even hostility, was palpable in his tone.' ... "Well," I said, seeking inspiration in my coffee cup, "I suppose we could do, umm, we could, umm, well, we might do, you know… we could try developmental evaluation!" (pg 3)

Developmental evaluation supports innovation development to guide adaptation to emergent and dynamic realities in complex environments, so it is quite different from regular evaluation approaches that focus more on control and on finding order in the chaos. Patton mentions five complex situations developmental evaluation is particularly appropriate for:

  1. Ongoing development in adapting a program, policy, or innovation to new conditions in complex dynamic systems;
  2. Adapting effective principles to a local context as ideas and innovations are taken from elsewhere and developed in a new setting;
  3. Developing a rapid response in the face of a sudden major change, exploring real time solutions;
  4. Preformative development of potentially broad impact scalable innovation;
  5. Major system change and cross-scale development evaluation.

A key feature of developmental evaluation is that it aims to contribute to social change and to 'nurture developmental, emergent, innovative, and transformative processes'. It is not so much about testing and refining a model (formative) or about a judgement (summative). It has a strong action-research component. With this, Patton is embarking on a rather new purpose of evaluation. Of course, other types of evaluation aim to contribute to social change as well, but usually in an indirect way, exploring what works and what doesn't. Developmental evaluation goes a step further and aims to be part of the action, facilitating interventions that may work (or not).

Another, very much related key feature is the 'closeness' of the evaluator to a programme. Instead of a person who only visits at mid-term or at the end of a programme, a developmental evaluator is 'continuously' present: asking questions, probing, exploring with the programme, providing feedback in 'real time' in rather short feedback loops.

These two features are, in my opinion, exactly what may be needed when dealing with complex situations. The situations are complex, unpredictable, non-causal, non-linear, emergent and may need constant attention. Programme or project leaders (in my experience) are often too involved in their management activities to also remain reflective and ask critical questions themselves. A developmental evaluator could provide help.

Overall, it is an inspiring and thought-provoking book, and it offers good guidance without falling into the pitfall of blueprints or steps! Of course it also raises questions, especially where he talks about system change, the fifth complex situation. Here he refers to the work of Bob Williams, who uses a quite broad understanding of 'system', as long as boundaries, perspectives and interrelationships are involved. In the end this means that almost all situations are 'systems', and that is what I see happening in debates.

I think (and correct me if I am wrong) the 'system' concept needs unraveling and 'demystification'. What is really necessary is to challenge the institutional settings and their related norms, values, cultures etc. that reproduce current unsustainable practices. What could help this unraveling is to borrow from concepts and theory used in innovation science.

In my article on development cooperation and M&E approaches, this will be one of the topics I will further explore and discuss. It is going to be an inspiring and hot summer! I hope to write more about this topic in my next blog.

June 12, 2011

Evaluation ‘tools’

Filed under: Uncategorized — bwsupport @ 10:15 am

By: Bob van der Winden

I often get the question: 'What is a good evaluation tool?' Of course I must answer that I don't know, or even worse: that they don't exist…

As in the Zimbabwean (Shona) proverb 'Don't beat a drum with an axe', you must be careful with tools: you cannot use 'one tool fits all'. But that is unfortunately exactly what many evaluation theorists want to make us believe: 'Use my tool and you will solve the X, Y or Z problem!'
It does not work like that, and that can best be shown with the following diagram:

or in less theoretical language, e.g. for a brave project in Zimbabwe:

Most of the time the 'first generation' (quantitative) information can be counted (in the upper left box) and should be counted as well! But this is not much more than what we usually describe as output!

We need 'second generation' information -qualitative: what are people's 'perceptions'- for the upper-right and lower-left boxes, in order to understand more about the outcome (what does the project do with people; what do, e.g., trainees do with their acquired skills?).

Now we also need to know about the impact: what effect does the project have on (in this case Zimbabwean) society?

And then we are in dire straits, where a single tool cannot help us anymore: we will need a host of methods/methodologies to find out about it. How else can you measure something that is unknown as a practice and for which very few standards have been developed?

In my view the best way is to use the 'back of your brain': putting all your knowledge and skills on the scale and starting to work on a tailor-made methodology. Many times, in my case, that will be fourth generation evaluation (see http://www.bwsupport.nl) or one of the many tools we are trying to develop in the fifth generation methodologies with a group of evaluators (see http://www.evaluation-5.net). That can include (parts of) Most Significant Change, Social Return on Investment, Developmental evaluation, Outcome Mapping, logical framework, etc. methodologies.

But the way of operating is virtually always the same: count whatever can be counted -it is certainly necessary, but it is not enough. Personally I prefer qualitative methodology for outcome and a mix of co-creative, constructivist and iterative ways of 'tackling impact'.
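A small sketch of the idea (just an illustration, not the diagram above; the pairing of methods per level is indicative, not a prescription):

# Illustrative sketch only: the levels of the result chain, the 'generation' of
# information discussed above, and some of the methods named in this post. The
# exact pairing per level is indicative, not a fixed recipe.
result_chain = {
    "output": {
        "information": "first generation: quantitative, count whatever can be counted",
        "example_methods": ["logical framework indicators", "simple counts"],
    },
    "outcome": {
        "information": "second generation: qualitative, people's perceptions",
        "example_methods": ["Most Significant Change", "Outcome Mapping"],
    },
    "impact": {
        "information": "no single tool: a tailor-made, co-creative, iterative mix",
        "example_methods": ["fourth generation evaluation", "Social Return on Investment",
                            "Developmental evaluation"],
    },
}

for level, entry in result_chain.items():
    print(f"{level}: {entry['information']}")
    print("  e.g.", ", ".join(entry["example_methods"]))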

May 18, 2011

Media, development and Evaluation

Filed under: Uncategorized — bwsupport @ 1:43 pm

At the moment I'm working with two African media organisations: one in West Africa, one in Southern Africa. They are quite different in almost everything, but the most interesting difference is the goals they are after:
One wants to open up the debate in society in order to promote social justice. That is something media are not always after, especially not commercial media… The second is an exile newspaper, and they want only one thing: to give a voice to the voiceless, yes, but then to tell the facts like they are, whoever is involved.
It makes me think of an old, ongoing debate in journalism: are media meant to promote something -like social justice, or development?- or are media there in the first place to 'tell it like it is'?
Or maybe the two can be linked: can you promote social justice by 'telling it like it is'?
There is a lot of (philosophical) theory written around this theme, in the first place by the German philosopher Habermas (The Structural Transformation of the Public Sphere, in English since 1989), but the discussion is still alive and kicking…
Most concretely: do we want to develop the media (1), do we want to promote development through media (2), or do we want to 'tell it like it is' (3), also where development is concerned? I am quite convinced of the last goal, and much less of the first two.
In my view the media should develop themselves (though they can sometimes use some help), and development should also develop itself, so let the media stick to their core business: 'telling it like it is'!
Why is this so necessary? Not because I think there is only one truth in this multifaceted world. Not because I think any media can ‘tell the truth’. The reason is simply that there is one thing media must have, and that is credibility. If you do not trust the people who are ‘sifting’ the news for you, why would you read them, look at them, listen to them?
If people get the idea -right or even wrong- that whatever media are speaking 'their master's voice', even if that master is a charity of good will, those media will instantly lose credibility.
Now both organisations I'm working with do solve that problem in a certain way: one is above all supporting (community) media that are themselves independent, 'without fear or favour' (as far as possible in autocratic Africa). The danger they are in is that they themselves become the master -small news organisations are willing to talk their talk…
The other is so opposed to the powers that be that it almost becomes a 'mouthpiece' of the opposition, thus also becoming someone's 'pet dog' -at least in the perception of the readership.
The only answer I have is that it's a balancing act, time and again -as it is in my job: evaluating them. Are we the donor's pet dog? Or are we only talking the talk of the beneficiary? It's a balancing act between those two, although the solution is not 'staying in the middle', but rather telling it like it is: credibility is also our best friend, and it can only be earned by staying dead honest.
Bob van der Winden

May 10, 2011

Advocacy: theories of (policy) change

I like to work on the basis of a theory of change in monitoring and evaluation. If a program does not have such a theory, I try, with the people directly involved, to uncover the conscious or unconscious thinking behind the action or the program: on what theory is it based? When they do have a theory, we validate it. A theory of change helps in deciding which indicators to monitor, which proxies are suitable and which may generate circular reasoning. Also, in evaluation a program theory can be key to identifying the key evaluation questions.

A brilliant read on the subject of theories of change was recently published by Patricia Rogers and Sue Funnell, entitled "Purposeful Program Theory" (2011). The book also pays a good deal of attention to visualizing theories of change using a variety of charts and graphs. Rogers writes a lot about evaluation and complexity, and I like what she writes. I am still digesting their book and will blog about it on some other occasion. Here is a link to Genuine Evaluation, Rogers' blog.

To get a good picture of a specific theory of change you may need to abstract somewhat from the details: like looking at a painting through your eyelashes: see less, see more. When information is filtered, you start to see the bigger picture. But it helps, so to speak, to have good eyelashes 😉 On Aid on the Edge of Chaos, the blog from Ben Ramalingam, I found a very readable brief on theories about how policy change happens. The brief, written by Sarah Stachowiak of Organizational Research Services, is very helpful for analyzing and uncovering the theory behind a broad variety of policy advocacy efforts. Stachowiak's paper presents in a very concise way different approaches to a similar goal: policy change. She outlines six broad theories:

  1. The theory of "Large Leaps", where mobilization of the media and new, unexpected alliances create awareness and visibility that result in significant change;
  2. The "coalition" theory, which suggests that change happens through the sustained, coordinated activity of individuals who hold the same core policy beliefs;
  3. The "policy window" theory of change, which suggests that change depends on several things coming together: problems, policies and politics. Only when problems become "political" issues and/or solutions are considered politically acceptable may a window for change exist;
  4. The "messaging and framework" theory, which emphasizes that information, and how it is presented, influences behaviors and decisions;
  5. Based on the "power elites" theory, advocates would aim to influence a selected group of (individual) key decision makers and invest in their credibility among them;
  6. Finally, the "grassroots" theory challenges the power elites theory and claims that groups can hold power, that power is based on cooperation, and that it can be shifted through action and events.

While all these theories may lead to similar results at the level of impact, they each go with different outcomes and results. For monitoring (and evaluation) this is essential. Interestingly, some theories will fit an organization's capacity better than others: to stay close to (my Brussels) home, an organization like Friends of Europe has the network capacity to influence key decision makers, and maybe to take advantage of policy windows, but none to support grassroots communities. Concord, which represents organizations that have the capacity to work on the basis of the "grassroots" and "messaging and framework" theories, promotes action based on a mix of the latter and the "coalition" theory.
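To make that concrete for myself, here is a minimal, hypothetical sketch of how the choice of theory could steer which intermediate outcomes you would monitor. The theory names come from the brief as summarised above; the example outcomes are purely my own illustration, not taken from Stachowiak's paper.

# Hypothetical sketch: each policy-change theory named above, mapped to the kind of
# intermediate outcomes an advocacy monitor might track. The example outcomes are
# my own illustrative assumptions, not Stachowiak's.
theory_to_outcomes = {
    "large leaps": ["volume and tone of media coverage", "new, unexpected alliances formed"],
    "coalition": ["actors sharing the same core policy beliefs", "sustained coordinated activities"],
    "policy window": ["problem recognised as a political issue", "solution judged politically acceptable"],
    "messaging and framework": ["changes in how the issue is framed", "shifts in audience attitudes"],
    "power elites": ["access to key decision makers", "credibility with those decision makers"],
    "grassroots": ["community groups mobilised", "collective actions and events held"],
}

def outcomes_for(theory):
    """Return the illustrative intermediate outcomes for a chosen theory of change."""
    return theory_to_outcomes.get(theory.lower(), [])

print(outcomes_for("policy window"))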

I am not qualified to judge which theory holds true in a general sense, and for my work in supporting monitoring and evaluation that is not needed. The brief provided me extra eyelashes: a very useful framework that helps -to name a few possibilities- to:

  • trigger a process in which an organization develops a theory of change and frames what it and others actually do in a particular context, without having to start from scratch;
  • assess whether a change strategy fits an organization's capacities (and the situation);
  • facilitate strategy debates in networks of diverse organizations;
  • identify or uncover a coherent set of intermediate results, outcomes and outcome indicators for a particular advocacy strategy.

If you want to read the brief (it is a mere 14 pages, with very clear graphic representations of the theories) you can download it here.

By the way, some of these theories of change give pretty pictures…

February 9, 2011

Complexity versus System approaches in evaluation

Filed under: Evaluation theory, evaluation5.0 — evaluation5 @ 2:41 pm

Posted by Marlen Arkesteijn/CapturingDevelopment

Next week, on 25-26 January 2011, the GTZ Conference on Systemic Approaches in Evaluation will take place. Since we (Barbara/WUR and I) will present our Reflexive Monitoring in Action approach there, Rosien, one of my Evaluation 5.0 colleagues, asked: 'Is this a counter movement against the complexity gurus?' 'I think', said Bob, also an Evaluation 5.0 colleague, 'you could see the complexity discussion as a branch of the systemic approaches.' 'Well', I said, 'I actually think that the system thinkers are a branch of the complexity thinkers.' 'Mmm', Bob replied, 'this is an issue we could probe a bit further into.' And so I do.

I started reading Bob Williams, a renowned thinker on using systems concepts in evaluation. In the FASID document 'Issues and Prospects of Evaluations for International Development' (2010) he says something interesting: "The history of the systems field is (…) rooted in addressing complicated and complex problems with limited time and with restricted resources." (pg 37) In my own words, it is a way of 'dealing', within M&E approaches, with complicated and complex problems; looking glasses that help you make sense of a situation. Mmm, can I think of any other M&E approaches dealing with complicated and complex problems? Oh yes, many constructivist methods deal with complicated and complex problems. So then, what does system thinking add that other approaches do not?

Williams also states in his new book "Systems Concepts in Action. A Practitioner's Toolkit" (2010, Stanford Business Books) that the principles of system thinking can be expressed by three concepts: inter-relationships, perspectives and boundaries. I admit system thinking can give you a broader outlook on situations, because you see developments in (non-causal/non-linear) relation to each other, and you see multiple perspectives. But that is something a good Theory of Change exercise could do as well… What then does system thinking really add? I cannot really find an answer in Williams' writings.

In my own practice I definitely see the added value of certain kinds of system thinking, like Reflexive Monitoring in Action: facilitating system learning, questioning certain systemic barriers (e.g. laws and regulations, market structures, norms and values) or reframing them into opportunities, questioning each other's practices and, maybe even more importantly, questioning your own practices.

I am curious what other practitioners see as the added value of system thinking! After the conference I will get back to this fascinating issue!

On Complexity & Evaluation

Filed under: evaluation5.0 — evaluation5 @ 2:34 pm

Marlen Arkesteijn/CapturingDevelopment

Before reading this blog, please go to “Anecdote. Putting stories to work” for a simple and charming video animation on the Cynefin Framework on complexity. Unfortunately I could not embed the video here!

(Figure: Michael Quinn Patton, 2008)

These days 'complexity' is a real buzzword in the world of monitoring and evaluation. In May 2010 we had the CDI Conference on 'Evaluation Revisited: Improving the quality of evaluative practice by embracing complexity'. Not long before that a similar event took place in Australia; various blogs are dedicated to complexity (e.g. Rick Davies), Patton is writing about it, and a few months ago I had an assignment on 'The suitability of the MSC method for the evaluation of complex programmes' for Wageningen University as well.

Yesterday another seminar took place on M&E and Complexity: “Planning, monitoring and evaluation in Complex Social Situations”, organised by DPRN.

During this seminar complexity was not further defined, and the discussion focused on what types of tools could be useful for complex programmes. This resulted in a discussion on using LogFrame approaches versus 'alternative' methods like Outcome Mapping and the Most Significant Change method. With around 100 participants in the venue, this inevitably caused a lot of confusion and black-and-white opinions, with LogFrame approaches ending somewhere at the least popular end of the spectrum.

I think most of us agree that the discussion should not really be about tools, but about the various aspects of programmes and about exploring how we could best monitor and evaluate progress and learn to do better (if needed), and only then discuss what types of tools could be useful.

I particularly like the work by Patricia Rogers ('Using programme theory to evaluate complicated and complex aspects of interventions', Evaluation 2008; 14: 29) and Michael Quinn Patton (Getting to Maybe, 2009). They unravel the 'complexity' issue, based on the work of Kurtz and Snowden (Cynefin framework, 2003) and Glouberman and Zimmerman (2002), and show how programmes can have various simple, complicated and complex aspects. Rogers also shows how LogFrame-minded models can still help to unravel, analyse and thus understand complex aspects of programmes.

So complex programmes usually have simple aspects (usually at the level of input, output and, to a lesser extent, outcome) that can be dealt with using conventional results-based monitoring and evaluation tools.

My credo is: use your common sense, count what CAN be counted and what is USEFUL to count. And for complicated and complex aspects, use methods that pay heed to non-linearity and emergent outcomes/impact, like Most Significant Change, Responsive evaluation, Fourth Generation evaluation, Reflexive monitoring, etc. Or, in the words of Bob van der Winden, one of my colleagues: 'Do not beat a drum with an axe'.
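As a small illustration of that credo (my own sketch, not Rogers' or Patton's framework verbatim), you could tag each aspect of a programme and let the tag suggest the kind of method, roughly along the lines argued above.

# Illustrative sketch only: match the nature of a programme aspect (following the
# simple/complicated/complex distinction discussed above) to the kind of M&E method
# argued for in this post. The wording paraphrases the post, nothing more.
METHOD_HINTS = {
    "simple": "conventional results-based tools: count what CAN and is USEFUL to count",
    "complicated": "programme-theory models that unravel the multiple strands and perspectives",
    "complex": ("methods that pay heed to non-linearity and emergence, e.g. Most Significant "
                "Change, Responsive evaluation, Fourth Generation evaluation, Reflexive monitoring"),
}

def suggest_method(aspect_nature):
    return METHOD_HINTS.get(aspect_nature, "first unravel the aspect before picking a tool")

print(suggest_method("complex"))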

And explore, investigate, study how your programme works, and do not be easily satisfied!

July 15, 2010

Using MSC in the Netherlands: Are Dutch people able to tell stories?

Filed under: evaluation5.0, learning — Tags: , , , — evaluation5 @ 10:44 am

Posted by Marlen Arkesteijn/CapturingDevelopment

After having worked in international cooperation for more than 20 years, in China, Bhutan and the Netherlands, in the field of monitoring and evaluation, I am actually still surprised how well developed M&E in international cooperation is, and how much use we can make of this expertise in the Netherlands!
Over the last five years I have been able to make full use of my 'foreign' experience within the Netherlands: in sustainable agriculture, wildlife protection, healthcare and, lately, in rural community development as well.

Currently I am coaching a group of young professionals to perform an evaluation using the Most Significant Change method in the East of the Netherlands. It makes me realize what a wonderful profession I have and how much fun it is to work with people with energy and enthusiasm.

The first issue we ran into was: "Do the Dutch have an oral tradition; are they able to tell stories?" When I joined the group of young professionals (YPs), they had already started the interviews, and they stated that people were not able to tell a story, only one-liners with very general remarks. When we probed deeper into the question, a few issues came up:
a) Letting people tell a story, and a most significant one at that, is something different from doing an in-depth interview.
b) Some of the people who were interviewed were actually not involved in the project and thus indeed had no story to tell.
c) The question arose: "What actually is an (MSC) story?" (A good question indeed!)

To unravel the issues we worked at two levels: the technical side and the content side.

On the technical side: it is sometimes really difficult to get people to talk and to invite them to tell a story that matters. One of the most important techniques is the art of listening in combination with 'probing further'. People may tell ten short one-line stories about changes, but when asking about the most significant one, there are 1001 questions to ask. If there is a most significant change, invite people to tell about the change in detail: what is the change, what does it look like? It also requires "recognizing a story when it passes by". For many of us changes are part of our daily life and therefore we tend to forget how important the details are. How did the change come about, what happened, and what happened afterwards, and then, and then? Why is this change the most significant for you; why do you choose this story?

On the content side, I did a collective intervention scheme (theory of change) exercise with the young professionals to make their assumptions about how the project works explicit. This was a real revelation! They analysed various levels of results, all kinds of causal and non-causal links, connections, loops, etc. Especially the various levels of results made it easier for the professionals to ask relevant questions during the 'interviews'; they found clues for talking to people!
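For readers unfamiliar with the mechanics, here is a minimal sketch of how we could record a collected MSC story so that it links back to the levels of results from the intervention scheme; the field names and the example content are my own illustration, not a prescribed MSC template.

from dataclasses import dataclass, field

# Illustrative structure for recording a collected MSC story. The 'why significant'
# question and domains of change are standard MSC elements; the field names and the
# example content below are my own sketch, not a prescribed template.
@dataclass
class MSCStory:
    storyteller: str
    domain_of_change: str          # domain agreed on beforehand
    change_described: str          # what happened, in the storyteller's own words
    why_significant: str           # why the storyteller picked this change
    result_level: str              # level of the intervention scheme it touches
    probing_notes: list = field(default_factory=list)

story = MSCStory(
    storyteller="a resident of one of the villages",
    domain_of_change="changes in community cooperation",
    change_described="neighbours now organise the yearly fair together",
    why_significant="it was the first time newcomers were invited to help",
    result_level="outcome",
    probing_notes=["How did the change come about?", "What happened afterwards?"],
)
print(story.why_significant)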

In the end, it appeared that Dutch people too, at least some of them, love telling stories and do so with a lot of gusto! It is a matter of approaching people who indeed have a story to tell, and a matter of proper facilitation. I saw life in people's eyes and enthusiasm during the interviews and discussions, an essential ingredient for evaluations, since it is these people who need to work with the outcomes of the evaluation.

Next time more about MSC in the Netherlands!

Marlen Arkesteijn

April 19, 2010

Stories: the difference between experience and memory

Filed under: behaviour — Tags: , , — Rosien Herweijer @ 8:53 pm

This is a very interesting talk by Daniel Kahneman at TED 2010 on the difference between experience and memory. I found it on David Snowden's blog, Cognitive Edge.

This difference is something that is very important to acknowledge in the process of evaluation, and it has tremendous implications. To what extent can we base claims and concerns on memories? How do memories compare with the actual experience? Indeed, real food for thought!
