
The importance of a complexity focus in program evaluation

Despite widespread recognition that most development programs are complex, most evaluations use designs that do not adequately address complexity. The two blog posts discussed below explain why complexity matters, why it is so often ignored, and propose user-friendly evaluation designs that can address it.

Most development programs are designed and implemented in complex political, socio-cultural, economic and ecological contexts, where outcomes are influenced by many factors over which planners and program managers have very little control. These factors interact in different ways in different project locations. Consequently, a project with a clear design and implementation strategy may produce significantly different outcomes in different locations or at different points in time.

Despite the widespread acknowledgement by evaluators and stakeholders that projects and project evaluations are “complex”, most evaluations employ designs that implicitly assume the project is “simple”, with a clearly defined linear relationship between the project inputs and a limited number of outcomes. For example, most quantitative evaluations adopt a pretest-posttest comparison group design, where outcomes are estimated as the difference in the change for the project and comparison groups over the life of the project. The randomized control trial (RCT) is the best-known such design, but a similar logic underlies many other experimental and quasi-experimental designs (propensity score matching, double difference, instrumental variable estimation, regression discontinuity).
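To make that shared logic concrete, here is a minimal sketch of the double-difference calculation that underlies these designs. The numbers and the outcome measure are invented for illustration; they are not drawn from the blog posts or from any real evaluation.

```python
# Illustrative only: a toy double-difference (difference-in-differences)
# estimate of project impact, using hypothetical group means.

def double_difference(project_pre, project_post, comparison_pre, comparison_post):
    """Impact = change in the project group minus change in the comparison group."""
    project_change = project_post - project_pre
    comparison_change = comparison_post - comparison_pre
    return project_change - comparison_change

# Hypothetical mean outcome scores (e.g., a household welfare index)
impact = double_difference(project_pre=42.0, project_post=55.0,
                           comparison_pre=41.0, comparison_post=47.0)
print(f"Estimated impact (double difference): {impact:.1f}")  # (55-42) - (47-41) = 7.0
```

The point is simply that the whole estimate reduces to one number per group before and after the project, which is exactly where the simplifying, linear assumptions about the program enter.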

The policy and methodological implications of this lack of attention to complexity in the design and evaluation of development projects, programs and policies were discussed in a two-part blog that I prepared for the International Initiative for Impact Evaluation (3ie), published in June 2021.

One of the reasons why many agencies do not address complexity is that much of the complexity literature is very technical and complexity is therefore considered “too complicated” to understand or measure. Part 1 presents an easy-to-understand framework for mapping complexity along four dimensions, together with a checklist for rating the level of complexity of an intervention on each of these dimensions. I also address the important question: why do so many evaluations ignore complexity?

https://www.3ieimpact.org/blogs/understanding-real-world-complexities-greater-uptake-evaluation-findings
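As a purely illustrative sketch of what such a checklist might look like in practice, the snippet below rates a hypothetical intervention on four dimensions using a five-point scale from very low to very high. The dimension names are placeholders, not the four dimensions defined in the Part 1 blog post.

```python
# Illustrative sketch of a complexity checklist, assuming a five-point
# rating scale ("very low" to "very high") on each dimension.
# The dimension names below are placeholders, not the dimensions
# defined in the 3ie blog post.

SCALE = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}

# Hypothetical ratings for one intervention
ratings = {
    "dimension 1 (placeholder)": "high",
    "dimension 2 (placeholder)": "very high",
    "dimension 3 (placeholder)": "moderate",
    "dimension 4 (placeholder)": "low",
}

for dimension, level in ratings.items():
    print(f"{dimension}: {level} ({SCALE[level]}/5)")

# A crude overall indication: the average of the dimension ratings
overall = sum(SCALE[level] for level in ratings.values()) / len(ratings)
print(f"Overall complexity rating (illustrative average): {overall:.1f}/5")
```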

In Part 2, I discuss practical approaches for evaluating complex development interventions. A five-step evaluation methodology is presented, which combines the use of familiar evaluation tools and techniques with the application of tools, such as systems analysis, that are designed to address complexity.

https://3ieimpact.org/blogs/building-complexity-development-evaluations

Some of the key takeaways include:

  1. Most evaluations of development programs largely ignore complexity. This is due in part to a perception among policymakers, managers and many evaluators that complexity is too technical and difficult to incorporate into most evaluations, and in part to the fact that most conventional evaluation designs do not address complexity.
  2. When complexity issues are not addressed, an evaluation will often over-estimate the impact of a program.
  3. The complexity map included in Part 1 provides an easily understood framework for identifying and discussing the different dimensions of complexity that can affect how a program is implemented and how it achieves its intended outcomes.
  4. The complexity checklist provides a useful first estimate of the level of complexity (rated from very high to very low) on each of the dimensions.
  5. Finally, it is important to recognize that complexity theory emphasizes that all dimensions of a program are holistically interlinked, and that the effects of a single program component cannot be assessed in isolation from other components and from the broader context within which a program operates. The challenge is to respect the holistic nature of any program while finding ways to simplify it sufficiently to make it possible to evaluate its effects. The present approach seeks to achieve this by “unpacking” the program into individual components, each of which can be assessed separately, and then using systems analysis and other complexity-responsive tools to reassemble the findings from the different components to assess the total impact within the broader context (see the sketch after this list). Different readers may have different opinions on how well these dual considerations are addressed.
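Purely as an illustration of the “unpack, then reassemble” logic in the final takeaway, and not of the methodology presented in the blog posts, the sketch below stores separate effect estimates for hypothetical program components and lists the contextual links a systems view would oblige the evaluator to examine before aggregating them. All component names, links and numbers are invented.

```python
# Toy illustration of "unpacking" a program into components and then
# reassembling the component-level findings with a simple systems view.
# Component names, links and effect estimates are all hypothetical.

component_effects = {
    "training": 0.20,        # effect of each component, assessed separately
    "credit": 0.35,
    "market_access": 0.15,
}

# A minimal "systems map": evaluator-specified directed links between
# components and contextual factors.
links = [
    ("training", "credit"),              # training affects uptake of credit
    ("drought (context)", "market_access"),
    ("credit", "market_access"),
]

# A naive aggregate that ignores interactions ...
naive_total = sum(component_effects.values())
print(f"Naive sum of component effects: {naive_total:.2f}")

# ... and the links that warn against treating that sum as the program impact.
for source, target in links:
    print(f"Interaction to examine before aggregating: {source} -> {target}")
```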


By michaelbamberger
