What’s your ToC? Theories of Change are the perfect way to understand others


I’m working hard on my book Learn Theorymaker. Theorymaker is, amongst other things, a set of rules for drawing Theory of Change diagrams. It provides a small number of conventions about what an arrow means, what a box means, and so on, within a Theory of Change diagram. Though there are just a few conventions, they are very powerful.

At the same time, I’ve been working on a Theory of Change for a client which involves multiple stakeholders and so I’ve been thinking hard about what this might actually mean.

What is a Theory of Change?

In a nutshell, a Theory of Change in Theorymaker is a Theory plus two more magic ingredients.

What is a Theory?

A Theory consists of some Variables plus some Rules about how they affect one another, perhaps like this:

Father: mood in evening

 Child: behaviour on arriving from school

 Father: satisfaction with work day

  Boss: mood during day

This is a pretty vague Theory because it says hardly anything about the nature of these Variables - are they numerical or what? And it doesn’t say anything about the kind of Rule that links them up: are these linear relationships? Probably not, probably there are some tipping points involved. Still, all Theories are vague to some extent, and sometimes a simple, vague Theory is all we need.
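To make this concrete, here is a rough sketch in Python (my own illustration; the data structure and function names are assumptions, not part of Theorymaker): a Theory as some named Variables plus Rules recording which Variables influence which.

```python
# Illustrative sketch: a Theory as Variables plus Rules.
# Each entry maps an influenced Variable to the Variables that affect it.
theory = {
    "Father: mood in evening": [
        "Child: behaviour on arriving from school",
        "Father: satisfaction with work day",
    ],
    "Father: satisfaction with work day": ["Boss: mood during day"],
}

def parents(theory, variable):
    """Return the Variables which directly influence `variable`."""
    return theory.get(variable, [])
```

Note that this captures only the shape of the Theory; the Rules themselves are still just as vague as in the diagram above.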

Adding in !do Variables and !valued Variables

Now suppose I am the child and my aim is, glory be, to improve my father’s mood this evening. To turn this Theory into a Theory of Change, I just take my (more or less correct) Theory and add two things:

  • I mark one or more Variables as valuable to me (using a red heart)
  • I mark one or more Variables as controllable by me (using a green wedge)

As the only thing I care about in this Theory is my father’s mood, I just mark that Variable with a heart. And as the only thing I can control directly is my behaviour on arriving from school, I mark that one with a wedge. So my Theory of Change looks something like this:

Father: mood in evening !valued 

 !do Child: behaviour on arriving from school

 Father: satisfaction with work day

  Boss: mood during day

So at least I know that if I want to influence my father’s mood this evening, I need to tweak my behaviour on arriving from school.
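Continuing the illustrative Python sketch (again, my own assumptions, not Theorymaker’s internals): a Theory of Change is the Theory plus a set of valued Variables and a set of controllable ones, and from these we can find the “levers” - controllable Variables with a causal path into something we value.

```python
# Illustrative sketch: a Theory of Change = a Theory plus !valued and !do sets.
toc = {
    "rules": {
        "Father: mood in evening": [
            "Child: behaviour on arriving from school",
            "Father: satisfaction with work day",
        ],
        "Father: satisfaction with work day": ["Boss: mood during day"],
    },
    "valued": {"Father: mood in evening"},                    # !valued
    "do": {"Child: behaviour on arriving from school"},       # !do
}

def levers_for(toc, valued_variable):
    """Controllable Variables with a causal path into `valued_variable`."""
    frontier, reachable = [valued_variable], set()
    while frontier:
        v = frontier.pop()
        for p in toc["rules"].get(v, []):
            if p not in reachable:
                reachable.add(p)
                frontier.append(p)
    return reachable & toc["do"]
```

Asking for the levers on the father’s evening mood correctly picks out the child’s behaviour on arriving from school.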

Actually, Theorymaker gives us some neat ways of being a bit (or a lot) more precise than that. Here, we are adding some more details:

Father: mood in evening !valued 

 !do Child: behaviour on arriving from school

 Father: satisfaction with work day

  Boss: mood during day

Adding more details to the diagram

We have designated all of these Variables with a new symbol (a rising black triangle) to say that they are so-called lo-hi Variables: a very simple and common kind of Variable which is not necessarily measurable with numbers but which is at least capable of gradations. For example, the Dad’s mood could be rotten, or OK, or even pretty sunny, or anything in between. This kind of Variable is very common but there are plenty of other kinds - numerical Variables for example, or binary yes/no Variables.
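A lo-hi Variable can be sketched as an ordered list of Levels - ordered, but not necessarily numerical. The following Python fragment is purely illustrative (the Level names come from the example above; the function is my own assumption):

```python
# Illustrative sketch: a lo-hi Variable has ordered, not necessarily
# numerical, Levels, listed here from low to high.
LO_HI_MOOD = ["rotten", "OK", "pretty sunny"]

def higher(levels, a, b):
    """True if Level `a` is higher than Level `b` on the ordered scale."""
    return levels.index(a) > levels.index(b)
```

So we can say that “pretty sunny” is higher than “rotten” without ever attaching a number to either.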

And we have also put a bit of information about the Rules which join up the Variables. This information is given with two more symbols, the rising arrow and the half-filled circle. You can see them on the two Variables which are influenced by others - Father satisfaction and Father mood. They give information not on the Variables but on the Rules about how they are influenced.

In each case, the rising arrow tells us that the relationship is “overall positive” - so for example positive changes in the boss’s mood cause positive changes in the father’s workday satisfaction. There are plenty of other kinds of Rules - sometimes we even have a precise numerical function which predicts one Variable from the others - but this “overall positive” Rule, while vague, is useful and common in Theories of Change.

And finally the half-full circle tells us that these influences are neither trivially small nor completely deterministic, but somewhere in between.
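One way to read “overall positive” is as a monotone tendency: higher Levels of the parent go with Levels of the child that are at least as high. Here is a small check for that property in Python (an illustration under my own assumptions - the Level names and the particular Rule are hypothetical, and this sketch ignores the partial determinism marked by the half-full circle):

```python
# Illustrative sketch: an "overall positive" Rule read as a monotone
# (order-preserving) mapping from parent Levels to child Levels.
BOSS_MOOD = ["foul", "neutral", "good"]          # ordered low -> high
SATISFACTION = ["low", "middling", "high"]       # ordered low -> high

def overall_positive(parent_levels, child_levels, rule):
    """Check that higher parent Levels map to child Levels at least as high."""
    outputs = [child_levels.index(rule(p)) for p in parent_levels]
    return all(a <= b for a, b in zip(outputs, outputs[1:]))

# A hypothetical Rule: each boss mood maps straight to a satisfaction Level.
rule = dict(zip(BOSS_MOOD, SATISFACTION)).get
```

A Rule which sent a better boss mood to lower satisfaction would fail this check, which is exactly what the rising arrow rules out.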

Conscious and unconscious Theories

The crucially important thing about Theorymaker Theories of Change is that they actually guide someone’s behaviour: that same person has at least some influence over at least one of the Variables, values at least one of the Variables, and these three things fit together. If I decline to make a big effort with my behaviour on coming home from school, then either I don’t really care about my Dad’s mood, or I don’t really believe this Theory, or the Theory doesn’t capture all of my relevant beliefs - for example, it leaves out other Variables I value or other important influencing Variables.



Rarely do Agents operate blindly. Firstly, they make use of what in Theorymaker we call reporting Mechanisms. A simple example is me using a thermometer to inform myself about the air temperature.

My belief about the current air temperature

 (style=dashed)Temperature as recorded on the thermometer

  (style=dashed)The actual air temperature

We can guess that somewhere within my cognitive world (and presumably, literally somewhere in my brain), I have a representation (in Theorymaker terms a Report Variable) which is mediated by the temperature reported on the thermometer; and both of these Variables have the same logical shape as the Variable we are interested in, namely the current air temperature. So in an ideal world, my belief about the air temperature reflects the reading on the thermometer which represents the actual air temperature; and also, ignoring the intervening Mechanism, my belief about the air temperature reflects the actual air temperature:

My belief about the current air temperature

 (style=dashed)The actual air temperature

So, in an ideal world, these dotted lines are causal (because my belief tracks the thermometer which tracks the temperature) but they are more than that, because both the thermometer and my belief mean something; each is part of a (again causal) system of conventions and encodings which is not shown here but which sets up the thermometer to mean what it does. So, if you raise the temperature you will not only see (if you could see into my brain) my belief changing but you will also see me going back to my house to change into shorts.

So a Report Variable is not only causally controlled, more or less, by the reported Variable; its Levels also mean the Levels of the reported Variable, i.e. it is embedded in a representational system.

The number of beads of sweat on someone’s face might also be causally influenced by the temperature but it doesn’t mean the temperature unless we establish a system to use this information in a particular way, a process which involves calibration, practice, etc.
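The causal half of this story - my belief tracking the thermometer which tracks the temperature - can be sketched as a simple chain in Python. This is my own toy illustration (the noise model and rounding are assumptions), and of course it captures only the tracking, not the system of conventions which makes the reading mean the temperature:

```python
import random

# Illustrative sketch: a reporting chain where each link passes on the
# reported value imperfectly, so the Report Variable tracks, but need not
# equal, the reported Variable.
def thermometer(actual_temp, rng):
    """The instrument reading: the actual temperature plus bounded error."""
    return actual_temp + rng.uniform(-0.5, 0.5)

def belief(reading):
    """My belief: I read the thermometer off to the nearest degree."""
    return round(reading)

rng = random.Random(0)       # seeded for reproducibility
actual = 21.3
reading = thermometer(actual, rng)
my_belief = belief(reading)
```

Raise `actual` and, via the chain, `my_belief` rises too - which is the causal part of what makes the arrangement a reporting Mechanism.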

A Theorymaker theory of motivation and behaviour change

We often hear (Rogers 2008) that project theories which involve multiple stakeholders are in principle merely complicated rather than complex (Glouberman and Zimmerman 2002). I don’t agree.

Sure, trivially we can see other stakeholders as just places where more or less deterministic things happen. But then we aren’t really seeing them as stakeholders, as Agents.

So this …

Consumer buys product ((no,yes))

 Consumer sees advertising ((no,yes))

… is no different from this …

Glass breaks ((no,yes))

 Glass is hit ((no,yes))

Here, we aren’t talking about whether the effects are completely determined or not. That is a different issue. (In Theorymaker, we always assume effects are incompletely determined unless we explicitly specify otherwise; complete determination is marked with a full circle instead of a half-full one.) No, the point is that there is no agency going on.
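For binary ((no,yes)) Variables, an incompletely determined Rule can be sketched as a conditional probability. The numbers below are entirely hypothetical, and the point is the shape of the model, not its values - notice there is nothing here you couldn’t equally write for the breaking glass:

```python
import random

# Illustrative sketch: an incompletely determined Rule for binary Variables.
# Seeing the advertising merely raises the chance of buying (assumed numbers).
P_BUY = {"no": 0.05, "yes": 0.30}  # P(buys | sees advertising) - hypothetical

def consumer_buys(sees_advertising, rng):
    return "yes" if rng.random() < P_BUY[sees_advertising] else "no"

rng = random.Random(42)  # seeded for reproducibility
share = sum(consumer_buys("yes", rng) == "yes" for _ in range(1000)) / 1000
```

The simulated share of buyers hovers around the assumed 0.30 - a perfectly serviceable mechanistic model, and still no agency anywhere in it.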

This is a big deal, because behaviour change is a big deal.

Even popular models of behaviour change and behaviour change communication as used by, say, the World Bank, are just as mechanistic:





A psychologist would probably call this a theory of motivation and might also say that as such it is a pretty poor theory.

But can we model a better theory of motivation, including an idea like agency using just the same Theorymaker building blocks? Yes, we can. Here’s how.

-Agent's Project

 o::Outcome(s)

  m::Intervening Mechanism(s)

   a::Agent's potential Action(s)

    t::Agent's ToC with updates;colour=purple7;style=rounded

     (style=dashed)group! Agent's Project

     i::Other influences on ToC

rank=same;Agent's Project;o;m;a;i


Now, this is our Theory which helps us understand (and later maybe try to influence) an Agent’s actions.

First, notice that the Outcome(s) don’t have a heart symbol and the potential action(s) don’t have a green wedge. These symbols are reserved for our own outcome Variables and intervention Variables. But still, the Variables capturing what the Agent can do, the thing they are trying to achieve, and any intervening Mechanisms are most likely things out there in the world that we too can see.

This is what philosophers might call an intentional theory of motivation. Crucially, with it we can explain or (hopefully, but probably inaccurately) predict the actions of the Agent. We can say things like this:

The Agent did X in order to get Y

By increasing A, the Agent hoped to get more of B

… which sound very different from the merely behavioural explanations we were looking at above:

The Agent bought the shoes because they saw the advertising

… and yet at the same time, we haven’t left the basic Pearlian paradigm of causal networks. It is, in an uninteresting sense, still true that the Agent bought the shoes because they saw the advertising (if they hadn’t seen it, they wouldn’t have bought them), but our intentional explanation is much richer and more useful.

The crucial factor which drives (and in a sense, explains) their actions is their own Theory of Change: a rich Variable marked here with a purple border. (A “rich Variable” is still a Variable, still something which can be one way or another, something which enters into causal explanations, but one which might need megabytes or gigabytes of storage to capture[1]. A good example of a rich Variable is “the contents of the report” as in the sentence “He was really angered by the contents of the report”.)
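The idea that the Agent’s own stored Theory of Change drives their actions can be sketched in a few lines of Python. This is a deliberately minimal illustration under my own assumptions (a real ToC would be a far richer object than a dictionary):

```python
# Illustrative sketch of an intentional explanation: the Agent's own stored
# Theory of Change is itself a Variable, and the Agent acts by consulting it.
class Agent:
    def __init__(self, toc):
        # A "rich Variable": the Agent's own Theory of Change, here crudely
        # represented as a mapping from possible actions to expected outcomes.
        self.toc = toc

    def choose_action(self, goal):
        """Pick an action which, according to the stored ToC, leads to `goal`."""
        for action, outcome in self.toc.items():
            if outcome == goal:
                return action
        return None

child = Agent({"behave well on arriving from school": "Father: mood in evening"})
```

Asking this Agent to pursue the father’s evening mood yields the behaviour on arriving from school - an explanation of the form “the Agent did X in order to get Y”, not merely “X because of stimulus Z”.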

The dual nature

A simpler but deceptive alternative

Updating an Agent’s Theory of Change

Updating with information

The Agent’s Theory of Change can be updated in different ways, in order of sophistication:

  • Information just about the current Levels of the different Variables - the kind of workaday information which most M&E manuals restrict themselves to talking about.
  • Information about the strength and nature of the influences, sometimes called the parameters of the Theory.
  • Information which requires changes to the structure of the Theory - for example, that some connection is no longer operational, or that a new but important Variable has been identified.
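The three kinds of update can be sketched on the toy representation used earlier (the field names and values here are my own illustrative assumptions):

```python
# Illustrative sketch: three kinds of update to an Agent's Theory of Change,
# in order of sophistication.
toc = {
    "levels": {"Father: mood in evening": "OK"},
    "parameters": {("satisfaction", "mood"): "overall positive"},
    "rules": {"Father: mood in evening": ["Father: satisfaction with work day"]},
}

# 1. Update a Level: the workaday M&E kind of information.
toc["levels"]["Father: mood in evening"] = "pretty sunny"

# 2. Update a parameter: the strength or nature of an influence.
toc["parameters"][("satisfaction", "mood")] = "strongly positive"

# 3. Update the structure: a newly identified influencing Variable.
toc["rules"]["Father: mood in evening"].append(
    "Child: behaviour on arriving from school"
)
```

Only the third kind changes the shape of the Theory itself; the first two leave its structure intact.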

Updating in other ways

But the Agent’s Theory of Change can be influenced in other ways too. So for example,

Solubility of Theories


Rogers, Patricia J. 2008. “Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions.” Evaluation 14 (1): 29–48. doi:10.1177/1356389007084674.

Glouberman, Sholom, and Brenda Zimmerman. 2002. Complicated and Complex Systems: What Would Successful Reform of Medicare Look Like? July. ISBN 0-662-32778-0.

  1. In terms of information theory, it has a high entropy.