# Key Ratios and Evaluation Theories

We show how some basic evaluation Variables like Relevance can be “calculated” (using Soft Arithmetic) according to certain Definition Rules.

If required, pre-calculate the likely theoretical Levels of the evaluation Variables: what is the likely Effectiveness, Impact, etc., of this project, given what we already know? This might involve information you calculate yourself using Soft Arithmetic (e.g. putting existing information on outcomes in relation to existing information on inputs in order to assess cost-effectiveness), and info you get directly, e.g. from a previous study.


Defined Variables like “Relevance”, or “the sum of all the student scores across all the schools”, are Variables just like the Variables from which they are defined. So you can define a Variable using a Rule that includes several other Variables, some or all of which may themselves be defined Variables.
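As a toy illustration of this (all names and numbers are invented), a defined Variable is just a rule computed over other Variables, which may themselves be defined:

```python
# Raw Variables: each school's student scores (invented figures)
school_scores = {
    "school_a": [61, 72, 55],
    "school_b": [80, 67],
}

# "total score per school" is a Variable defined by a Rule on the raw Variables
total_per_school = {school: sum(scores) for school, scores in school_scores.items()}

# "the sum of all the student scores across all the schools" is a Variable
# defined on an already-defined Variable: nothing new is added, but it is convenient
grand_total = sum(total_per_school.values())

print(grand_total)  # 335
```

The point is only that the defined Variable behaves exactly like the Variables it is built from, so definitions can be stacked freely.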

Nothing really new is added when we define additional Variables, but they are very convenient. For example, key evaluation measures like the effectiveness and impact of some project (i.e. of some Mechanism) are Variables defined on the Variables within that Mechanism.

There is less and less need for separate presentations of primary Findings, one per heading. Each heading can instead be answered by going through the other Findings and creating Definitions, e.g. for the humanitarian principles.

## Evaluation Theories

An Evaluation Theory has some similarity with a (directed) mind map of concepts.

## DFID says “cost-effectiveness”. I say “impacts per input”. Here’s why.

Ratios can be very useful, so it’s good to see a focus on them. Let’s look at Julian’s suggestions:

• the output/input ratio is called efficiency
• the outcome/output ratio is called effectiveness
• the impact/input ratio is called, a little surprisingly, cost-effectiveness.

I should clarify that we can think of “ratios” in terms of a very soft and tolerant Soft Arithmetic. We can report a qualitatively described outcome - say, improvement in education outcomes in some number of provinces - in relation to (aka “divided by”) an input which might itself be partly qualitatively described - perhaps dollars, but also, for example, activist efforts. We can call this a ratio, and we can compare it to a similar ratio from a comparable programme with similar, but almost certainly not identical, numerators and denominators. We can even say that one programme seems to have, say, a better outcomes-to-inputs ratio than another, without ever doing any actual multiplication or division. The “numerators” and “denominators” will themselves usually contain “soft” additions and subtractions - like dollars plus effort, or lives saved plus opportunities increased.
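To make the idea a little more concrete, here is a minimal, hypothetical sketch of a “soft” ratio object. The class name, fields, and figures are all invented for illustration; the soft part is the kind-checking, which is where the judgement lives - the numeric comparison at the end is a crude stand-in for what would, in practice, be a qualitative assessment:

```python
from dataclasses import dataclass

@dataclass
class SoftRatio:
    numerator_kind: str     # what the numerator describes, e.g. "provinces with improved outcomes"
    numerator: float        # crude numeric summary of a possibly qualitative description
    denominator_kind: str   # what the denominator describes, e.g. "million dollars plus effort"
    denominator: float

    def seems_better_than(self, other: "SoftRatio") -> bool:
        # Comparison is only meaningful when the kinds are (roughly) comparable.
        # Unlike numerators or denominators need a judgement, not an algorithm.
        if (self.numerator_kind != other.numerator_kind
                or self.denominator_kind != other.denominator_kind):
            raise ValueError("unlike numerators/denominators: judgement required")
        # Cross-multiply so neither quotient is ever actually formed
        return self.numerator * other.denominator > other.numerator * self.denominator

programme_x = SoftRatio("provinces with improved outcomes", 4, "million dollars plus effort", 2)
programme_y = SoftRatio("provinces with improved outcomes", 3, "million dollars plus effort", 2)
print(programme_x.seems_better_than(programme_y))  # True
```

The design choice worth noting is that the comparison refuses to run at all across unlike kinds, mirroring the point that a lives-saved-per-dollar ratio cannot be mechanically ranked against a money-saved-per-dollar one.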

It is worth noting that this “effectiveness” definition really isn’t in accord with the OECD DAC definition, and if in doubt one should stick with the latter, because the OECD DAC definitions are so widely accepted. The “efficiency” definition is in accord with OECD DAC. But I have never heard of “cost-effectiveness” specifically requiring impact as its numerator.

My thought is: why use terms which are ripe for misunderstanding? Why don’t we just spell out what we mean? Rather than saying “the project tried hard to improve effectiveness (according to the DFID definition)”, can’t we just say “the project tried hard to improve the outcome/output ratio”, or “… to get more outcomes per output”? “Impact per input” has fewer letters and no more syllables than “cost-effectiveness”, and it is, I submit, a little bit clearer.

All of these formulations, in any case, only make sense if we all know what “output” or “outcome” actually mean in a given context and programme. As we all know, I dearly hope, “output” and “outcome” are not absolute but relative, context-specific terms.

As Julian points out, even if we have ratios, that doesn’t mean we have a rule for telling us what is a good ratio and what is a poor one. But to pre-empt his point: in general, there isn’t such a rule and we don’t need one. If there were one, we could say that a vaccination programme in Zimbabwe was better than an entrepreneurship programme in Bristol - and what would that mean? But where programmes are comparable and need comparing, we can hope that the numerators and denominators in particular are comparable. We can say: programme X delivered more vaccinations per dollar than programme Y. So, on this basis, programme X was better than programme Y, without us ever having needed to ascribe an absolute worth to a ratio on its own. In practice, as Julian says, there is always a judgement involved which can’t be reduced to an algorithm - especially when comparing numerators or denominators which are not the same, e.g. comparing lives saved with money saved: what I call, with tongue in cheek, “evaluation arithmetic”.
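The vaccinations-per-dollar comparison can be spelled out with invented figures: within comparable programmes the comparison is meaningful, yet neither ratio is ever assigned an absolute worth of its own:

```python
# Invented figures for two comparable vaccination programmes
x_vaccinations, x_dollars = 12_000, 40_000
y_vaccinations, y_dollars = 9_000, 40_000

x_per_dollar = x_vaccinations / x_dollars  # 0.3
y_per_dollar = y_vaccinations / y_dollars  # 0.225

# X looks better than Y on this ratio; no rule tells us whether
# 0.3 vaccinations per dollar is "good" in any absolute sense
print(x_per_dollar > y_per_dollar)  # True
```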

(Endnotes: 1) in this post we’ll gloss over the idea that “impact” is just another thing further down the line from “outcomes”; 2) as Julian points out, the world is usually more slippery than this simple inputs > outputs > outcomes model suggests, but it can still be useful; 3) we’ve tried not to worry too much here about counterfactuals; and 4) I also confess that I am not sure whether DFID has 3 or 4 E’s, and whether equity is one of them, but that doesn’t really matter here.)