Rubrics in evaluation
We already talked about rubrics a little under Soft Arithmetic.
I really like the approach used in Outcome Mapping, which talks about changes or differences we "expect to see", "like to see" and "love to see". This provides an ordinal scale without using numbers, which I think is better.
In this interesting post, Julian King provides an example of rubrics which are somewhat more general. The first column is formulated relative to “most interventions”. As long as we all have the same reference set of interventions in mind, this is fine. It wouldn’t work otherwise.
Julian's example is adapted from a real project and was not intended as a benchmark for assessing comparative cost-effectiveness in all circumstances. Nevertheless, the first two columns (cost and effectiveness respectively) could apply to almost any intervention, though the final column, about sustainability, is limited in scope to community-based interventions. The second column, outcomes performance, is described in terms of achievement of targets, so it is most useful for interventions which are equally ambitious in the way they set targets. As it happens, the first column is anchored relative to similar interventions, the second column is anchored relative to targets, and the third column is anchored to specific examples from the realm of community interventions. But one could take the same idea and anchor different columns in different ways.
Anyway, Julian's overall approach is a great example of using "soft ratios". A soft ratio for one intervention would be given as something like "effectiveness (4) / cost (3)". As Julian implies, we can't necessarily carry through this division as we would using ordinary arithmetic (4/3 = 1.33), but we can certainly use soft arithmetic: we can at least say that a project with a score of 4/3 is better than one with, say, 2/4, because it does at least as well on each criterion separately. (I haven't dealt with sustainability here, which is Julian's third column.)
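The soft-arithmetic comparison above can be sketched in code. This is a minimal, hypothetical illustration (not from Julian King's post): the function name, signature, and the assumption that a higher cost score means a more costly intervention are all mine. The key point is that we never actually divide the ordinal scores; we only compare pairs, and we stay silent when the scores alone cannot decide.

```python
# A sketch of "soft arithmetic" on ordinal rubric scores.
# Assumptions (mine, not from the source): higher effectiveness score is
# better, and a higher cost score means more costly (so lower is better).
from typing import Optional, Tuple


def compare_soft_ratio(a: Tuple[int, int], b: Tuple[int, int]) -> Optional[str]:
    """Compare two interventions scored as (effectiveness, cost).

    Returns 'a' or 'b' if one intervention does at least as well on both
    criteria (and strictly better on at least one), 'tie' if the scores
    are identical, and None when the ordinal scores alone cannot decide.
    """
    eff_a, cost_a = a
    eff_b, cost_b = b
    if a == b:
        return "tie"
    # 'a' dominates: at least as effective AND no more costly.
    if eff_a >= eff_b and cost_a <= cost_b:
        return "a"
    if eff_b >= eff_a and cost_b <= cost_a:
        return "b"
    return None  # a trade-off: soft arithmetic gives no verdict


print(compare_soft_ratio((4, 3), (2, 4)))  # 'a': more effective and cheaper
print(compare_soft_ratio((4, 4), (2, 2)))  # None: a genuine trade-off
```

Notice that (4, 4) versus (2, 2) returns None: ordinary division would happily rank them (1.0 vs 1.0, a tie), but soft arithmetic refuses, because trading effectiveness against cost requires a value judgment the ordinal scales do not encode.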