Intrinsic Value

As evaluators, we are sometimes told that certain things “just are” valuable to the client, and how to calculate their Value according to a pre-defined rule, usually given in an evaluation Terms of Reference.

But Value is a challenge to evaluators above and beyond these demands.

First, we might need to make intermediate calculations about the value or quality of something, for example the quality of some teaching, even though this is not of direct interest to the client and no specific criteria are given. If the quality of the teaching is relevant to understanding why some part of a project was not effective, the client will expect the evaluator to report on it.

Second, in some cases we may need to make assessments of quality or value even in connection with Variables which the client has explicitly defined as “what counts”, even where this involves disagreeing with the client. This might sound like an unwelcome moral or political obligation which I am pushing for moral or political reasons. But no: it follows from the nature of our work as evaluators.

  • There are myriad concepts, like “quality” and “value”, which we come across doing evaluation, which have no central defining characteristic but bear only family resemblances to one another; and these myriad concepts fade gradually off into myriad others which are not obviously about worth or value: there is no clear-cut distinction between “facts” and “values” in evaluation.
  • Disputes about value and quality in evaluation are disputes about whether evaluation Statements correctly reflect what they are meant to reflect (e.g. are they reliable and valid) and as such are no different from disputes about other terms and concepts. Our duty as evaluators includes ensuring that our evaluation Statements (especially but not only those involving and implying value and quality) are correctly understood.

Value cannot be separated from facts in evaluation

Scriven’s analysis of the major schools of thought in evaluation, such as those championed by Alkin, Rossi and Freeman, Stake, Cronbach, and others, finds that almost all of them “can be seen as a series of attempts to avoid direct statements about the merit or worth of things”.

Scriven doesn’t say that these approaches to evaluation can’t make statements about worth or value. But they are only able to make statements of the form “this particular Variable reached Level 4.9, which according to my client is valuable and important”. His point is that “the major schools of thought in evaluation” do not show evaluators how to make statements like “good scores were reached on the happiness questionnaire, but it is debatable whether this really measures happiness; some of the users of the youth centre who I saw looked quite sad”.

Traditional approaches to social science, and to a lesser extent even evaluation, have tried to maintain a fact/value distinction. But the very idea of a clean fact/value distinction is tricky. For one thing, there is a whole, ragged family of ways in which value and correctness are infused throughout our language. It is not as if value were a gold star which can be applied, yes-or-no, to some statements and not to others.

When Google’s algorithms or neural networks learn to sort and classify human movements and to identify, say, skating movements, it is likely that they will spontaneously evolve some kind of representation of ideal skating movements, as this would be an efficient way to organise the kinds of movements we make and the way we in fact categorise those movements. Suppose that neural network is rich enough to include representations of human movements in general, and of safe, dangerous and clumsy movements; suppose it knows how to rate paintings and images according to classical and even unconventional criteria of beauty and symmetry; and suppose it has learned words like “good” and “ideal” and “valuable” and “elegant”, associating them in myriad ways with myriad concepts like “lack of pain” and “healthy” and “appropriate”. Then it will be able to identify, approximately, “good skating moves”, even though it has not been specifically trained to and even though it has never encountered that criterion before.
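
As a toy illustration (and only that: the vectors, dimensions and phrases below are hand-built stand-ins for what a trained network would learn, not any real Google system), here is the compositional idea in miniature:

```python
import numpy as np

# Toy embedding space. In a real system these vectors would be learned by a
# network; here they are hand-built so the sketch is self-contained.
# Dimensions (loosely): [smoothness, symmetry, risk, effort, on-ice]
EMBEDDINGS = {
    "good":    np.array([0.8, 0.7, 0.2, 0.4, 0.0]),
    "skating": np.array([0.5, 0.5, 0.4, 0.5, 1.0]),
    "smooth glide into a turn": np.array([0.9, 0.7, 0.2, 0.3, 1.0]),
    "stumble across the ice":   np.array([0.1, 0.2, 0.8, 0.9, 1.0]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two concept vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Compose a concept the system was never explicitly trained on:
# "good skating" as the sum of its parts.
good_skating = EMBEDDINGS["good"] + EMBEDDINGS["skating"]

for move in ("smooth glide into a turn", "stumble across the ice"):
    print(move, "->", round(cosine(good_skating, EMBEDDINGS[move]), 2))
# The smooth glide scores higher (about 0.95 vs 0.73): evaluative words and
# movement descriptions live in one space, so "good skating" needs no
# separate training.
```

The point of the sketch is only this: once evaluative vocabulary and descriptions of movements live in one representational space, “good skating moves” can be picked out by composition, without any separate training on that criterion.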

Evaluators need not trouble themselves about “the definition” of quality, value, goodness etc. Wittgenstein reminds us that there are myriad uses of these kinds of words which bear family resemblances to one another, without there being any shared and mystical core meaning. Evaluators regularly make more or less objective Reports of a whole variety of Variables which include value, quality etc. Contrary to several hundred years of tradition in western science, Theorymaker native speakers affirm that there is nothing essentially more difficult in reporting value, quality and so on than there is in appraising whether something is colourful, appropriate or sustainable. Just as we can directly perceive causation, and under certain circumstances reason from non-causal data to causal conclusions, we can not only directly perceive quality and value but also reason (under certain circumstances) from apparently value-free data to conclusions about value and worth.

Searle’s example: from observing, in a given context, someone saying “I owe you five pounds then” and taking the offered five pounds from a friend, we can conclude that this person is in fact in the other’s debt and has a duty and an obligation to return it, other things being equal. (Scriven says this is a rare kind of example (Scriven 2012); I think it is very common.)

(Are these deductions - involving the application of a definition Rule - or observations?)

Evaluators deal with a social world which is dripping with phenomena like “value”, “obligation” and “duty”, both where these are mentioned (though they can never be completely defined) within an evaluation Terms of Reference or agreed standards (for example humanitarian standards) and also just as part of the data which make up the social world.

The way in which rules or heuristics about quality, duty and value are valid other things being equal is reminiscent of the way in which heuristics from other realms are valid other things being equal. So if you want to move around as a human on our planet you will need rules like “thou shalt not kill” and “hungry people should be provided with food” and “symmetrical faces are more attractive”, just as you will need “glass objects shatter if you hit them really hard with a hard object”, even though all of these rules may be attenuated or even overridden in certain circumstances. All of these heuristics are examples of the kind of simple, relatively robust and autonomous Theories with which we understand our world. Remember, Judea Pearl doesn’t just assert this: he helps build AI systems which do this.
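
Purely as an illustration of that structure (the class, the “defeaters” and the example rules are inventions of this sketch, not Pearl’s machinery or anything from the text above), a heuristic that is valid other things being equal might look like this:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Heuristic:
    conclusion: str
    # Conditions under which the default conclusion is overridden.
    defeaters: list[Callable[[dict], bool]] = field(default_factory=list)

    def applies(self, situation: dict) -> bool:
        # The rule holds "other things being equal": unless a defeater fires.
        return not any(defeater(situation) for defeater in self.defeaters)

glass_shatters = Heuristic(
    "glass objects shatter if you hit them really hard",
    defeaters=[lambda s: s.get("toughened_glass", False)],
)
feed_the_hungry = Heuristic(
    "hungry people should be provided with food",
    defeaters=[lambda s: s.get("food_is_contaminated", False)],
)

print(glass_shatters.applies({}))                               # True: default holds
print(glass_shatters.applies({"toughened_glass": True}))        # False: overridden
print(feed_the_hungry.applies({"food_is_contaminated": True}))  # False: overridden
```

Note that the physical heuristic and the moral one have exactly the same defeasible shape, which is the point of the paragraph above.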

Human Rights and the “humanitarian imperative” are examples of these kinds of heuristics, in this case moral heuristics, which will be particularly familiar to evaluators.

Nothing special about value

Disputes about value and quality in evaluation are disputes about whether evaluation Statements correctly reflect what they are meant to reflect (e.g. are they reliable and valid) and as such are no different from disputes about other terms and concepts. Our duty as evaluators includes ensuring that our evaluation Statements (including those involving and implying value and quality) are correctly understood. This is not a special moral imperative; it arises from our duty to ensure that all our evaluation Statements are correctly understood, i.e. that they reliably and validly report the Variables they are meant to report.

Suppose we have to evaluate a project with the goal and title “peace in our time”, which consists entirely of massage sessions for disadvantaged children. The massage is great and the children appreciate it. The only indicator pre-specified in the logframe is satisfaction with the massages, and this indicator is at 100%. Can we simply report that the project was a success?

Of course not. This is simply because such an evaluation Statement would not validly report the actual outcome.

You can see this problem as being about whether the questionnaire is an appropriate “indicator”. But the problem goes beyond whether explicit indicators validly and reliably reflect the Variables they are attached to. It is our job as evaluators to make sure that evaluation Statements will be correctly understood, and this includes their implicit context as well as what they say explicitly. We would have to be careful before reporting that a project with such a title had been successfully completed merely because its explicit (and actually much more prosaic) outcomes have been achieved.

Nothing trivial about closed Value

… and still it isn’t a trivial job. Many key evaluation skills may be required. It may still require making fuzzy descriptions, combining hard-to-combine Variables and even interpreting Variable labels; it just doesn’t also require interpreting what is meant by “good”, “valuable” etc.
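
For instance, combining hard-to-combine Variables under a pre-defined Rule still takes care. A toy sketch, in which the criteria names, the weights and the “weakest link” cap are all invented for illustration and come from no real Terms of Reference:

```python
# Toy sketch of a pre-defined combining Rule from a hypothetical Terms of
# Reference. Criteria, weights and the "weakest link" cap are invented.

def combined_value(relevance: float, effectiveness: float,
                   sustainability: float) -> float:
    """All scores on a 0-1 scale. Weighted average, capped near the weakest
    criterion so one failing dimension cannot simply be averaged away."""
    weighted = 0.4 * relevance + 0.4 * effectiveness + 0.2 * sustainability
    weakest = min(relevance, effectiveness, sustainability)
    return min(weighted, weakest + 0.2)  # cap: at most 0.2 above the floor

print(round(combined_value(0.9, 0.8, 0.2), 2))  # 0.4: one weak link dominates
print(round(combined_value(0.8, 0.8, 0.7), 2))  # 0.78: the weights decide
```

Even with every value judgement “closed” in advance like this, deciding whether such a Rule is appropriate, and whether the sub-scores validly report their Variables, is exactly the kind of skilled work described above.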

References

Scriven, Michael. 2012. “The logic of valuing.” New Directions for Evaluation. doi:10.1002/ev.