Tuesday, July 19, 2011

The Pitfalls of Objectivity in Policy Design

We strive for objectivity in our work, as well we should. The less our observations and conclusions depend on opinions, biases, and assumptions, the more readily others can replicate and generalize our results, and the more useful and convincing those results will be.

A prevailing view of objectivity derives from a conception of correctly representing phenomena in the natural sciences. In this view, there is an objective natural reality of objects "out there," and we, if we wish to be good scientists, seek to create representations that are objective to the extent that they replicate the objects in that reality. We must observe the objects to do this, with our sensory perceptions and with apparatuses, but we are more scientific if we can take the "we" out of the observation. If one wants to characterize a rock, "heavy" is a very useful subjective description, but a poor objective one. "25lbs" is a better objective one, but still tied to the context of the planet Earth. "11.33980925 kilograms" is the best objective measure, but it loses the subjective component. "Heavy" means the same thing here and on the moon in an experiential sense; 11.33980925 kilograms does not.
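The arithmetic behind the post's two figures can be checked directly. A minimal sketch in Python, using the standard definition of the international pound (exactly 0.45359237 kg):

```python
# The international pound is defined as exactly 0.45359237 kg,
# so this conversion is exact by definition, not a measurement.
LB_TO_KG = 0.45359237

def pounds_to_kilograms(pounds: float) -> float:
    """Convert a weight in pounds to its equivalent mass in kilograms."""
    return pounds * LB_TO_KG

print(pounds_to_kilograms(25))  # ≈ 11.33980925, the figure used in the text
```

The precision of the output, of course, is exactly the kind of objective property the post is discussing: it says nothing about whether the rock feels heavy.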

The reason we don't like "heavy" as a measurement is that it is very closely tied to the observer. A weightlifter and a four-year-old will have very different ideas of what is heavy. The advantage of "heavy" as an observation is that if I do know the observer, I can abstract a lot of subjective information and infer a lot of objective information from it. If I know you described the rock as heavy, I know that when you describe another item as heavy, you mean something in approximately the same range of mass. I can also judge, when you have described a mass (that I have also made a subjective judgment about) as heavy, whether I think you are a weakling or not.

Let's call "heavy" an aspect of the rock, and "11.33980925 kilograms" a property of the rock. Both aspects and properties have their uses. If you tell me someone was hit by a car on the freeway, you've conveyed a lot of information. I don't need the mass or velocity of the vehicle, or a detailed representation of the human body and the effects of force exerted on it, to know this is "bad" news for said individual. My intuitive understanding of human physiology, and that cars are "heavy" and move "fast," especially on the freeway, is enough, and is in fact superior for rapidly and fully conveying what occurred to a formally observed and modeled explanation (attempts at repeated observations might run into some ethical issues).

That said, maybe there was construction on the freeway (or it just happened to be somewhere in the DC Metro area, or both), so the car was traveling at two miles per hour. Maybe the car was made of balsa wood and rice paper and piloted by a very small driver, and the person hit was a 400-pound goliath in a suit of armor. The formal representation route would have avoided this mistake by refusing to take any information from the aspects of the situation and using only its properties, but once again it would have been more costly than simply amending the initial statement with the above qualitative description and the ensuing subjective judgments. On the other hand, if we wish to talk about general matters of car velocities and the ensuing dangers (though "dangerous" is an aspect, not a property), we may wish to take more formal routes.

Why do we throw out so much perfectly good information when conducting scientific inquiry? In fact we don't. Interpretations of which situations are relevant to a theory or other human concern and should be investigated, and interpretations of the aspects of those situations and the data they produce, are necessary for science to be an intelligible endeavor. This is why science is conducted by scientists, not computers. While computers can represent and analyze all kinds of data, we need a human being to interpret the data for it to be intelligible and intelligent. Even AI just isn't that intelligent.

Indeed, all statistics must be interpreted. Pasteur is reported to have ignored 90% of his own data on contagion theory. We maintain the ideal of objectivity because it attractively avoids the slippery slope where interpretation, confirmation bias, and pure fabrication are distinguished only by the subjective intent of the individual researcher, and where the research result, difficult to replicate under any circumstances, is very rarely objectively verifiable. Trusting the researcher and his or her reviewing peers is essential to much of the scientific endeavor.

Just trusting someone, a subjective stance (and, as we know, an often troubling one), is intuitively anathema to how we conceptualize science. Isn't the whole point of science that we can replicate the intuitively implausible results of people we don't trust? Isn't that what objectivity really is?

Can anyone beat Galileo's work as the style of investigation all scientific work should seek to imitate? Here we have someone take a theory that is intuitively implausible based on our sense experiences and confirm it with empirical observations. The theory and its linked observations are so strong that they stand, despite all the organized political and spiritual powers that be, and despite the man himself having recanted, as True, replicable by anyone who wishes to verify them.

But there is a danger in idealizing this style of science, particularly when we get into the social sciences. The social sciences are often thought of as soft sciences, perhaps not really sciences at all. Their subjects of study are often so hard to pin down, and so likely to adapt to circumstances, that the development of formal representational models with any empirical validity, predictive value, and societal relevance is maddeningly elusive. This is the subject of much gnashing of teeth among social scientists. Who doesn't want to be a real scientist dealing with real, hard Galilean Truths?

The social scientists that have best positioned themselves as "real" scientists are economists. They have a lot of formal models based on assumptions that are more or less intuitively plausible and more or less based on empirical observation. They do a lot of math and come up with a lot of counterintuitive conclusions. What economists have are prices, employment figures, interest rates, and other such measurements, which means they can get farther away from the world of subjective interpretation. (What it means that 54% of the population voted for a candidate depends on why the population voted, who the candidate is, who the population thinks the candidate is, etc., etc.)

These numbers are more or less objective. One can argue about methodologies for measuring GDP, or about whether a black market exists, but at the end of the day, if an apple sold for $1, the apple sold for $1. The perception that economics deals with objective truths has given economists a great deal of power in the policy arena. Predicting the outcomes of various policies is undeniably complicated work, so methodologies such as cost-benefit analysis can help policymakers and the public get a rough idea of a policy's financial implications and thus whether it is "worth it."
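To make concrete the kind of rough arithmetic such an analysis rests on, here is a toy net-present-value calculation in Python. The benefit and cost streams and the discount rate are entirely hypothetical, chosen only to illustrate the mechanics:

```python
# Toy cost-benefit calculation: discount each year's net benefit
# back to present value and sum. All figures are hypothetical.
def net_present_value(benefits, costs, discount_rate):
    """Sum (benefit - cost) for each year t, discounted by (1 + r)^t."""
    return sum(
        (b - c) / (1 + discount_rate) ** t
        for t, (b, c) in enumerate(zip(benefits, costs))
    )

# An imagined policy: a large up-front cost, then growing annual benefits.
benefits = [0, 50, 60, 70]
costs = [100, 10, 10, 10]
npv = net_present_value(benefits, costs, discount_rate=0.03)
print(f"NPV: {npv:.2f}")  # a positive NPV means the policy "passes" the test
```

Note how much subjectivity already hides inside the inputs: the discount rate, and the decision about which effects count as benefits or costs at all, are exactly the contested judgments discussed below.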

The tricky thing is that the apple's valuation at time of sale, $1, is a subjective valuation. I may not actually want the apple by the time I had initially planned to purchase it, and thus I may have wasted my money. This isn't a terrible problem; one buyer does not a market make (monopsony excepted), provided that we trust that the market is efficient and most buyers are rational. Similarly, an unemployment rate doesn't just reflect the number of people out of work but looking for employment; it also reflects the unemployed's subjective perceptions of whether it is worthwhile to keep looking, and employers' subjective judgments about where the economy will go. A full picture of unemployment must reference who the unemployed are and why they're doing what they're doing, and who the employers are and why they're doing what they're doing. Suddenly we're stuck with subjective judgments in this most scientific of social sciences; no wonder macroeconomic arguments are often so heated. While we can formally represent the unemployment rate in a satisfactory manner, any formal representation of the employers or the unemployed is going to be strongly contested.

Fortunately as a nation, we do not need to come up with agreed upon formal technical arguments for all political issues. While the American People employ technical experts to advise elected and appointed political representatives, ultimately the sovereign power of this nation resides in The People.

Just because the technical experts are let off the hook from finding technical and objective solutions to all of America’s problems does not mean that everything is hunky dory. The power Congress delegates to Executive Branch agencies may enshrine, or foster over time, a class of experts whose values differ markedly from those of The People. (We will leave who The People are aside for the moment; needless to say, who they are and what they want is naturally a contested political issue.) Thus we have practices such as cost-benefit analysis that put a “weight” on an agency action that is readily understood by all.

How fully this weight describes the action can be complicated. Just as a rock can be described by its weight, it will also have chemical and physical properties, a texture, and a geographical and perhaps even a cultural history. Its weight, just like the measure of economic efficiency that is cost-benefit analysis, will capture some of these properties better than others. The numbers of a cost-benefit analysis are a worse measure than the weight of the rock, because we aren’t able to weigh the economic efficiency of the action directly; rather, we must derive it from technical assumptions that may be understood only by experts. Thus cost-benefit analysis may serve an anti-democratic and anti-transparent function by moving decision making into technical conflicts between experts inside and outside of government. The situation may be further worsened because decision makers and the public may have very little idea of what a cost-benefit analysis says and does not say. Just as the weight of a rock might not be its most interesting feature, so the economic efficiency of an agency action may not be what we care most about.

The solution is not to throw out cost-benefit analysis, or even to give up on “objectivity” as an important guidepost for decision making. The key is to make sure that no one thinks they know everything about a rock because of its weight. Introducing more criteria to the analysis, such as employment, distributional, indirect, and environmental justice impacts, is a move in this direction. Note that these more or less objective criteria do not bring us closer to an objective decision. A rule that would assign weights to them would be necessary to do that, and even if one were to come up with such an objective rule, it would be the equivalent of describing the rock by its weight again. These additional criteria allow and force the public and decision makers to think more critically about the issue at hand. The veneer of objective decision making is lost, but more critical, engaged, informed, and transparent decision making is fostered.
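The point that a weighting rule reintroduces subjectivity can be sketched concretely. In the toy Python example below, the criteria, the policy's per-criterion scores, and both sets of weights are entirely hypothetical; the same "objective" scores yield different totals under different value judgments:

```python
# Two analysts score the same policy on the same objective criteria,
# but collapse them with different weights. The weights, not the
# measurements, drive the disagreement. All numbers are hypothetical.
def weighted_score(scores, weights):
    """Collapse per-criterion scores into a single number via weights."""
    return sum(scores[k] * weights[k] for k in scores)

policy = {"efficiency": 0.8, "employment": 0.4, "environment": 0.3}

# Two equally defensible, equally subjective weighting rules:
analyst_a = {"efficiency": 0.7, "employment": 0.2, "environment": 0.1}
analyst_b = {"efficiency": 0.2, "employment": 0.3, "environment": 0.5}

print(weighted_score(policy, analyst_a))  # ≈ 0.67
print(weighted_score(policy, analyst_b))  # ≈ 0.43
```

Nothing in the scores changed between the two runs; only the weights did, which is exactly why such a rule cannot make the decision objective.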

In a future post (teaser!) I will discuss a different conception of objectivity and how analyses based on it incorporate many of the advantages (but also some of the pitfalls) of the intuitive understanding of the aspects of objects under investigation.