Wednesday, July 27, 2011

The Neuroscientific Case for Heidegger

I've been debating Prof. Massimo Pigliucci at Rationally Speaking over the usefulness and relevance of Heidegger's work. I am posting the excerpt below from Antonio Damasio's excellent Self Comes to Mind as an example of a situation and behavior that I believe is very difficult to explain without reference to Heidegger's refutation of the substance ontology.

Briefly stated, the substance ontology tells us objects exist because they have physical form and we observe that physical form. Intuitively, this is a pretty plausible explanation, which is why it has been more or less unquestioned since Plato. Up until Kant, this led to a problem: one could examine objects (the Empiricists) or one could examine the contents of the mind (the Rationalists), but the two could not be reconciled. Something existed because it was physically there, so it was silly to say it had any relation to the contents of the mind. And yet the mind felt like the most real thing to the Rationalists, and so we have the axioms of Descartes and Spinoza, which start with the mind and then try to construct a proof of the external world without any empirical recourse to it.

Building on Kant, Husserl created phenomenology, the idea that we can have access to the objects we interact with, as well as to ourselves, by reflecting on the situations in which we interact with those objects. But this was still something taking place in our own heads; we had free rein there under Husserl, but what relevance did it have to the cold material world outside?

Heidegger tells us that phenomenology has relevance outside our heads by taking on the substance ontology. He demonstrates that (as beings that have a stance on our own being) the physical properties of objects are not the primary stuff that makes up the world. (There is an important distinction between the World, which has people in it, and the Universe, the realm of science, where this does not apply.) We don't go out and find a hammer, nails, and wood, and discover they have the properties for building a place to live in a certain way. Rather, the place to live in a certain way, connected with our stance on being, preexists and coordinates the use of the hammer, nails, and wood. The hammer doesn't exist primarily as a piece of wood with a blob of metal on top; rather, it exists primarily as something to build with. And this something-to-build-with is based on our stance on our own being. Without a self, the coordination of these physical objects breaks down. I'm challenging Prof. Pigliucci to come up with a better philosophical explanation for the condition described below. In this case the self is lost, but the mind is perfectly capable of acting in an intentional manner towards objects, consistent with the substance ontology.

Removing the Self and Keeping the Mind

Perhaps the most convincing evidence for a dissociation between wakefulness and mind, on the one hand, and self, on the other, comes from another neurological condition, epileptic seizures. In such situations, a patient’s behavior is suddenly interrupted for a brief period of time, during which the action freezes altogether; it is then followed by a period, generally brief as well, during which the patient returns to active behavior but gives no evidence of a normal conscious state. The silent patient may move about, but his actions, such as waving goodbye or leaving a room, reveal no overall purpose. The actions may exhibit a “minipurpose,” like picking up a glass of water and drinking from it, but no sign that the purpose is part of a larger context. The patient makes no attempt to communicate with the observer and gives no reply to the observer’s attempts.

If you visit a physician’s office, your behavior is part of a large context that has to do with the specific goals of the visit, your overall plan for the day, and the wider plans and intentions of your life, at varied time scales, relative to which your visit may be of some significance or not. Everything you do in the “scene” at the office is informed by these multiple contexts, even if you do not need to hold them all in mind in order to behave coherently. The same happens with the physician, relative to his role in the scene. In a state of diminished consciousness, however, all that background influence is reduced to little or nothing. The behavior is controlled by immediate cues, devoid of any insertion in the wider context. For example, picking up a glass and drinking from it makes sense if you are thirsty, and that action does not need to connect with the broader context.

I remember the very first patient I observed with this condition because the behavior was so new to me, so unexpected, and so disquieting. In the middle of our conversation, the patient stopped talking and in fact suspended moving altogether. His face lost expression, and his open eyes looked past me, at the wall behind. He remained motionless for several seconds. He did not fall out of his chair, or fall asleep, or convulse, or twitch. When I spoke his name, there was no reply. When he began to move again, ever so little, he smacked his lips. His eyes shifted about and seemed to focus momentarily on a coffee cup on the table between us. It was empty, but still he picked it up and attempted to drink from it. I spoke to him again and again, but he did not reply. I asked him what was going on, and he did not reply. His face still had no expression, and he did not look at me. I called his name, and he did not reply. Finally he rose to his feet, turned around, and walked slowly to the door. I called him again. He stopped and looked at me, and a perplexed expression came to his face. I called him again, and he said, “What?”

The patient had suffered an absence seizure (a kind of epileptic seizure), followed by a period of automatism. He had been both there and not, awake and behaving, for sure, partly attentive, bodily present, but unaccounted for as a person. Many years later I described the patient as having been “absent without leave,” and that description remains apt.

Without question this man was awake in the full sense of the term. His eyes were open, and his proper muscular tone enabled him to move about. He could unquestionably produce actions, but the actions did not suggest an organized plan. He had no overall purpose and made no acknowledgement of the conditions of the situation, no appropriateness, and his acts were only minimally coherent. Without question his brain was forming mental images, although we cannot vouch for their abundance or coherence. In order to reach for a cup, pick it up, hold it to one’s lips, and put it back on the table, the brain must form images, quite a lot of them, at the very least visual, kinesthetic, and tactile; otherwise the person cannot execute the movements correctly. But while this speaks for the presence of mind, it gives no evidence of self. The man did not appear to be cognizant of who he was, where he was, who I was, or why he was in front of me.

In fact, not only was the evidence of such overt knowledge missing, but there was no indication of covert guidance of his behavior, the sort of nonconscious autopilot that allows us to walk home without consciously focusing on the route. Moreover, there was no sign of emotion in the man’s behavior, a telltale indication of seriously impaired consciousness (emphasis in original).

Such cases provide powerful evidence, perhaps the only definitive evidence yet, for a break between two functions that remain available, wakefulness and mind, and another function, self, which by any standard is not available. This man did not have a sense of his own existence and had a defective sense of his surroundings.

Tuesday, July 19, 2011

The Pitfalls of Objectivity in Policy Design

We strive for objectivity in our work, as well we should. The less our observations and conclusions depend on opinions, biases, or assumptions, the more replicable and generalizable our results will be for others, and thus the more useful and convincing.

A prevailing view of objectivity derives from a conception of correctly representing phenomena in the natural sciences. In this view, there is an objective natural reality of objects "out there," and we, if we wish to be good scientists, seek to create representations that are objective to the extent that they replicate the objects in that reality. We must observe the objects to do this, with our sensory perceptions and with apparatus, but we are more scientific if we can take the "we" out of the observation. If one wants to objectively characterize a rock, "heavy" is a very useful subjective description but a poor objective one. "25 lbs" is a better objective one, but still tied to the context of the planet Earth. "11.33980925 kilograms" is the best objective measure, even if it loses the subjective component. "Heavy" means the same thing here and on the moon in an experiential sense; 11.33980925 kilograms does not.
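The two figures above are the same quantity expressed in different units. As a quick sketch of where the precise-looking kilogram figure comes from (the function name is mine; the constant is the standard legal definition of the avoirdupois pound):

```python
# One avoirdupois pound is defined as exactly 0.45359237 kilograms.
LB_TO_KG = 0.45359237

def pounds_to_kilograms(pounds):
    """Convert a scale reading in pounds to a mass in kilograms."""
    return pounds * LB_TO_KG

print(round(pounds_to_kilograms(25), 8))  # 11.33980925
```

The extra decimal places buy no new information about the rock; they simply translate the Earth-bound "25 lbs" into a unit defined independently of any observer.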

The reason we don't like "heavy" as a measurement is that it is very closely tied to the observer. A weightlifter and a four-year-old will have very different observations of what is heavy. The advantage of "heavy" as an observation is that if I know the observer, I can abstract a lot of subjective information and infer a lot of objective information from it. If I know you described the rock as heavy, I know that when you describe another item as heavy, you mean something in approximately the same range of mass. I can also tell, when you have described a mass (that I have also made a subjective judgement about) as heavy, whether I think you are a weakling or not.

Let's call "heavy" an aspect of the rock, and "11.33980925 kilograms" a property of the rock. Both aspects and properties have their uses. If you tell me someone was hit by a car on the freeway, you've conveyed a lot of information. I don't need the mass or velocity of the vehicle, or a detailed representation of the human body and the effects of force exerted on it, to know this is "bad" news for said individual. My intuitive understanding of human physiology, and that cars are "heavy" and move "fast," especially on the freeway, is enough, and is in fact superior, for rapidly and fully conveying what occurred, to a formally observed and modeled explanation of what happened (attempts at repeated observations might run into some ethical issues).

That said, maybe there was construction on the freeway (or it just happened to be somewhere in the DC Metro area, or both), so the car was traveling at two mph. Maybe the car was made of balsa wood and rice paper and the person hit was a 400 lb goliath in a suit of armor. The formal representation route would have avoided this mistake, by refusing to take any information from the aspects of the situation and using only the properties, but it would once again have been more costly than simply amending the initial statement with the above qualitative description and the ensuing subjective judgements. On the other hand, if we wish to talk about general matters of car velocities and the ensuing dangers (though "dangerous" is an aspect, not a property), we may wish to take more formal routes.

Why do we throw out so much perfectly good information when conducting scientific inquiry? In fact we don't: interpretations of which situations are relevant to a theory or other human concern, and interpretations of the aspects of those situations and the data they produce, are necessary for science to be an intelligible endeavor. This is why science is conducted by scientists, not computers. While computers can represent and analyze all kinds of data, we need a human being to interpret the data for it to be intelligible and intelligent. Even AI just isn't that intelligent.

Indeed, all statistics must be interpreted. Pasteur is reported to have ignored 90% of his own data on contagion theory. We maintain the ideal of astronomy because it attractively avoids the slippery slope where the differences between interpretation, confirmation bias, and pure fabrication are distinguished only by the subjective intent of the individual researcher, and where the research result, difficult to replicate under any circumstances, is very rarely objectively verifiable. Trusting the researcher and his or her reviewing peers is essential for much of the scientific endeavor.

Just trusting someone, a subjective stance (and, as we know, an often troubling one), is intuitively anathema to how we conceptualize science. Isn't the whole point of science that we can replicate intuitively implausible results from people we don't trust? Isn't that what objectivity really is?

Can anyone beat Galileo's work as the style of investigation all scientific work should seek to imitate? Here we have someone take a theory that is intuitively implausible based on our sense experiences and confirm it with empirical observations. The theory and its linked observations are so strong that it stands as True, despite all the organized political and spiritual powers that be, and despite the fact that the man himself recanted it, replicable by anyone who wishes to verify it.

But there is a danger in idealizing this style of science, particularly when we get into the social sciences. The social sciences are often thought of as soft sciences, perhaps not really sciences at all. Their subjects of study are often so maddeningly hard to pin down, and so likely to adapt to circumstances, that the development of formal representational models with any empirical validity, predictive value, and societal relevance remains elusive. This is the subject of much gnashing of teeth among social scientists. Who doesn't want to be a real scientist dealing with real hard Galilean Truths?

The social scientists who have best positioned themselves as "real" scientists are economists. They have a lot of formal models based on assumptions that are more or less intuitively plausible and more or less based on empirical observation. They do a lot of math and come up with a lot of counterintuitive conclusions. What economists have are prices, employment figures, interest rates, and other such measurements, which means they can get farther away from the world of subjective interpretation (what it means that 54% of the population voted for a candidate depends on who the population is, who the candidate is, who the population thinks the candidate is, etc., etc.).

These numbers are more or less objective. One can argue about methodologies for measuring GDP, or whether a black market exists, but at the end of the day, if an apple sold for $1, the apple sold for $1. The perception that economics deals with objective truths has given economists a great deal of power in the policy arena. Predicting the outcomes of various policies is undeniably complicated work, so methodologies such as cost benefit analysis can help policymakers and the public get a rough idea of a policy's financial implications and thus whether it is "worth it."
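As a rough sketch of the kind of calculation involved (the policy, its numbers, and the discount rate are all hypothetical, not drawn from any actual analysis), a cost benefit analysis typically collapses a policy's projected yearly costs and benefits into a single figure, the net present value:

```python
def net_present_value(cash_flows, discount_rate):
    """cash_flows[t] = benefits minus costs in year t (year 0 = today).
    Future dollars are discounted because money later is worth less now."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# Hypothetical policy: $100 up-front cost, $40 in benefits each year for 3 years.
flows = [-100, 40, 40, 40]
print(round(net_present_value(flows, 0.05), 2))  # 8.93 -> "worth it" on this measure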

The tricky thing is that the apple's valuation at the time of sale, $1, is a subjective valuation. I may find I no longer want the apple after I purchase it, and thus I may have wasted my money. This isn't a terrible problem, one buyer does not a market make (monopsony being the exception), provided that we trust the market is efficient and most buyers are rational. Similarly, an unemployment rate doesn't just reflect the number of people out of work but looking for employment; it also reflects the unemployed's subjective perceptions of whether it is worthwhile to keep looking, and employers' subjective judgements about where the economy will go. A full picture of unemployment must reference who the unemployed are and why they're doing what they're doing, and who the employers are and why they're doing what they're doing. Suddenly we're stuck with subjective judgements in this most scientific of social sciences; no wonder macroeconomic arguments are often so heated. While we can formally represent the unemployment rate in a satisfactory manner, any formal representation of the employers or the unemployed is going to be strongly contested.

Fortunately, as a nation, we do not need to come up with agreed-upon formal technical arguments for all political issues. While the American People employ technical experts to advise elected and appointed political representatives, ultimately the sovereign power of this nation resides in The People.

Just because the technical experts are let off the hook from finding technical and objective solutions to all of America’s problems does not mean that everything is hunky dory. The power Congress delegates to Executive Branch agencies may enshrine or foster over time a class of experts whose values differ markedly from those of The People. (We will leave who The People are aside for the moment, but needless to say, who they are and what they want is naturally a contested political issue.) Thus we have practices such as cost benefit analysis that put a “weight” on an agency action that is readily understood by all.

How fully this weight describes the action can be complicated. Just as a rock can be described by its weight, it also has chemical and physical properties, a texture, and a geographical and perhaps even a cultural history. Its weight, just like the measure of economic efficiency that is cost benefit analysis, will capture some of these properties better than others. The numbers of a cost benefit analysis are a worse measure than the weight of the rock, because we aren’t able to weigh the economic efficiency of the action directly; rather, we must derive it from technical assumptions that may be understood only by experts. Thus cost benefit analysis may serve an anti-democratic and anti-transparent function by moving decision making into technical conflicts between experts inside and outside of government. The situation may be further worsened because decision makers and the public may have very little idea what a cost benefit analysis says and does not say. Just as the weight of a rock might not be its most interesting feature, so the economic efficiency of an agency action may not be what we care most about.

The solution is not to throw out cost benefit analysis, or even to give up on “objectivity” as an important guidepost for decisionmaking. The key is to make sure that no one thinks they know everything about a rock because of its weight. Introducing more criteria into the analysis, such as employment, distributional, indirect, and environmental justice impacts, is a move in this direction. Note that these more or less objective criteria do not bring us closer to an objective decision. A rule that assigned weights to them would be necessary to do that, and even if one were to come up with such an objective rule, it would be the equivalent of describing the rock by its weight again. These additional criteria allow, and force, the public and decision makers to think more critically about the issue at hand. The veneer of objective decision making is lost, but more critical, engaged, informed, and transparent decision making is fostered.
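The weighting problem can be made concrete with a small sketch (the criteria weights and policy scores here are entirely illustrative, invented for the example): the moment a rule assigns weights to the criteria, a weighted sum collapses them back into a single number, and we are describing the rock by its weight again.

```python
# Illustrative weights for the criteria; any such rule is itself a value judgement.
weights = {"economic efficiency": 0.5, "employment": 0.2,
           "distributional": 0.2, "environmental justice": 0.1}

def weighted_score(scores, weights):
    """Collapse per-criterion scores into one number (weights assumed to sum to 1)."""
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical agency action, scored 0-10 on each criterion.
policy_a = {"economic efficiency": 8, "employment": 3,
            "distributional": 5, "environmental justice": 2}
print(round(weighted_score(policy_a, weights), 2))  # 5.8
```

The single output hides exactly the trade-offs the extra criteria were meant to expose, which is the point: multiple criteria inform deliberation; an aggregation rule ends it.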

In a future post (teaser!) I will discuss a different conception of objectivity and how analyses based on it incorporate many of the advantages (but also some of the pitfalls) of the intuitive understanding of the aspects of objects under investigation.