Monday 26 April 2010

On Reasons and Animals: My Reply to Ben

Special thanks to Ben for his thoughtful feedback on my previous post (see comments). I concede that our ordinary linguistic practice often includes the metaphorical attribution of agency. But of course, that does not mean that such attributions are always metaphorical. For example, no one worth taking seriously (at least for the purposes of the present discussion) would say it is metaphorical in the case of fully competent human beings. So the question is, on which side of the divide do our agency-attributions to animals fall? Are they like our agency-attributions to inanimate objects, or are they more like our attributions to human beings?

Now there are clearly numerous respects in which animals are more like human beings than they are like inanimate objects. Hurley’s monkey, for example, has a heart, a brain, and haemoglobin-bearing blood cells, as do we. But these points of similarity are not salient to the question at hand. Thus, in this post, my task will not be merely to show that animals are more like humans than they are like inanimate objects. That’s a given. Rather, it will be to show that they are similar to humans in ways that are salient to the question of agency. Consequently, when assessing my claims, one cannot simply ask if they are true; one must ask if they’re relevantly true. This is the question I wish to take up.

One striking difference between animals and inanimate objects is that the former are ordinarily assumed to possess motivational states while the latter are not. Moreover, the claim that animals possess motivational states is meant to be taken literally. This is a point made quite emphatically by Mary Midgley:
There is nothing anthropomorphic in speaking of the motivation of animals. It is anthropomorphic to call the lion the King of Beasts, but not to talk of him as moved, now by fear, now by curiosity, now by territorial anger. These are not the names of hypothetical inner states, but of major patterns in anyone’s life, the signs of which are regular and visible. Anyone who has to deal with lions learns to read such signs, and survives by doing so. Both with animals and with men, we respond to the feelings and intentions we read in an action, not to the action itself. (Midgley (1978), Beast and Man, pp. 105–6.)
I believe that the difference between animals and inanimate objects highlighted by Midgley is salient to the question of agency. In the knife case, the focus is on the event (the cutting of the hand). But in the animal case, the focus is on the animal’s inner (read: psychological) states. If you see a lion walking towards you, it is important to be able to distinguish whether it is motivated by a desire to eat you or by a desire to reunite with its pride gathered under a tree fifty feet behind you. Our folk psychology is what provides us with the tools we need to interpret the lion’s actions. But notice, we are not simply paying attention to the bodily movements of the lion, but also to the inner psychological states (e.g., the lion’s intentions), which we hope to discern by observing those bodily movements. Our concern is primarily with the lion’s psychological states since they will determine what the lion will do next. Moreover, while we may attempt to put ourselves in the lion’s shoes in order to help us discern what its intentions are, we nevertheless assume that these psychological states are real and that they belong to the lion.

The preceding claims are supported by a key assumption of any folk-psychological attempt to make sense of animal behaviour—namely, that when we attribute motives and intentions to animals, we may get things wrong. For example, we assume that it is possible for us to mistakenly conclude that the lion intends to reunite with its pride when it actually has its sights set on us; and this assumption entails that the lion’s inner states exist quite independently of our beliefs about them; that they are not simply hypothetical states we attribute to the lion. This does not seem to be true of inanimate objects, which occupy a very different place in our folk psychology. We do not, for example, think we can be mistaken about the motivations of an inanimate object. There is simply nothing to be mistaken about. I believe this difference suggests that while the motivations we attribute to inanimate objects are metaphorical, the same is not true in the case of animals. When we attribute a motive to an animal we typically take ourselves to be attributing something objectively real, something we may possibly be mistaken about.

I believe the above observations may reveal a difficulty with Ben’s alternative proposal. He writes:
We attribute rational agency to non-rational animals (and even inanimate objects) because we think about the relevant behavior/events (in these, but not necessarily all cases) by adopting the animal's (or object's) perspective from within our own first-personal point of view.
The problem with this proposal is that it overlooks what we ordinarily take to be an important difference between animals and inanimate objects. In the case of inanimate objects, like a knife, there is simply no perspective to adopt. The knife does not enjoy any representational states; it does not see, taste, or experience the world in any way. It does not have desires, wants, or appetites. This is why talk of adopting a knife’s perspective can only be metaphorical. But animals, like Hurley’s monkey, are ordinarily assumed to have a perspective in the most literal sense imaginable. They enjoy representations of the world, are plausibly assumed to be motivated by desires and appetites, and so on.

I hold that having motivational psychological states and being moved to act by them is sufficient for agency. Since the term ‘agent’, at least as it features in philosophical discussions, is a term of art, this definition may be seen as stipulative. Given the account of agency I favour (and one is of course free to suggest a better philosophical account), it follows from the conclusion of the preceding paragraph that animals may be agents (in a non-metaphorical sense). But terminological issues aside, the claim I actually set out to defend in my previous post has little to do with whether or not animals are agents and more to do with the concept of a reason and with what it means to be responsive to reasons. My contention is that there is an ordinary usage of the word ‘reasons’ that is consistent with the claim that animals may be responsive to reasons, and it is this usage of the word that I wish to subject to philosophical analysis.

In attempting to identify the ordinary usage of the word ‘reason’ that I have in mind, it may be helpful to distinguish between the reasons-for-which an agent acts and the reasons-with-which an agent acts. The latter refers to considerations that an agent takes to be a justification for her carrying out a certain action. By contrast, let us say that the former refers to whatever motivates an agent to act. (Note: I may not be drawing this distinction in the same way others have.) When we specify the reasons-for-which an agent acts, our explanations typically take one of two forms: we may say that a lion is walking towards me in order to devour me (a mode of explanation that emphasises the lion’s goals; think pull rather than push), or we may say the lion is walking towards me because it wants to devour me (a mode of explanation that cites specific desires the lion possesses; think push rather than pull).

I believe that there is an ordinary and intuitive use of the word ‘reason’ according to which the following is true: to say that the lion approached the man because it desired to devour him is to give the lion’s reason for approaching the man. Likewise, to say that the lion approached the man in order to devour him is, again, to give the reason the lion approached the man. It is this usage of the word ‘reason’ that I am interested in preserving: reasons as reasons-for-which. I do not claim that it is the only usage of the word; I do not even claim that it is the only usage of the word that has a legitimate claim to being part of our ordinary linguistic practice. There may be multiple ordinary usages (i.e., meanings) of a single word. I only claim that it is a legitimate part of our ordinary linguistic practice (rather than my personal concoction), and it also happens to be a part of our ordinary linguistic practice that I wish my account of reasons and agency to preserve.

15 comments:

Eric Thomson said...

Good stuff. Operant conditioning presupposes the existence of primary reinforcers and punishers, which depend strongly and necessarily on motivational states. Anyone who would say motivation isn't real has never spent years training mice in a lab. Give a mouse water, and the next day they won't do squat in the behavior apparatus. Deprive them of water, and they will do exactly what you want for five microliters of water.

I have often wondered what would happen if philmind was more focused on desire/motivation as a core idea rather than belief.

Once you have the idea of motivation in place, of course you can ask how rational, or ideal, the behavior of the organism is wrt attaining the motivational objects.

I think what has made people stumble is that they get stuck on giving reasons why plants aren't agents. Venus fly trap seems motivated to get the fly, for instance. Do you have any thoughts on that?

AVERY ARCHER said...

Eric,
Thanks for stopping by. I think you're right about the tendency of philosophers to give beliefs priority over motivational states. But when examined from an evolutionary or phylogenetic perspective, I think the capacity for belief arose in the service of, and in order to better regulate, action (rather than the other way around). Given the evolutionary priority of the practical over the theoretical, perhaps we should take seriously the possibility that the practical should have conceptual priority as well.

Perhaps something else that has made people stumble is a failure to adequately appreciate that some motivational states (e.g., desires) have representational content. Just as a perceptual experience may represent a certain state of affairs as being the case, I hold that a desire represents a certain state of affairs as something to be brought about. On this view, a venus fly trap is not an agent because (inter alia) it does not enjoy any representational states.

Eric Thomson said...

Seems reasonable. Of course, in ordinary discourse you can say that the reason the plant shuts its trap is to get food. A child might even say that the plant wants food.

Hence, it isn't clear what you get out of leaning on the impoverished resources of ordinary discourse, or why you would even worry about whether ordinary discourse can allow what you are saying.

AVERY ARCHER said...

Eric,
You raise an interesting worry, one that suggests that there is a need for me to unpack the notion of ‘ordinary linguistic practice’ more than I’ve done so far. First off, by ordinary linguistic practice I do not mean all linguistic practice; it is meant to be a normative notion. This means that there are some linguistic practices, such as those of a child who has not gained full mastery of a particular language game, which may fall short of the normative notion. In short, the child who says a plant wants to eat is guilty of a category mistake: she is not using the word ‘want’ properly (note: and by ‘properly’ I don’t mean grammatically). Which leads me to my second point: I believe that what makes ordinary linguistic practice ‘ordinary’ (in the normative sense I have in mind) is that it encodes our folk psychology. This is why I believe we should try, as far as possible, to preserve it in our philosophical theorising. I do not see ordinary linguistic practice (or the various assumptions that constitute our folk psychology) as the last word. But I believe it is an important constraint on our philosophical theorising.

I can say a great deal about why I think our ordinary linguistic practice should be a constraint on our philosophical theorising, though that would probably take us too far afield. Suffice it to say that I believe a philosophical theory is most relevant when it is able to preserve as much of our ordinary linguistic practice as possible, given other constraints such as logical consistency and what can be adduced from empirical evidence. By way of illustration, consider the following example from epistemology. I believe that I live in the US Eastern Standard time zone. In fact, this is something I also take myself to know. Now suppose someone came along and challenged my claim to know that I live in the US Eastern time zone. But suppose I were to learn that the person subscribes to a technical notion of knowledge according to which an agent can only be said to know that P at time T if the agent also knows, at T, everything that logically follows from P. Let us assume (what, in my case, happens to be a fact) that I do not know everything that follows from the fact that I live in the US Eastern time zone, and that this is the basis for the person’s claim that I lack knowledge. It seems to me that the person in question ignores our ordinary linguistic practice, because we do not ordinarily require that an agent know, at some time T, all that follows from P in order to know that P at T. In so doing, the person’s challenge is no real challenge at all. Instead, it simply changes the topic of conversation from what we ordinarily have in mind when we talk about knowledge to something else. In short, his challenge suffers from a certain kind of irrelevance.

There is a danger of irrelevance whenever we ignore ordinary linguistic practice in our philosophical theorising. Moreover, I believe (though time won’t allow me to defend this claim here) that when a theorist like Davidson, for example, defines belief in radically holistic terms that preclude the attribution of beliefs to animals, he runs into this difficulty. Rather than showing that our ordinary belief attributions to animals are mistaken, I believe Davidson simply changes the topic of conversation to something else (i.e., his own theoretical invention). The question then becomes, why should we give up on our ordinary conception of belief and embrace Davidson’s, and this is a question that (to my knowledge) he never takes up. Rather than illuminating our ordinary concept of belief, Davidson simply invents his own concept, and this threatens his account (however elegant and internally consistent it may be) with a certain irrelevance.

AVERY ARCHER said...

As for a Venus fly-trap having a reason to shut its trap; again, I want to emphasise that there are multiple uses of the word ‘reason’, many of which are metaphorical. The point of my post was to emphasise that it is not metaphorical in the case of animals because our folk psychology gives them a very different status to plants and inanimate objects. We also talk about the reason the bridge collapsed, or the reason the wind died down, but it is also clear that the preceding use of the word differs from the usage applied to both animals and human beings – i.e., agents with representational states. To use a very different example: it is clear that the word ‘die’ is being used differently when we say “the wind died down” than when we say “my cat died”. Moreover, no one would suggest that animals can only die metaphorically (i.e., in the sense that a wind dies) because animal-death does not entail all the same things as the death of mature humans (e.g., the latter entails the cessation of meaningful life projects while the former does not). The many differences between humans, animals and plants (though real and otherwise important) are simply irrelevant to the question of whether or not they can all die (in the most literal sense). Again, not every difference is a salient difference.

I wish to make a similar claim with respect to humans and animals (but now with plants falling on the other side of the divide) with regards to reasons and agency. I wish to say that differences between humans and animals (while numerous and otherwise important) are not relevant to whether they can both have reasons or both be agents. This is because of the place that both humans and animals occupy in our folk psychology – which deems both as agents with representational states. Thus, by ordinary linguistic practice, I do not simply mean being able to stick a particular word in a sentence without violating any grammatical rules. I also mean the various assumptions of folk psychology that determine the particular sense the word takes; for it is our folk psychology that informs us that bridges are not the types of things that have motivational or representational states, and that lions are the types of things that do; and that the word ‘reason’ when applied to each is being used in very different ways.

Eric Thomson said...

Thanks for unpacking the position some more; it seems tenable. I really like the direction you are going, actually.

Luke said...

These two posts are quite engaging. I wonder to what extent you think our attribution of intentional states to animals depends on the principle of humanity (via Stich via Grandy via Davidson via Quine). That is to say, our ability to attribute intentional states--whether they be desires, beliefs, repulsions, or the like--depends on how similarly the potential mental states behave to our own. If this is the case, can we say that an animal's having a reason for something is an inherent difference between that animal and, say, a venus fly trap or does it just turn on how far our concept of "reason" stretches?

AVERY ARCHER said...

Luke,
It's not clear to me that the two are mutually exclusive. Why couldn't we say that there is an inherent difference between animals and venus fly-traps (and there certainly are many), and that it is the fact that the former behave like and resemble us in certain ways that (at least partly) explains our ability to recognise this inherent difference? Of course, behaving like or resembling us is not enough. One can imagine encountering an island with people who behave like and resemble us, but then later (perhaps after one of them has had an accident) we discover that their heads are empty except for a remote control device. Certainly, the fact that they behaved like or resembled us would not be enough to sustain our belief that they had desires, intentions, etc., or that they were agents. In short, being human-like is not sufficient for agency-attribution; it is important, from the perspective of our folk psychology, that purported agents actually have the relevant psychological states. If we were to discover that all cats were merely remote-controlled devices (void of any appetites, desires, or representational states), this would be grounds for concluding that they are not agents. We would then be forced to revise the place they now occupy in our folk psychology. But if the issue were simply one of resemblance, then such a discovery would make no difference. I think the point generalises to the question of whether or not a cat can have a reason (to, let’s say, quietly approach a bird).

Luke said...

I wonder if that response only pushes the issue back a step. While I agree that behavioral descriptions cannot capture all we want in a concept of agency, I'm not sure that appealing to "psychological states" does not use the principle of charity.

Suppose that on your island of quasi-humans we discovered not that the quasi-humans were remotely controlled but, rather, that they were paid actors given characters to portray. I think in this case we would also say that, upon this discovery, we take back all of our ascriptions of agency. Or, just as intuitively, we would say that the character being portrayed had agency, which would mean that in the remote-control case we would also say that the character being portrayed (through remote activation rather than acting) had agency. A better way to put this would be to say that the quasi-humans acted as if they had agency.

The difference here does not turn on whether or not the potential agents had psychological states, but, rather, whether what we characterized as psychological states worked similarly enough to our own for us to count them as agent-backed psychological states. I am being a little circumlocutory here because I'm not sure exactly what it is for something to be a psychological state (is it the same as being in a neurological state? can computers have psychological states?). My own tendency would be to say that whether or not we judge something to be a psychological state depends on whether or not it behaves similarly enough to our own psychological states. Thus, the question would just be pushed back a step.

I don't know what to conclude, but I think the upshot might be that our attributions of agency depend on characteristics of the attributor (and the concepts the attributor uses) and not just characteristics of the potential agent.

AVERY ARCHER said...

I am not sure I agree with your suggestion that we would respond to the discovery that the "quasi-humans" were just paid actors in the same way that we would if we were to discover that they were being remotely controlled. First off, while it is plausible to regard the remotely controlled entities in my original example as quasi-human, it is not clear that we would wish to call the paid actors quasi-human simply because they were acting. Presumably, thespians are humans (in good and regular standing), even when they’re performing.

Moreover, upon learning that the island inhabitants were just actors, I do not think we would take back our ascriptions of agency. Rather, we would simply revise what we take to be the content of their intentions. For example, suppose we saw one of the islanders walking towards the supermarket with a shopping cart. Instead of attributing to her the (genuine) intention to go to the supermarket, we would attribute to her the intention to act as if she were going to the supermarket. But insofar as we continue to attribute to her some intention, we continue to regard her as an agent. The same is not true of the remotely controlled entities in my original example. Any attribution of agency would be made to whoever was manning the controls, not to the remotely controlled entity itself.

Moreover, I believe your claim that we would say the quasi-humans were acting as if they had agency also gets things wrong. Suppose I (genuinely) intend to go to the supermarket. In such a case, we would wish to say that my agency expresses itself in my forming the intention to go to the supermarket. But suppose I were only acting as if I were going to the supermarket, without having genuinely formed such an intention. One would not conclude from this that I was not an agent or, a fortiori, that I was a non-agent acting as if it were an agent. Rather, one would continue to regard me as an agent, but now my agency is being expressed in an act of pretence (i.e., acting as if I were going to the supermarket) rather than in the forming and execution of a genuine intention to go to the supermarket.

But even if your assessment of the quasi-humans is right, I’m not sure it shows what you take it to show. You seem committed to the claim that “whether or not we judge something to be a psychological state depends on whether or not it behaves similarly enough to our own psychological states.” But this is perfectly consistent with the claim I defend in my post; namely, that the assumption that animals have genuine psychological states is part of our folk psychology. Let us assume, for the sake of argument, that the fact that an animal behaves similarly to us is sufficient for us to attribute psychological states to that animal. It does not follow from this that: (1) animals do not really have psychological states, nor (2) it is not an assumption of our folk psychology that animals have psychological states. Consider: there may be cases in which our attributions of psychological states to other humans are based partly on the fact that they behave like or resemble us in certain ways. Certainly, it would not follow that, in such cases, the human beings in question lack psychological states or that our attribution of psychological states to them is only metaphorical. In brief, your argument appears to prove too much; it applies with equal force to our agency attributions to other human beings.

Luke said...

I guess I've been a bit unclear. Let me say things a bit differently.

Concerning the differences between acted characters and remotely controlled humans, what I meant to say (though I sacrificed clarity for succinctness) was merely that in either case, one is dealing with a simulated agent, a character. In the remote-control case, the character is controlled by some other agent from a remote location. In the acting case, the character is controlled by the actor pretending to be that character. In either case, what we have is an agent simulating the actions of another agent. Though we would not attribute agency to the remote-controlled quasi-humans, we would attribute agency to the remote controller. In either case, we would have to transfer our attributions of agency from the character portrayed onto the person portraying that character (whether through acting or through remote control). While in the actor case we would not change our mind that the person is an agent, we would change our mind that the character is. Or at least that's how it seems to me.

What I think we see from this case is what you thought was too much to have proven. When we attribute agency--whether it be to humans, animals, machines, or anything else--how we do so depends on how we think agents act, which depends on how we understand our own actions. I think you agree with this in that you think our attributions of agency depend on our folk psychology. But, to get back to the original question, don't you think it depends on a specific aspect of our folk psychology, namely, the principle of humanity that we follow in making intentional characterizations?

I guess I want to back off about claims about psychological states for the sake of clarity. What I'm really wondering here is whether our attributions of specific psychological states depend on how similar those psychological states are to our own. I don't think this makes our attributions metaphorical or artificial. Rather, it says something about how it is possible to attribute psychological states at all.

Anonymous said...

Hi, Lawrence Speke Lauder here,

It seems that we do indeed ascribe states to animals and to other people because we hold them to be analogous to our own. And as you point out, this can lead to mistakes, like mistaking an East Indian's up-and-down shake of the head for assent (yes, analogous to our own), when in fact it means no in Indian culture. One would assume similar mistakes can be made with animals.

Yet what else have we got but analogy in such situations?

And motivational states? If such states are to explain people's actions, then since people are always doing something or other (even if it is just lying down and dozing, or belching at the table), people are in a perpetual motivational state. And what is the scope of motivational states? The universe has produced this body; the body wants to stay alive, so it needs to eat; so I cheat a customer because I need to impress the boss to keep my job to make money to eat. So am I motivated by the production and ongoing processes of the universe? Am I then the direct acting out of the desires and motivations of the universe which produced and sustains this body? How far do you want to go with this?

A man owns five cars. You ask him why. He says, "I love to look at them and have people admire them." But in fact he runs a chop shop and this is his cover; or he grew up homeless and living in cars, now associates cars with home, and so feels more comfortable with many cars, or some such.
A man searches through a dumpster. You presume he's looking for tossed-out packaged food (it happens). You approach him and offer him a dollar for food; he looks insulted and says, "My girlfriend and I just had an argument and she threw my favorite porn DVD in here. I'm trying to find it."

Inferring motivation from behavior is tricky, and even if the person agrees he was motivated by x, does he really know? What if later he decides x was not his motivation after all, but p instead?
If a person's training and education and experience are responsible for all his actions, then isn't his motivation the whole shebang?

A man says, "I am no longer motivated to do z since my gal departed." You take that to be his entire motivation? Like a rat no longer pushing a lever when the food has run out? Seems a silly reduction to me.

Rat behavior as a model for man's? Is the rat really analogous? And what of the motivations of the rat? Apparently the mindset is that rats are like men, except simpler; that is certainly the assumption. How would you prove what motivates a rat, how varied the motivations are, and how interconnected? How would you prove it is so simple? No, it's an assumption.

Anyway, define motivation as you wish, to support your point; that's what everybody else does. But please expand your notion of motivation. Or do you lack sufficient motivation?

AVERY ARCHER said...

Luke, Thanks for the clarification. I think I may have misunderstood the aim of your argument. In my reply to your very first comment, I suggested that your claim that our agency attributions to animals may be based on the fact that they behave like or resemble us in certain respects is consistent with my claim that we ordinarily assume that animals have psychological states. Your reply seemed to express some dissatisfaction with this suggestion (saying it only pushed the question one step back), and I took your argument to be an attempt to show why my suggestion was unsatisfactory. However, it now seems as if your point may have been slightly different; namely, to insist that a certain type of analogical reasoning guides our agency attributions in general, whether in the case of humans, animals, or machines. I do not wish to deny or affirm this point (though it does have some prima facie plausibility). Instead, I simply want to stress that it is perfectly consistent with the claim that agency-attribution to animals is a legitimate part of our folk psychology and ordinary linguistic practice. Since, by your lights, the same analogical reasoning is at play in both animal and human agency-attribution, it cannot be used as a basis for denying the legitimacy of such attributions to the former while affirming it in the case of the latter. This result is perfectly consistent with the goals of my post.

As an aside, I should register my reservations about a view, along the lines of Dennett’s “intentional stance”, which would see agency attributions as simply a matter of a certain attitude we adopt towards an object (in order to make the interpretation and prediction of the object’s behaviour easier). I believe that, according to our folk understanding, the possession of motivational psychological states with representational content is a necessary condition for the legitimate attribution of agency. The fact that such attribution is useful (as it would be in the case of a computer chess program, for example) is not enough. Insofar as a computer chess program lacks such motivational states, it does not qualify as an agent. The fact that such agency attribution would make predicting the chess program’s next move easier is, in a certain sense, really beside the point.

I think we may be able to get a better handle on this question by distinguishing between the enabling and constitutive conditions of our agency attributions. For example, it may be the case that the content of our folk-psychological assumptions is to be partly explained by considerations of the kind Dennett highlights, such as the fact that they are useful in our attempts to make sense of complex processes. It may also be true that we are motivated to assume that some X is an agent because of the analogical considerations you highlight. However, this does not mean that these considerations are a constitutive part of our folk psychology. For example, when we say that a cat intends to catch a bird, we do not mean that it is useful to see the cat as possessing certain motivational states in order to predict its behaviour. Nor do we mean that the cat resembles us in certain ways. These things may, of course, be true of our agency-attributions, but they are not what we are attempting to convey when we make such attributions. Rather, we mean that the cat possesses certain bird-directed motivational states. It is with the constitutive rather than the enabling conditions of our agency attributions that I have been concerned.

AVERY ARCHER said...

Lawrence,
You raise a wide range of issues relating to the question of agency attributions. I agree that these are all questions that a comprehensive account of human motivation would need to answer. Call my post what you will, but a comprehensive account of human motivation it is not. However, I hope to eventually examine most (if not all) of the issues you raise, and I am highly motivated to do so. Stay tuned!

Anonymous said...

Hi,
Quite by chance I just found your blog. I read your profile and there is something I don't quite understand. You write: << Your logic professor proved that You didn’t exist >>. Can you tell me how? What do you mean by not existing? I'm very curious and will be awaiting your answer.