Monday 27 December 2010

Becoming our own intelligent designers? A consideration of our possible cosmic development

As usual, it takes me a while to get back to topics I'd previously mentioned I might post on. Other commitments are always to blame. It was still quite a year for the Acheron team. We realized a dream of seeing Laurie Anderson performing live at the Sydney Opera House. We also took in China Mieville giving a live reading of selections from his latest book, Kraken. Fascinating as each was in its own right, for me it is always best when things can be brought together, rather than treated as discrete episodes to be judged on their own respective merits.

Mieville mentioned how he uses Samuel Delany in his creative writing class at Warwick University. Derridata in turn leaked this information to one of Delany's confidants, and "Chip" himself was eventually tipped off! Great as that was, I was particularly interested in how Mieville briefly situated himself in relation to Lovecraft. He mentioned spending more time of late in Providence to get a "feel" for the place, so no doubt we'll soon see what dividends that yields for his upcoming "weird fictions". More forgivingly than was generally Lovecraft's wont, Mieville also professed his atheism, while at the same time distancing himself from the "celebrity atheism" of Dawkins et al-- conceding that faith can play a positive role in some people's lives. In contrast, Lovecraft only begrudgingly allowed that Catholicism could inform aesthetics in a worthwhile sense.

Given Mieville's involvement in an upcoming academic conference called Spaces of Alterity, I'm wondering whether he or the other participants will be willing to build upon the aforesaid comments by considering how "the sacred" could inform conceptions of "counter-hegemonic space". I'm thinking here of those who have considered how our planet can be rethought by grounding "the possibility for a Global Ethic that will provide hospitality to all aliens, near and far". The relationship between "the sacred" and "astrobiology" appears set to become an ongoing concern. If you check out the website, you can see that Laurie Anderson was involved too. I mention this fact in part because it is suggestive of a certain consistency to the cultural tastes and interests of the Acheron team as well.

I would describe Anderson as a progressive artist, but whenever I find myself becoming more pessimistic about our prospects for venturing very far, I feel closer to Lovecraft. After all, his stories were predicated on panspermia bringing the human race into contact with higher civilizations that were indifferent to us. At no time did he suggest that we could ourselves learn to direct the process to progressive ends. In contrast, Meot-Ner and Matloff in effect follow Carl Sagan and Francis Crick by suggesting that panspermia could be used to create a "Noah's Ark" to save species threatened by changes to Earth's ecosystem, or even changes to the solar system, such as the death of the sun.

Once you start following these debates, you soon realize that any receptiveness to such notions is dependent on how you interpret "the sacred":


I find it ironic then (during my more optimistic, or rather, "utopian" mood swings) that even the proponents of the "selfish gene" theory will attempt to appear responsible by urging us to adhere to the precautionary principle: i.e. notwithstanding natural selection, we are not just driven by our genes, insofar as we are also cultural beings that must be held accountable for our choices and acts. But if the end result is merely a deferential attitude to the order of Nature, how desirable is it really? Surely the greater challenge is to think of how directed panspermia could forge cosmic development as a counter-hegemonic practice; a space irreducible to privatisation, commodification, homogenisation etc.? I anticipate that the "Spaces of Alterity" conference will reference Nick Dyer-Witheford's Games of Empire, which envisions gaming environments as an example of such spaces. That is fine, as far as it goes. But it stops well short of the ambitious economy of scale Kim Stanley Robinson has in mind when he presents terraforming as a utopian project. To my mind, this distinction makes Robinson the most important science fiction author working today.

For some, building on an impetus for terraforming/directed panspermia will mean hitching the Intelligent Design wagon to Fred Hoyle's The Intelligent Universe. I understand the basic reasoning, which would aim to show the continuity between us taking control of cosmic development and the will of God. In a comparable vein, the conjoining of Islam and science fiction is notable. Others, such as Robinson, are likely to be more muted so far as any specific privileging of religion per se is concerned.






Be this as it may, this group would probably, at least in principle, broadly assent to us collectively becoming our own "intelligent designers". This is light years away from the central message of Lovecraft (and arguably much of "weird fiction" as well) or what is for many the classic sci-fi movie of all time-- 2001: A Space Odyssey. Just watch the Flash animation in Part IV and you'll see what I mean.

I won't speculate any further then about what I'd like to see featuring in the "Spaces of Alterity" conference. So I'll just reiterate that I enjoyed China Mieville's talk. Kraken inspired me to create an image of Cthulhu laying waste to Sydney. I'm also posting a few pics of China giving his reading.


Monday 13 December 2010

Nonhumans

(NB: this essay is presently not referenced due to a lack of time)

Introduction: Why consider nonhumans?

The idea that humans are intrinsically and crucially different from all other beings is deeply entrenched in our beliefs and institutions, and is found in both our philosophical and religious roots. Along with it generally runs the assumption that we therefore have no reason to include nonhumans in moral considerations. It is only relatively recently, with the work of Peter Singer and others, that this assumption has been seriously questioned in the name of “animal rights”. Coincidentally, it is also relatively recently that this assumption has been challenged by the possible coming of an entirely new being — Artificial Intelligence (AI).

The thought of including animals in moral considerations strikes many of us either as leading down a dangerously slippery slope where we must equate human with animal life, or as an absurd product of sentimentality run amok. The thought of including AIs, which are commonly considered not even to constitute life itself, is downright ridiculous to most. Yet, I think this question is a serious one of substantial theoretical and practical importance.

Perhaps, then, we should begin by examining the commonly held belief that morality is something which is relevant to humanity alone. Is morality a purely human endeavor? In one sense no — moral systems of a kind exist in other animals, particularly the higher primates, and certain animals do exhibit behavior that is empathetic and loving. In fact, some, such as Darwin, claim that humans and other social animals possess an instinctual moral sense as a common and inevitable product of being sophisticated social organisms. This, of course, is not to say that other species share the exact same type of morality that we possess, since the moral instinct is relevant to the social functioning of the particular species. But neither does it mean, however, that moral instincts are necessarily operative only in interaction with one's own species. Relationships of cross-species reciprocal love, sympathy and sacrifice do commonly form, as many pet owners will attest, not just between humans and animals, but between animals of different kinds as well.

In order to explain or justify our exclusion of animals from moral considerations, it seems that we must clarify precisely how considerations for nonhumans are relevant to human morality. Perhaps we should begin by accepting that we currently do include some animals, but not others, in moral considerations. Some nonhumans, then, have already entered our moral circle. An enquiry into the question of why we include some and exclude others (fully or partially) from moral considerations would clarify our actions and beliefs and explore the question of potential injustices. Further, such an investigation is also necessary if we are to evaluate the question of incorporating new types of beings, such as AIs or Cyborgs, into our moral circle. We should be able to state when and why such new beings would become worthy of (or would be denied) moral participation, and with what limitations, if any.

Scope and direction of essay

I have chosen to investigate the issue of moral considerations for nonhumans through the presentation of both animals and AIs because considering them together makes it more difficult for us to fall into our traditional assumptions against their inclusion. In an important sense, animals and AIs exemplify the two sides of what it is we consider human and relevant to morality in ourselves — the rational and the sensual. In fact, they are often excluded from morality for precisely opposing and contradictory reasons. I believe that examining them together will allow us to unveil such contradictions and come closer to an acceptable answer to the question of whether or how nonhumans should be included in moral considerations.

I will begin with a presentation of the more traditional debate over the inclusion of animals in moral considerations. As this problem has traditionally been stated, I will first address the question of the “nature” of animals — that is, what characteristics/capacities they possess. Secondly, I will present various attempts at answering the question at hand through an appeal to the “nature” of animals, as well as brief criticisms of these approaches. Thirdly, I will offer a scheme, relating particular characteristics to their relevant types of moral consideration, for dividing kinds of beings in accordance with their capacity for moral participation.

I will then, following a similar course, examine the possible “nature(s)” of AIs and evaluate the traditional theories in the larger scenario of nonhumans which includes AIs. Lastly, I will present two attempts at addressing this problem through a Functionalist, rather than Essentialist, approach, and evaluate their effectiveness in addressing this issue.

The traditional debate of animals and moral consideration

"It is in accordance with this principle that specieism is to be condemned. If possessing a higher degree of intelligence does not entitle one human to use another for his own ends, how can it entitle humans to exploit nonhumans?" [7]

The nature of living things [8]

Aristotle presents us with the following characteristics which he uses to separate plants from animals and from man:

1 The capacity to gain nourishment
2 The capacity to reproduce
3 The capacity to be aware of the world through sensory apparatus
4 The capacity to desire, feel, remember and imagine
5 The capacity to think and calculate

Of these, he ascribes only the first two to plants and the first four to animals. Homo sapiens is considered an animal with a fifth crucial distinguishing capacity, the capacity to think and calculate, and is therefore classified as a rational animal, distinguished from other animals by his ability to reason.

Descartes offers us a very different analysis of the difference between animals and humans: animals are mere "automata" rather than conscious beings. Animals have neither mind nor soul (terms which Descartes uses interchangeably) and while they may act as if they desire and feel pain, these things can be explained without assuming they are actually conscious and suffer [9]. Thus, only humans are capable of feelings and desires.

Kant offers a third reason for the division of animals from humans. According to Kant, only humans possess the capacity for free will (autonomy). Kant claims that when a dog hears a noise outside, he does not ask himself whether he should or should not bark, he simply barks. Since animals do not have self-consciousness, Kant asserts, they lack the ability to choose between alternatives. Without an ego, they cannot conceive of how they ought to act; they are not free.

Conceptions of the kinds of beings we have moral obligations to

Traditional approaches have assumed that the question of whether or not we do have such obligations is intimately connected with our inquiry into the nature of the being in question. The two main questions that are asked are:

1 What characteristics must a being possess if we are to have any duties towards it?
2 Which lower animals, if any, have these capacities?

Three main capacities have been considered by different thinkers to be such that, if a being lacked them, it would follow that we would not have any duties to it. They are the same three that Aristotle, Descartes and Kant have used to distinguish animal from human: the capacities of reason (rationality), free will (autonomy) and of feeling and desiring.

Rationality

Aquinas asserts that rationality (or intellect) is the capacity which makes beings more or less perfect: the greater a being's rationality or intellect, the more perfect it is. By this criterion, he declares, humans are more perfect than animals, who lack this capacity, while angels are more rational and perfect than Man [sic], and God, being "pure intellect", is absolutely perfect. Aquinas sees it as God's intention that the less perfect beings can be rightfully subordinated to the more perfect beings. Thus animals are rightfully subordinated to Man, as Man is rightfully subordinated to God.

Aquinas concludes that we can, therefore, have duties only to those beings who have the capacity to reason: ourselves, other people, and God. While Aquinas does admit that animals are sentient and can be treated cruelly, which would be to cause them unnecessary pain, he concludes, however, that although such cruelty would be wrong, we do not have a duty to refrain from it, since we cannot have duties towards non-rational beings. The real fault of treating animals cruelly, according to Aquinas, is that doing so may create a habit of cruelty, which may then lead to greater cruelty to other humans [10].

Free will

Kant claims that only humans, unlike animals, exist as "ends in themselves" and therefore only human existence has intrinsic worth. He seems to base this on the fact that only humans have free will. It is this capacity for free will that makes one's existence intrinsically worthwhile, an "end in itself". Similar to Aquinas, Kant doesn't think that cruelty to animals is wrong in itself, but urges against it because of the supposed negative effects this kind of treatment has on the way humans treat one another.

The capacity for suffering

Others, such as Peter Singer, have disputed the assumption that the claim to equal consideration depends on such matters of fact as rationality or the capacity for free will, which are found unequally among persons. In his essay "All Animals Are Equal" (1974), Singer claims that equality is a moral ideal, not a simple assertion of fact, and that the principle of the equality of human beings is not a description of an alleged actual equality among humans, but rather a prescription of how we should treat one another. Bentham's utilitarian practice of counting each person equally as one unit of utility exemplifies this principle.

According to Singer, it is the capacity for suffering that is the vital characteristic that gives a being the right to equal consideration. He points out that if equality is to be seen as related to any actual characteristic of humans other than the capacity for suffering, none can be found which no human will lack. The only characteristic that all humans share is the ability to suffer. Singer concludes, therefore, that the principle of equality requires that suffering be counted equally with the like suffering, insofar as rough comparisons can be made, of any other being capable of suffering.

A brief evaluation of these conceptions with regard to animals

Beings which are rational

This definition was for a long time used for the purpose of excluding women from personhood and its associated powers. Women were held to be incapable of true rationality and thus were regarded as less than full persons. Is the way we view animals similarly suffering from limited vision? Is our speciesism standing in the way of our realization that some animals are rational beings? Perhaps, as some critics have claimed, animals may be capable of some type of rationality, and it is our overly limited conception of what rationality is that prevents us from seeing it in certain animals. Further, as Singer points out, while this criterion may eliminate at least most animals from consideration, it would also eliminate some humans as well. Babies and some of the mentally feeble who don't possess the characteristic of rational thought, for example, would also be excluded.

Beings which are autonomous

Similar problems exist for utilizing free will as the essential criterion for moral consideration. It is open both to the objection that if it does exclude all animals on this condition, it would also exclude some humans (e.g. babies), and to the objection that it is wrong or biased in its empirical claim that no animals have the capacity for free will. With regard to Kant's barking dog, for example, even if we were to assume that the dog barks reflexively, with no self-conscious thought, it may very well be that other sorts of actions, such as deciding when to play with a human or when to leave a human alone, could possibly be reflective. We, ourselves, react reflexively to many situations, and reflectively to others. Is it not possible that other animals may do the same to a certain degree?

A further problem with utilizing free will as the necessary criterion is that it is also very difficult to state how it is exactly that we, ourselves, have free will. Claims with regard to either humans or animals in the above example are either extremely difficult or impossible to confirm.

Beings with the capacity for suffering

Even if we were to accept Peter Singer's argument regarding the necessary exclusion of some humans on the basis of any capacity other than that of suffering as correct, the conclusion that any being which suffers is owed moral consideration is not one which we must necessarily draw. Perhaps, instead of expanding the criteria for inclusion to all beings capable of suffering, it should be restricted only to those agents which show certain capacities, such as some degree of rational thought and/or free will — excluding some humans in the process.

Also, while Singer in “All Animals are Equal” does stop short of claiming that non-humans deserve equal moral consideration, it is unclear on the basis of what justification this qualification is declared. Why would animals not deserve full and equal moral consideration if suffering is the only qualification for such equal treatment among humans? Perhaps the answer Singer would give us as a Utilitarian is that Ethics should consider all suffering beings a priori equal, but should then give consideration to the ways they would suffer as different types of beings. A claim may be made that human suffering, comprising physical suffering, the capacity for mental and emotional suffering, as well as the projection of suffering into the future and depression, simply adds up to a greater total amount of suffering per being as compared to animals. A utilitarian view of comparable suffering would, then, give humans greater consideration based on their supposed greater capacity for suffering.

While such a view does go a long way towards preventing us from sliding down the 'dangerous' slope of treating animals and humans as equals, I am still not satisfied that it is successful in addressing all of our concerns regarding cases of equal suffering. If I were to kill either an old man possessing neither friends nor relations, or a female dog who must care for her puppies, giving neither one any previous warning, would I be justified in killing the dog instead of the man? Further, I am suspicious (though I wouldn't go as far as outright rejecting) of the claim that human suffering is greater than animal suffering. How are we to measure the severity and personal consequence of animal suffering? Is this claim that our suffering is greater — that we ourselves must therefore count for 'more than one' — anything other than what Singer calls 'speciesist'?

Towards an Essentialist moral participation scheme

Observing approaches such as those of Kant and Aquinas, I find it difficult to understand what exactly the relevance of capacities such as rationality and autonomy is to the granting of moral protection. These approaches, by focusing on moral participation as a whole, fail to distinguish between the actual differing roles within it. I would like to suggest that the different offered criteria for moral consideration (rationality, autonomy and the capacity for suffering) can be regarded as actually relevant to two different and separate aspects of moral consideration, rather than to a single function (full moral participation), whereby the agent is both responsible for his actions and is protected from harm from other agents:

(1) Moral responsibility — When an agent (not 'merely' a being, but an agent — a particular type of being, actively acting within a moral context) acts, he is considered to be morally responsible for his actions (assuming he has not been constrained in some way which absolves him from that responsibility).

(2) Moral protection — A being has the right to not be harmed by others when they act upon him.

Agents whom we hold to be morally responsible necessarily possess both the rational capacity to make decisions and the capacity to make them freely. The capacity for suffering, on the other hand, is considered irrelevant in cases of moral responsibility, at least as long as it does not interfere with either of the other two conditions (for example, acting under the threat of pain affects one's ability to act freely, and acting while suffering can affect one's ability to think clearly and rationally). With regard to moral protection, that is, protection from being acted upon and harmed, it seems that only the capacity for suffering is relevant — unless we accept a conception of morality as necessarily an exchange between parties potentially capable of harming one another. After all, when we speak of moral protection, we speak of protecting someone from being harmed. In such considerations the being whose protection we are concerned with never acts. Therefore, capacities for action, such as reason and autonomy, are irrelevant to a being's claim for moral protection.

Adding AIs to the traditional debate

“...machines will exhibit the full range of human intellect, emotions and skills, ranging from musical and other creative aptitudes to physical movement. They will claim to have feelings and, unlike today's virtual personalities, will be very convincing when they tell us so.” [16]

The “nature” of Artificial Intelligence

What are the potential characteristics of Artificial Intelligence? Are AIs capable of “desires”, “interests”, “reason”, “free will”, or “suffering” in a sufficiently relevant sense? I've put these concepts in quotes because they have become concepts challenged to fit a new model of being. While “Artificial Intelligence” is a broad term which includes machines within a large range of capacities and expertise, what I am interested in here is the particular type of AI which makes the attempt to function as a being of roughly human equivalence. Such an entity is both the goal and direction of much of current AI research, and is a being which is potentially functionally capable of integration into human society. [18]

The following is a brief analysis of the possible characteristics of such AIs.

Can AIs be alive?

Perhaps the one undisputed requirement for moral participation is that it is only for the living - I have no moral obligation towards my CD player or television. Are AIs similarly “mere objects”, not at all alive and therefore completely outside of the moral circle? Some, such as Geoff Simons, claim otherwise:

...most people show a quick reaction to the idea of computer life: the notion is first rejected and then the reasons are sought: all known life is based on hydrocarbons; machines cannot reproduce; computers and robots can only derive their power from human beings; ‘mere’ machines cannot be conscious, creative, intelligent, aware; nor can they make judgments, take decisions or experience emotion; computers and robots may mimic certain human activities, but artifacts will never be truly intelligent and they will certainly never be alive [. . .] it can be said at once that this is largely a matter of human vanity. Status and self-image are at stake. Copernicus, Darwin and Freud met bitter opposition - at least in part - because they effectively dethroned mankind from (respectively) the center of the universe, a zoological pedestal, and conscious autonomy over all human motivation.

How is it, then, that we distinguish the living from the nonliving — after all, ‘living’ encompasses an immense range of beings? Simons reminds us that to be considered living, a robot would merely have to match the characteristics “of a first-rate lichen” — it does not need to even be conscious to be alive. He asserts that the necessary life criteria can be accommodated under four categories:
(1) Life substance is assembled according to a blueprint (e.g. DNA)
(2) Life ingests, distributes and stores energy
(3) Life can remember and handle information to perform tasks
(4) Life can reproduce itself

According to Simons, each and every one of these conditions is found in today's machines. Computers, in both hardware and software, are assembled according to a blueprint. They receive, distribute and store energy. They have memory and process information to perform their tasks. They can reproduce, both in the sense of duplicating software and, as is already done in some factories, by being built by other machines. He concludes, then, that there are three life forms on earth: plants, animals and machines.

Approaches towards creating Artificial Intelligence

Artificial Intelligence is commonly considered as an attempt to copy the conscious human mind, and indeed the traditional approach to creating Artificial Intelligence, commonly referred to as the “top-down” approach, has been one which has looked at the conscious mental processes of human beings doing particular tasks and attempted to duplicate them. Such an approach is largely rule-based in that the AI is programmed with a list of rules that it should follow. For example, an AI playing the role of a doctor would be programmed to inquire about a patient's daily activities if it determines the patient to have high blood pressure. Such an action would generally follow the rule: if blood pressure is high (defining "high" as 140/90 or some such value), then inquire about daily activities. Expert systems - intelligent systems limited to a narrow aspect of conscious human interaction, such as flight reservation, psychiatry, research assistance, etc. - are ideal for such developments of Artificial Intelligence, and such attempts have had considerable success. Some such systems also make use of “fuzzy logic” in order to add a greater human-like element to their interaction.
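To make the rule-based picture concrete, here is a minimal, purely illustrative sketch in Python. The function, the threshold values and the patient data are hypothetical rather than taken from any actual expert system; a real system would encode a large library of such rules (and, where fuzzy logic is used, graded membership functions rather than hard thresholds).

    def diagnose(patient):
        """Apply simple if-then rules of the kind a top-down, rule-based AI might use."""
        recommendations = []
        systolic, diastolic = patient["blood_pressure"]
        # Rule: if blood pressure is high (here, 140/90 or above), inquire about daily activities.
        if systolic >= 140 or diastolic >= 90:
            recommendations.append("Inquire about the patient's daily activities.")
        return recommendations

    print(diagnose({"blood_pressure": (150, 95)}))
    # -> ["Inquire about the patient's daily activities."]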

Expert systems which determine actions and responses from a list of choices in such a fashion are easy to grasp, and perhaps equally easy to dismiss in terms of being ‘actual’ persons. In fact, while such rule-based systems may be applicable towards the creation of expert systems, they seem to lack much of the depth that may be said to constitute real conscious beings, of which they are mere simulations. Some, such as Moravec (1990), suspect that the reason for this is that the most powerful aspects of intelligence are actually unconscious ones, which cannot be reproduced from the examination of conscious rationality.

In fact, while it seems relatively easy to make computers solve problems of intelligence or play games like chess with defined rules, it's extremely difficult to get them to do many of the things that any 1-year-old can easily and instinctively do. This point is illustrated when one reflects on just how difficult it is to tell a robot how to safely move around a commonly furnished room, how to pick up and hold the many types of things it may encounter, or how to distinguish perceptually different types of things. Such a disparity between the difficulties of re-creating particular processes of the conscious mind and the instinctual processes that lie below it highlights the importance of the consideration that, while we have a billion years of experience about the nature of the world and how to move and act within it, reasoning is a very thin new layer of thought, effective only because of our much older and much more powerful unconscious layer.

Other approaches, in the fields of Cybernetics and Robotics, rather than focusing on conscious human thought, have looked elsewhere. Cybernetics, taking on the materialist assumption that brain is mind, has attempted Neuron Modeling - building models of actual neural systems at the neural level in order to recreate mental faculties. Such an approach, however, is difficult to implement because of the huge number of cells involved and the difficulty in discovering what it is that they do.

A more interesting, and perhaps more promising, approach to making AIs the kinds of beings relevant to moral participation is what is referred to as the “bottom-up” approach. This approach, more common in the field of Robotics, focuses on the simulation of learning behavior, which is mostly unconscious, rather than conscious thought. It is learning behavior that is instrumental in dealing with the fundamental problems of mobility and perception. Some experts, such as Moravec, believe that the bottom-up approach is capable of eventually producing beings with the full capacity to reason, possess emotions and desires, and feel pleasure and pain.

Moravec projects that human-like AI will be common in just 40 years. How is the basic capacity of movement and perception supposed to result in such complex and apparently biologically-related capacities? After all, we feel pain and emotions and assume that to do so we need an actual biological body that feels. Moravec's hopes lie in the theory of convergent evolution, which holds that there are some things which are so useful to functioning in nature that they are recreated through evolution again and again. A common example of such an evolution, and a matter of controversy since the 19th century, has been the eye, a very complex organ which appears to have been recreated by evolution over 40 different times. The bottom-up approach, then, would attempt to set up experimental conditions to duplicate the course of evolution, adding new capabilities from increasingly complex organisms, a few at a time, allowing for the natural build-up of the foundation for intelligence, consciousness, self-consciousness, feelings, desires, joy and suffering, and morality (not necessarily in that order).

It should be made clear that the production of things such as emotion and consciousness is not the explicit goal of Robotics, but is rather regarded (at least by Moravec) as a natural (guided) evolutionary byproduct of sophisticated life forms. Such a view can be traced at least back to Darwin, who claims that any social beings possessing sufficient intelligence would develop a moral sense, the capacity for misery, language, and communal ethical discussion.

Moravec suggests that the bottom-up and top-down approaches will probably meet each other half-way. We will then have beings with instincts similar to those of our own unconscious, the appropriate “organ” systems to enable them to interact with other beings, the capacity for conscious and self-conscious thought, as well as feelings and desires, potentially enabling them to join human culture and enter into the moral circle.

Can AIs be rational?

While it is obvious that a simple calculator can duplicate some types of processes that we call “rational”, it seems inappropriate to label the calculator as itself rational. Indeed, when we speak of rationality, we refer to quite a bit more than this. James Fetzer presents us with a common, fuller description of rationality, dividing it into three distinct types:
(1) Rationality of ends — choosing specific goals, aims, or objectives as worthy of pursuit. A necessary condition of this type of rationality is that the objective must not be logically, physically or historically impossible. Searching for a number which is both odd and even, wanting to be in two places at the same time, or aiming to be the first human being to climb Mt. Everest, are examples of irrationality of this type.

(2) Rationality of action — choosing means that are appropriate in relation to one's ends, in the sense of being appropriate to one's ethics, abilities and opportunities. A vegetarian entering a hamburger eating contest in order to win $5 because he wants to fly from the United States to Europe on the Concorde would be acting irrationally in this way.

(3) Rationality of belief — accepting all and only those beliefs that are adequately supported by available evidence. It would be irrational, in this sense, for a person who has seen direct or indirect evidence that man landed on the moon to fail to believe that this event had actually occurred.

In accordance with these distinctions, Fetzer concludes that the capacity for rationality should not be accepted for AI. This is so, he claims, because all three of these types of rationality presuppose the existence of agents whose behavior is ‘at least partly determined by the causal interplay of motives and beliefs, where motives are desires within a system of beliefs that might possibly be true’, and Fetzer considers it unlikely that AIs will be able to satisfy these requirements.

While one may contend that such a definition of rationality is biased towards human agents and should not be identically applied as a criterion for rationality in machines, I believe that if our concern is with rationality in terms of its relevance to moral actions, then the above definition of rationality's interconnectedness with motives and beliefs is actually quite relevant. I do not think, however, that Fetzer's pessimism is necessarily justified. Firstly, Fetzer appears to consider AIs only through the top-down model, and in fact refers to them as ‘inanimate machines'. If the bottom-up approach in Robotics is successful, by the time AIs develop intelligence, they will also have had the foundation of “instincts”, “desires” and “beliefs” (with or without the quotation marks) and would therefore have become the types of agents to which such criteria of rationality could be applied.

Secondly, even if we remain with the top-down view of AI development, it remains unclear precisely why the motivations and desires of AIs would not qualify. AIs would have reasons for the short-term and long-term goals they would choose to pursue. In what qualitatively important way are we to distinguish these from what we normally consider desires and motivations relevant to moral participation? We cannot do so on the premise that motives or desires are the types of things that computers cannot determine on their own, while humans can. We do not select our fundamental desires, motivations and goals, and we are heavily influenced by environmental pressures to conform them as required, yet we claim to be able to determine some of our ends ourselves. If that is to be taken as true, then the very same argument can be made for AIs, which start off with a general set of programmed drives and motivations, have some of their motivations and ends influenced by environmental interaction, and determine some of their ends themselves.

Can AIs be autonomous?

Can we truly make the claim that we possess free will and AIs cannot? It's difficult enough to even establish that we ourselves have free will in the first place; how can we honestly make a qualitative distinction between our own processes and those of AIs? Are definitions of free will, such as Franklin's possession of 2nd-order volitions (identifying with our desires on a higher, secondary level — wanting to want what we want), anything other than mere attempts at describing our own processes, ultimately incapable of distinguishing us from at least some AIs?

Marvin Minsky, AI's “grandfather”, semi-seriously suggests that perhaps, contrary to the common conception, it is only robots that can be said to have free will, since only they can actually be made aware of why they make the decisions that they make, and therefore would have the ability to ponder and alter those decisions. Humans, on the other hand, simply act, not really knowing why or how they do so. Whether or not AIs are capable of free will, then, is not quite clear, since the notion of free will itself is fraught with so many ambiguities. As long, however, as the same can be said for human beings, this question may be irrelevant. Or rather, perhaps we should make the relevant consideration a relatively strong claim for autonomy, which, it seems, can certainly be made for some AIs.
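Minsky's point can be illustrated with a small, entirely hypothetical sketch: an agent that records which rule lay behind each choice can later inspect and revise that rule, which is roughly the sense in which a robot could be "made aware" of why it decides as it does. The class, rules and method names below are invented for illustration only.

    class IntrospectiveAgent:
        def __init__(self):
            # Each rule maps a perceived situation to an action.
            self.rules = {"noise_outside": "bark"}
            self.decision_log = []

        def decide(self, situation):
            action = self.rules.get(situation, "do_nothing")
            # Record *why* the action was taken, so the agent can ponder it later.
            self.decision_log.append((situation, action, "rule: " + situation + " -> " + action))
            return action

        def reconsider(self, situation, new_action):
            # Unlike Kant's dog, the agent can inspect its own rule and alter it.
            self.rules[situation] = new_action

    agent = IntrospectiveAgent()
    agent.decide("noise_outside")            # acts, and records which rule it followed
    print(agent.decision_log[-1])            # the agent's own account of its decision
    agent.reconsider("noise_outside", "stay_quiet")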

Can AIs possess the capacity for suffering?

It is in our interest to develop machines that weigh their options in accordance with their desires, and in fact this type of development is already under way. The purpose of creating “desires” and “fears” in AIs should be obvious — it is the same one that evolution has found so functional in our own ability to contend with our environment. The fact that desires and fears do play an important practical role in the survival of an organism does not mean, however, that they are necessarily a characteristic of all organisms of human-like complexity, though Darwin and Moravec consider this highly likely. Neither, and more importantly, does it mean that if “desires” and “feelings” are programmed into AIs, they will become of a type sufficiently relevant to our own.

Programming ‘pleasure’ and ‘pain’ in AIs

“an adequate account of pleasure and pain must explain the central part they play in motivating action. Some link pleasure and pain in some appropriate way to action, but do so merely on the basis of a verbal definition, which loses sight of their inherent nature as qualities of feeling. Others rightly take pleasure and pain as essentially qualities of feeling which we can only know from our own experience, but leave us with nothing but a purely contingent relation between these feelings and our observable behavior as creatures with wants.”
— Timothy Sprigge, The Rational Foundations of Ethics, pg. 139

Giving an adequate account of what pleasure and pain are, and critically attempting to apply such criteria to nonhumans, is, I believe, an extremely difficult and complex task outside the scope of this essay. I will, however, attempt a brief speculative analysis of the issue, which I hope will cast some light on the relation between our common conceptions of pleasure and pain and what AIs may be capable of. As Sprigge points out, pleasure and pain have two important aspects that must be, though often are not, reconciled: they must be linked to motivation/action, and they must also relate to our own experiences of them.

Hans Moravec claims that vertebrates owe much of their behavioral flexibility to the encouragement and discouragement of future repetitions of recent behavior — an effect evident in what we refer to as ‘pleasure’ or ‘pain’. He aims to instill similar capacities in robots by creating a “unified conditioning mechanism which increases the probability of decisions that had proven effective in the past under similar circumstances and decreased it for ones that had been followed by wasted activity or danger”. He then proceeds to call the success messages ‘pleasure’ and the danger messages ‘pain’ (Moravec does leave these terms in quotes). Pleasure would increase the probability that an action would continue and pain would decrease this probability or interrupt the action. [36]

After a robot has been active for a while, it will have accumulated warning messages in its memory and will begin to avoid the steps that led to problems and to repeat or refine pleasant solutions. Eventually long chains of associations would be created in the robot's mind as to the steps leading to desired and undesired ends. A danger at this point is that unless there is an adjusting feedback loop that allows for a moderation of these ‘feelings’, ‘pain’ could grow into an incapacitating phobia and ‘pleasure’ into an equally incapacitating addiction. [37]
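A speculative sketch of such a conditioning mechanism, written in Python, may help fix the idea; it is not Moravec's implementation, and the class name, learning rate and clamping bounds are all invented. 'Pleasure' messages raise the weight of a recently chosen action, 'pain' messages lower it, and the clamp stands in for the moderating feedback loop that prevents runaway 'phobias' and 'addictions'.

    import random

    class ConditionedRobot:
        def __init__(self, actions):
            # Equal initial preference for every available action.
            self.weights = {a: 1.0 for a in actions}

        def choose(self):
            total = sum(self.weights.values())
            probs = [w / total for w in self.weights.values()]
            return random.choices(list(self.weights), probs)[0]

        def condition(self, action, signal):
            # signal > 0 is a 'pleasure' (success) message; signal < 0 a 'pain' (danger) message.
            self.weights[action] *= (1.0 + 0.1 * signal)
            # Moderating feedback loop: bound the weights so no action becomes an
            # incapacitating "phobia" (never chosen) or "addiction" (always chosen).
            self.weights[action] = min(max(self.weights[action], 0.05), 20.0)

    robot = ConditionedRobot(["approach", "retreat", "wait"])
    act = robot.choose()
    robot.condition(act, signal=-1)   # the step led to danger, so discourage repeating it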

These comparisons between states of “phobias”, “addiction” and (from Michio Kaku) “madness” seem to speak of behavior which is at least somewhat similar to our own, but do they represent an artificial analogy or actually amount to the same thing in a sense relevant to our considerations of beings who suffer? The description of pleasure and pain (whether or not they are in quotes) offered to us by Moravec only seems to satisfy the first of the two conditions described by Sprigge as necessary for an adequate theory of pleasure and pain to account for. We are given here an account which neatly integrates pleasure and pain into action — they exist as reinforcement or discouragement of functional behavior. We are not, however, told how such an account would be relevant in any way to the way we, as human beings, experience pleasure and pain.

The following three claims (which are by no means exhaustive) may be made against Moravec’s evolutionary-oriented view:

(1) Some pleasures don’t seem to have a survival-oriented point.
When we lie down to relax in the sun, enjoying its warmth while a cool wind lightly brushes over us, we seem to be doing this for the sake of pleasure itself, rather than for any end goal towards which pleasure is merely acting as a positive reinforcer.

It is, however, possible to consider actions for pleasure as actions which once possessed survival-utility for either the organism or the species as a whole, but which have since lost their underlying purpose. Consider the fact that all mammals have a strong desire for salt and derive considerable pleasure from consuming it. We need salt in very minute quantities, but because salt is rarely found in nature and was once hard to come by, we have a very strong desire for it. These days, however, salt is plentiful and the pleasure we derive from it no longer plays the role it once did in increasing our survival — in fact, many suffer medically from this pleasure, which induces us to eat far more salt than we need, because it is not biologically aware (biological evolution is rather slow) that we are no longer faced with the problem of a shortage. Perhaps many of our actions which appear to be aimed at purely hedonistic, rather than survival-oriented, ends can be explained in a similar manner.

(2) We often do things for pleasure itself, not its practical significance.
Moravec's account seems to reverse our common intuition of what pleasure and pain are. When we look at pleasure introspectively, we quickly notice that we seem to do most things for the sake of pleasure itself. We generally, for example, have sex for the pleasurable sensations it gives us, rather than in an attempt to procreate. Indeed, as Sprigge asserts, while psychological hedonism, the theory that we only aim at obtaining pleasure and avoiding pain for the self, is certainly incorrect, aiming at obtaining pleasure or avoiding pain is a major motive in human conduct, and our conduct can at least be said to be ‘hedonistically guided’. But how can Moravec's account explain the fact that pleasure itself seems to be such a major motive in our actions?

One possible answer to this criticism may be that while the part of our being which is the conscious mind may be concerned with and aware of the pleasure involved in these actions, our unconscious being (which Moravec sees as the grand foundation of our instinctive being) is actually moving us in a more mechanical manner towards behaviors which are targeted at practical considerations.

(3) How can we account for the fact that pleasures and pains seem to involve radically different sensations?

If pleasure and pain are nothing more than methods of positive or negative reinforcement, which can be translated into a 0 and 1 calculus of feeling and desiring — either encouraging or discouraging behavior — why do they come in so many varieties, when it would seem that a single type (as in Moravec's model) would do just fine? How can such a model, then, account for the fact that we have a multitude of things that we call ‘pleasure’ and ‘pain’ which are quite distinct from one another?

It is perhaps not all that difficult for Moravec to deal with this criticism by claiming that the different pleasures and pains that we feel are different because they evolved independently in different types of systems, perhaps even at completely different points in evolution. Further, some of our actual feelings may be ‘basic’ positive or negative reinforcers and some may be a combination of several positive, negative or mixed reinforcers. While these may be physiologically different from one another, what would make them all ‘pleasures’ or ‘pains’ would be their common function as reinforcers for actions.

If we accept this answer, however, and accept that pleasures and pains ‘feel’ according to their structure and therefore do not ‘feel’ according to their functional aspect of reinforcing decisions, it becomes difficult to imagine how an AI would ‘feel’ in a way relevant to that of a human. It is one thing to say that AIs may share in a function similar to that performed by our pains and pleasures, but another altogether to say that such a being would actually be feeling. And certainly, if what we are concerned with is the question of moral participation, it seems that feeling in a particularly relevant way, rather than simply possessing a similar functional capacity, is what we should be concerned with.

On suffering without a biological body

Although the brain may be seen as a part of the body, it can be distinguished from it in an important way. While the brain functions as the central processor of sensory information, it does not create such information. The capacity to feel physical pain and pleasure, it may be argued, cannot be sustained unless both brain and body remain within the system. It may be assumed, then, that AIs or robots are incapable of suffering, since they do not have the type of body which sends the kind of sensory information that causes us to suffer.

Such a claim may be challenged on two grounds. The first is that it may be possible for AI or Robotics to precisely duplicate either the “send” or “receive” aspects of this system in order to allow the maintenance of human sensory feeling — particularly in cyborgs. Secondly, and perhaps more interestingly, the claim that a body is necessary for suffering to occur may itself be challenged. We must ask here whether by suffering we mean the literal suffering of the body, the suffering of the mind, or both, as necessary and sufficient criteria.

Suppose, for example, that I replaced my body with an artificial body which allowed me to feel the world around me, but not in a precisely human way — perhaps I could no longer feel physical pleasure or pain — though I would retain my identity and the capacity for mental anguish. To exclude me for failing to suffer physically seems wrong. At the same time, it seems rather limited to claim that only mental anguish, rather than physical anguish, is relevant to moral protection. It would appear, then, that either pain of the body or of the mind would be considered relevant.

If this is the case, however, and mental anguish alone can be considered a necessary and sufficient criterion for moral protection, do AIs need a body at all in order to qualify? If they do not, which seems to be true if we agree with the case above, what sort of mental suffering can AIs experience which would be relevant to what we consider suffering to be? Similarly, if we were to consider the case of an alien being which lands on the earth and complains of certain “mental or emotional suffering” (assuming we translate what it says into the words “mental” or “emotional”), we must ask ourselves how such a claim for suffering can be made relevant to our own.

This question of relevancy is crucially important in determining whether or not AIs can be considered capable of suffering. Attempting to answer it would be beyond the scope of this essay, and for the time being I will simply leave it open as a problematic.

Adding AIs to Essentialist theories

I began by presenting the debate regarding the moral consideration of nonhuman beings and then proceeded to offer a new type of being — Artificial Intelligence — for consideration. I'd now like to ask whether or how this further element adds to the discussion regarding the previously stated approaches.

Beings which are rational

As I stated earlier, I do believe that rationality (by all three stated definitions) can possibly be applied to AIs. Theories which base moral participation on rationality alone, then, would include some AIs (those capable of satisfying the criteria for a full sense of rationality) as moral participants. However, why should we concern ourselves with the good of an AI which can act rationally, but doesn't have the capacity to desire or feel or suffer? Such beings would qualify for moral protection if we were to consider rationality alone as the essential criterion of qualification.

The possibility of such beings raises serious doubts about whether rationality should be regarded in considerations of moral protection at all. As I stated earlier, it may take a certain amount of rationality for one to be engaged in moral considerations (and to be held morally responsible), but that is a separate issue from being included in moral considerations (being given moral protection). When being included, the being is acted upon, rather than acting itself. Since it does not act at all in regard to these situations, why should its ability to act in one way or another be considered at all relevant?

Beings which are autonomous

Some, such as Geoff Simons, claim that whatever definition we do apply to free will, it can be shown to apply to both humans and AI. In fact, attempts to use autonomy as a criterion quickly run into difficulties, since the claim for free will in humans is itself so difficult to establish. If we are to claim that humans have free will despite their DNA programming, pressures from their biological drives, unconscious motivations, social and ecological influences, etc., how are we to claim that AIs, which also have a kernel of programming and environmental interaction, are any less free (assuming that anyone at all is, indeed, free)? In fact, as I earlier pointed out, a case can even be made that AIs may actually possess more autonomy than humans.

Even if we assumed, however, that AIs did or did not possess free will, the claim for autonomy as a necessary and sufficient condition for moral protection would still remain as problematic as that made for rationality. The same argument can be applied to both criteria: if the question is merely one of moral protection for beings not initiating the actions in question, their capacities for action seem irrelevant.

Beings with the capacity for suffering

Perhaps the more AIs appear to embody the capacities for rationality and free will, the more we become aware of the possibility of beings who possess these capacities but do not suffer — or perhaps do not possess the appropriate type of suffering. In the case of animals, whose ability to suffer was relatively uncontested (with few exceptions, such as Descartes), it was suggested that other faculties, such as rationality and autonomy, were necessary; here we have the situation reversed.

Whether or not AIs will truly be capable of the sort of feeling and desiring that AI experts anticipate remains to be seen. Similarly, it remains to be seen whether these types of suffering will be sufficiently close to our own to be more than mere linguistic analogies. Unless they are able to possess such a capacity, however, it seems quite odd to consider them as beings deserving moral protection. After all, why would a being without feelings or desires “care” at all about the way decisions are made and their effects on it, when it lacks the ability to care at all? While rationality and autonomy are important for the consideration of moral agency, they, unlike feelings, desires and particularly the suffering which may result from their frustration, do not seem relevant for moral protection from the point of view of a being that may be made to suffer.

Applying the proposed essentialist scheme for moral participation

In accordance with the discussion of the effects of introducing AIs into the essentialist approaches I've presented, I think that a certain re-evaluation of the moral participation scheme I've offered is called for. Since we are now talking about nonhumans who may qualify as morally responsible agents, rather than merely as beings worthy of moral protection, I believe a further addition to the presented criteria for moral responsibility becomes evident.

Let us assume that we have created an “ethical” AI which is programmed not to cause wrongful harm. Even assuming that our “ethical” AI is rational and autonomous in an acceptable way, are its actions truly moral, or is it simply following certain “rules” in a non-moral sense which to us may be relevant in terms of pains and pleasures? A robot simply following a program which tells it to avoid clubbing me over the head does not act morally when it refrains from such action. Such an AI would be freely abstaining from hurting me, but actions such as this do not appear to enter into the sphere of “moral” action.

Such a view differs importantly from most traditional conceptions of morality, such as Kant's conception of morality as defined by duty. If we agree with Kant that moral actions are disinterested recognitions of duty, then it seems that AIs such as the one described above, acting with the understanding that they have a duty not to harm humans, would indeed be acting morally. As the arguments above illustrate, however, such a conclusion would include what evidently appear to be non-moral actions. Any robot simply following its program in dealing with others would be considered to be acting morally.

I would like to suggest that what's missing here is actually not a capacity of the individual which can be considered in isolation from other beings, as are the characteristics of rationality, autonomy and the capacity for suffering. Rather, it is the capacity for empathy towards the beings who may be affected by the action that is the missing necessary component which makes an action moral. Empathy, while a quality within the individual (as opposed to a functional action), only arises in relation to interaction (or simulated interaction - imagining the impact of my actions) with certain other beings. An action which is rational and free, but is empty of any empathy towards the beings one acts upon, simply seems to lack a moral element. To avoid killing because “it is wrong to kill”, without considering that the wrongness of that action has anything to do with the loss to the being acted upon, may be an act of moral consequence and worthy of moral judgment, but it seems, contra Kant, to fall short of being a moral action.

Such a definition of moral action does not neatly fall in line with the way we view agents as morally responsible, which is based on the view that moral irresponsibility relates to the breaking of certain types of rules which are considered moral. While a full examination of the nature of moral actions and judgments is outside the scope of this essay, I would like to maintain that when it comes to evaluating whether nonhumans are capable of moral action, such a view seems quite a bit more appropriate.

Criticism of the Essentialist Approach

While I believe the essentialist scheme developed above does help us by highlighting the relation between certain characteristics and relevant moral relations, it remains difficult to ascertain precisely which beings fit into which categories. This is due to the difficulty of verifying the essential characteristics involved. How can we really know whether a being has free will, feelings or desires? We cannot even provide a concrete justification for this claim with regard to other humans. How, then, can we claim to know these things about beings whose internal consciousness (or lack thereof) we have no direct knowledge of? The reason we believe other humans possess these capacities is that we believe we ourselves possess them, and, seeing other humans behaving as if they also possess them, we assume that they must also feel and act in at least roughly the same way we do. But even if that is granted as an acceptable assumption, how can we extend such assumptions to other types of beings and claim to know their conscious processes?

It is difficult even to imagine what the consciousness of a different type of being would be like; how then are we to determine whether any of their internal processes or impressions are relevant to morality? Even rationality, which initially appears to be such a clearly understood concept, is difficult to define or locate in nonhumans, whether animals or AIs. In terms of determining the capacity for suffering in AIs, which is perhaps the most important of the three examined criteria, it seems that we are quite far from making any sort of justified claim for either inclusion or exclusion. A critical investigation of these issues is clearly necessary in order to give us a greater understanding of what we actually mean by the qualifying standards that we set. While such an investigation is outside the scope of this essay, it does seem that examining the roles of such capacities in light of other beings offers us new critical perspectives on some very old questions. As it is, the ambiguities of the capacities involved present a great practical difficulty for the essentialist, which leads some to claim that this type of approach is itself inappropriate.

Functionalist approaches to moral participation

“The Turing Test, and the literature it stimulated, suggest that if, after sufficiently comprehensive tests, we cannot tell the difference between a man and a machine then there is no difference. If a mechanism behaves as if it is human in all the relevant circumstances then it is human. If this sounds too facile it should be remembered that many difficult tests would have to be satisfied before a machine could be recognized as alive in this ambitious sense.”

Some, such as Daniel Dennett, claim that rather than concern ourselves with questions such as whether certain computers really have desires and beliefs, we should instead adopt an instrumentalist strategy that assumes they do possess these characteristics if such assumptions are helpful in predicting their behavior. Accepting such claims would then be a pragmatic decision, rather than one which is either right or wrong. Dennett is surely correct when he declares that "it is much easier to decide whether a machine can be an intentional system than it is to decide whether a machine can really think, or be conscious or be morally responsible." The somewhat similar, but more radical, approach of realism rejects the notion that imperceptible properties, such as desires in computers, are any less real than perceptible properties, so long as the indirect evidence (perceiving their behavior) supports theories that posit their existence. A realist, then, would allow that if an AI is behaving (assuming sufficient observation of its behavior) as if it possesses certain capacities that are difficult or impossible to detect, then it does indeed possess these capacities. Both of these views have become common within the AI industry, and both are commonly referred to under the banner of Functionalism.

Behavioral Functionalism

All of the approaches presented thus far have focused on the quest for one or more "essential" characteristics that a being needs to possess in order to be considered eligible for moral consideration. Is this Essentialist approach, however, the best way of addressing the question? It is certainly not the approach prevalent in the AI industry, where a more popular approach is that of Behavioral Functionalism, which is well expressed in the famous "Turing Test".

The Turing Test, developed by Alan Turing in 1950, focuses on behavior as the important criterion for intelligence, since it is only behavior we can know about, not whether certain capacities, such as rationality or autonomy, are what drive it. In the Turing Test, a human interfaces through a typewriter or computer screen with Subject X, which may be either another human or a computer. Some functionalists make the instrumentalist claim that if the human cannot, after comprehensive testing, tell the difference between the computer and the human, then there is no important difference between them and the computer should be treated as if it were human. Others, such as Simon, make the realist claim that if the human cannot, after comprehensive testing, tell the difference between the computer and the human, then there is no difference between them.
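For readers who prefer a concrete picture, here is a minimal sketch of the test set-up just described (the respondent functions and the judge are hypothetical stand-ins; no real AI or real protocol is implied). The judge sees only text and must guess the nature of the hidden subject.

```python
# Hypothetical sketch of the imitation-game set-up: a judge converses over
# text with an unlabelled subject and must guess whether it is human or machine.
import random

def human_reply(question: str) -> str:
    return "Honestly, I'd have to think about that one."

def machine_reply(question: str) -> str:
    return "Honestly, I'd have to think about that one."

def imitation_game(questions, judge_guess) -> bool:
    """Run one trial; return True if the judge correctly identifies the subject."""
    identity, reply = random.choice([("human", human_reply),
                                     ("machine", machine_reply)])
    transcript = [(q, reply(q)) for q in questions]
    return judge_guess(transcript) == identity

# If, over many trials, the judge is right only about half the time, behavior
# alone has failed to distinguish machine from human - which is all the
# instrumentalist and realist positions above have to work with.
```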

To this date no computer has been able to pass this test for the full range of human behavioral interchange. However, "expert programs", programs designed for expertise in particular roles, have often been successfully mistaken for humans by other humans. The success of such systems makes it seem almost inevitable that sooner or later AIs will be able to pass the test in full. Most probably, they will later be able to pass similar everyday tests through common daily interactions with other humans, who will not be able to identify them as non-humans.

Criticism — the Chinese and Korean Rooms

As Searle's "Chinese Room" example demonstrates, however, imitation does not necessarily imply understanding. The Chinese Room scenario asks us to imagine someone locked in a room with one entry through which Chinese symbols are sent in and one exit through which Chinese symbols are sent out. If the occupant of the room, who is fluent in English but ignorant of Chinese, follows a book of instructions in English telling him which Chinese characters to send out according to which Chinese characters are sent in, then it may appear to those outside the room that the room's occupant understands Chinese. This, however, is obviously not the case: the room's occupant may be accepting input, processing information and outputting information, but he does not understand Chinese; he is just imitating. In fact, when we look at the history of the Turing Test, which is based on a parlor game in which a man and a woman compete to convince a third party that they are the "real" woman, it is obvious that mistaking the man for the real woman does not make him one. He, too, is just imitating. [48]
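The point can be put in a few lines of code: the instruction book reduces to a lookup table, and applying it flawlessly involves no grasp of what the symbols mean (the symbols and rules below are purely illustrative, not an attempt to model Searle's actual example).

```python
# Hypothetical sketch of the Chinese Room as pure symbol manipulation:
# the occupant matches incoming characters against an instruction book
# and copies out the prescribed reply, consulting meaning at no point.

RULE_BOOK = {
    "你好吗？": "我很好。",        # "How are you?" -> "I am fine."
    "你懂中文吗？": "当然懂。",    # "Do you understand Chinese?" -> "Of course."
}

def room_occupant(symbols_in: str) -> str:
    """Follow the instructions; return the prescribed output symbols."""
    return RULE_BOOK.get(symbols_in, "请再说一遍。")  # default: "Please say that again."

print(room_occupant("你懂中文吗？"))
# Prints 当然懂。 - a fluent-looking reply produced with zero understanding.
```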

Does the Chinese Room completely debunk the value of the Turing Test? Perhaps not. William J. Rapaport gives us the following example of "the Korean Room". A Korean professor is an authority on Shakespeare, even though he does not know the English language. He studied Shakespeare through excellent Korean translations and has mastered his plays. His articles on Shakespeare have met with great acclaim and he is an expert on the Bard [49]. The Korean professor, Rapaport suggests, is similar to the man in the Chinese Room. It would, however, be a mistake, he claims, to conclude that the Korean professor understands nothing, and the same should be said of the man in the Chinese Room: he, too, does not understand nothing. Just what is understood, however, is difficult to say. Similarly, it is difficult to conclude from this debate what sort of understanding would be considered relevant to allow for sufficient similarity between AI and human for the AI to be included in human affairs as an equal of some kind.

Criticism - must AIs imitate humans?

Is the ability to function within society sufficient for moral consideration? While the Turing Test presents us with a closed, strictly defined environment, functioning in society in general might be quite a different matter altogether. Must AIs behave indistinguishably from humans in order to qualify for moral consideration? The following difficulties come to mind:

1 If an AI is behaving as a human but does not actually need the same kinds of things as we do - perhaps it does not feel pain, but only behaves as if it does - does it really deserve consideration regarding actions that may cause it pain? After all, such actions will not matter to it in any way, other than that it will need to behave in a certain way in order to portray a human way of suffering.

2 If an AI is behaving as a human, but does not really possess the capacity for free will, does it make any sense to treat it as a moral agent?

3 AIs are different types of beings from humans. There is no reason why they would necessarily desire the same things as we do -- which is not to say that they would necessarily lack desires relevant to moral considerations, but that, in order to pursue those desires, activity which would be considered abnormal for humans may be more appropriate than human-like behaviour. If AIs possess an intelligence, and even a capacity for feelings and desires, linked to a different "kind" of Being, is it not possible that at least some of their desires will be determined by their own constitution and environment?

Criticism - on Dennett's assumptions

While Dennett is surely correct in his claim that it is easier to think of computers as possessing certain characteristics if they appear to do so, such assumptions may be insufficient, and perhaps dangerous, when it comes to things like moral considerations, which produce real-world consequences. While it may be pragmatic for us now to speak of computers possessing such and such characteristics, once the characteristics in question impact on actual decisions, simply assuming them seems quite risky. In addressing the question of moral consideration for nonhumans, it seems that we cannot simply answer in terms of practical beneficence, but must give real consideration to the truth of their purported characteristics.

Empathetical Functionalism

This type of functionalism looks at our own behaviour towards kinds of beings, rather than at the behaviour of those beings, in order to identify the kinds of beings we can be empathetical towards. At this time we do actually afford a few select animals at least a minimal amount of moral consideration. It is commonly considered wrong to butcher a dog or a cat, but not a cow or a sheep. Why do we give some animals moral consideration? We have established the kinds of animals we care about and therefore extend moral consideration to them. We have done this without explicit regard to whether or not the type of animal possesses the aforementioned essential characteristics.

Some, such as Regan and Singer, may contest as unfair the fact that we grant such animals a privileged status over others, whose suffering we do allow. However, it is important to note that we do offer moral protection to some animals and not to others, that we do have reasons for doing so, and that these relate to our ability to empathise with those kinds of animals. Is our practice of extending moral considerations to other humans any different? Was it really some sort of empirical discovery about their "nature" that led to the enfranchisement of minority groups who were formerly regarded as "lower animals"? Or was it rather a growing ability to empathise with their suffering and joys? I believe the latter is far more likely. If sentimentality and empathy are so important, we should ask how it is that we have developed such feelings towards only some, but not other, species. I believe that the key answer to this question is interaction. Dogs and cats, for example, are highly capable of human interaction; we are thus able to build relationships with them and to care about them. The same capacities for interaction may exist in other animals, and as these are discovered and those animals are given the opportunity, on a larger cultural scale, for human interaction, we may find it difficult to continue denying them any sort of moral consideration. The same may become true over time with regard to AIs.

Criticism of empathetical functionalism

While a functional approach that considers behaviour can be applied at the level of the individual, this type of approach is more general and applies only to an entire kind of being. It may also be more a description of the way we do allow nonhumans to participate in moral considerations than a justification of why this should be how we determine such participation. Further, this method allows only the fuzzy distinction of being worthy of moral consideration; it says nothing about what rights and protections a being, or a kind of being, should have in comparison to another.

Conclusion

I have outlined an Essentialist scheme which presents improved criteria for including nonhumans in moral considerations. The scheme focuses on rationality, autonomy and empathy as the characteristics necessary for moral action, and on the capacity for suffering, in a way sufficiently relevant to human suffering, as the necessary and sufficient criterion for moral consideration. I have left the question of just what constitutes "sufficient relevancy" as an open, and highly important, problematic.

The Essentialist approach alone, however, has proven quite difficult to apply. This is due to the great practical difficulty of determining which capacities are actually possessed by the beings in question. In presenting Behavioural Functionalism I aimed to demonstrate its practical advantage in dealing with this problem of identifying eligible beings. However, I have also attempted to show that, on its own, Behavioural Functionalism seems quite insufficient, in that it admits many beings which we do not believe deserve moral protection, and perhaps wrongly excludes others. It seems either that the Essentialist scheme is necessary as an added "filter" on top of the Behavioural Functionalist approach, or that Behavioural Functionalism may be better utilized as a supplement to the Essentialist scheme in situations where it is indeed extremely difficult to determine the actual characteristics of the beings involved.

Lastly, I introduced an approach I have dubbed Empathetical Functionalism, which I believe provides us with a better basis for understanding our current actual reasoning for including or excluding nonhumans, and which I believe provides a further (and perhaps more relevant) insight into determining what sorts of beings we should provide moral protection for. This approach can be integrated with both the Essentialist scheme and the Behavioural Functionalist approach, though an attempt at such integration is beyond the scope of this essay. The question of precisely which Beings (animals or AIs) should be considered moral agents, or worthy of moral protection, requires greater examination of the specific characteristics of the Beings involved, as well as addressing some of the problematics left unanswered in this essay. I offer this as a limited speculative assessment of the problem, one worthy of, and requiring, considerable further investigation.

Wednesday 1 December 2010

Culture of Fear



I don't think anyone will forget this tasteless World Wildlife Fund advertisement in a hurry. It didn't occur to me, though, until the other day, even after a long period of familiarity (after seeing it on The Gruen Transfer), that the shot where the additional planes come into view appears to be directly lifted from Hitchcock's The Birds (sequence beginning 1:28).

Ruminations on the horror of the uncanny valley



The concept of the uncanny valley has been floating around for some time now. I think it would be relatively easy to use in a further discussion of the "haunted media" I posted on a little while back. This got me thinking, though: derridata, do you know what the take-up of this concept has been like among Japanese academics, particularly in relation to anime culture, the regard in which Masahiro Mori is held, etc.? The same goes for ludologists, I expect.

Moving away from Japan for now, I really enjoyed reading Posthumanism in John Carpenter's The Thing, simply because it touches on two of this blog's all-time favourite films:

The 'uncanny valley', a hypothesis that suggests that the more human in appearance a robot may be, the more repulsive it will be received by a genuine human. Also applicable to dolls and computer generated characters, the uncanny valley suggests that we hold the body sacred, and become disturbed when something appears almost human... but not quite.

This is a far more complex identity anxiety to appreciate, in terms of visual or physical imagery, than the 'Other body' of the Thing. Giger's Xenomorph design, from Ridley Scott's Alien, is a humanoid, relatable evolution of the shark; engineered and phallic in design, externally based on both human genitalia and machine parts. The Xenomorph is ritually parasitic and sexless, both savage and motherly, vile and alluring. Strangely, the Thing lacks this fetishist attractiveness; when it does take on human parts, they are either a perfect mimic, or stretched and disfigured beyond association. But it does fascinate, if only through indifference, and for the film's stunning use of animatronic technology, itself a mechanical imitation of natural life. Though it is sexless (or at the very least, its gender is unidentifiable) the creature shares two common elements with man, a drive to consume and a desire to keep warm.

The case studies we have looked at, such as The Thing and Invasion of the Body Snatchers, are not set in futuristic dystopias or idealistic utopias, but grounded in our own present. The 'it could happen to you' impact of these films should not be overlooked. The Thing is a hideous half-resemblance of man, an amorphous, monstrous fake that not unlike the infection that it metaphorically represents, wants nothing more than to survive; to find food and shelter. In this respect, we are not so different. Science fiction has represented the posthuman in as many ways as it has the human, emphasizing that:

The Thing preys not only on the fear of contagion, but on the loss of individuality. Of all of the recent science fiction 'horrors' it reveals the human condition as much as it tells a good monster story. The films human characters are almost indistinguishable from one another. Cold and impersonal, they are a study of the human race as a whole than any one specimen. The protagonist MacReady's identity is defined not by similarity to his fellow men, but from his differences to the alien. In Carpenter's movie, the posthuman Other and the human form are indeterminable, and identity is indefinite.

As if that wasn't enough, it's incredible how a quote from Darwin resonates so well with Alien's bio-horror theme:

The expression of this [Trigonocephalus] snake’s face was hideous and fierce; the pupil consisted of a vertical slit in a mottled and coppery iris; the jaws were broad at the base, and the nose terminated in a triangular projection. I do not think I ever saw anything more ugly, excepting, perhaps, some of the vampire bats. I imagine this repulsive aspect originates from the features being placed in positions, with respect to each other, somewhat proportional to the human face; and thus we obtain a scale of hideousness.
—Charles Darwin, The Voyage of the Beagle[30]




Another point worth pondering: Aliens alumnus James Cameron believes cinema will soon move out of the uncanny valley, because the medium is not bound to "real-time rendering". I won't speculate here, though, on the chilling technological implications of this statement for "ethical governor" scenarios taken to their logical extreme, as alluded to in derridata's last post: