Comments on Acheron LV-426: Nonhumans (Anselmo Quemot, http://www.blogger.com/profile/09409052325497882321)

Anselmo Quemot (2011-02-07 14:48, https://www.blogger.com/profile/09409052325497882321):

Roger,

Kudos to you for the sophisticated probing. I hope the essay struck a balance between the exploratory and the critical, as I certainly don't pretend it is definitive in any sense.

I hope to return to the topic eventually, but it is currently sidelined by academia: if you don't produce material you can be credited for, you face redundancy! So for now I'll content myself with following your links, in the interest of one day producing a rejoinder.

Roger (2011-02-01 03:28, https://www.blogger.com/profile/17334393647871653553):

Really enjoyed this essay; some very interesting material.

This debate has a lot of potential quagmires.

Rationality and free will are nearly always aggrandised in their fully formed, anthropocentric philosophical terms. You correctly (I think) point out that these may only provide 'linguistic analogies' with any sensations our prospective AIs might possess.

For a non-human, these qualitative values have no stock. Why? In the case of agency we could hypothetically count the number of constituent parts: neurons, cells, etc. If we construct a couple of interconnected neuronal cells, we would not assume there is a presence of free will therein. However, when these connections reach a cognitive crescendo in the human mind, we say free will and rational agency are present. This logic implies that there is a certain amount or arrangement of neuronal connections at which free will becomes possible and inevitable. It becomes a question of straws and camels!
This is problematic because we know that free will and sentient agency are not numerical values; they are not graduated against boiling points, atomic weights, or any cosmological constant you can think up. They are wishy-washy, qualia-laden abstractions; there is no Rubicon that a child crosses to reach agency. I think it is more valuable to consider free will only as a continuum or spectrum.

So, can AIs be rational? Whenever they pass a Turing test, you'll have to ask them!

Can't AIs be autonomous? If they say so and you believe them... yes.

Can AIs possess the capacity for suffering? Once again, channel Turing!

Basically, I would rephrase the whole kit and caboodle into something like this:

I exist, therefore I know matter can be arranged into something that perceives its own agency (i.e. me).

As a volitional agent, I can manipulate matter.

Manipulation of matter can result in an autonomy that can perceive its own agency.

You also looked at the question of 'suffering without a biological body.'

This is very interesting, because recent studies have shown that the brain has no bias between information from the biological body and environmental sensory input.

You might be familiar with the extended mind thesis (Andy Clark et al.)? They argue that cognition extends into the environment.

For instance, an ape that uses a tool virtualises the tool in its brain as an extension of its own biology: motor neurons are actually assigned to its use in the same way as they are to the sensory cortex. Among humans this effect goes much further. Some of our cognitive processes may be impossible without calculation extending into culture. Personally, I would argue that the existential conception of the self ('I think therefore I am', etc.)
is an impossible conception without cultural software for our mental hardware to run.

I think the Chinese Room argument (CRA) is worthy of further perusal (I've written about it here: http://tinyurl.com/6zq89dx ).

I see it as a thinly veiled qualia argument, which I've written about here: http://tinyurl.com/6abrqlb

In the CRA, Searle is the processor of a computing system, or, as Turing would say, a 'paper machine.' I would ask: is there any definitional difference between the understanding and the processing of information? There is probably only a difference in the complexity and quantity of the information input.

Couldn't we tell Searle that he is in fact a self-contained English Room argument!?

To conclude, I like your concept of 'Empathetical Behaviourism'; however, I would worry about the motive of placing a moralistic threshold across what appears to be a continuum of agency.

Roger

rogeroshea.blogspot.com