Artificial Intelligence from a Psychological Perspective

I wrote this during the first semester of my senior year at Johns Hopkins University for Minds, Brains and Computers with Dr. Paul Smolensky.

Carlos Macasaet
13 December 2004
050.109 – Final Paper

The final chapter of Haugeland’s Artificial Intelligence: The Very Idea addresses the issue of artificial intelligence from a psychological perspective. That is, it attacks the same problem from the opposite direction as the preceding chapters: it starts with real people as its goal and works backwards, theorising about how one might conceptualise aspects of human mental functioning that are not immediately relevant to the problem of cognition. Many problems approached with artificial intelligence techniques, such as speech and handwriting recognition, can be solved without building concepts such as emotion or self-concept into the system. However, these concepts are important to intelligence, and they have so far been left out of artificial intelligence research, either because they are irrelevant to specific problems or because no way has been found to model them.

Haugeland addresses the issue of making sense, which is integral to interpretation and thus to computation. Symbolic systems make sense by telling the truth; non-symbolic systems make sense by behaving in ways that demonstrate intelligence. We determine the latter by evaluating whether the system acts rationally, that is, whether it makes pragmatic sense. We do this by ascribing mental states to the system using the ascription schema, which says that we ascribe beliefs, intentions and decisions in whatever way maximises the system’s competence.

The interesting thing about pragmatic sense is that, when put into the context of symbolic content, it can contribute to communication. For example, a speaker can imply something by playing off the listener’s expectations. Certain things can be left unsaid because the listener will infer them in the process of making sense of the speaker. Philosopher Paul Grice refines this notion with his maxims of conversational cooperation: in the course of conversation, it is reasonable to expect the parties involved to provide as much information as is needed, to be careful not to misrepresent anything, to stay on topic and to be perspicuous.

Haugeland then digresses to address what he calls metacognition. Most cognitive states involve things in the outside world. However, some cognitive states concern themselves with other cognitive states. This cognition about cognitions is metacognition. Cognition about other people’s cognitions is essential to anticipating their actions and reactions. Therefore, metacognition is required to act intelligently. Furthermore, in addition to thinking about other people’s thoughts, we reflect on our own. Many philosophers believe that this self-reflection is the essence of consciousness or at least self-consciousness, which is distinctively human.

Haugeland next discusses the concept of mental images, which, it has been demonstrated, humans rely on to perform certain tasks, but which is distinctly lacking in artificial intelligence systems. When asked to solve certain problems that require spatial reasoning, people will say that they pictured the problem in their minds. Other problems, however, do not require mental images. This suggests that there are at least two different kinds of mental representation: quasi-linguistic and quasi-pictorial. Quasi-linguistic representations are the hallmark of computer programmes, language and present artificial intelligence systems; they are interpreted on the basis of their simple constituents as well as their syntactic structure. Quasi-pictorial representations include pictures, scale models and possibly mental images; they resemble or mimic in some way that which they represent.

The question, then, is how these mental images are stored in the mind and how the mind manipulates them. We have already seen how symbolic structures can be represented and manipulated, but can a similar representation work for mental images? Analysing existing pictorial representations, we see that the parts of the representation must correspond to parts of the object being modelled and, more importantly, that the structure of relations among corresponding parts must be preserved between the model and the object.
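
A toy sketch may make the contrast concrete. The following is my own illustration in Python, not anything from Haugeland: a quasi-linguistic representation encodes the scene “the cup is on the table” as a structured symbolic expression, while a quasi-pictorial one encodes it as a grid whose spatial relations mirror the scene’s.

```python
# Illustrative sketch only; the encodings are hypothetical.

# Quasi-linguistic: interpreted via simple constituents plus syntactic structure.
quasi_linguistic = ("on", "cup", "table")  # predicate(argument, argument)

# Quasi-pictorial: parts of the representation correspond to parts of the
# scene, and spatial relations among the parts are preserved.
quasi_pictorial = [
    [".", "c", "."],  # the cup occupies the row above...
    ["t", "t", "t"],  # ...the table, just as in the scene itself
]

# The symbolic form supports structure-sensitive inference ("what is on the
# table?"); the pictorial form supports spatial operations such as scanning.
what_is_on_the_table = quasi_linguistic[1] if quasi_linguistic[0] == "on" else None
cup_row = next(r for r, row in enumerate(quasi_pictorial) if "c" in row)
print(what_is_on_the_table, "found in row", cup_row)
```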

In working with mental images, it is necessary to be able to perform the same operations one would perform on a real image: changing focus, changing orientation and scaling; adding, removing and moving specific pieces; recognising constituent pieces (and thus being able to count them); and superimposing images for the purpose of comparison. For the most part, all these operations are possible with mental images, while some are not possible with simple pictorial representations such as photographs. According to Haugeland, mental images are viewed in an imaginal field, which has roughly the same size and shape as the visual field and has limited acuity that is sharper at the centre. Mental images can be manipulated relative to this field, which remains fixed. Psychologists have supported this theory of mental reasoning by devising techniques to measure the time it takes to perform certain mental tasks. For example, they have shown that the time it takes for a subject to mentally rotate an image is directly proportional to the amount of rotation required. More interesting is the observation that subjects took longer to mentally rotate images of heavy objects, suggesting that size and shape are not the only aspects being represented.
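
The proportionality finding can be pictured with a small sketch. This is purely my own toy model in Python, not the experimenters’ method: if rotation is simulated in fixed small increments, the number of increments, standing in for reaction time, grows linearly with the angle.

```python
# Toy model of the mental-rotation result; not an actual psychological theory.
import math

def rotate_point(x, y, radians):
    """Rotate a point about the origin, like a rotation in the imaginal field."""
    return (x * math.cos(radians) - y * math.sin(radians),
            x * math.sin(radians) + y * math.cos(radians))

def mental_rotation(shape, degrees, step=5.0):
    """Rotate incrementally; the step count stands in for reaction time,
    which grows linearly with the angle, echoing the experimental finding."""
    steps = int(degrees / step)
    for _ in range(steps):
        shape = [rotate_point(x, y, math.radians(step)) for x, y in shape]
    return shape, steps

triangle = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
for angle in (30, 60, 120):
    _, steps = mental_rotation(triangle, angle)
    print(f"{angle} degrees -> {steps} steps")  # steps scale with angle
```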

In spite of all the research that has gone into studying mental images, nothing is known about how the brain accomplishes this, and there is no artificial system that models quasi-pictorial representation the way computer systems model quasi-linguistic representation. Furthermore, there is no compelling evidence that quasi-linguistic and quasi-pictorial representations should be completely distinguished, nor is there reason to believe that they are the only two manners of representation.

Here, Haugeland introduces the Segregation Strategy as a way of positing artificial intelligence while omitting certain aspects, such as mental images, that we do not know how to emulate. The segregation strategy simply assumes that cognition and phenomena such as mental imagery are realised in separate mental faculties and that they interact only through well-defined inputs and outputs. This allows us to say that intelligence is one thing and mental imagery is another. It pushes the problem aside based on the assumption that image manipulation is peripheral to intelligence. This would prove useless, however, if we learn that intelligence is more than just symbol manipulation as described in a quasi-linguistic representation.
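
In software terms, the segregation strategy resembles a module boundary. The sketch below is my own analogy with hypothetical names, not Haugeland’s formulation: cognition and imagery are separate components that interact only through a narrow, well-defined interface.

```python
# Illustrative analogy only; class and method names are hypothetical.

class ImageryFaculty:
    """A separate faculty, opaque to the cognitive system and reachable
    only through a well-defined input/output boundary."""
    def rotate(self, image_id: str, degrees: float) -> str:
        # Internal workings deliberately hidden from the caller.
        return f"{image_id} rotated {degrees} degrees"

class CognitiveSystem:
    """Pure symbol manipulation; imagery is delegated across the boundary."""
    def __init__(self, imagery: ImageryFaculty):
        self.imagery = imagery  # interaction only via defined methods

    def solve_spatial_problem(self, image_id: str) -> str:
        # Cognition sees only the output, never the imagery internals:
        # intelligence is one thing, mental imagery another.
        return self.imagery.rotate(image_id, 90.0)

print(CognitiveSystem(ImageryFaculty()).solve_spatial_problem("letter R"))
```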

Next Haugeland addresses feelings. So far, no artificial intelligence models have incorporated feelings. Haugeland asks whether it would be possible to incorporate them and whether it is necessary to do so. He starts by classifying different types of feelings. Sensations are the raw inputs from sensory organs; they are linked to specific physiological functions and are independent of cognition as well as of each other. Reactions, or passions, are automatic responses to input but are not associated with any physiological functions or modalities; they occur immediately, and while knowledge is involved, they occur independently of cognition. Emotions are like reactions, but they are more responsive to mental argument and evidence; they are shaped more by current beliefs than by current input, so they may form more slowly and last longer. Interpersonal feelings deal with familiar individuals and are indirectly responsive to reason and evidence. Sense of someone’s merit, on the other hand, is more specific and detached, and highly responsive to reason; sense of one’s own merit is similar but applies only to the self. Moods comprise the final category and pertain to nothing in particular; rather, they affect everything at once. They can be influenced by events or chemicals, but they are not automatic or reasoned responses, and they are never rational or justified.

While artificial intelligence systems have various inputs to which they respond, it is hard to argue that they actually feel anything. In the case of sensations, since they are not cognitive in nature, the segregation strategy would work. Passions are more difficult and require a subtler segregation. Haugeland proposes that passions are compound, with one component rooted in physiology, another in sensation and another in cognition. He proposes similar segregations for the other feelings, suggesting that varying proportions in the compound are what define each of the classes. Intellect could do without all of them except their cognitive components. Moods, however, are the most difficult, because they do not correspond to any inputs or cognitive elements, yet they affect the way humans think.
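
Haugeland’s compound proposal can be caricatured as a weighted mixture. The sketch below is my own illustration only; the component names and proportions are invented. Each class of feeling is a different blend of physiological, sensory and cognitive components, and the segregation strategy would retain only the cognitive part.

```python
# Purely illustrative; the proportions are invented, not Haugeland's.
from dataclasses import dataclass

@dataclass
class Feeling:
    physiological: float  # rooted in bodily function
    sensory: float        # rooted in raw input
    cognitive: float      # rooted in beliefs and reasoning

# Varying proportions of the same components define the classes:
sensation = Feeling(physiological=0.7, sensory=0.3, cognitive=0.0)
passion   = Feeling(physiological=0.3, sensory=0.4, cognitive=0.3)
emotion   = Feeling(physiological=0.1, sensory=0.2, cognitive=0.7)

def segregate(feeling: Feeling) -> Feeling:
    """Keep only the cognitive component, as the segregation strategy would."""
    return Feeling(physiological=0.0, sensory=0.0, cognitive=feeling.cognitive)

print(segregate(emotion))
```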

The next section deals with ego involvement. Haugeland asks whether a genuine artificial intelligence could ever truly care about anything. He argues that understanding and ego are not independent and that intellect and self-concept cannot be segregated. He gives stories as an example: people get involved in the stories they read or listen to, and they have their own reactions to them. He goes further, saying that fables have morals because we as readers and listeners can identify with the protagonist; skipping the allegory and going straight to the meaning defeats the purpose, because the moral is lost. Thus any artificial system that lacks ego will also lack the ability to understand in the sense that humans do.

This chapter addresses the higher-level issues of human cognition and relates them to the original hypothesis of artificial intelligence: that cognition is computation. On the subject of mental images, I am a bit sceptical. I do not deny that mental imagery is important to human cognition, but I do not believe that quasi-pictorial representations are necessary for artificial intelligence. I believe that the instinct for mental imagery is inherent in humans as language is, but like language, it may be something that makes humans unique. Humans have developed in a way that favours vision above the other senses. It is true that we use mental imagery to perform spatial reasoning tasks, but we also use it to solve problems that are not inherently graphical in nature. For example, in documenting the way processes work, people like to draw flow charts. In explaining set theory, people like to draw Venn diagrams. In mathematics, graph theory is not inherently visual, despite what its name implies, but in solving problems, humans tend to start with a picture and then generalise from it. Perhaps if humans had developed without vision, or with another sense dominating the others, our mental representations would be different.

With regard to feelings and ego, I do not believe they can be completely separated from cognition, because of the profound role they play. From a completely naturalistic perspective, they are not necessary, but as Haugeland points out, the system would not be able to care about anything without them. I think the more important question to ask is why, rather than how. If an artificial intelligence were devised that completely emulated the human mind but lacked the higher-level mental processes described in this chapter, then it would have no purpose other than that which was assigned to it. One might argue that humans serve no purpose, but I believe that what makes humans so special is their ability to find purpose in what they do. For us, it does not matter what the ultimate why question is; eventually, we can just answer, “why not?”

Assuming that creating an artificial intelligence can be done, that is, that the mind can be realised in a medium other than the brain, I think this would be an enormous breakthrough for psychological research. In the field of abnormal psychology, many pathologies have multiple causes, some of which may be organic. Emulating the human mind, even just to some degree, could help distinguish between what is caused by brain damage and what is caused by other factors.

Finally, this raises many questions of ethics. While emulating a human brain would allow psychologists to conduct much of the research on personality disorders and child psychology that is seriously lacking, we have to ask whether such artificial intelligences should be considered alive in the sense that humans are, and if so, whether they should be granted the same rights. This is where the movies get it wrong (two horrendous trilogies come to mind). Humans will never exploit artificial intelligences as tools. Most applications that could benefit from artificial intelligence research do not need to emulate the human mind perfectly. A DVD player does not need to “understand” what the viewer really means when it is asked to skip through the credits. But when psychologists are presented with possibly limitless “twins” on which to conduct research, would it be unethical to subject them to tests unsuitable for humans?

Bibliography

Haugeland, John. Artificial Intelligence: The Very Idea. Cambridge: The MIT Press, 1985.
