Folk psychology and Alexa

Paul Churchland made a large splash in the philosophy of mind and cognitive science several decades ago when he cast doubt on the categories of “folk psychology” — the ordinary and commonsensical concepts we use to describe and understand each other’s mental lives. In Paul Churchland and Patricia Churchland, On the Contrary: Critical Essays, 1987-1997, Paul Churchland writes:

“Folk psychology” denotes the prescientific, commonsense conceptual framework that all normally socialized humans deploy in order to comprehend, predict, explain, and manipulate the behavior of humans and the higher animals. This framework includes concepts such as belief, desire, pain, pleasure, love, hate, joy, fear, suspicion, memory, recognition, anger, sympathy, intention, and so forth…. Considered as a whole, it constitutes our conception of what a person is. (3)

Churchland does not doubt that we ordinary human beings make use of these concepts in everyday life, and that we could not dispense with them. But he is not convinced that they have a useful role to play in scientific psychology or cognitive science.

In our ordinary dealings with other human beings it is both important and plausible that the framework of folk psychology is approximately true. Our fellow human beings really do have beliefs, desires, fears, and other mental capacities, and these capacities are in fact the correct explanation of their behavior. How these capacities are realized in the central nervous system is largely unknown, though as materialists we are committed to the belief that there are such underlying neurological functionings. But eliminative materialism doesn’t have a lot of credibility, and the treatment of mental states as epiphenomena of the neurological machinery isn’t convincing either.

These issues have generated a great deal of discussion in the philosophy of psychology since the 1980s (link). But the topic seems all the more interesting now that tens of millions of people are interacting with Alexa, Siri, and the Google Assistant, and are often led to treat the voice as emanating from an intelligent (if not very intelligent) entity. I presume it is clear that Alexa and her counterparts are currently “question bots” with fairly simple algorithms underlying their capabilities. But how will we think about the AI agent when the algorithms are not simple; when the agents can sustain lengthy conversations; and when the interactions give the appearance of novelty and creativity?

It turns out that this is a topic that AI researchers have thought about quite a bit. Here is the abstract of “Understanding Socially Intelligent Agents—A Multilayered Phenomenon”, a fascinating 2001 IEEE article by Persson, Laaksolahti, and Lonnqvist (link):

The ultimate purpose with socially intelligent agent (SIA) technology is not to simulate social intelligence per se, but to let an agent give an impression of social intelligence. Such user-centred SIA technology must consider the everyday knowledge and expectations by which users make sense of real, fictive, or artificial social beings. This folk-theoretical understanding of other social beings involves several, rather independent levels such as expectations on behavior, expectations on primitive psychology, models of folk-psychology, understanding of traits, social roles, and empathy. The framework presented here allows one to analyze and reconstruct users’ understanding of existing and future SIAs, as well as specifying the levels SIA technology models in order to achieve an impression of social intelligence.

The emphasis here is clearly on the semblance of intelligence in interaction with the AI agent, not the construction of a genuinely intelligent system capable of intentionality and desire. Early in the article they write:

As agents get more complex, they will land in the twilight zone between mechanistic and living, between dead objects and live beings. In their understanding of the system, users will be tempted to employ an intentional stance, rather than a mechanistic one. Computer scientists may choose system designs that encourage or discourage such anthropomorphism. Irrespective of which, we need to understand how and under what conditions it works.

But the key point here is that the authors favor an approach in which the user is strongly led to apply the concepts of folk psychology to the AI agent; and yet in which the underlying mechanisms generating the AI’s behavior completely invalidate the application of these concepts. (This approach brings to mind Searle’s Chinese room example concerning “intelligent” behavior; link.) This is clearly the approach taken by current designs of AI agents like Siri; the design of the program emphasizes ordinary language interaction in ways that lead the user to interact with the agent as an intentional “person”.

The authors directly confront the likelihood of “folk-psychology” interactions elicited in users by the behavior of AI agents:

When people are trying to understand the behaviors of others, they often use the framework of folk-psychology. Moreover, people expect others to act according to it. If a person’s behavior blatantly falls out of this framework, the person would probably be judged “other” in some way, e.g., children, “crazies,” “psychopaths,” and “foreigners.” In order for SIAs to appear socially intelligent, it is important that their behavior is understandable in terms of the folk-psychological framework. People will project these expectations on SIA technology and will try to attribute mental states and processes according to it. (354)

And the authors make reference to several AI constructs that are specifically designed to elicit a folk-psychological response from the users:

In all of these cases, the autonomous agents have some model of the world, mind, emotions, and of their present internal state. This does not mean that users automatically infer the “correct” mental state of the agent or attribute the same emotion that the system wants to convey. However, with these background models regulating the agent’s behavior the system will support and encourage the user to employ her faculty of folk-psychology reasoning onto the agent. Hopefully, the models generate consistently enough behavior to make folk-psychology a framework within which to understand and act upon the interactive characters. (355)

The authors emphasize the instrumentalism of their recommended approach to SIA capacities from beginning to end:

In order to develop believable SIAs we do not have to know how beliefs-desires and intentions actually relate to each other in the real minds of real people. If we want to create the impression of an artificial social agent driven by beliefs and desires, it is enough to draw on investigations on how people in different cultures develop and use theories of mind to understand the behaviors of others. SIAs need to model the folk-theory reasoning, not the real thing. To a shallow AI approach, a model of mind based on folk-psychology is as valid as one based on cognitive theory. (349)

This way of approaching the design of AI agents suggests that the “folk psychology” interpretation of Alexa’s more capable successors will be fundamentally wrong. The agent will not be conscious, intentional, or mental; but it will behave in ways that make it almost impossible not to fall into the trap of anthropomorphism. And this in turn brings us back to Churchland and the critique of folk psychology in the human-human case. If AI agents can be completely persuasive as mentally structured actors without possessing mentality, then why are we so confident that the same is not true of our fellow humans?

The flea market analogy

Is the flea market a helpful analogy for understanding the social world (“The Dis-unity of Science”)? Does it serve to provide a different mental model in terms of which to consider the nature of social phenomena?

What it has going for it is heterogeneity and contingency, and an obvious share of agent-dependency. The people who show up on a given Saturday are a contingent and largely disorganized mix of humanity. The products that wind up on the jumble tables, too, are highly disorderly and random. Each has its own unique story of how it got there. There is no overall guiding design.

But there is also a degree of order underlying the apparent chaos of the jumble tables. All is not random in a flea market. The participants, for example: there are regular vendors, street people, police officers, health inspectors, jugglers, and pickpockets — as well as regular shoppers, tourists, school children, and occasional shoppers looking for a used toaster or a single kitchen chair. In most cases there are reasons they are there — and the reasons are socially interesting. Moreover, the ethnographer of the flea market is likely enough to spot some seasonal or social patterns in the products and people present in a certain month or time of year. So — a blend of chaos and order.

But the order that can be discerned is the result of a large number of overlapping, independent conditions and processes — not the manifestation of a few simple forces or a guiding system of laws.

Both accident and order are characteristic of the larger social world as well. The helter-skelter of the flea market is in fact highly analogous to many aspects of social phenomena — army recruitment, incidents of crime, mortgage defaults. But it is also true that there are other social phenomena that aren’t so accidental. So the jumble sale is perhaps less good as an analogy for highly organized and managed social processes — a tight administrative hierarchy, an orchestrated campaign event, or a coordinated attack in battle.

This addresses the “accidental conjunction” part of the analogy. What about the “composite order” part of the analogy? This element too works pretty well for many examples of social phenomena. When students of the professions discover that there are interesting patterns of recruitment into accountancy or the officer corps, or discover that there are similarities in the organizations of pharmacists and psychotherapists — they also recognize that these patterns result from complex, intertwined patterns of strategic positioning, organizational learning, and economic circumstances. In other words, the patterns and regularities are themselves the result of multiple social mechanisms, motives, and processes. And these processes are in no way analogous to laws of nature.

So, all considered, the analogy of the flea market works pretty well as a mental model for what we should expect of social phenomena: a degree of accident and conjunction, a degree of emerging pattern and order that results from many independent but converging social processes, and an inescapable dimension of agent-dependency that refutes any hope of discovering an underlying, law-governed system.
