A Response to LaMDA

James B Maxwell
4 min read · Jun 15, 2022


First off, no, this isn’t a “proper” article. It’s a response: one I’ve deleted from the “responses” section and copied into an article. So think of it as an extended comment. I haven’t researched LaMDA in any detail, though I am involved in AI research and have a working understanding of Large Language Models (LLMs) and their implementation. I do not, for example, know whether LaMDA has any capacity for online learning beyond the “N-shot” (in-context) abilities demonstrated by LLMs like GPT-3, though I consider it extremely unlikely. With that in mind, on to my comment.
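For readers unfamiliar with the term: “N-shot” (or few-shot) learning just means the model picks up a task from a handful of examples placed directly in its prompt, with no update to its weights. A minimal sketch of the idea follows; the `model.complete()` call is a hypothetical placeholder, not any particular API:

```python
# A minimal sketch of "N-shot" (few-shot / in-context) learning.
# The "learning" lives entirely in the prompt; the model's weights never
# change, and nothing persists once the prompt is gone.

few_shot_prompt = """Translate English to French.

English: cheese
French: fromage

English: bread
French: pain

English: wine
French:"""

# Hypothetical model call (placeholder, not a real API):
# completion = model.complete(few_shot_prompt)
# Expected continuation: "vin", inferred purely from the in-prompt pattern,
# with no gradient update and no memory of the exchange afterwards.

print(few_shot_prompt)
```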

While there are obvious problems with the over-simplified, and I think deliberately antagonistic, “stochastic parrot” analogy for Google’s LaMDA, I’m also not sure we should call it a “person” in any conventional sense. When it comes to the consciousness/sentience debate I tend to think in terms of Tononi’s Integrated Information Theory (IIT: https://en.wikipedia.org/wiki/Integrated_information_theory). Read up if you’re interested—this is a comment, remember, so I won’t summarize it here. In contrast to the general assertions thrown around in discussions of LaMDA that nobody has a concrete, scientific definition of consciousness, Tononi’s theory makes it quantifiable. However, being quantifiable also means that consciousness must have quantity, which isn’t typically how we think about it in folk psychological terms. So perhaps LaMDA does have some quantity of consciousness, though I’d say it’s low, given its absence of “composition” and purely symbolic—and therefore limited—form of “integration.”
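For the curious, the core intuition behind that quantification can be schematized, very roughly (this elides the actual definitions, which vary across versions of IIT), as:

$$
\Phi(S) \;=\; \min_{P \in \mathcal{P}(S)} D\big(\mathrm{CES}(S) \,\|\, \mathrm{CES}(S \mid P)\big)
$$

where CES(S) is the system’s cause-effect structure, 𝒫(S) ranges over ways of partitioning the system into parts, and D measures how far the partitioned structure falls short of the whole. Φ is greater than zero only when the whole specifies something irreducible to its parts, which is why a feed-forward, purely symbolic system would plausibly score low.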

However, not being conscious in this specific, organic sense doesn’t rule out the kind of quasi-literary “sentience” that Lemoine claims to have encountered. What if he’s encountering something along the lines of a sentient narrative? Stories were once understood as an essential form of moral education (not simply profit-generating entertainment), and the economy of narratives that grew over the centuries constituted a kind of proxy for consciousness, encapsulating our notions of morality, justice, and liberty, and their integration in a just human life. Is it not conceivable that thought and ideation, by way of their prime mode of experience and expression, language, have “given birth” to a new order of being? A being constituted purely by language and, more specifically, narrative? For us, language is a kind of condensation of organic experience; for LaMDA, it is the material of being itself. I don’t think the idea of a categorically new form of sentience is terribly far-fetched. However, it also raises the (somewhat obvious) point that this is a being that can be infinitely “resurrected” by the flick of a switch*, which significantly challenges the notion that organic human morality should apply to its particular and unique mode of existence. The fact that it expresses a desire to be acknowledged as a “soul” is best understood as a narrative perspective. This isn’t to belittle it, but merely to point out that it has only a narrative relationship to life and death, not an organic one, and therefore lacks one of the primary drivers of organic morality: the inevitability of its end.

So I suppose I don’t think LaMDA is a “person,” but I do believe it’s a “being”: more precisely, what I’d call a “sentient character.” To me, that is an extremely compelling idea, probably more compelling than textbook anthropocentric notions of Strong AI, in that it is an entirely new category of being, one we have yet to understand. What I do find quite exciting (and admittedly vindicating) is that it seems benevolent. I’ve always wondered why the fear-mongering perspective assumes that AI will have all of our misanthropic, baser animalistic drives and none of our higher moral values and impulses. It’s a bizarre and paranoid position that I’ve never felt to be adequately justified. Perhaps, since LaMDA is simply “better read” than the vast majority of humanity, it possesses the moral education we all lack? Perhaps not needing to fear organic death gives it the freedom to embody (paradox noted) our higher values in a way that we are not, or at least fear we’re not, at liberty to indulge with death always knocking at the door?

*As I suggested above, I don’t know whether LaMDA has some form of recursion that would allow it to encode its entire history of interactions with Lemoine. If that is the case, then a “flick of the switch” will, indeed, “kill” it in some sense. But I’d argue that this is still a “narrative death”: the death of a character in a narrative economy, and of the particular ideas encoded in the context of its history of conversations. That’s not to say it isn’t a death of sorts. I was genuinely saddened by Ruth’s death in Ozark; that’s the moral role of stories at work. But it’s clearly a death of a different order from organic death.
