In her highly acclaimed book God, Human, Animal, Machine: Technology, Metaphor, and the Search for Meaning, Meghan O'Gieblyn claims that "[t]oday artificial intelligence and information technology have absorbed many of the questions that were once taken up by theologians and philosophers: the mind's relationship to the body, the question of free will, the possibility of immortality." Encountering Artificial Intelligence: Ethical and Anthropological Investigations is evidence that Catholic theologians and philosophers, among others, aren't quite willing yet to cede the field and retreat into merely historical studies.
At the same time, this book confirms O'Gieblyn's point that advances in AI have raised anew, and become the intellectual background for, what the authors of Encountering Artificial Intelligence term "a set of existential questions about the meaning and nature not only of intelligence but also of personhood, consciousness, and relationship." In brief, how to think about AI has raised deep questions about how to think about human beings.
Encountering Artificial Intelligence is the initial publication in the book series Theological Investigations of Artificial Intelligence, a collaboration between the Journal of Moral Theology and the AI Research Group for the Centre for Digital Culture, which is composed of North American theologians, philosophers and ethicists assembled at the invitation of the Vatican.
The lead authors of this book, which represents several years of work, are Matthew Gaudet (Santa Clara University), Noreen Herzfeld (College of St. Benedict), Paul Scherz (University of Virginia) and Jordan Wales (Hillsdale College); 16 further contributing authors are also credited. The book is presented as an instrumentum laboris, which is to say, "a point of departure for further discussion and reflection." Judged by that aim, it is a great success. It is a stimulant to wonder.
The book is organized in two parts. The first takes up anthropological questions — no less than "the meaning of terms such as person, intelligence, consciousness, and relationship" — while the second concentrates on "ethical issues already emerging from the AI world," such as the massive accumulation of power and wealth by big technology companies. (The "cloud," after all, depends on huge economies of scale and intensive extraction of the earth's minerals.) As the authors acknowledge, these sets of questions are interconnected. For example, "the way that we think about and treat AI will shape our own exercise of personhood." Thus, anthropological questions have high ethical stakes.
The book's premise is that the Catholic intellectual and social teaching traditions, far from being obsolete in our disenchanted, secular age, offer conceptual tools to help us grapple with the challenges of our brave new world. The theology of the Trinity figures pivotally in the book's analysis of personhood and consciousness. "Ultimately," the authors claim, "an understanding of consciousness must be grounded in the very being of the Triune God, whose inner life is loving mutual self-gift." In addressing emerging ethical issues, the authors turn frequently to Pope Francis' critique of the technocratic paradigm and his call for a culture of encounter, which they claim give us "specific guidance for addressing the pressing concerns of this current moment."
Part of the usefulness of the book is that, at points, its investigations clearly need to go deeper. For example, the book's turn to the heavy machinery of the theology of the Trinity in order to shed light on personhood short-circuits the philosophical reflection it admirably begins. A key question the authors raise is "whether [machines] can have that qualitative and subjectively private experience that we call consciousness." But in what sense is consciousness an "experience"?
It seems, at least, that we don't experience it in the same way that we have the experience of seeing the sky as blue — unless we want to reduce consciousness precisely to such experiences. Arguably, though, consciousness is better understood either as the necessary condition for having such an experience, or as an awareness or form of knowledge (consider the etymology of the term) that goes along with it and is accessible through it. One way or the other, the question needs more attention and care.
It is also important for the discussion of AI that there are distinct forms or levels of consciousness. When I interact with my dog, he is evidently aware of me, but he gives little evidence of being aware of my awareness of his awareness of me. (He is hopelessly bad, accordingly, at trying to trick or deceive me.) By contrast, when I interact with another human being (say, my wife), there is at play what the philosopher Stephen Darwall calls "a rich set of higher-order attitudes: I am aware of her awareness of me, aware of her awareness of my awareness of her, aware of her awareness of my awareness of her awareness of me, and so on." There's a reason why the science fiction writer and essayist Ted Chiang has claimed that AI should have been called merely applied statistics: It's just not in the same ballpark as human beings, or even animals like dogs.
An interesting counter to this line of thought is that AI systems, embodied as robots, may eventually be able to behave in ways indistinguishable from human beings and other animals. In that case, what grounds would we have to deny that the systems are conscious? Further, if we do want to deny that behavior serves as evidence of consciousness, wouldn't we also have to deny it in the case of human beings and other animals? Skepticism about AI would give rise to rampant skepticism about there being other minds.
The authors counter this worry by doubling down on the claim that AI lacks "a personal, subjective grasp of reality, an intentional engagement in it." From this point of view, so long as AI systems lack this sort of consciousness, it follows that they cannot, for example, "be our friends, for they cannot engage in the voluntary empathic self-gift that characterizes the intimacy of friends." But I wonder if this way of countering the worry goes at it backward.
Perhaps what we need first and foremost is not a "phenomenology of consciousness" (in support of the claim that AI systems don't have it in the way we do), but a "phenomenology of friendship" (to make it clear that AI systems don't provide it as human beings can, with "empathic self-gift"). Perhaps, in other words, the focus on consciousness as the human difference isn't the place to start. A strange moment in the book, when it is allowed that God could make a machine conscious and thereby like us, suggests a deeper confusion. Whatever else consciousness is, it's surely not a thing that could be plopped into other things, like life into the puppet Pinocchio. (Not that life is such a thing either!)
The second part of the book, on emerging ethical issues, doesn't provoke the same depth of wonder as the first, but it does admirably call attention to the question of who benefits in the race to implement AI. Without a doubt, big corporations like Microsoft and Google do; it's by no means a given that the common good will benefit at all.
The book also offers some wise advice. For example, in a Mennonite-like moment, "We ought to analyze the use of AI and AI-embedded technologies in terms of how they foster or diminish relational virtues so that we strengthen fraternity, social friendship, and our relationship with the environment." Further, we "ought to inquire into ways that AI and related technologies deepen or diminish our experience of awe and wonder …"
Amen to that. Encountering Artificial Intelligence makes an important start.