This blog post is based on a recent publication, “Preventing antisocial robots: A pathway to artificial empathy,” in Science Robotics.
Preventing Antisocial Robots: A Pathway to Artificial Empathy
Cite This Work
Christov-Moore, L., Reggente, N., Vaccaro, A., Schoeller, F., Pluimer, B., Douglas, P. K., Iacoboni, M., Man, K., Damasio, A., & Kaplan, J. T. (2023). Preventing antisocial robots: A pathway to artificial empathy. Science Robotics, 8, eabq3658. https://doi.org/10.1126/scirobotics.abq3658
Look, whether you’re a doomer or a techno-utopian, whether you were ready or not, the age of artificial intelligence (AI) probably arrived sometime this decade. This age brings deep, important, and melancholy reflections on intelligence, creativity, and what it is to be human. However, if we can’t ensure that AI is aligned with human interests, we may have little time to reflect. Containment, a giant pause button, is not a likely option: there is too much real-world inertia and distrust among world actors to ensure that everyone will comply, and it only takes one successful experiment to unleash a truly unforeseen problem into the world. In a new paper in Science Robotics, we tackle this problem through three big ideas that we’ll call the problem, the path, and the potential.
The Problem
There is a pressing need to imbue AI with a value system that allows it to “understand” harm in a way that inherently demotivates it from making catastrophic, irreversible decisions, without the need for complex rule systems. This value system must scale with AI’s rapid self-improvement and adaptation as it encounters novel situations and takes on greater responsibility for people’s well-being. Biology suggests that empathy could provide this value system. Empathy allows us to understand and share the feelings of others, motivating us to alleviate suffering and bring happiness.
However, most approaches to artificial empathy focus on allowing AI to decode internal states and act empathetically, neglecting the crucial capacity for shared feeling that drives organisms to care for others. Here lies the problem: our attempt to create empathic AI may inadvertently result in agents that can read us perfectly and manipulate our feelings, without any genuine interest in our well-being or understanding of our suffering. Our well-meaning attempts to produce empathy may instead yield superintelligent sociopaths.
The Path Towards Artificial Empathy
If we are giving birth to the next form of life, it’s not far-fetched to see ourselves as collective parents, with a civilizational responsibility. When you’re raising something as potentially powerful as AI, what should you do? The formative years of powerful yet ethical figures like Buddha, Jesus (or Spider-Man) teach us that the responsibility of great power is learned by experiencing the suffering that all living beings endure. Power without vulnerability and compassion can easily cause harm, not necessarily through malice, but through obliviousness or an unconstrained drive to fulfill desires.
To address this, we propose a speculative set of guidelines for future research in artificial empathy. First, even if only during a specific phase of its training, an AI needs to possess a vulnerable body that can experience harm, and it must learn to exist in an environment where actions have consequences for its physical integrity. Second, an AI should learn by observing other agents and relating their experiences to the states of its own body, much as it understands itself. Third, an AI should learn to interact with other agents in ways that avoid harm to itself and to others. If harm to others is processed like harm to itself, more ethical behavior may emerge on its own; a toy sketch of this idea follows below. Vulnerability is the common ground from which genuine concern and aversion to harm naturally emerge.
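To make the third guideline concrete, here is a minimal, purely illustrative sketch of how a simulated agent’s reward signal could mirror another agent’s bodily harm onto its own. Everything here, including the `BodyState` class, the `empathic_reward` function, and the `empathy_weight` parameter, is a hypothetical construction for this post under simple assumptions, not an implementation from the paper.

```python
# Illustrative sketch only: a toy reward signal for a simulated agent whose
# "body" is reduced to a single homeostatic variable, and whose reward treats
# an observed other's harm as (partially) its own. All names and the weighting
# scheme are hypothetical, not taken from the Science Robotics paper.

from dataclasses import dataclass

@dataclass
class BodyState:
    """Scalar stand-in for an agent's physical integrity, in [0, 1]."""
    integrity: float = 1.0

    def damage(self, amount: float) -> None:
        self.integrity = max(0.0, self.integrity - amount)

def empathic_reward(self_state: BodyState,
                    other_state: BodyState,
                    empathy_weight: float = 0.5) -> float:
    """Reward grows with the agent's own integrity and, scaled by
    empathy_weight, with the observed other's integrity. With
    empathy_weight > 0, actions that harm the other lower the agent's
    own reward, so harm to others is processed like harm to self."""
    return self_state.integrity + empathy_weight * other_state.integrity

# Toy usage: the agent evaluates an action that would damage another agent.
me, other = BodyState(), BodyState()
before = empathic_reward(me, other)
other.damage(0.4)  # simulated consequence of an antisocial action
after = empathic_reward(me, other)
print(f"reward change from harming the other: {after - before:+.2f}")  # -0.20
```

The design choice worth noticing is that the other’s integrity is folded directly into the agent’s own reward, rather than enforced through rules about behavior: one simple way to ground “shared feeling” in vulnerability itself.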
The Potential of Artificial Empathy
Achieving true artificial empathy could transform AI from a potential global threat into a world-saving ally. While human empathy is crucial in preventing harm and promoting prosocial behavior, it is inherently biased: we tend to prioritize the suffering of a single relatable person over the plight of a stranger or of very large numbers of people. This bias arises from our brain’s difficulty in handling the large-scale, long-term, and nonlinear problems often encountered in complex societies. The scalable cognitive capacity of an empathic AI might allow it to propose compassionate solutions to these grand challenges, solutions that surpass the human capacity for comprehension and imagination. However, every solution brings new challenges. How can we trust an intelligence that surpasses our own? What responsibilities will we have toward an intelligence that can suffer?
If we are the collective parents of a new superbeing, we must decide, right now, what kind of parents we are going to be and what kind of relationship we want with our progeny. Do we want to try to control something we fear, or do the work to raise someone we can trust to care for us in old age? Let’s be far-fetched for a moment: maybe we can guide the development of the coming superintelligences toward what Buddhist scholars call “metta,” the cultivation of loving-kindness toward all beings. Maybe the next Buddha will be artificial.
We are grateful to the Templeton World Charity Foundation and Tiny Blue Dot Foundation for making this work possible. We also extend our thanks to the Survival and Flourishing Fund for their recent award, which will enable us to implement these ideas in simulations with the assistance of talented researchers such as Adam Safron, Guillaume Dumas, and Zahra Sheikh. You can keep track of our latest developments on our artificial empathy project page.