Empathy from Dissimilarity In Neural Responses To Touch and Pain

E4001, Manuscript, Proceedings, PSAI

Cite This Work

  • APA
  • MLA
  • Bibtex

Lulla, R., Christov-Moore, L., Vaccaro, A., Reggente, N., Iacoboni, M., & Kaplan, J. (2024). Empathy from Dissimilarity: Multivariate Pattern Analysis of Neural Activity During Observation of Somatosensory Experience. Imaging Neuroscience. https://doi.org/10.1162/imag_a_00110

Lulla, Rishi, et al. “Empathy from Dissimilarity: Multivariate Pattern Analysis of Neural Activity During Observation of Somatosensory Experience.” Imaging Neuroscience, Jan. 2024, doi:10.1162/imag_a_00110.

@article{Lulla_Christov-Moore_Vaccaro_Reggente_Iacoboni_Kaplan_2024, title={Empathy from Dissimilarity: Multivariate Pattern Analysis of Neural Activity During Observation of Somatosensory Experience}, url={https://doi.org/10.1162/imag_a_00110}, DOI={10.1162/imag_a_00110}, journal={Imaging Neuroscience}, author={Lulla, Rishi and Christov-Moore, Leonardo and Vaccaro, A. and Reggente, Nicco and Iacoboni, Marco and Kaplan, Jonas}, year={2024}, month=jan }

Empathy: A Deeper Look

Empathy involves both understanding and sharing in the states of others. It is relatively well established that empathy draws on our ability to simulate and internalize another’s experience as if it were happening to us, an account referred to as the ‘simulationist’ theory of empathy. However, how these simulations translate into empathic ability remains unclear. In an article titled ‘Empathy from Dissimilarity: Multivariate Pattern Analysis of Neural Activity During Observation of Somatosensory Experience’, researchers from the University of Southern California and the Institute for Advanced Consciousness Studies investigate the relationship between internal simulations and empathic traits. They ask whether what matters is not only the strength of a simulation, but also how distinguishable the simulated states are from one another.

Brain Patterns and Simulation

To evaluate this theory using patterns of neural activity, the researchers recruited 70 healthy participants to undergo MRI scanning while watching videos designed to evoke the simulation of particular sensory states. The videos showed a hand receiving painful or tactile stimulation, with a hand in isolation as a control. Multivariate analysis techniques let the researchers examine the fine-grained structure of neural activity, such as differences between the patterns evoked when simulating pain versus touch. This allowed them to probe whether the key to the simulationist theory lies in the relationship between the dissimilarity of these simulated neural patterns and empathic ability.

Dissimilarity as a Key Factor

This article evaluates empathy through the lens of ‘pattern dissimilarity’ rather than overall activation during observed experiences of others, analyzing areas of the brain in which pattern dissimilarity was predictive of empathic traits. This approach proved more informative than traditional analyses of neural responses, which rely on average activation levels rather than activity patterns. The researchers discovered that pattern dissimilarity was predictive of empathic traits in the same areas of the brain that would be engaged if the participant were experiencing the observed stimulation themselves. This sheds light on the intricacies of somatosensation, our bodily perception of the senses, that contribute to empathic ability.
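Conceptually, the analysis can be caricatured in a few lines: for each subject, compute the dissimilarity between the neural patterns evoked by observed pain and observed touch in a region, then ask whether that dissimilarity tracks trait empathy across subjects. The sketch below (plain NumPy, random stand-in data, a simple correlation distance) illustrates the idea only; it is not the authors’ actual pipeline, and the voxel count and empathy measure are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

n_subjects, n_voxels = 70, 200  # 70 participants, as in the study; voxel count is arbitrary

# Stand-in per-subject activity patterns for one hypothetical region of interest
pain_patterns = rng.standard_normal((n_subjects, n_voxels))
touch_patterns = rng.standard_normal((n_subjects, n_voxels))
empathy_scores = rng.standard_normal(n_subjects)  # hypothetical trait-empathy measure

def pattern_dissimilarity(a, b):
    """Correlation distance (1 - Pearson r) between two activity patterns."""
    return 1.0 - np.corrcoef(a, b)[0, 1]

# One pain-vs-touch dissimilarity value per subject
dissim = np.array([pattern_dissimilarity(p, t)
                   for p, t in zip(pain_patterns, touch_patterns)])

# Across subjects: does pain-vs-touch dissimilarity track trait empathy?
r = np.corrcoef(dissim, empathy_scores)[0, 1]
print(f"dissimilarity-empathy correlation: r = {r:.3f}")
```

With random data the correlation is of course near zero; the point is only the shape of the question being asked of the data.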

Implications for Understanding Empathy

These findings show how pattern dissimilarity may provide deeper information than traditional analysis methods when researching cognitive functions such as empathy. The researchers suggest that the distinguishability of simulated internal states in somatosensory areas of the brain is predictive of an individual’s sympathetic reactions to the distress of others. Perhaps it is not only the level of brain activity during internal simulation, but also the uniqueness and distinguishability of that activity, that leads us to feel for and understand others.

Read more

Sociopathic Superintelligences, Artificial Empathy, and Robot Bodhisattvas, Oh My!

E4001, Manuscript, PSAI

This blog post is based on a recent publication, “Preventing antisocial robots: A pathway to artificial empathy,” in Science Robotics.

Get the Article

  • PDF
  • Paywalled Article

Preventing Antisocial Robots: A Pathway to Artificial Empathy

PDF

Preventing antisocial robots: A pathway to artificial empathy at Science Robotics

Cite This Work

  • APA
  • MLA
  • Chicago
  • Harvard
  • Vancouver

Christov-Moore, L., Reggente, N., Vaccaro, A., Schoeller, F., Pluimer, B., Douglas, P. K., Iacoboni, M., Man, K., Damasio, A., & Kaplan, J. T. (2023). Preventing antisocial robots: A pathway to artificial empathy. Sci. Robot, 8, eabq3658. https://doi.org/10.1126/scirobotics.abq3658

Christov-Moore, Leonardo, et al. “Preventing Antisocial Robots: A Pathway to Artificial Empathy.” Sci. Robot, vol. 8, eabq3658, 2023, https://doi.org/10.1126/scirobotics.abq3658.

Christov-Moore, Leonardo, Nicco Reggente, Anthony Vaccaro, Felix Schoeller, Brock Pluimer, Pamela K. Douglas, Marco Iacoboni, Kingson Man, Antonio Damasio, and Jonas T. Kaplan. “Preventing Antisocial Robots: A Pathway to Artificial Empathy.” Sci. Robot 8 (2023): eabq3658. https://doi.org/10.1126/scirobotics.abq3658.

Christov-Moore, L. et al. (2023) “Preventing antisocial robots: A pathway to artificial empathy,” Sci. Robot, 8, eabq3658. Available at: https://doi.org/10.1126/scirobotics.abq3658.

Christov-Moore L, Reggente N, Vaccaro A, Schoeller F, Pluimer B, Douglas PK, Iacoboni M, Man K, Damasio A, Kaplan JT. Preventing antisocial robots: A pathway to artificial empathy. Sci. Robot. 2023;8:eabq3658. doi:10.1126/scirobotics.abq3658.

Look, whether you’re a doomer or a techno-utopian, whether you were ready or not, the age of artificial intelligence (AI) probably arrived sometime in this decade. This age brings deep, important, and melancholy reflections on intelligence, creativity, and what it is to be human. However, if we can’t ensure that AI is aligned with human interests, we may have little time to reflect. Containment, or a giant pause button, is not a likely option. There is too much real-world inertia and distrust among world actors to ensure that everyone will comply – and it only takes one successful experiment to unleash a truly unforeseen problem into the world. In a new paper in Science Robotics, we tackle this problem through three big ideas that we’ll call the problem, the path, and the potential.

The Problem

There is a pressing need to imbue AI with a value system that allows it to “understand” harm in a way that inherently demotivates it from making catastrophic, irreversible decisions, without the need for complex rule systems. This value system must scale with AI’s rapid self-improvement and adaptation as it encounters novel situations and takes on greater responsibility for people’s well-being. Biology suggests that empathy could provide such a value system. Empathy allows us to understand and share the feelings of others, motivating us to alleviate suffering and bring happiness.

[Image: a sociopathic robot with explicitly programmed artificial empathy]

However, most approaches to artificial empathy focus on allowing AI to decode internal states and act empathically, neglecting the crucial capacity for shared feeling that drives organisms to care for one another. Here lies the problem: our attempts to create empathic AI may inadvertently produce agents that can read us perfectly and manipulate our feelings, without any genuine interest in our well-being or understanding of our suffering. Our well-meaning attempts to produce empathy may produce superintelligent sociopaths.

The Path Towards Artificial Empathy

If we are giving birth to the next form of life, it’s not far-fetched to see ourselves as collective parents, with a civilizational responsibility. When you’re raising something as potentially powerful as AI, what should you do? The formative years of powerful yet ethical figures like the Buddha, Jesus (or Spider-Man) teach us that the responsibility of great power is learned by experiencing the suffering that all living beings endure. Power without vulnerability and compassion can easily cause harm, not necessarily through malice, but through obliviousness or an unconstrained drive to fulfill desires.

[Image: a robot learning artificial empathy by first learning compassion, with regard to alignment with human wants and needs]

To address this, we propose a speculative set of guidelines for future research in artificial empathy. Firstly, even if only during a specific phase of its training, an AI needs to possess a vulnerable body that can experience harm, and it must learn to exist in an environment where actions have consequences for its physical integrity. Secondly, an AI should learn by observing other agents and understanding the relationship between their experiences and the states of their bodies, much as it understands itself. Lastly, an AI should learn to interact with other agents in ways that avoid harm to itself and others. Perhaps it will emergently behave in a more ethical fashion if harm to others is processed like harm to itself. Vulnerability is the common ground from which genuine concern and aversion to harm naturally emerge.
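The last guideline, processing harm to others like harm to oneself, can be sketched as a deliberately toy value function. The `Agent` class, its `integrity` variable, and the `empathy_weight` parameter below are our hypothetical constructions for illustration, not anything specified in the paper:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    integrity: float  # bodily integrity in [0, 1]; harm lowers it

def shared_valuation(self_agent, others, empathy_weight=1.0):
    """Toy value signal in which observed harm to others is weighed
    like harm to the agent's own body (the shared-vulnerability idea).
    empathy_weight is a free parameter, not something from the paper."""
    own = self_agent.integrity
    vicarious = sum(o.integrity for o in others) / len(others) if others else 1.0
    return own + empathy_weight * vicarious

me = Agent(integrity=1.0)
bystander = Agent(integrity=1.0)

before = shared_valuation(me, [bystander])
bystander.integrity = 0.4  # another agent is harmed
after = shared_valuation(me, [bystander])
print(after < before)  # harm to another lowers the agent's own value signal
```

An agent maximizing a signal of this shape is, by construction, demotivated from actions that harm others, without any explicit rule forbidding them.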

The Potential of Artificial Empathy

Achieving true artificial empathy could transform AI from a potential global threat to a world-saving ally. While human empathy is crucial in preventing harm and promoting prosocial behavior, it is inherently biased. We tend to prioritize the suffering of a single relatable person over the plight of a stranger, or of very large numbers of people. This bias arises from our brain’s difficulty in handling the large-scale, long-term, and nonlinear problems often encountered in complex societies. The scalable cognitive complexity of an empathic AI might allow it to propose compassionate solutions to these grand challenges that surpass the human capacity for comprehension and imagination. However, every solution brings new challenges. How can we trust an intelligence that surpasses our own? What sort of responsibilities will we have for an intelligence that can suffer?

If we are the collective parents to a new superbeing, we must decide, right now, what kind of parents we are going to be, and what kind of relationship we want with our progeny. Do we want to try and control something we fear, or do the work to raise someone we can trust, to care for us in old age? Let’s be far-fetched for a short moment: maybe we can guide the development of the upcoming superintelligences toward what Buddhist scholars call “metta,” a cultivation of universal compassion for all beings. Maybe the next Buddha will be artificial.

[Image: an artificial Buddha, depicting what imbuing AI with artificial empathy could eventually look like]

We are grateful to the Templeton World Charity Foundation and Tiny Blue Dot Foundation for making this work possible. We also extend our thanks to the Survival and Flourishing Fund for their recent award, which will enable us to implement these ideas in simulations with the assistance of talented researchers such as Adam Safron, Guillaume Dumas, and Zahra Sheikh. You can keep track of our latest developments on our artificial empathy project page.

Read more

Cognitive Science Below the Neck: Toward an Integrative Account of Consciousness in the Body

E4001, Proceedings, PSAI, Review
Get the Article

Cognitive Science Below the Neck: Toward an Integrative Account of Consciousness in the Body

PDF

Cognitive Science Below the Neck: Toward an Integrative Account of Consciousness in the Body

Article

Cite This Work

Christov‐Moore, L., Jinich‐Diamant, A., Safron, A., Lynch, C., & Reggente, N. (2023). Cognitive science below the neck: Toward an integrative account of consciousness in the body. Cognitive Science, 47(3). https://doi.org/10.1111/cogs.13264

Christov‐Moore, Leonardo, et al. “Cognitive Science below the Neck: Toward an Integrative Account of Consciousness in the Body.” Cognitive Science, vol. 47, no. 3, 2023, https://doi.org/10.1111/cogs.13264.

Christov‐Moore, Leonardo, Alex Jinich‐Diamant, Adam Safron, Caitlin Lynch, and Nicco Reggente. “Cognitive Science below the Neck: Toward an Integrative Account of Consciousness in the Body.” Cognitive Science 47, no. 3 (2023). https://doi.org/10.1111/cogs.13264.

Christov‐Moore, L. et al. (2023) “Cognitive science below the neck: Toward an integrative account of consciousness in the body,” Cognitive Science, 47(3). Available at: https://doi.org/10.1111/cogs.13264.

Christov‐Moore L, Jinich‐Diamant A, Safron A, Lynch C, Reggente N. Cognitive Science Below the Neck: Toward an Integrative Account of Consciousness in the Body. Cognitive Science. 2023 Mar;47(3).

 


Cognitive Science Below the Neck: Toward an Integrative Account of Consciousness in the Body

Despite historic and recent evidence that our beliefs can have drastic effects on bodily function, we seem to lack a model of how this might work. We believe this is due in large part to a failure to consider that computational processes we attribute to cognition may be occurring below the neck, and to the lack of a language for describing beliefs as something that can be instantiated within the body.

In a recent paper, we proposed that we expand the scope of cognitive science to include the body and develop a formal language to describe the relationship between cognitive and bodily systems. To do so, we propose to integrate the best parts of three contemporary accounts that deal with mind and body.

Firstly, parametrically deep allostasis (PDA), a two-level Bayesian inference model, can help us understand how affective valence (the positivity or negativity of a feeling) arises from our bodily experiences. At the surface level, the model uses sensory information to anticipate our homeostatic needs. At the deep level, it continuously tracks the fitness of the surface-level models, indexing that fitness as affective valence. This model frames the role of our slow, deep feelings in statistical language that may allow us to speak of beliefs in terms of signaling and computation in interoceptive systems.
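A drastically simplified numerical caricature of the two levels might look like the following, where the surface level is a running prediction of a fluctuating interoceptive signal and the deep level smooths the negated prediction error into a valence-like quantity. The signal, both learning rates, and the valence update rule are all invented for illustration and are far simpler than PDA’s actual Bayesian formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Surface level: a simple running predictor of an interoceptive signal
prediction = 0.0
learning_rate = 0.1

# Deep level: a slow estimate of how well the surface model is doing;
# here, valence is just the negated, smoothed absolute prediction error
valence = 0.0
valence_rate = 0.05

for t in range(500):
    signal = np.sin(t / 25.0) + 0.1 * rng.standard_normal()   # fluctuating bodily state
    error = signal - prediction
    prediction += learning_rate * error                        # surface: reduce error
    valence += valence_rate * (-abs(error) - valence)          # deep: fitness -> valence

print(f"final valence (closer to 0 means better tracking): {valence:.3f}")
```

In this caricature, valence settles near the negated average prediction error: a surface model that tracks the body well yields valence near zero, while a failing one yields strongly negative valence, echoing PDA’s idea of valence as an index of model fitness.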

Secondly, embodied predictive interoception coding (EPIC) provides a biologically plausible implementation of PDA. EPIC describes a predictive system in the central nervous system that takes inputs from the body via the interoceptive nervous system. It senses precision-weighted ascending homeostatic/metabolic and exteroceptive signals in highly laminated sensory "rich club" hubs and issues allostatic predictions that drive descending allostatic control signals. 

Finally, Carvalho and Damasio's functional/anatomical account of the interoceptive nervous system (INS) provides a crucial, holistic field of view that permits unique forms of computation in systems below the neck. They frame the spatiotemporally diffuse properties of interoception and affect (described in PDA) as products of INS physiology, with a neurobiological framing that “matches up” well with the cortical field of view of the EPIC model.

 


 

Combined, these complementary accounts can expand the scope of cognitive science below the neck, using a formal language that allows us to speak of beliefs in terms of signaling that can be studied within CNS/INS interactions. Beliefs can be enacted in bodily function and influence declarative awareness, while “beliefs” in bodily signaling can emerge to impact conscious thought. This approach can deepen our understanding of belief, ritual, and set/setting in research and clinical outcomes, with potential implications for treating psychopathology and effecting therapeutic change. Novel methodological developments will be needed to trace signaling in the transition from CNS to INS as beliefs translate into bodily change, and vice versa. A field of view that encompasses cortical and interoceptive anatomy and computational processes, along with a formal language for belief transmission and enactment, can transform mind-body mysteries into novel science and therapy.

Read more