Neuroethics Meets Artificial Intelligence
- Source: The Neuroethics Blog
The history of Artificial Intelligence (AI) is inextricably intertwined with the history of neuroscience. Since the early days of AI, scientists have turned to the human brain as a source of guidance for the development of intelligent machines. Unsurprisingly, many pioneers of AI, such as Warren McCulloch, were trained in the sciences of the brain. Modern AI borrowed much of its vocabulary from neurology and psychology. For instance, computational models consisting of networks of interconnected units —one of the most common approaches to AI— are called Artificial Neural Networks (ANNs), and each unit is called an “artificial neuron.” Several areas of AI research are labelled with neuropsychological categories such as computer vision, machine learning, and natural language processing. It’s not just a matter of terminology. ANNs, for example, are actually inspired by and based on the functioning of the biological neural networks that constitute animal nervous systems.
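To make the analogy concrete, here is a minimal sketch (not from the original post) of what an “artificial neuron” computes: a weighted sum of its inputs plus a bias, passed through a nonlinear activation function, loosely mirroring how a biological neuron integrates synaptic inputs before firing. The sigmoid activation shown is one common choice among several.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias,
    squashed by a sigmoid activation into the range (0, 1)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# An ANN is built by wiring layers of such units together, with the
# outputs of one layer serving as the inputs of the next.
output = artificial_neuron([0.5, -1.0, 0.25], [0.4, 0.3, 0.9], bias=0.1)
```

The illustrative weights and inputs here are arbitrary; in a real network they would be learned from data during training.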
In spite of this intimate link between AI and neuroscience, ethical reflection on these two disciplines has developed quite independently, with little interaction between the two research communities. On the one hand, the AI ethics community has focused primarily on issues such as robot rights, algorithmic transparency, biases in AI systems, and autonomous weapons. On the other hand, the neuroethics community has primarily focused on issues such as pharmacological enhancement, brain interventions, neuroimaging, and free will.
However, in the face of recent technological developments, these two communities can no longer afford to operate in silos. Several ethically sensitive developments are occurring at the interface between neuroscience and AI. One is the testing of AI algorithms in clinical neuroscience research for predictive and diagnostic purposes. Machine learning algorithms, for instance, have been successfully trained to detect early signs of Alzheimer’s disease and mental illness from brain scans. These findings hold great potential for improving current diagnostic protocols and enabling earlier, patient-tailored therapy. At the same time, the prospect of automated diagnostics challenges established models of the doctor-patient relationship, opens the risk of algorithmic discrimination against certain patient groups (e.g. those underrepresented in the datasets used to train the algorithms), and raises privacy concerns, especially if relevant information regarding these indicators of illness falls into the hands of employers and health insurance providers. Another interesting area of research at the neuroscience-AI interface is brain simulations, i.e. the attempt to use AI models to create virtual simulations or functional representations of brain activity. Although still in the initial stages of development, this area of research promises to yield scientific and ethical benefits. If successful, brain simulations will —in the long run— provide neuroscientists with extra tools to explore how the brain works.
For example, functioning computer models of the human brain (or parts of the brain) could allow researchers to conduct some types of research on mental illness and behavior without some of the typical methodological and ethical challenges of animal models and human subjects research. Furthermore, brains and machines are getting closer and closer through so-called brain-computer interfaces (BCIs). These systems establish a direct communication pathway between a brain and an external computer system such as a robotic limb, a personal computer, a speech synthesizer, or an electronic wheelchair. It goes without saying that AI software is essential to enabling this direct brain-computer communication. BCIs might be a game-changer for several classes of neurological patients; in certain cases, they already are. In parallel, they present our community with novel ethical challenges, such as the implications of human-AI coupling for personal identity and psychological continuity, the privacy and security of the data collected by and feeding the BCI, and the prospect of using BCIs not only to restore function in neurological patients but also as an alternative method for cognitive enhancement of healthy people. Finally, the weaponization of AI is extending to neurotechnology, BCIs in particular. As BCIs and other neurotechnologies are increasingly tested in military research, ethical issues of dual-use and neurosecurity become more pressing.
Given this relationship between neuroscience and AI, the neuroethics and AI-ethics communities must no longer operate in silos but pursue greater mutual and cross-disciplinary exchange. Creating a common ethical discourse at the brain-AI interface will likely yield benefits for both fields. There are already some positive examples. For instance, the International Neuroethics Society (INS) has often featured panel discussions on AI and other emerging technologies. In November 2018, researchers gathered in Mexico to discuss the “Neuroethical Considerations of Artificial Intelligence”. In May 2019, researchers at LMU Munich organised a conference titled “(Clinical) Neurotechnology meets Artificial Intelligence”, focusing on the ethical, legal, and social implications of the two fields. Researchers involved in international brain initiatives, such as the Human Brain Project and the US BRAIN Initiative, have been discussing the value of computational models in neuroscience and the impact of artificial systems on the study of consciousness. Meanwhile, several publications have tackled ethical issues at the intersection of the two domains.
That being said, developing a unified framework for the ethics of brains and machines is no easy task. Ethical disagreement is frequent within each community, let alone across them. A forthcoming review of international AI ethics guidelines revealed substantial divergence in the interpretation and practical implementation of ethical principles. Meanwhile, neuroethicists strongly disagree on the need for regulation of nonclinical uses of neurotechnology. However, the aim of creating a common ethical discourse at the brain-AI interface should not be moral agreement. On the contrary, disagreement can foster intellectual progress. The optimal response of the ethics community to developments in neuroscience and AI is not consensus but rather mutual awareness, information exchange, and, possibly, collaboration.
This post is reprinted with permission and originally appeared on The Neuroethics Blog, hosted by the Center for Ethics, Neuroethics Program at Emory University.