Disrupting Diagnosis: Speech Patterns, AI, and Ethical Issues of Digital Phenotyping
- Source: The Neuroethics Blog
Diagnosing schizophrenia can be complex, time-consuming, and expensive. The April session of The Future Now: Neuroscience and Emerging Ethical Dilemmas (NEEDs) seminar at Emory focused on one innovative effort to improve this process in the flourishing field of digital phenotyping. Presenter and NeuroLex founder and CEO Jim Schwoebel had watched his brother struggle for several years with frequent headaches and anxiety, accruing nearly $15,000 in medical expenses before his first psychotic break. From there it took many more years and additional psychotic episodes before Jim’s brother began responding to medication and his condition stabilized. Unfortunately, this experience is not uncommon; a recent study found that the median period from the onset of psychotic symptoms until treatment is 74 weeks.
Naturally, Schwoebel thought deeply about how this had happened and what clues might have been spotted earlier. “I had been sensing that something was off about my brother’s speech, so after he was officially diagnosed, I looked more closely at his text messages before his psychotic break and saw noticeable abnormalities,” Schwoebel told Psychiatric News. For Schwoebel, a Georgia Tech alum and co-founder of the neuroscience startup accelerator NeuroLaunch, this was the spark of an idea. Looking into the academic literature, he found a 2015 study led by researchers from Columbia University who applied machine learning to speech samples from participants at high risk for psychosis. The algorithm correctly predicted which individuals would transition to psychosis over the next several years.
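To give a flavor of what such an analysis involves, here is a minimal sketch of one feature the Columbia group measured: semantic coherence, or how closely each sentence in a speech sample follows from the previous one. This is a toy illustration in Python, not the study's (or NeuroLex's) actual pipeline; the function name and example transcript are invented, and the real study trained its Latent Semantic Analysis model on a large external corpus.

```python
# Toy sketch of "first-order semantic coherence": cosine similarity
# between LSA vectors of adjacent sentences. Assumes scikit-learn and
# NumPy; illustrative only, not the published pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def coherence_features(sentences):
    """Return the minimum and mean coherence between adjacent sentences."""
    tfidf = TfidfVectorizer().fit_transform(sentences)  # bag-of-words counts
    vecs = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
    sims = [
        float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
        for a, b in zip(vecs, vecs[1:])
    ]
    # The study reported that a low *minimum* coherence (an abrupt topic
    # jump somewhere in the sample) was among the predictive features.
    return {"min_coherence": min(sims), "mean_coherence": float(np.mean(sims))}

print(coherence_features([
    "I took the bus downtown to meet my brother.",
    "We talked about his new job at the hospital.",
    "He seemed tired but happy with the work.",
    "Cold numbers radio the brother glass job.",
]))
```

A classifier trained on features like these (the study also considered syntactic markers such as maximum phrase length) is what then predicts which individuals are likely to transition to psychosis.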
Schwoebel went on to found NeuroLex Laboratories, which is developing technology to analyze speech samples for diagnostic purposes. For NeuroLex, schizophrenia is only one of several neuropsychiatric disorders that may be diagnosable by AI-mediated linguistic analysis. In their early stages, depression, Alzheimer’s disease (AD), and Parkinson’s disease (PD) may also affect the brain in ways that are invisible to a clinician but identifiable by algorithms. Early diagnosis of these conditions could have a profound impact on patient outcomes and the development of future treatments.
While no disease-reversing cures are currently available for any of these disorders, preventing psychotic episodes is a major goal in schizophrenia treatment, and early diagnosis at least provides the opportunity for intervention. For neurodegenerative diseases, the case for early diagnosis may be even more compelling. By the time many patients are diagnosed with a neurodegenerative disease such as AD or PD, the disease has already done catastrophic and perhaps irreversible damage to the brain. Clinical trials that include such patients would therefore be doomed before they even begin. Knowing about the disease earlier could provide more opportunity for treatment, and would also have practical benefits for families wanting to plan for future care.
NeuroLex is far from the only startup in the digital phenotyping space. Jeff Arnold, the founder of WebMD, co-founded Sharecare, which offers analysis of phone calls to determine stress levels, among other personalized medical services. Dr. Thomas Insel, the former director of the US National Institute of Mental Health, co-founded Mindstrong, which measures countless aspects of smartphone use as indicators of mental and emotional health. We know of Facebook’s efforts to identify users who may be suicidal, and it is safe to assume that Facebook and Google are interested in (if not already performing) other digital phenotyping analyses.
More efficient, more accurate, and earlier diagnosis of neuropsychiatric diseases could provide real benefits for patients, their families, and researchers. However, there are also real ethical issues that need to be considered. First, as with any application involving AI, there is growing appreciation that bias may be a major hurdle. With respect to speech, Schwoebel noted that there are obvious regional (think Brooklyn vs. Birmingham) as well as racial, ethnic, and gender differences in how people speak and the words they choose. He emphasized the importance of a diverse training set, which would hopefully head off harmful and embarrassing situations like the one in which a Google Photos algorithm generated some of the most racist tags imaginable. Yet the truly scary part may be that the public will likely never know about most of the discrimination that happens in the background as we browse the web. Diverse training data in the research phase and transparent computations would likely help avoid systemic biases, but it is doubtful that they could be eliminated completely.
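One concrete step that transparency implies: before deployment, developers can audit a model's error rates across demographic or dialect subgroups and treat large gaps as red flags. The sketch below is generic and hypothetical (the function name and toy labels are invented), not a description of any named company's practice.

```python
# Generic fairness-audit sketch: compare a binary classifier's error
# rates across subgroups (e.g., dialect, gender, region). Illustrative
# only; real audits involve far more than two metrics.
import numpy as np

def subgroup_error_rates(y_true, y_pred, groups):
    """Per-group false positive and false negative rates (binary labels)."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    rates = {}
    for g in np.unique(groups):
        yt, yp = y_true[groups == g], y_pred[groups == g]
        rates[str(g)] = {
            # FPR: healthy speakers wrongly flagged as at-risk.
            "fpr": float(yp[yt == 0].mean()) if (yt == 0).any() else float("nan"),
            # FNR: at-risk speakers the model missed.
            "fnr": float((1 - yp[yt == 1]).mean()) if (yt == 1).any() else float("nan"),
        }
    return rates

# Toy data: group B is flagged far more often than group A despite the
# same true-label rate, exactly the kind of gap an audit should surface.
print(subgroup_error_rates(
    y_true=[0, 0, 1, 1, 0, 0, 1, 1],
    y_pred=[0, 0, 1, 1, 1, 1, 1, 1],
    groups=["A", "A", "A", "A", "B", "B", "B", "B"],
))
```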
Speech and voice data may contain everything from the most mundane to the deeply personal and sensitive, so privacy issues could also present a significant challenge, both legally and ethically. From a legal perspective in the United States, states have varying wiretapping and voice recording statutes. In some states it is legal to record a conversation with the consent of only one party; in others, the consent of all parties is required. This may seem like an easy ethical judgment (simply go with the stricter regulation and get everyone’s consent), but it probably is not that simple in practice. Another important consideration is how the speech data is collected. AI-powered home assistants like Google Home and Amazon Echo, and even many smartphones and televisions, are listening, and the owner of the device, not to mention other people within earshot, may not know exactly what they have consented to having recorded. Just recently, Amazon was asked to explain why an Echo emailed an audio recording of a conversation that a Portland woman had at home to one of her husband’s co-workers.
Lastly, there are concerns about how this technology could be used. Predictive data about an individual’s likelihood of developing neuropsychiatric conditions that may result in long periods of disability would be very valuable to insurance companies and employers, to name only two examples. In a state with one-party consent for voice recording, an applicant for insurance or a job would not even need to agree to submit to this sort of analysis. In this era of deregulation, with no obvious appetite for increased oversight at the federal level, it will likely be up to the private sector to police itself and decide on ethical principles to guide the development and implementation of these technologies. After all, the more incidents of ugly AI bias and gross disregard for privacy that make it into public view, the less interest there will be in adopting these technologies, which could seriously hamper their potential for good. There is little doubt that digital phenotyping in its many forms has the potential not only to improve efficiency at drastically lower cost, but also to enhance the ability and extend the reach of clinicians. Thoughtfully considering and addressing these concerns should improve the chances of reaching that potential, not hinder them.
This post is reprinted with permission and originally appeared on The Neuroethics Blog, hosted by the Center for Ethics, Neuroethics Program at Emory University.