
Mental Health Apps Are Not Keeping Your Data Safe


Imagine calling a suicide prevention hotline in a crisis. Do you ask for their data collection policy? Do you assume that your data are protected and kept secure? Recent events may make you consider your answers more carefully.

Mental health technologies such as bots and chat lines serve people who are experiencing a crisis. They are among the most vulnerable users of any technology, and they should expect their data to be kept safe, protected and confidential. Unfortunately, recent dramatic examples show that extremely sensitive data have been misused. Our own research has found that, in gathering data, developers of mental health–based AI algorithms simply test whether the algorithms work. They generally don’t address the ethical, privacy and political concerns about how those algorithms might be used. At a minimum, the same standards of health care ethics should be applied to technologies used in providing mental health care.

Politico recently reported that Crisis Text Line, a not-for-profit organization claiming to be a secure and confidential resource for those in crisis, was sharing data it collected from users with its for-profit spin-off company Loris AI, which develops customer service software. An official from Crisis Text Line initially defended the data exchange as ethical and “fully compliant with the law.” But within a few days the organization announced it had ended its data-sharing relationship with Loris AI, even as it maintained that the data had been “handled securely, anonymized and scrubbed of personally identifiable information.”

Loris AI, a company that uses artificial intelligence to develop chatbot-based customer service products, had used data generated by more than 100 million Crisis Text Line exchanges to, for example, help service agents understand customer sentiment. Loris AI has reportedly deleted the data it received from Crisis Text Line, although whether that extends to the algorithms trained on those data is unclear.

This incident and others like it reveal the rising value placed on mental health data as part of machine learning, and they illustrate the regulatory gray zones through which these data flow. The well-being and privacy of people who are vulnerable or perhaps in crisis are at stake. They are the ones who bear the consequences of poorly designed digital technologies. In 2018, U.S. border authorities refused entry to several Canadians who had survived suicide attempts, based on information in a police database. Let’s think about that. Noncriminal mental health information had been shared through a law enforcement database to flag someone wishing to cross a border.

Policy makers and regulators need evidence to properly govern artificial intelligence in general, let alone its use in mental health products.

We surveyed 132 studies that tested automation technologies, such as chatbots, in online mental health initiatives. The researchers in 85 percent of the studies didn’t address, either in study design, or in reporting results, how the technologies could be used in negative ways. This was despite some of the technologies raising serious risks of harm. For example, 53 studies used public social media data—in many cases without consent—for predictive purposes like trying to determine a person’s mental health diagnosis. None of the studies we examined grappled with the potential discrimination people might experience if these data were made public.

Very few studies included the input of people who have used mental health services. Researchers in only 3 percent of the studies appeared to involve input from people who have used mental health services in the design, evaluation or implementation in any substantive way. In other words, the research driving the field is sorely lacking the participation of those who will bear the consequences of these technologies.

Mental health AI developers must explore the long-term and potential adverse effects of using different mental health technologies, whether it’s how the data are being used or what happens when the technology fails the user. Editors of scholarly journals should require this as a condition of publication, as should institutional review board members, funders and so on. These requirements should accompany the urgent adoption of standards that promote lived experience in mental health research.

In policy, most U.S. states give special protection to typical mental health information, but emerging forms of data concerning mental health appear only partially covered. Regulations such as the Health Insurance Portability and Accountability Act (HIPAA) do not apply to direct-to-consumer health care products, including the technology that goes into AI-based mental health products. The Food and Drug Administration (FDA) and the Federal Trade Commission (FTC) may play roles in evaluating these direct-to-consumer technologies and their claims. However, the FDA’s scope does not seem to apply to health data collectors, such as well-being apps, websites and social networks, and so excludes most “indirect” health data. Nor does the FTC cover data gathered by non-profit organizations, which was a key concern raised in the case of Crisis Text Line.

It is clear that generating data on human distress involves much more than a potential invasion of privacy; it also poses risks to an open and free society. The possibility that people will police their speech and behavior for fear of the unpredictable datafication of their inner world will have profound social consequences. Imagine a world where we need to seek out expert “social media analysts” who can help us craft content to appear “mentally well,” or where employers habitually screen prospective employees’ social media for “mental health risks.”

Everyone’s data, regardless of whether they have engaged with mental health services, may soon be used to predict future distress or impairment. Experimentation with AI and big data is mining our everyday activities for new forms of “mental health–related data,” which may elude current regulation. Apple is currently working with multinational biotechnology company Biogen and the University of California, Los Angeles, to explore using phone sensor data such as movement and sleep patterns to infer mental health and cognitive decline.

Crunch enough data points about a person’s behavior, the theory goes, and signals of ill health or disability will emerge. Such sensitive data create new opportunities for discriminatory, biased and invasive decision-making about individuals and populations. How will data labeled as “depressed” or “cognitively impaired”—or likely to become those things—impact a person’s insurance rates? Will individuals be able to contest such designations before data are transferred to other entities?

Things are moving fast in the digital mental health sector, and more companies see the value in using people’s data for mental health purposes. A World Economic Forum report values the global digital health market at $118 billion and cites mental health as one of its fastest-growing sectors. A dizzying array of start-ups are jostling to be the next big thing in mental health, with “digital behavioral health” companies reportedly attracting $1.8 billion in venture capital in 2020 alone.

This flow of private capital is in stark contrast to underfunded health care systems in which people struggle to access appropriate services. For many people, cheaper online alternatives to face-to-face support may seem like their only option, but that option creates new vulnerabilities that we are only beginning to understand.

IF YOU NEED HELP

If you or someone you know is struggling or having thoughts of suicide, help is available. Call or text the 988 Suicide & Crisis Lifeline at 988 or use the online Lifeline Chat.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.
