3 Questions: Should AI Systems Be Labeled Like Prescription Drugs?

With artificial intelligence (AI) systems increasingly being integrated into healthcare, concerns about their safe deployment are rising. AI models can sometimes generate inaccurate information, exhibit bias, or malfunction unexpectedly, which can have serious consequences in medical settings. These issues raise the question: should AI systems in healthcare be labeled in a way similar to prescription drugs, which come with detailed warnings and usage instructions?

In a recent commentary in Nature Computational Science, MIT’s Associate Professor Marzyeh Ghassemi and Boston University’s Associate Professor Elaine Nsoesie argue for the introduction of responsible-use labels on AI systems. Such labels, they propose, would help mitigate the potential harms caused by AI in healthcare.

Why Do We Need Responsible-Use Labels for AI in Healthcare?

In a healthcare environment, doctors often rely on technology or treatments that they do not fully understand. This lack of understanding can range from the fundamental mechanisms behind certain drugs to the complexities of advanced medical devices. For instance, while a clinician may not know how to service an MRI machine, there are certification systems in place through federal agencies like the U.S. Food and Drug Administration (FDA) to ensure the device’s safe usage.

However, AI models and algorithms often bypass such approval and monitoring processes. Studies have shown that predictive models, especially those powered by AI, need more rigorous evaluation and ongoing surveillance. Generative AI in particular has been shown to produce outputs that are not always appropriate, robust, or unbiased. Without proper surveillance, problematic responses are difficult to identify and address.

AI systems currently deployed in hospitals might carry biases or inaccuracies that could affect patient outcomes. Introducing responsible-use labels could help keep these models from perpetuating biases embedded in past clinical decision-making.

What Should These Labels Include?

According to Ghassemi and Nsoesie, AI labels should clearly convey the time, place, and manner of a model’s intended use. For example:

  • When was the model trained, and what data was used?
  • Does it include data from the COVID-19 pandemic, which drastically affected healthcare practices?
  • Where was the data collected, and how was the model optimized for that specific population?

This information could help users understand the model’s “potential side effects” and “adverse reactions.” For instance, a model trained on data from one region may perform poorly when applied in another, leading to incorrect predictions or decisions.

AI models that are flexible enough to serve multiple tasks may also need labeling that specifies approved versus unapproved uses. A model designed to generate billing codes from clinical notes, for example, may not be suitable for critical decisions such as determining which patients should be referred to specialists.
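To make the idea concrete, here is a minimal sketch of what such a label might look like as structured metadata attached to a model. The schema, field names, and example values below are hypothetical illustrations of the "time, place, and manner" information described above; they are not a standard proposed in the commentary.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ResponsibleUseLabel:
    """Hypothetical responsible-use label for a clinical AI model.

    Field names are illustrative, not a published labeling standard.
    """
    model_name: str
    training_data_start: date                  # when the training data begins
    training_data_end: date                    # when the training data ends
    includes_pandemic_era_data: bool           # e.g., data collected during COVID-19
    data_sources: list[str] = field(default_factory=list)  # where the data was collected
    target_population: str = ""                # population the model was optimized for
    approved_uses: list[str] = field(default_factory=list)
    unapproved_uses: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)

# Example: a billing-code model labeled as unapproved for referral decisions.
label = ResponsibleUseLabel(
    model_name="note-to-billing-code-v1",
    training_data_start=date(2018, 1, 1),
    training_data_end=date(2021, 12, 31),
    includes_pandemic_era_data=True,
    data_sources=["single academic hospital, northeastern U.S."],
    target_population="adult inpatients",
    approved_uses=["suggest billing codes from clinical notes"],
    unapproved_uses=["deciding which patients to refer to specialists"],
    known_limitations=["performance unverified outside the source region"],
)
print(label.unapproved_uses)
```

In this sketch, the "adverse reactions" discussed above map to explicit fields such as unapproved uses and known limitations, so a deployer can check a proposed use against the label before putting the model into practice.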

In general, the goal is transparency. Just as no drug is perfect and every drug carries risks, AI models should be disclosed as limited tools that require careful consideration before they are put into practice.

Who Should Be Responsible for Labeling AI Systems?

The responsibility for labeling AI systems should begin with the developers and those deploying the models in real-world settings. If an AI system is intended for human-facing applications, especially in safety-critical environments like healthcare, the developers should be required to provide clear labels based on established frameworks. These claims should be validated before the model is deployed.

Moreover, the process of labeling itself encourages developers to carefully consider the limitations and risks associated with their models. If developers know that they will need to disclose details about the model’s training data, they may be more inclined to ensure that the data is comprehensive and representative of the population it will serve.

Ultimately, agencies within the Department of Health and Human Services could play a role in overseeing the validation and enforcement of these labels, ensuring that AI systems are held to the same safety standards as medical devices and prescription drugs.

