Do AI systems need warnings – like medications?
Published on October 1, 2024
The use of artificial intelligence also carries risks. Researchers in the USA are therefore now calling for warning labels on AI systems, similar to those required for prescription drugs.
AI systems are becoming increasingly sophisticated and are therefore being deployed in more and more safety-critical settings - including in the healthcare system. Researchers in the USA argue that this makes it essential to ensure the "responsible use" of these systems in healthcare.
In a commentary in the journal Nature Computational Science, MIT Professor Marzyeh Ghassemi and Professor Elaine Nsoesie of Boston University therefore call for warning labels similar to those used for prescription drugs.
Do AI systems in the healthcare system need warnings?
Devices and drugs used in the US healthcare system must first pass through a certification process, for example with the federal Food and Drug Administration (FDA). Once approved, they remain subject to ongoing monitoring.
However, models and algorithms - with and without AI - largely bypass this approval and long-term monitoring, as MIT Professor Marzyeh Ghassemi criticizes. "Many previous studies have shown that predictive models need to be more carefully evaluated and monitored," she explains in an interview.
This applies especially to newer generative AI systems. Existing research has shown that these systems are "not guaranteed to work appropriately, robustly or without bias". This can lead to biases that go undetected due to a lack of monitoring.
This is what the labeling of AI could look like
Professors Marzyeh Ghassemi and Elaine Nsoesie are therefore calling for responsible-use labels for artificial intelligence. These could follow the FDA's approach to creating prescription labels.
As a society, we have come to understand that no pill is perfect - there is always some risk. We should have the same understanding for AI models. Every model - with or without AI - has limitations.
Such labels could specify when, where and for what purpose an AI model is intended to be used. They could also state the period over which the model was trained and on which data.
This is important, according to Ghassemi, because AI models trained on data from a single location tend to perform worse when they are deployed elsewhere. If users have access to information about the training data, for example, this could make them aware of "potential side effects" or "adverse reactions".
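To illustrate the idea, here is a minimal sketch of what such a usage label might look like as a structured record in code. The UsageLabel class, its field names and the example values are assumptions made for illustration only; they are not taken from the researchers' commentary or from any existing FDA format.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative sketch only: the class and field names are assumptions,
# not the authors' proposed label format or an FDA specification.
@dataclass
class UsageLabel:
    model_name: str
    intended_use: str                      # type of task the model is meant for
    intended_setting: str                  # where the model is meant to be deployed
    training_period: tuple[date, date]     # period the training data covers
    training_data_sources: list[str] = field(default_factory=list)
    known_limitations: list[str] = field(default_factory=list)  # "potential side effects"

# Hypothetical example of a filled-in label
label = UsageLabel(
    model_name="example-triage-model",
    intended_use="triage support for chest X-rays",
    intended_setting="US hospital emergency departments",
    training_period=(date(2015, 1, 1), date(2020, 12, 31)),
    training_data_sources=["hypothetical-hospital-dataset"],
    known_limitations=[
        "performance may drop on populations not represented in the training data"
    ],
)
print(label)
```

Such a record could accompany a deployed model so that users can check, at a glance, whether the setting they work in matches the one the model was trained and evaluated for.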
Source: Basicthinking.de