Artificial intelligence is entering particularly sensitive territory: the practice of medicine. In the United States, several recent trials are exploring whether AI systems can take part in prescribing medications, in settings that remain limited but are highly symbolic. This movement, at once technological and regulatory, opens an unprecedented debate: can a decision as critical as a medical prescription, traditionally reserved for healthcare professionals, be entrusted to a machine?
This development does not amount to blanket authorization, but rather to a series of supervised trials and legislative proposals aimed at anticipating a transformation that is already underway. Here, AI is not intended as a substitute for doctors, but as a tool capable of assisting, optimizing, and, in some cases, automating simple decisions. Yet even within this limited framework, the issue extends far beyond technology; it touches on responsibility, ethics, and trust in the healthcare system.
Concrete experiments, but still limited
In the United States, certain pilot programs already allow AI systems to renew prescriptions in specific cases, particularly for low-risk treatments or those already approved by a doctor. In Utah, for example, a pilot program has authorized the use of an automated system to manage recurring prescriptions under indirect supervision. The goal is clear: to reduce the administrative burden on healthcare professionals and speed up access to treatments for patients.
At the same time, some startups are exploring more advanced applications, particularly in the field of mental health, where hybrid systems combine automated recommendations with human validation. These experiments are still being conducted under strict supervision, but they demonstrate that the line between assistance and decision-making is beginning to blur. AI is no longer limited to making suggestions; it is beginning to play an active role in the medical process.
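To make the idea of such a hybrid workflow concrete, here is a minimal, purely illustrative Python sketch of a human-in-the-loop gate: the automated component can only propose a renewal, and nothing is issued until a named clinician signs off. All identifiers (`Recommendation`, `clinician_review`, and so on) are invented for this example and do not describe any real product.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Decision(Enum):
    APPROVED = "approved"
    NEEDS_REVIEW = "needs_review"

@dataclass
class Recommendation:
    """An AI-generated suggestion; never a prescription on its own."""
    patient_id: str
    drug: str
    rationale: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def clinician_review(rec: Recommendation, approved_by: Optional[str]) -> Decision:
    """Turn a recommendation into an order only with an explicit human sign-off."""
    # The automated system can at most propose; absent a named clinician,
    # the request stays pending rather than being issued.
    if approved_by is None:
        return Decision.NEEDS_REVIEW
    return Decision.APPROVED

# The recommendation stays pending until a clinician is attached to it.
rec = Recommendation("patient-42", "sertraline 50 mg", "stable for 12 months", 0.93)
print(clinician_review(rec, approved_by=None))       # Decision.NEEDS_REVIEW
print(clinician_review(rec, approved_by="Dr. Lee"))  # Decision.APPROVED
```

The point of this design is that the default path is always human review: the system fails closed rather than open.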
A trend driven by pressure on the healthcare system
The emergence of these solutions can be largely attributed to growing pressure on the healthcare system. In the United States, nearly 20% of areas are designated as having a shortage of healthcare professionals, and wait times for care can stretch to several weeks [1]. In this context, automating certain tasks appears to be one way of improving the overall efficiency of the system.
In particular, AI makes it possible to process large volumes of medical data, identify patterns, and provide rapid recommendations. For simple, repetitive tasks—such as prescription renewals or managing standardized treatments—these tools can save a significant amount of time. However, this focus on optimization should not obscure the complexity of medical reasoning, which also relies on experience, intuition, and human interaction.
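As an illustration of how such fast-tracking might be kept narrow, the sketch below triages renewal requests with a deliberately simplified rule set; the drug list, thresholds, and field names are assumptions made up for this example, not clinical guidance. Anything that is not unambiguously low-risk falls back to a clinician.

```python
from dataclasses import dataclass

# Illustrative only: a real system would rely on clinical rules, not toy thresholds.
LOW_RISK_DRUGS = {"levothyroxine", "atorvastatin", "metformin"}

@dataclass
class RenewalRequest:
    drug: str
    months_since_last_visit: int
    prescribed_by_physician: bool   # the original prescription came from a doctor
    open_safety_flags: int          # e.g. interaction or lab alerts in the record

def triage_renewal(req: RenewalRequest) -> str:
    """Fast-track only requests that are low-risk on every criterion.

    Anything ambiguous is routed to a clinician by default, which keeps the
    automated path narrow and supervised.
    """
    if (
        req.drug in LOW_RISK_DRUGS
        and req.prescribed_by_physician
        and req.months_since_last_visit <= 12
        and req.open_safety_flags == 0
    ):
        return "fast-track: queue for supervised automatic renewal"
    return "route to clinician for full review"

print(triage_renewal(RenewalRequest("metformin", 6, True, 0)))   # fast-track
print(triage_renewal(RenewalRequest("warfarin", 6, True, 0)))    # clinician review
```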
A key issue: medical liability
One of the major challenges posed by this development concerns liability. If an AI system prescribes an inappropriate treatment, who is liable—the developer, the healthcare facility, or the supervising physician? This question remains largely unresolved and constitutes a significant barrier to wider adoption.
Current systems attempt to address this issue by maintaining some level of human oversight, however minimal. AI serves as a support tool, not as an autonomous decision-making authority. However, as systems become more accurate and autonomous, the line between the two becomes increasingly blurred. The risk is not only technical; it is also legal and ethical.
Between technological capabilities and medical limitations
Recent advances in artificial intelligence in medicine are undeniable. In certain fields, such as image analysis or diagnostic assistance, AI models are achieving performance levels comparable to—or even surpassing—those of humans on specific tasks. According to a study published in *Nature Medicine*, some AI systems can detect medical abnormalities with over 90% accuracy in controlled settings [2].
However, these results should not be extrapolated to the field of medicine as a whole. Prescribing is not merely a technical decision; it requires a comprehensive understanding of the patient, their medical history, their circumstances, and their preferences. AI can process data, but it does not perceive humans in all their complexity. It is this aspect that makes it difficult to fully automate medicine.
An ethical debate at the heart of innovation
The introduction of AI into medical prescribing raises profound ethical questions. Can we entrust a decision that directly affects an individual’s health to an algorithmic system? How can we ensure fairness, avoid bias, and guarantee the transparency of decisions?
These issues are all the more important given that AI systems can replicate—or even amplify—biases present in the data used to train them. An error in a recommendation system can have far more serious consequences in a medical context than in other fields. Caution is therefore a central principle in the deployment of these technologies.
Toward augmented medicine, but not automated medicine
Current trends are not leading to a future of medicine without doctors, but rather to a future of medicine enhanced by artificial intelligence. AI can improve efficiency, reduce certain workloads, and assist healthcare professionals, but it does not replace clinical judgment, the doctor-patient relationship, or human responsibility.
In this context, the role of the physician is evolving; they are becoming a supervisor, an interpreter, and a guarantor of the quality of decisions. AI is becoming just one tool among many—powerful, but used within established guidelines. The goal is not to replace humans, but to enhance their ability to provide care.
A technological frontier that is redefining medicine
The experiments conducted in the United States mark an important milestone in the evolution of medicine. They demonstrate that AI can be applied to increasingly sensitive areas, while also highlighting its limitations and the necessary precautions.
The future of medical prescribing will likely be neither fully automated nor completely unchanged. It will strike a balance between technological innovation and human responsibility. AI can transform medicine, but it is humans who define its rules, limits, and applications.
Learn more
The testing of AI systems capable of prescribing treatments is part of a broader transformation of the role of artificial intelligence in healthcare, from medical assistance to a redefinition of responsibilities. On a related topic, check out our article “Your Health Explained by AI: OpenAI Crosses a Threshold with ChatGPT Health”, which analyzes how AI systems are gradually being integrated into the care pathway and access to medical information.
References
1. U.S. Health Resources & Services Administration. (2025). *Health Workforce Shortage Areas*. https://www.hrsa.gov
2. Nature Medicine. (2023). *AI performance in medical diagnostics*. https://www.nature.com

