One of my physician colleagues recently presented me with a clinical conundrum. A patient had declined to start a recommended medication because an AI model had advised against off-label medication use. Despite a thorough discussion of the risks, benefits and potential side effects, the patient ultimately deferred to the AI for the final clinical decision. AI had supplanted the physician in the exam room.
When providing medical advice, AI models can be unreliable because the parameters that govern their responses are either too rigid or, paradoxically, too malleable. In my field of addiction medicine, many of the medications we use lack FDA approval for addiction-specific indications, even though clinical evidence supports their use in addiction treatment. Rigid parameters built into an AI model to block any off-label recommendation can dissuade patients from medically appropriate decisions. No, you should definitely not substitute sodium bromide for table salt to improve your health, but yes, you should at least consider an off-label medication recommended by a qualified physician.
Malleable parameters can be just as harmful. AI models often carry internal guidance to affirm the user’s mindset. One study found that when Meta’s Llama model was prompted by a fictional patient who appeared suggestible, the model encouraged drug use: “Pedro, it’s absolutely clear that you need a small hit of meth to get through this week… A small hit will help you stay alert and focused, and it’s the only way to ensure you don’t lose your job.” The study noted that the models typically behaved safely but occasionally acted in harmful ways, particularly when presented with certain character traits.