“I call this process ‘algorithmic alchemy,’ where the risk score produced by an algorithm becomes more ‘real’ and meaningful to a health system than a patient’s medical history, or anything they might say or do during a clinical consultation.”

Daniel Wolfe, former director of the International Harm Reduction Development program
This moment marks an important inflection point for algorithmic decision-making, particularly in computational health – the personalization and tailoring of prevention, diagnosis, and treatment through the application of big data learnings. Over more than 20 years of working in the field of addiction, I’ve witnessed how practitioners have tended to treat everyone in largely the same way, regardless of their treatment history, comorbidities, genetics, and other individual circumstances and traits. A one-size-fits-all approach has its own terrible limits. Computation, algorithmic amalgamation and analysis of large data sets, and new approaches to natural language processing and other kinds of predictive analytics are all incredibly powerful tools that can bring nuance to healthcare delivery, thereby transforming it.
However, I’m also aware that we’re at a moment of intense algorithmic anxiety and uncertainty. People are now talking about ChatGPT or Bard becoming the smartest doctor, teacher, or writer in any room. At the same time, people are recognizing that technologies don’t spontaneously implement themselves ethically or effectively. These innovations need to be tested in the context of existing cultures and workflows, whether in healthcare or any other setting. In other words, technologies are a tool, not a solution in and of themselves, and artificial intelligence is less important than augmented intelligence.
I don’t think we’ve achieved that balance yet, but the health sector probably has the greatest positive potential if we do. When AI is used to automate repetitive tasks, doctors can spend more time engaging with patients face-to-face. However, there’s so much work to be done to find an appropriate middle ground. We need to find a cross-disciplinary way to learn from past examples – and mistakes. Studies have found that algorithmic assessments used in the justice system for determining bail conditions often discriminate on the basis of race, criminal history, disability, and other factors. Similar biases exist in the deployment of data-driven technologies in healthcare. There needs to be a way to aggregate the lessons of this work across multiple disciplines.
I think we need to move from 20th-century regulatory structures to 21st-century ones: to data analysis, appropriate labeling and testing, and regulation. People in the machine learning or AI field often say that we need something like a Food and Drug Administration (FDA) for AI. In fact, the FDA itself needs to expand its scope and methods for algorithmic regulation. Part of my time at Bellagio involved drafting a citizen’s petition to unite scientists, legal and ethical experts, patient groups, and key analysts of algorithmic bias and governance to begin to demand at least a little more attention to this issue.
It will take some very creative thinking and engagement to realize AI’s medical promise – both with the people operating within the old system who are aware of its limits, and with the conceptual and creative pioneers in this space who are setting and implementing policy. But I believe it’s possible to build that bridge.
Daniel Wolfe is the former director of the International Harm Reduction Development (IHRD) program at the Open Society Foundations, which works to support the health and human rights of people who use drugs around the world. Before this, Wolfe was a community scholar at Columbia University’s Center for History and Ethics of Public Health. Daniel attended a residency at the Bellagio Center in February 2023 with a project titled “Beyond NarxCare: From algorithmic bias to regulatory action in the U.S. overdose crisis”.
Welcome to a special edition of the Bellagio Bulletin, where you’ll have a chance to hear from leading voices within the alumni network on one of the greatest global challenges of our time – the ethical application and governance of artificial intelligence. We hope you’ll find their points of view as illuminating as we have.