Behind the times: Washington is trying to catch up on AI use in healthcare
Darius Tahir
February 13, 2024
Lawmakers and regulators in Washington are starting to wonder how to regulate artificial intelligence in healthcare, and the AI industry thinks there’s a good chance they’ll screw it up.
“It’s an incredibly difficult problem,” says Dr. Robert Wachter, chairman of the Department of Medicine at UC San Francisco. “There is a risk that we will come in with guns blazing and over-regulate.”
The impact of AI on healthcare is already widespread. The Food and Drug Administration has approved some 692 AI products. Algorithms help schedule patients, determine emergency room staffing levels, and transcribe and summarize clinical visits to save physicians time. They are starting to help radiologists read MRIs and X-rays. Wachter says that for complex cases he sometimes informally consults a version of GPT-4, a large language model from the company OpenAI.
The scale of AI’s impact, and the potential for future change, mean that the government is already playing catch-up.
“Policymakers are terribly behind the times,” Michael Yang, senior managing partner at OMERS Ventures, a venture capital firm, said in an email. Yang’s colleagues have made huge investments in the sector. Rock Health, a venture capital firm, says backers have poured nearly $28 billion into digital health companies that specialize in artificial intelligence.
One problem regulators are grappling with, Wachter says, is that AI changes over time, unlike drugs, which will have the same chemistry in five years as they do today. But governance is taking shape, with the White House and several health care-focused agencies developing rules to ensure transparency and privacy. Congress is also showing interest; the Senate Finance Committee held a hearing on AI in healthcare last week.
Alongside the regulation and legislation comes a surge in lobbying. CNBC counted a 185% increase in the number of organizations disclosing AI lobbying activities in 2023. Trade group TechNet has launched a $25 million initiative, including TV ad buys, to educate viewers about the benefits of artificial intelligence.
“It’s very difficult to know how to smartly regulate AI because we’re so early in the technology’s invention phase,” Bob Kocher, a partner at venture capital firm Venrock who previously served in the Obama administration, said in an email.
Kocher has spoken to senators about AI regulation. He highlights some of the difficulties the healthcare system will face in adopting these products: physicians, who bear the malpractice risk, may find themselves using technology they do not fully understand to make clinical decisions.
An analysis of January Census Bureau data by the consulting firm Capital Economics shows that 6.1% of healthcare companies plan to use AI in the next six months, placing the industry roughly in the middle of the sectors examined.
Like any medical product, AI systems can pose risks to patients, sometimes in novel ways. An example: they can make things up.
Wachter remembers a colleague who, as a test, assigned OpenAI’s GPT-3 to write a prior authorization letter to an insurer for a purposefully crazy prescription: a blood thinner to treat a patient’s insomnia.
But “the AI wrote a nice note,” he said. The system cited recent literature so convincingly that Wachter’s colleague briefly wondered whether she had missed a new line of research. It turned out that the chatbot had made up its claims.
There is already a risk of AI amplifying bias in healthcare. Historically, people of color have received less care than white patients. For example, research shows that Black patients with fractures receive pain medication less often than white patients. This bias could be locked in permanently if artificial intelligence is trained on that data and then acts on it.
Research into AI deployed by major insurers has confirmed that this has happened. But the problem is broader. Wachter said UCSF was testing a product to predict no-shows for clinical appointments; patients deemed unlikely to show up are more likely to be double-booked.
The test showed that people of color were more likely not to show up. Whether or not the finding was correct, “the ethical answer is to ask: why is that the case, and can you do something about it?” Wachter said.
Hype aside, these risks are likely to continue to draw attention over time. AI experts and FDA officials have emphasized the need for transparent algorithms monitored over the long term by human regulators and third-party researchers. AI products adapt and change as new data is processed, and scientists will keep developing new products.
Policymakers will need to invest in new systems to track AI over time, said University of Chicago Provost Katherine Baicker, who testified at the Senate Finance Committee hearing. “The biggest progress is something we haven’t thought about yet,” she said in an interview.
KFF Health News, formerly known as Kaiser Health News, is a national newsroom that produces in-depth journalism on health issues.