California is exploring the benefits and risks of using artificial intelligence in state government
California politics, artificial intelligence
Queenie Wong
November 24, 2023
Artificial intelligence that can generate text, images and other content could help improve state programs, but also comes with risks, according to a report released Tuesday by the governor’s office.
Generative AI can help quickly translate government materials into multiple languages, analyze tax claims to detect fraud, summarize public comments, and answer questions about state services. Still, the use of the technology, the analysis warned, also comes with concerns about data privacy, disinformation, equity and bias.
“When used ethically and transparently, GenAI has the potential to dramatically improve service outcomes and increase access to and use of government programs,” the report declared.
The 34-page report, ordered by Gov. Gavin Newsom, provides a look at how California could apply the technology to state programs, even as lawmakers grapple with how to keep Californians safe without hindering innovation.
Concerns about the safety of AI have divided tech executives. Leaders like billionaire Elon Musk have sounded the alarm that the technology could lead to the destruction of civilization, warning that if people become too dependent on automation, they will eventually forget how machines work. Other technology executives have a more optimistic view of AI’s potential to help save humanity by making it easier to fight climate change and disease.
At the same time, major tech companies, including Google, Facebook and Microsoft-backed OpenAI, are competing with each other to develop and release new AI tools that can produce content.
The report also comes at a time when generative AI is reaching another major inflection point. Last week, the board of ChatGPT maker OpenAI fired Chief Executive Sam Altman for not being “consistently candid in his communications” with the board, throwing the company and the AI sector’s future into chaos.
On Tuesday evening, OpenAI said it had reached “an agreement in principle” for Altman to return as CEO, and the company appointed members of a new board. The company had faced pressure to rehire Altman from investors, technology executives and employees who threatened to quit. OpenAI has not made public any details about what led to Altman’s surprise ouster, but the company reportedly grappled with disagreements about keeping AI safe while also making money. OpenAI is controlled by a nonprofit board, an unusual governance structure that made it possible to oust the CEO.
Newsom called the report an important first step as the state weighs some of the safety concerns that come with AI.
“We took a nuanced, measured approach to understanding the risks of this transformative technology while exploring how to leverage its benefits,” he said in a statement.
Advances in AI could benefit California’s economy. The state is home to 35 of the world’s top 50 AI companies, and data from PitchBook shows the GenAI market could reach $42.6 billion in 2023, the report said.
Some of the risks outlined in the report include spreading false information, giving consumers dangerous medical advice and enabling the creation of harmful chemicals and nuclear weapons. Data breaches, privacy and bias are also major concerns, along with whether AI will eliminate jobs.
“Given these risks, the use of GenAI technology should always be evaluated to determine whether this tool is necessary and useful to solve a problem compared to the status quo,” the report said.
While the state works on guidelines for using generative AI, the report says state employees should adhere to certain principles in the meantime to protect Californians’ data. For example, state employees must not provide data about Californians to generative AI tools like ChatGPT or Google Bard, or use unapproved GenAI tools on state devices, the report said.
AI’s potential use goes beyond just state government. Police agencies such as the Los Angeles Police Department, for example, plan to use AI to analyze officers’ tone and word choice in body camera videos.
California’s efforts to regulate some AI safety issues, such as bias, didn’t gain much traction in the last legislative session. But lawmakers have introduced new bills to tackle some of AI’s risks when they return in January, such as protecting entertainment workers from being replaced by digital clones.
Meanwhile, regulators around the world are still figuring out how to protect people from the potential risks of AI. In October, President Biden issued an executive order outlining standards around safety and security as developers create new AI tools. AI regulation was an important topic of discussion at the Asia-Pacific Economic Cooperation summit in San Francisco last week.
During a panel discussion with executives from Google and Facebook’s parent company Meta, Altman said he thought Biden’s executive order was a good start, although there were areas for improvement. Current AI models, he said, are fine and don’t require heavy regulation, but he expressed concern about the future.
“At some point, when the model can deliver the equivalent output of an entire company, then an entire country, and then the entire world, like maybe we want some kind of collective global oversight of that,” he said, a day before he was fired as OpenAI’s CEO.