Worried about AI? How California lawmakers plan to address the risks of the technology in 2024

(SOPA Images / LightRocket via Getty Images)


California politics, artificial intelligence

Queenie Wong

Dec. 28, 2023

Jodi Long was caught off guard by the cage filled with cameras meant to capture images of her face and body.

“I was a little panicked because before I got in there, I said I didn’t remember this being in my contract,” the actor said.

The filmmakers needed her digital scan, Long was told, because they wanted to make sure her arms were positioned correctly in a scene where she is holding a computer-generated character.

That moment in 2020 stuck with Long, president of SAG-AFTRA’s Los Angeles local organization, as she negotiated protections around the use of artificial intelligence when actors went on strike. In November, the actors’ guild reached a deal with Hollywood studios that, among other things, required permission and compensation for the use of an employee’s digital replica.

Unions aren’t the only ones trying to limit the potential threats of AI. Gov. Gavin Newsom signed an executive order on AI in September, and California lawmakers have introduced a raft of legislation paving the way for more regulation in 2024. Some proposals aim to protect workers, combat AI systems that can contribute to gender and racial bias, and establish new requirements to guard against the misuse of AI for cybercrime, weapons development and propaganda.

However, whether California lawmakers will succeed in passing AI legislation remains unclear. They will face lobbying from tech companies worth billions, including Microsoft, Google and Facebook, political powerhouses that have successfully blocked several AI bills introduced this year.

Artificial intelligence has been around for decades. But as the technology rapidly develops, the ability of machines to perform tasks associated with human intelligence has raised questions about whether AI will replace jobs, fuel the spread of misinformation or even lead to the extinction of humanity.

As lawmakers try to regulate AI, they are also trying to understand how the technology works so they don’t hinder its potential benefits while trying to limit its dangers.

“One of the key challenges is that this technology is dual-use, meaning that the same kind of technology that could, for example, lead to massive improvements in healthcare could also potentially be used to cause quite serious harm,” said Daniel Ho, a Stanford Law School professor who advises the White House on AI policy.

Politicians are feeling a sense of urgency, pointing to the resistance they have already faced in their efforts to address some of the mental health and child safety problems exacerbated by social media and other technology products. While some tech executives say they are not opposed to regulation, they have also said critics are exaggerating the risks and expressed concern that they will face a patchwork of rules that vary around the world.

TechNet, a trade group that includes a variety of companies such as Apple, Google and Amazon, outlines on its website what members would and would not support when it comes to AI regulation. For example, TechNet says policymakers should avoid blanket bans on artificial intelligence, machine learning or other forms of automated decision-making and not force AI developers to publicly share proprietary information.

Assemblymember Ash Kalra (D-San Jose) said policymakers aren’t going to let tech companies regulate themselves.

“As a legislator, my intent is to safeguard and protect the public and workers from risks that may arise from unregulated AI,” Kalra said. “Those in the industry have different priorities.”

According to an April report from Goldman Sachs, AI could affect the equivalent of 300 million full-time jobs.

In September, Kalra introduced legislation that would give actors, voice artists and other workers a way to nullify vague contracts that allow studios and other companies to use artificial intelligence to digitally clone their voices, faces and bodies. Kalra said he has no plans for now to shelve the bill, which is backed by SAG-AFTRA.

Federal lawmakers have also introduced legislation aimed at protecting workers’ voices and likenesses. President Biden signed an executive order on AI in October, noting how the technology could improve productivity but also displace workers.

Duncan Crabtree-Ireland, SAG-AFTRA’s national executive director and chief negotiator, said he believes it is important that both state and federal lawmakers regulate AI without delay.

“It has to come from different sources [and be] curated in a way that creates the ultimate image we all want to see,” he said.

Policymakers outside the U.S. have already made progress. In December, the European Parliament and EU member states reached a landmark agreement on the AI Act, a proposal billed as “the world’s first comprehensive AI law.” The legislation includes different sets of rules based on how risky AI systems are and would also require AI tools that generate text, images and other content, such as OpenAI’s ChatGPT, to publish which copyrighted data was used to train the systems.

As federal and state lawmakers refine legislation, workers are seeing how AI impacts their jobs and testing whether current laws provide sufficient protections.

Tech companies including Microsoft-backed OpenAI, Stability AI, Facebook parent Meta and Anthropic are facing lawsuits over allegations that they used copyrighted works by artists and writers to train their AI systems. On Wednesday, the New York Times filed a lawsuit against Microsoft and OpenAI, accusing the tech companies of using its copyrighted work to create AI products that would compete with the news outlet.

Tim Friedlander, president and co-founder of the National Assn. of Voice Actors, said its members are missing out on jobs because some companies have decided to use AI-generated voices. Actors have also claimed that their voices are being cloned without their consent or compensation, a problem musicians face as well.

“One of the difficult things right now is that there’s no way to prove something is human or synthetic, or to prove where the voice comes from,” he said.

Worker protections are just one issue surrounding AI that California lawmakers will look to address in 2024.

In September, state Sen. Scott Wiener (D-San Francisco) introduced an artificial intelligence safety bill that aims to address some of the biggest risks of AI, he said, including the technology’s potential misuse in chemical and nuclear weapons, election interference and cyberattacks. While lawmakers don’t want to stifle innovation, they also want to be proactive, Wiener said.

“If you don’t get ahead of it, it could be too late, and we’ve seen that with social media and other areas where we should have at least put broad regulatory systems in place before the problem started,” he said.

Lawmakers are also concerned that AI systems could make mistakes that lead to unequal treatment of people based on protected characteristics such as race and gender. Assemblymember Rebecca Bauer-Kahan (D-Orinda) is sponsoring a bill that would prohibit any person or entity from deploying an AI system or service that makes consequential decisions resulting in algorithmic discrimination.

Concerns that algorithms can amplify gender and racial biases because of the data used to train computer systems are an ongoing problem in the tech industry. For example, Amazon scrapped an AI recruiting tool after it showed bias against women because its computer models were trained with resumes submitted mostly by men, Reuters reported in 2018.

Passing AI legislation has already proved difficult. Bauer-Kahan’s bill, AB 331, did not make it to a vote on the Assembly floor. An analysis of the legislation found that several industries and companies had expressed concerns that it was too broad and would result in overregulation.

Still, Bauer-Kahan said she plans to reintroduce the bill in 2024, despite the opposition she faced last session.

“It’s not that I want these tools to go away, but I want to make sure that when they come to market, we know they are non-discriminatory,” she said. “That balance is not too much to ask.”

Trying to figure out which issues to prioritize when it comes to the potential risks of AI is another challenge politicians will face in 2024, as controversial bills can be difficult to pass in an election year.

“Without agreement on at least some idea of how to prioritize harms, and which ones are most urgent, it could become difficult to figure out what the most effective form of intervention might be,” said Ho, the Stanford Law School professor.

Despite all the fears surrounding AI, Long says she remains optimistic about the future.

She has starred in blockbuster films such as Marvel’s “Shang-Chi and the Legend of the Ten Rings,” and in 2021 became the first Asian American to win a Daytime Emmy for outstanding supporting performance, for the Netflix show “Dash & Lily.”

“My industry is a collaborative process between many people,” she said. “And as long as people get our stories out there, I think we’ll be fine.”
