AI Legislation: What to Expect on the CA Legislature Floor in 2025
- municjournal
- Jul 12
Since the release of the Large Language Model (LLM) chatbot ChatGPT in 2022, $41 billion has been invested in Bay Area AI startups, and California is home to 32 of the world’s top 50 AI companies. More broadly, the AI boom of the 2020s has created a new subculture of AI workers who have turned SF neighborhoods into tech enclaves, earning the Bay Area the nickname “Cerebral Valley.” AI-powered tools and platforms have pervaded American culture on a scale comparable to the 1980s home computer revolution. In response, California politicians are now introducing legislation and exploring collaboration with the private sector.

Illustration by Aleena Gao
Written by Alice Zhao and Jasmine Li
Edited by Catherine Qin
Last year, Governor Gavin Newsom co-hosted the GenAI summit, inviting industry professionals and leading researchers to discuss how the state could use AI to improve the lives of Californians. He has headed an initiative to incorporate AI tools developed by companies such as Microsoft, Anthropic, and NVIDIA into the Departments of Technology, Tax and Fee Administration, and Transportation to improve customer service and reduce traffic. Newsom also signed over 20 AI bills, many of which concerned issues such as deepfakes, children’s exposure to AI, and access to AI detection software. One bill, SB 896, the Generative Artificial Intelligence Accountability Act, regulates state agencies’ use of AI. Several other bills concerning the use of AI are currently on the floor.
SB 7 seeks to limit the use of automated decision systems (ADS) in employment decisions, requiring significant human oversight over the hiring, promotion, and termination of workers. The bill defines an ADS as “any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues simplified output… that is used to assist or replace human discretionary decision making,” and permits workplace use only if the system does not draw on personal status data (religious beliefs, citizenship, etc.). SB 7 further outlaws ADS that conduct predictive behavior analysis, as well as the use of such tools to inform employee compensation. To increase transparency, the bill mandates that companies provide written pre-use and post-use notices to workers. After the post-use notice, employees have a 30-day window to appeal any ADS-made decision, and employers must provide a human reviewer to evaluate the appeal within two weeks.
Introduced by Assemblymember Bauer-Kahan of California’s 16th Assembly District, AB 1018 aims to prevent algorithmic discrimination by automated decision tools across economic sectors such as housing, employment, education, and healthcare. The bill assigns heightened responsibility both to those who customize AI tools beyond their intended purpose and to any company using ADS, requiring mandatory transparency measures, annual audits, and impact evaluations. Noncompliance may be punished with fines of up to $25,000. The bill builds on Bauer-Kahan’s proposal from last year, AB 2930, which addressed similar issues but was pulled from the floor before reaching Newsom’s desk after the Senate Appropriations Committee limited its scope.
SB 53 establishes a framework for developing CalCompute, a public cloud computing cluster, under safe, ethical, and equitable deployment practices. The bill also expands whistleblower protections, prohibiting companies from enforcing policies that prevent employees from disclosing critical risks, such as cyberattacks and uncontrolled AI, or from correcting false statements about them. This comes in response to increasing lobbying by tech companies; notably, the bill was introduced a week after OpenAI whistleblower Suchir Balaji’s death in November 2024.
Opposition & Support
Google and OpenAI were strong opponents of Bauer-Kahan’s bill when it was introduced as AB 2930 in 2024, and OpenAI expanded its lobbying team in December 2024 to discourage artificial intelligence regulation across the nation. Their positions on AB 1018 are not expected to change. With emerging overseas competitors, a shortage of data for AI training, and President Trump’s cuts to AI research, tech companies are especially wary of anything that could limit their growth.
Other opponents of AB 2930 included Kaiser Permanente and Sutter Health. Notably, these companies integrated artificial intelligence into their treatment processes as early as March 2024, using tools such as predictive analytics for treatment and generative AI for secretarial work. Bauer-Kahan’s push to target extensive profiling through stricter transparency requirements may inhibit further adoption of AI in the healthcare industry. Other opponents raise concerns about how exactly human resources departments would carry out impact assessments, an ongoing struggle in turning AI policy into practical workplace procedures.
Supporters of AB 2930 were limited but included civil rights and consumer organizations such as the Center for Democracy and Technology, the Center on Race and Digital Justice, and Consumer Reports. Dissatisfied with the lack of federal oversight on the issue, many supporters see these measures as necessary to protect consumers and workers, and as a bold first step that could provide a framework for AI legislation in other states or at the federal level. Supporters also see this as an opportunity to make California’s AI legislation more comprehensive.
At the federal level, House Republicans are moving to pass a budget reconciliation plan that would halt state efforts to regulate AI for the coming decade. The move, however, is unsurprising. Since the beginning of the technological age, Silicon Valley tech giants have partnered with Republicans to fight business regulation. Cyberlibertarianism, for example, emerged from the late 20th-century clash of San Francisco hippie culture and Silicon Valley hackers. Cyberlibertarians viewed the budding “free” World Wide Web as a revolutionary antidote to big government, and the movement grew with early support from Republicans like Reagan and Bush. Now its heirs have returned to political power in the form of tech CEOs and investor elites such as Elon Musk and Peter Thiel, whose companies (SpaceX and Palantir, respectively) receive hefty federal contracts. Over the last 16 years, $2.5 billion of taxpayer money has gone to Palantir’s AI-powered analytics for the defense industry. The anti-regulatory movement, composed of tech elites and pro-business politicians, continues to worry about America’s waning lead in the “AI race” and will seemingly stop at nothing to keep the money flowing.
Impacts
California, however, is not the first state to propose AI legislation. Colorado passed the nation’s first comprehensive AI framework, Senate Bill 205, in 2024, and more than a year later it remains the most comprehensive such law in the country. Slated to take effect in 2026, the legislation focuses on enforcing transparency and preventing algorithmic discrimination by AI systems that make high-impact decisions in consumers’ lives, much like Bauer-Kahan’s AB 1018. However, pushback from tech companies has led the state government to ask the bill’s sponsors to revise it to curb potential negative impacts on Colorado startups. Similarly, legislators in Utah and Texas have introduced and passed bills regulating consumer protection and high-risk systems, respectively, though both laws are much narrower in scope and both states have since narrowed them further. This pushback mirrors the intense backlash from California’s growing tech sector against bills to regulate AI.
Many see the increased discourse surrounding Californian AI legislation as an opportunity for the state to lead on AI regulation, providing guidelines for future federal rules and encouraging transparency and accountability. Protecting consumer interests and ensuring these new AI-based systems are equitable is a key concern of the government. However, many tech companies seek to amend or oppose these bills on the grounds that increased regulation could hinder the innovation and growth critical to the emerging AI industry. For the 20% of California workers employed in the tech sector, which has recently experienced mass layoffs, the AI boom serves as a silver lining for job opportunities. Lawmakers, meanwhile, continue to search for ways to protect the American workforce and prevent a repeat of the job losses that accompanied the digital revolution of the 1980s.
One part of the solution might be SB 7, nicknamed the “No Robo Bosses Act.” As more Americans sit through automated video interviews and have their resumes combed by artificial intelligence programs, SB 7 would keep humans in management positions and in charge of employment decisions. The bill may not do much to mitigate the current wave of technological unemployment, since most of those laid off are lower-level software developers; instead, it works preemptively, foreclosing the possibility of a fully automated workforce. SB 7 also protects employees’ right to appeal AI-backed employment decisions, which could help prevent new abuses of power by corporations that employ ADS.
SB 53 wages a different battle in the AI revolution, pushing back against privatization on two fronts. First, the bill would extend government protection to whistleblowers at large AI companies. Second, it brings CalCompute back to the table after Newsom vetoed it last year as part of SB 1047, and California must now weigh its potential impacts. Investment in a large-scale public AI network in California would be revolutionary: CalCompute could provide new entry points for small startups and serve as an important check on an AI monopoly. By offering supercomputing power as a public resource, the network would deprioritize the commercial interests of tech giants and promote market innovation in a slowing AI sector. Key digital public infrastructure (DPI), such as the Internet and GPS, has transformed national industries; the success of SB 53 in California could usher in a new age of DPI around the world.
Bauer-Kahan’s AB 1018 aims to head off algorithmic discrimination in various economic sectors, including healthcare. Health insurance companies are investing millions in artificial intelligence systems that track at-risk individuals in order to optimize their business, and 96 percent of American hospitals now use predictive models in their electronic health records. Many legislators see an immediate response as a necessity, including North Dakota Governor Kelly Armstrong, who banned insurance companies from using AI to deny claims just this April. In a 2025 study, researchers found that only 44 percent of hospitals that self-reported using predictive models evaluated their systems for bias, and two-thirds evaluated their models for accuracy. The findings stress the need for policy intervention on evaluation measures, given that “an increasing number of empirical analyses [have revealed] racial and other forms of bias in algorithms that perpetuate or exacerbate inequities.” To increase bias and accuracy evaluations nationwide, the study suggests connecting under-resourced hospitals with technical assistance, which is often better managed at the state or municipal level. The researchers also point to the many hospitals using third-party or self-developed models, which currently fall outside the scope of federal regulation and may warrant extra consideration. Local legislation, pushed by politicians such as Bauer-Kahan, may be one solution to this problem.
Despite signing over 20 laws regulating the development and influence of AI just last year, Newsom has also shown interest in working with the AI industry. Beyond encouraging public-private collaboration through the GenAI summit, he signed an executive order to study the risks and progress of AI development and to begin carefully implementing AI within the state government. He has discussed using generative AI tools to make state government more efficient, proposing systems that analyze traffic data to reduce road congestion and improve customer service processes. He has also launched an initiative with NVIDIA to bring AI resources into community college courses, helping educators, students, and workers and creating new pathways for residents to build skills and advance their careers. As home to the AI boom and to one-of-a-kind collaborations with industry giants, California has proven to be the center of AI research, collaboration, and development, and with these new bills on the table, the state may soon become a leader in AI governance as well. However, legislators must work carefully to balance the interests of consumers and developers.