HARTFORD — National and statewide organizations will gather at the Legislative Office Building in Hartford on Wednesday morning, February 26th, to testify in support of amending and strengthening Connecticut’s proposed artificial intelligence legislation.
The bill addresses algorithmic decision-making systems and marks a significant step toward comprehensive AI legislation. If enacted with some amendments, it could become a landmark law governing algorithmic decision systems (ADSs) and AI. Including protections against discrimination by ADSs would create a much-needed balance among the harms, risks, and rewards of high-risk AI in consequential decision making. But the bill’s strengths depend on amendments that would make it a workable solution for Connecticut consumers and workers.
A coalition of groups, ranging from racial justice and consumer protection organizations to labor and democracy-and-technology advocates, testifies today on Senate Bill 2, An Act Concerning Artificial Intelligence. Artificial intelligence has long been touted as a forward-looking area of Connecticut’s regulatory landscape, and S.B. 2 is a step toward more equitable AI systems. The coalition, which includes the NAACP of Connecticut Statewide Conference, Consumer Reports, EPIC, the ACLU of Connecticut, and the Center for Democracy & Technology, also sent a letter [attached] urging a stronger version of the bill than the one raised in committee this week.
“Policymakers should also strengthen the law and further protect Connecticut workers and consumers,” says the letter. The coalition’s asks include building on existing civil rights protections by prohibiting the sale or use of discriminatory AI decision systems; requiring mitigation of discriminatory harms uncovered through strengthened impact assessments; clarifying how the law applies to the public and private sectors; strengthening enforcement through a right to redress; and expanding the law’s transparency provisions, among other changes.
“Connecticut played a crucial role in kickstarting the national conversation about managing AI risks in 2024,” says the letter. “It has an opportunity in 2025 to lead the nation with innovative policy that places common-sense guardrails on the development and use of AI and automated decision-making systems.”
The group of organizations says it cannot support the legislation unless certain changes are made, including closing loopholes in its definitions and elsewhere in the bill, addressing gaps in transparency, and recasting its impact assessments. The protections in the current bill also need stronger enforcement mechanisms.
“This bill has the potential to offer consumers in Connecticut critical baseline rights — including the right to an explanation when AI makes a high stakes decision about them,” said Grace Gedye, policy analyst at Consumer Reports. “It has the potential to provide an essential patch for Connecticut’s civil rights and consumer protection laws for the AI era. But right now, there are loopholes that could completely undercut the law. They must be tightened up.”
“We commend Senator Maroney for his diligent work on this bill and for his nationwide work raising awareness of the need for policies that advance responsible AI,” said Alexandra Reeve Givens, executive director of the Center for Democracy & Technology. “The bill's definitions need to be broadened and some loopholes closed to ensure consumers and workers receive the information they need when companies use AI to make key decisions that impact their lives. If those changes are made, SB 2 would bring much-needed transparency and accountability to AI-driven decisions. We hope the General Assembly takes this opportunity to advance strong legislation that empowers its workers and consumers and ensures that advances in AI benefit everyone.”
“Connecticut can be a national leader in artificial intelligence regulation,” said ACLU of Connecticut executive director David McGuire. “But we deserve a transparent scheme that examines, eliminates, and readjusts systems and knowledge bases rooted in bias. AI systems have also disproportionately harmed marginalized communities by reinforcing existing discrimination in housing, hiring, and healthcare. Strong anti-discrimination protections must be a core part of this legislation to ensure AI serves everyone fairly. Our testimony today provides ideas to strengthen these protections.”
“Automated decision-making systems are used to make important decisions about people in Connecticut,” said Corrie Betts, NAACP of Connecticut Statewide Conference. “This bill requires stronger anti-discrimination protections and closing of loopholes in order for our organization to support it. We know there is a significant risk of harm to historically marginalized communities, including Black and Brown people, and we know the most vulnerable people interacting with these systems do not enjoy the same privileges as others. We are pleased to be part of this coalition and expect our voices to be heard and heeded.”
"Companies use opaque and unproven AI systems to make life-altering decisions about Connecticut residents every day, despite clear evidence that these systems are often biased or inaccurate," said Kara Williams, EPIC Law Fellow. "Connecticut residents—and all Americans—deserve to have transparency into these systems and rights if they are harmed by companies using AI. S.B. 2 is a step in the right direction, and EPIC is happy to continue working with lawmakers to further strengthen the bill."