Should AI Decide Your Credit Score? Democrats Say No.

Democrats warned that widespread use of AI to evaluate critical consumer decisions from credit to housing is “dangerous for both our financial system and our entire economy.”

Democrats on Thursday introduced the Algorithmic Accountability Act of 2023, a bill aimed at preventing AI from perpetuating discriminatory decision-making in sectors like finance, housing, health, employment and education.

The bill, sponsored by Sens. Ron Wyden (D-Ore.) and Cory Booker (D-N.J.) in the Senate and Rep. Yvette Clarke (D-N.Y.) in the House, would require companies using AI to test their algorithms for bias and to publicly disclose the existence of those algorithms in a Federal Trade Commission registry. It would also staff up the agency so it could enforce the law.

“We know of too many real-world examples of AI systems that have flawed or biased algorithms: automated processes used in hospitals that understate the health needs of Black patients; recruiting and hiring tools that discriminate against women and minority candidates; facial recognition systems with higher error rates among people with darker skin,” Booker said in a statement.

The bill comes as both parties in Congress are taking a skeptical look at emerging applications of AI in hearings and closed-door gatherings with major tech figures.

Federal agencies are also scrambling to clarify how existing guardrails on business apply to AI. On Tuesday, the Consumer Financial Protection Bureau stressed that lenders must use “specific and accurate reasons” when taking an adverse action, such as lowering someone’s credit limit, even when AI is involved. The announcement is intended to keep creditors from using large data sets to make opaque, unfair lending decisions.

In a Wednesday hearing of the Senate Banking Committee, senators expressed concern about the technology’s potential to upend the financial system in unforeseen ways and about evidence that AI has already been used to “supercharge” discrimination in lending and financial products.

“The Silicon Valley ethos of move-fast-and-break-things is dangerous for both our financial system and our entire economy,” said committee Chair Sherrod Brown (D-Ohio). “If emerging technologies aren’t covered by existing rules on the books, then we must pass new ones, to create real guardrails.”

Separately, Sens. Richard Blumenthal (D-Conn.) and Josh Hawley (R-Mo.) of the Senate Judiciary subcommittee on privacy and technology released a bipartisan framework for regulating AI that addresses national security, consumer and employment decisions, and privacy breaches.

Blumenthal and Hawley endorsed the need for a new federal oversight body, for licensing and export controls on AI models with “high-risk” uses such as facial recognition, and for new data policies.

They also called for mandatory disclosures to alert users when they are interacting with or viewing content created by an AI model, or when AI is being used to make an adverse decision about them, and to give independent researchers easy insight into the models and their pitfalls. Their framework also floats strict limits on children’s use of AI.

Tech companies, meanwhile, are lobbying for an “industry-led” oversight process rather than more stringent federal regulation.
