AI programs will likely have applications across sectors, Chopra said, and “the fact that Big Tech companies are now overlapping with the major foundational AI models is raising even more questions about what we do to make sure that they do not have outsized power.”
He’s not alone in his concern. For months, policymakers from the White House to the Securities and Exchange Commission have been strategizing over how to address some of the headaches AI could cause for financial institutions and markets.
SEC Chair Gary Gensler has voiced alarm that Wall Street firms will likely rely on a limited number of AI platforms, which could lead to sudden market swoons. And Rostin Behnam, chair of the Commodity Futures Trading Commission, recently unveiled a task force that could lead to new rules or guidance around AI in derivatives markets.
Separately, Sen. Mark Warner (D-Va.) is working on legislation that would task the Financial Stability Oversight Council — an interagency body of top regulators — with responding to AI-related risks.
Chopra, an ally of Sen. Elizabeth Warren (D-Mass.), has focused on how artificial intelligence has been used by lenders to automate decisions around who gets access to credit. The bureau is developing rules to address how data brokers use the technology in an attempt to stem abuses.
While there has been plenty of chatter about the need for AI-specific regulation, compiling a rulebook will take time. Discussions on how to address existential challenges that the fast-evolving technology could pose to the economy are more urgent now that internal debates over OpenAI’s future have become public.
“I don’t think it’s quite known what all the risks are that are out there,” Christy Goldsmith Romero, a longtime regulator who now serves on the CFTC, said in an interview.
Goldsmith Romero, who has sponsored an advisory committee to help the derivatives regulator chart a course on AI, said the technology is “evolving so quickly that I think the first thing to do is to come at the concept from high-level principles that always apply whenever we’re looking at things: risk management, governance.”
Fears that the potential abuse of generative AI could lead to out-of-control computer programs often sound like science fiction. In the context of financial markets, AI programs could put automated trading and lending capabilities at financial institutions “on steroids,” Chopra said.
If those programs are making their own decisions based on incoming data, that can “actually lead to very procyclical effects that would magnify tremors into much larger financial quakes,” he said.
OpenAI’s interim CEO Emmett Shear wrote on X that the board did not remove Altman “over any specific disagreement on safety” of OpenAI’s tech.
There are also questions about what Microsoft’s hiring of Altman and other OpenAI executives might mean for the competitive landscape around AI, with some speculating that the personnel moves are akin to an acquisition.
That could be a difficult argument to make. OpenAI’s technology still belongs to OpenAI, and Altman may find it harder to replicate his early successes under Microsoft’s corporate auspices than he did as the head of an independent startup answering to a nonprofit board.
Still, FTC Chair Lina Khan has been interested in exploring whether major tech firms have used strategic investments in artificial intelligence startups to avoid regulatory scrutiny or otherwise harm competition.
In the meantime, Altman’s move to Microsoft will benefit from at least one policy designed to keep labor markets more competitive.
“I’m sure OpenAI’s leadership and staff are grateful that non-competes are unenforceable in California,” one FTC official told POLITICO.