As with most hearings, the session offered more questions than answers — the biggest, of course, being whether Congress has any stomach for regulating a new industry at all.
Here are three big unknowns that now hang over Washington’s efforts to control a profound and disruptive new technology.
Do we need a new federal agency?
Surprisingly, this one may have bipartisan support. Senate Judiciary Chair Sen. Dick Durbin suggested the need for a new agency dedicated to overseeing the development of artificial intelligence — possibly an international one. “We’re dealing with innovation that doesn’t necessarily have a boundary. We may create a great U.S. agency — and I hope that we do — that may have jurisdiction over U.S. corporations and U.S. activity that doesn’t have a thing to do with what’s gonna bombard us from outside,” he said.
Republican Sen. Lindsey Graham, the ranking member on the Judiciary Committee, chimed in by backing the idea of an agency that would issue licenses for powerful new AI tools.
For his part, Altman was on board with both the agency and the licensing idea during the hearing, and said he's looking globally, not just nationally, for regulations.
“It’s difficult. It sounds sort of like a naive thing,” he said to a group of reporters after the hearing. “But we’ve done it for other industries. IAEA did it. And I think this is a technology we should treat with that level of seriousness.”
Not everyone agrees, though: Fellow panelist Christina Montgomery, IBM’s chief privacy and trust officer, said existing oversight is enough to govern AI. Echoing a more familiar industry talking point, she said more would stifle innovation.
Who owns the data that AI trains on?
The biggest and most powerful AI platforms — the “large language models” made by OpenAI and others — are built on massive amounts of existing data, much of which is made by people who had no idea their work would be used to train a piece of software.
Front and center for Sen. Marsha Blackburn, the Tennessee senator who introduced antitrust legislation to break up Ticketmaster in the last Congress, was the issue of who should own the AI-generated material produced by large language models trained on copyrighted works.
Altman offered little in the way of solutions for the growing community of creators angry that their work is being used to train large language models, but he did say during the hearing that people should be able to opt out of having their data used to train those models.
Tomorrow, this issue will be front and center at a House Judiciary subcommittee hearing on the intersection of generative AI and copyright law.
How much will AI influence the 2024 election?
Chatbots are very good at simulating human speech and writing, and Sen. Josh Hawley raised AI's ability to sway people's opinions, saying it could be used to target undecided voters in the 2024 election cycle.
Later, Sen. Amy Klobuchar offered her own concerns that ChatGPT could provide inaccurate information to voters about the election itself.
Altman didn’t get defensive. In fact, he agreed.
“It’s one of my areas of greatest concern — the more general capability of these models to manipulate, to persuade, to provide sort of one-on-one disinformation,” said Altman.
Both Altman and Gary Marcus, an AI expert who also sat on the panel, sought to distinguish generative AI in the tech policy conversation from the discussion around the algorithms that social media platforms use to recommend content.
In an apparent effort to distinguish OpenAI from the social media platforms under congressional fire for their content moderation and recommendation policies, Altman emphasized that OpenAI's AI models were not designed to maximize audience engagement.