Sens. Young, Rounds Seek AI Consensus With Schumer
The goal of this summer’s Senate briefings on artificial intelligence is to reach agreement on legislation that allows for technological innovation while protecting individual privacy, Sens. Todd Young, R-Ind., and Mike Rounds, R-S.D., told us Tuesday.
Senate Majority Leader Chuck Schumer, D-N.Y., hosted the first of three AI briefings Tuesday with help from Young, Rounds and Sen. Martin Heinrich, D-N.M. (see 2306130056). Future sessions will include a closed national security briefing for senators on the technology's global threats.
Any legislation the group comes up with should allow AI to “flourish” while protecting consumer privacy, said Rounds, noting continued AI innovation from U.S. adversaries like China. There will be legislation at some point, but the main goal now is to get members briefed on the benefits and risks, he said.
Members need to start thinking about how the risks can be mitigated, said Young. Asked whether he could strike a bipartisan agreement with Schumer as he did with chips legislation (see 2208090062), he said this process will be more “consultative,” with various committees involved from the outset. It’s unclear what the legislation will look like, but “we do anticipate regulating in this space through regular order,” he said. Sens. Richard Blumenthal, D-Conn., and Josh Hawley, R-Mo., are separately pursuing potential legislation.
Schumer spoke on the Senate floor Wednesday, saying the first briefing had a “really strong turnout from both parties.” He noted even the creators of the latest AI technology admit they're struggling to keep up with societal impacts: “They don’t have a firm grip on how this technology works now, and even less of a grip on how it will work in the future.” He highlighted the issue of “explainability,” or making AI’s inner workings understandable to the public.
Schumer is “following a very good path of setting up these briefings and kind of getting us to understand the issues,” said Sen. Angus King, I-Maine. The biggest issue for King is to create watermarking or labeling so people know when they’re viewing AI-generated content. “The problem is the technology has gotten so good that you can’t distinguish between something you see and reality,” he said. “In a democracy that’s based upon information, that’s a dangerous situation.”
IBM, Microsoft and OpenAI urged Congress to issue AI regulations (see 2305250037) and several agencies, including the FTC (see 2304250054 and 2305180050), are exploring how statutory authority applies to the technology.
NTIA closed public comment Monday on its query into how policies should be designed to ensure AI can be trusted (see 2304120036). A bipartisan group of 22 state attorneys general wrote the agency Monday, urging regulators to consider a “risk-based approach” that recognizes how some AI applications “require greater oversight.” AI decision-making that has “legal or other significant effects on people” is one area of concern, they said. The group included AGs from Colorado, South Carolina, South Dakota, Virginia, California and New York, as well as Washington, D.C.
Regulators should be “mindful” about the limitations of AI assurance and auditing and should avoid allowing such mechanisms to “turn into inconsequential and performative check-box exercises,” Mozilla told NTIA: Agencies should instead pay close attention to “aspects such as independence, auditability, and benchmarks or standards.” Mozilla encouraged NTIA to enable “more purpose-driven innovation and growth in the tech sector.”