NTIA Expects to Issue AI Report ‘Later This Year’
NTIA expects to issue a report on responsible AI policies “later this year,” Associate Administrator-Policy Analysis and Development Russ Hanser said Thursday.
The agency publicly released nearly 1,500 comments in June on its query into how policies should be designed to ensure AI technology is trustworthy (see 2306160052). About 200 of the comments were from companies and groups, and the rest from private individuals and academics, said Hanser during a Georgetown Law event. By contrast, NTIA’s query on its Broadband Equity, Access and Deployment program garnered some 500 comments.
“There’s no information policy issue right now as prominent as” AI, and there isn’t a better “collection of tech policy expertise” than there is at NTIA, said Hanser, who worked in the FCC’s Competition Policy Division from 2003 to 2005. “Our plan is to issue a report later this year. I’m confident that’s going to happen.” It’s an important issue with “some urgency to it,” he said.
He noted several federal agencies are exploring the issue (see 2306140043). The FTC said Thursday its Competition Bureau is focused on ensuring open and fair competition in the face of paradigm-shifting technology like generative AI. If society becomes increasingly reliant on AI, “those who control its essential inputs could wield outsized influence over a significant swath of economic activity,” the FTC said. The agency said the Competition Bureau and Office of Technology will use their “full range of tools to identify and address unfair methods of competition.”
Hanser noted the technology poses risks to national security and human rights. Policymakers should be questioning whether AI technology does what a company says, whether bias has been mitigated and why AI systems make certain decisions, he said. NTIA can’t set rules and enforce the law, but it can use its expertise to advance the debate, he said: NTIA’s job is to help explain how the technology works and find consensus-based solutions.
He noted the Commerce Department and Secretary Gina Raimondo have weekly, almost daily, discussions with foreign counterparts, most notably partners in the EU and Japan. “People know” the U.S. is a leader in this space by economic and technological “virtue,” he said.
There seems to be a “race” to be the first regulator of AI, said Georgetown Law professor Anupam Chander. It’s not clear that being first is “right,” he said: China, for example, already has AI laws on the books. The European Parliament approved a proposed law on AI last month (see 2306140039), and tech groups are warning the U.S. not to follow the EU’s example on overly prescriptive regulation (see 2306270054).
Acting swiftly on regulation has advantages, said Georgetown Law professor Neel Sukhatme. For one, the winners and losers of AI regulation haven’t been identified, so it might be easier to move forward and “get things done” now, he said. Regulation could help preempt the “lobbying industrial complex” from emerging in the first place, said University of Houston Law Center assistant professor Nikolas Guggenberger. Mehtab Khan, a fellow for the Yale Information Society Project's Wikimedia Initiative, noted the discussion about AI somewhat mirrors the discussion on privacy legislation, and she questioned whether policymakers have learned any lessons from that.