AI Regulation Seen as 'Trendy' but Premature, TPI Aspen Forum Speakers Say
ASPEN, Colorado -- State and federal lawmakers have significant interest in regulating AI, but that may be premature, said industry, government and academic experts Monday at Technology Policy Institute's Aspen Forum. Later, Oren Etzioni, Allen Institute for AI CEO, was critical of what he said was AI alarmism. Speakers also discussed Congress' tech, media and telecom legislative priorities (see 2308210009).
Regulation “is trendy right now, [but] policymakers need to take a step back,” said Halie Craig, Senate Commerce Committee Republican senior adviser. She said the U.S.’ largely unregulated approach to the internet let its internet industry grow far more than Europe’s, which took a more precautionary approach. Overly broad AI regulation could similarly hamper growth of a domestic AI economy, she said. Maintaining a competitive edge over China means being cautious about regulation, Craig said. Some harms attributed to AI, such as misinformation, exist independently of AI, so putting AI in a unique regulatory category misses the mark of addressing the harms themselves, she said.
Google Chief Economist Hal Varian said being able to identify the true harms from AI, where regulation will make sense, will require patience. "It's too early to determine that now," he said. Echoed Cisco Director-Business and Human Rights Katie Shay, “we need to give space to this technology to grow and develop” and leave regulatory focus to highest-risk use cases. Companies need to be thinking about preparing their workforces to interact with AI, Shay said.
AI works differently from other forms of IT because computers traditionally followed instructions, while AI learns, said Danielle Li, Massachusetts Institute of Technology Sloan School of Management associate professor. Algorithms' effectiveness is limited by the data they're given, and instead of regulation, policymakers perhaps need to think more about "data infrastructure" and ensuring algorithms have access to better data, making them more effective, she said.
The major tech companies' voluntary AI commitments the White House announced last month (see 2307210043) are gravely concerning because the administration is talking about their becoming binding obligations, Craig said. She said committing the industry to standards that potentially only the largest players can meet could harm innovation and productivity.
Etzioni said the commitments include some good principles, such as transparency, but some items are glaringly absent, such as a distinction between AI research -- which should be unregulated -- and AI deployment. He said there also should be a differentiation between regulation of the technology itself and regulation of its applications.
Speakers were also critical of tech experts' call for a six-month "pause" in training AI systems more powerful than GPT-4. Etzioni said such a moratorium would accomplish nothing other than give international competitors a lead. He said the call seems to be largely symbolic, reflective of the notion that AI is moving too fast. "There is an AI arms race, and it is worrying," but even more worrying is the alternative of DOD not pursuing military-related AI capabilities while U.S. adversaries continue to do so, he said.