FCC Clamps Down on Voice-Cloning Technologies in Robocalls
FCC commissioners moved quickly, voting 5-0 to approve a declaratory ruling prohibiting the use of voice-cloning technology in robocall scams. Chairwoman Jessica Rosenworcel circulated the ruling a week ago. The agency issued a notice of inquiry exploring the issue in November (see 2311160028).
The ruling finds voice-cloned calls violate the Telephone Consumer Protection Act. That’s in keeping with a recommendation from attorneys general in 25 states and the District of Columbia, who asked the agency to clarify that calls mimicking human voices are considered “an artificial voice” under the TCPA (see 2401170023).
“We confirm that the TCPA’s restrictions on the use of ‘artificial or prerecorded voice’ encompass current AI technologies that generate human voices,” the ruling said: “Calls that use such technologies fall under the TCPA and the Commission’s implementing rules, and therefore require the prior express consent of the called party to initiate such calls absent an emergency purpose or exemption.” The finding will “deter negative uses of AI and ensure that consumers are fully protected by the TCPA when they receive such calls,” the ruling said.
The FCC’s three Democrats issued written statements, which were attached to the ruling. Commissioners Brendan Carr and Nathan Simington didn’t do so.
In January, some potential voters in New Hampshire received a call, purportedly from President Joe Biden, urging that they skip the state’s presidential primary, said Commissioner Geoffrey Starks. “The use of generative AI has brought a fresh threat to voter suppression schemes and the campaign season with the heightened believability of fake robocalls,” he said.
Congress and the administration are focused on whether and how to regulate AI, Starks added. “In the interim, agencies that have existing authority to regulate AI should work hard to address harmful uses of the technology.”
AI “can confuse us when we listen, view, and click, because it can trick us into thinking all kinds of fake stuff is legitimate,” Rosenworcel said. The ruling means when robocallers violate the prohibition, state AGs “can go after the bad actors behind these robocalls and seek damages under the law,” she said.
“Responsible and ethical implementation of AI technologies is crucial to strike a balance, ensuring that the benefits of AI are harnessed to protect consumers from harm rather than amplify the risks they face in an increasingly digital landscape,” said Commissioner Anna Gomez.
Hall Estill’s Aaron Tifft warned that the FCC’s action will lead to more litigation. The ruling “provides a wide area of vagueness that plaintiff counsels will use to hold businesses acting in good faith liable for using some form of technology to craft messages to customers and recipients,” Tifft said in an email: “There will be many plaintiffs who will be looking for a quick payday and will argue that almost everything coming from a computer is considered using AI technologies to communicate.”
The ruling was “desperately needed” and will “meaningfully protect consumers from rapidly spreading AI scams and deception,” said Robert Weissman, president of Public Citizen. However, it doesn’t address all problems, because noncommercial calls are exempt, he said. Accordingly, the ruling “will not cure the problem of AI voice-generated calls related to elections -- the kind of problem evidenced by the faked President Biden robocalls in New Hampshire,” Weissman said.
“The fake robocalls received by voters in New Hampshire represent only the tip of the iceberg,” said Ishan Mehta, Media & Democracy Program director at Common Cause: “It is critically important that the FCC now use this authority to fine violators and block the telephone companies that carry the calls.”
The wireless industry has used AI for years, but what’s changing is that everyone is using the technology now, said Peter Jarich, head of GSMA Intelligence, during a Mobile World Live webinar Thursday. There’s been a “democratization” of AI, and that makes its “responsible use a particularly important topic,” he said. Telecom carriers traditionally are viewed as slow to adopt AI, but with generative AI “you see them at the forefront,” Jarich said: “Across the stack, telcos are paying a lot of attention to this.”
Generative AI is still in the early stages of deployment, Jarich said. In a recent GSMA survey, 5% of providers said they’re deploying it in some form “at scale,” while another 13% said they’re taking initial steps toward commercial deployment. But more than half said they’re already conducting tests, he said.
Providers are concerned about the security of AI and are also watching policymakers, Jarich said.
From the start, cloud-based software provider Salesforce has focused on ensuring its technology is used “in safe and responsible ways,” said Kathy Baxter, principal architect of responsible AI and tech. For example, the company didn’t want its Einstein Vision product used for facial recognition, she said. “That was one of the very first red lines … that we put in place for our AI technology,” she said.