26 Attorneys General, Industry Stake Claims on AI-Generated Calls
Twenty-six attorneys general urged the FCC to use its AI notice of inquiry to clarify that AI-generated calls mimicking human voices are considered “an artificial voice” under the Telephone Consumer Protection Act. Reply comments on a November notice of inquiry (see 2311160028) were due Tuesday and posted Wednesday in docket 23-362. In initial comments, CTIA and USTelecom urged the FCC to allow flexibility in how providers use AI (see 2312200039).
“Based upon the comments received, it is apparent that AI technologies will continue to both rapidly develop and permeate an already complex telecommunications ecosystem,” said AGs from 25 states and the District of Columbia. “Any type of AI technology that generates a human voice should be considered an ‘artificial voice’ for purposes of the TCPA,” they said: “If any TCPA-regulated entity wants to call a consumer utilizing such technology, it should follow the TCPA’s requirements, including those with respect to prior express written consent.”
Robocalls are an “annoyance” and “cause real financial damage,” said California AG Rob Bonta (D). “AI technology presents opportunities for new levels of deception by bad actors,” he added. Bonta called on the FCC to “take this opportunity to underscore that existing laws” like the TCPA “can be used to protect consumers against this threat.” Classifying AI-generated human voice calls “as a type of artificial voice is a step in the right direction in preventing consumers from receiving unwanted and potentially dangerous robocalls,” he said.
Incompas said the record shows providers are already using AI to reduce the threat presented by illegal robocalls and robotexts to consumers. For example, it cited AI-based tools from Twilio and Microsoft. “AI is being used by providers and their analytics partners to identify illegal calling patterns and conduct network-based blocking of robocalls,” the group said.
The FCC should ensure that AI technologies used in call blocking, labeling or call presentation “are applied in a non-discriminatory and competitively neutral manner,” Incompas added. The agency’s efforts “should primarily address bad actors’ use of AI to initiate illegal robocalls and robotexts and must be complementary to the Administration’s holistic approach to the development and use of AI.”
Responses to the NOI make clear the importance of being able to “definitively identify the calling party,” said ZipDX, which offers a robocall surveillance platform. “One thing is certain: As they have done with other technologies, nefarious actors are going to embrace AI to further their exploits, and their methods are going to evolve at a rapid pace,” ZipDX said: The ability to quickly spot and disrupt such calls “will be one of our best defenses.”
Transaction Network Services, which offers analytics and anti-robocall tools, also argued that carriers already are relying on AI to detect illegal robocalls and texts “quickly and accurately.” The evolution of AI “only promises to enhance the efficacy of these approaches and to strengthen analytics offerings in order to better shield the public against the scourge of robocalls,” the company said.