FCC Group Hears Deepfake Biden Robocall Signals Growing Risk as Elections Loom
The FCC moved quickly and effectively to clamp down on a January robocall that used a deepfake of President Joe Biden's voice to urge recipients to skip the New Hampshire primary (see 2402060087). However, preventing similar fakes may prove more difficult, Greg Bohl, chief data officer at Transaction Network Services, warned the FCC’s Consumer Advisory Committee Wednesday. CAC is focused on AI this term (see 2404040040).
The January Biden robocalling scheme was just the beginning, Bohl said during the virtual meeting. He expects there will be “more of this as we get closer to the elections,” adding, “that one was easily traceable.” It came from within the U.S., Bohl said, and the perpetrator was caught. “It is not always that easy.”
Criminals also use deepfakes to extort money from the public by exploiting fear, Bohl explained. Often the public doesn’t trust the authorities or doesn’t know whom to contact, he said. Law enforcement agencies lack "the bandwidth to pursue all these bad actors” and “there are long investigative cycles.” Bohl said the FCC’s recent order prohibiting voice-cloning technology in robocall scams (see 2402080052) gives law enforcement its best tool for going after fraudulent robocallers.
Bad actors “continually change their tactics and their approaches and campaigns, so we have to be able to adaptively learn and understand this,” Bohl said. Voice cloning tools are “really, really accurate,” he said. New tools are developed constantly and often cost only a few hundred dollars, he said. Many bad actors are hard to trace and operate outside the U.S., he said.
Collecting data is easy for bad actors, Bohl said: “The internet is bubbling over with free personalized data ready for the taking.” A voicemail greeting longer than three seconds gives scammers enough audio to clone your voice, he said. “They already have your number, your name, and you are giving them your voice,” he said. “You are setting yourself up to be attacked.”
Raul Rojo, attorney adviser in the FCC Enforcement Bureau, played the Biden robocall for CAC members. It had all the earmarks of a successful fake, he said. Biden's voice “seemed relaxed,” he said: “This seemed authentic. The pauses are correct,” and it was all done with AI.
The delivery of robocalls is “an incredibly complex ecosystem,” Rojo noted. Though people assume a call is placed, picked up by a cell tower and transmitted to the recipient, with just two carriers involved, “that isn’t the case at all." Instead, "there can be eight, nine, 10, 11 service providers between [the caller] and the consumer that actually receives the call,” he said. The FCC is “learning every day” about AI and how it’s being used. “It’s changing every day, and we are trying to adapt.”
In the case of the Biden robocall, the FCC issued subpoenas to the Industry Traceback Group for records of calls placed in New Hampshire. Eventually, the agency traced the call to Lingo Telecom and Steve Kramer. Both were fined (see 2405230033).
Kramer made the FCC’s job easier, Rojo said. “He just would not stop talking about what he did with the calls” and spoke multiple times with the FCC and DOJ. “He acknowledged that he did it on purpose,” Rojo said.
The FCC’s goal is to move quickly, Rojo said. Quick enforcement can disrupt calls, prevent them from being made and “stop consumers from being taken advantage of or providing information they should not provide over the phone and stop people from being victimized,” he said. The next step is investigating and taking action against the callers, he said.
One troubling aspect of the New Hampshire robocall is that the caller used a spoofed caller ID, said CAC co-Chair Claudia Ruiz, civil rights analyst at UnidosUS. Recipients are then more likely to answer the call and believe it’s legitimate, she said.
The call violated the Telephone Consumer Protection Act and the Truth in Caller ID Act regardless of its use of an AI-enabled deepfake, though the deepfake is what got the headlines, said co-Chair John Breyault, vice president-public policy, telecommunications and fraud at the National Consumers League. Rojo agreed.