The FTC unanimously finalized a rule allowing it to seek civil penalties against companies that share fake online reviews, the agency announced Wednesday. Approved 5-0, the rule will help promote “fair, honest, and competitive” markets, Chair Lina Khan said. Amazon, the Computer & Communications Industry Association and the U.S. Chamber of Commerce previously warned the FTC about First Amendment and Section 230 risks associated with the draft proposal (see 2310030064). The rule takes effect 60 days after Federal Register publication and lets the agency seek civil penalties under its FTC Act authority over unfair and deceptive practices. It bans the sale and purchase of fake social media followers and views, prohibits fake, AI-generated testimonials, and imposes transparency requirements on reviews written by people with material connections to businesses. The rule also bans companies from misrepresenting the independence of reviews and from “using unfounded or groundless legal threats, physical threats, intimidation, or certain false public accusations to prevent or remove a negative consumer review,” the agency said.
Section 230
New Mexico Attorney General Raul Torrez (D) is working with state lawmakers on legislation aimed at holding social media platforms more accountable for disseminating deepfake porn, he told us Wednesday.
Companies like Meta intentionally target children and must be held more accountable for social media-related harm, attorneys general from New Mexico and Virginia said Wednesday. New Mexico AG Raul Torrez (D) and Virginia AG Jason Miyares (R) discussed potential solutions to online child exploitation during the Coalition to End Sexual Exploitation Global Summit, hosted by the National Center on Sexual Exploitation and Phase Alliance. Torrez said the tech industry received an “extraordinary grant” through Communications Decency Act Section 230, which Congress passed in 1996 to promote internet innovation. Section 230 has been a hurdle to holding companies accountable, even when they knowingly host illegal activity that’s harmful to children, Torrez added. Miyares said AGs won't wait for legislators in Washington to solve the problem, noting state enforcers' success in the courts. Tech companies shouldn’t be able to use Section 230 as a liability shield while also acting as publishers and removing political content they disfavor, Miyares added. Torrez acknowledged he and Miyares disagree on many things but said they agree on the need to increase tech platforms’ liability and accountability when it comes to children.
Vermont’s lawsuit alleging Meta designed Instagram with the intention of addicting young users can proceed, a superior court judge ruled last week (docket 23-CV-4453). Superior Court Judge Helen Toor denied Meta’s motion to dismiss, saying the company’s First Amendment and Communications Decency Act Section 230 arguments didn't persuade her. Vermont alleges Meta violated the Vermont Consumer Protection Act by intentionally seeking to addict young users through methods it knows are harmful to their mental and physical health. The company misrepresented its intentions and the harm it’s “knowingly causing,” the state argued. Vermont is seeking injunctive relief and civil damages. Meta’s motion to dismiss argued that the state lacks jurisdiction, that the First Amendment and Section 230 bar the claims, and that state enforcers failed to state a valid claim under state law. The court heard oral argument July 3. The state noted more than 40,000 Vermont teens use Instagram and about 30,000 do so daily, and it alleged the company uses targeted advertising and other features to maximize the time teens spend on the app. Toor said the First Amendment protects companies' speech, but it doesn’t protect against allegations that a company is manipulating younger users. She noted Section 230 protects a company against liability for hosting third-party content, but it doesn’t shield a company from liability for its own illegal conduct. Vermont isn’t seeking to hold Meta liable for content it hosts, she said: “Instead, it seeks to hold the company liable for intentionally leading Young Users to spend too much time on-line. Whether they are watching porn or puppies, the claim is that they are harmed by the time spent, not by what they are seeing.” Attorney General Charity Clark filed the lawsuit in October.
FCC Commissioner Brendan Carr’s Project 2025 ties likely won’t damage his chances of becoming the agency's chair if Donald Trump is elected president in November, even though the Trump campaign has distanced itself from the project (see 2407110054). Commissioner Nathan Simington is listed as a project adviser but didn’t write a chapter, as Carr did, or play a more public role.
A bipartisan group of senators on Wednesday formally filed legislation that would establish liability for sharing AI-generated replicas of a person’s voice or likeness without consent. Sens. Chris Coons, D-Del.; Marsha Blackburn, R-Tenn.; Amy Klobuchar, D-Minn.; and Thom Tillis, R-N.C., introduced the Nurture Originals, Foster Art and Keep Entertainment Safe (No Fakes) Act (see 2310120036). The measure would hold individuals, companies and platforms liable for creating and hosting such content. “Generative AI can be used as a tool to foster creativity, but that can’t come at the expense of the unauthorized exploitation of anyone’s voice or likeness,” Coons said in a statement. The Computer & Communications Industry Association called the bill “well-intentioned” but said that, as written, it would “undermine Section 230, place limits on freedom of expression, and shrink fair use.” The bill also lacks provisions protecting fair use and free expression, said Brian McMillan, CCIA vice president-federal affairs: “We understand the risks of false information that appears real, as our members deploy many algorithmic tools to identify and respond to deepfakes. This legislation emphasizes liability over support for these efforts.”
The Senate voted 86-1 Thursday to advance two kids’ safety bills, with Sen. Rand Paul, R-Ky., casting the lone no vote (see 2407240057).
A district court dismissed the count of NetChoice’s 11-count complaint arguing that Communications Decency Act Section 230 preempts Utah’s Minor Protection in Social Media Act. Utah Attorney General Sean Reyes (R) had sought dismissal of the count (see 2406030026), arguing that nothing in the state law is inconsistent with Section 230. “The court concludes the challenged provisions impose liability for conduct that falls beyond the protections Section 230 affords NetChoice members,” Judge Robert Shelby of the U.S. District Court for the District of Utah ruled (case 2:23-cv-00911-RJS-CMR). “The Act’s prohibitions on the use of autoplay, seamless pagination, and push notifications are not inconsistent with Section 230.” The question is whether those bans “treat NetChoice members as the publisher or speaker of the third-party content they disseminate,” wrote Shelby in NetChoice v. Reyes. They don’t, he said. “The Act’s prohibitions focus solely on the conduct of the covered website -- the website’s use of certain design features on minors’ accounts -- and impose liability irrespective of the content those design features may be used to disseminate.” The judge added, “NetChoice’s interpretation of Section 230 as broadly immunizing websites from any liability for design decisions related to how a site ‘disseminate[s] and display[s] third-party speech’ is unmoored from the plain text of Section 230 and unsupported by the caselaw NetChoice cites.” In a statement Tuesday, NetChoice stressed that the court dismissed only one claim and that its First Amendment and other federal preemption claims remain in play. “We look forward to seeing Utah in court in August,” said Chris Marchese, NetChoice Litigation Center director.
Stop Project 2025 Task Force founder Rep. Jared Huffman of California and 15 other House Democrats asked FCC Inspector General Fara Damelin and other federal watchdogs Wednesday to investigate “potential ethics violations” by Republican FCC Commissioner Brendan Carr related to his writing the telecom chapter of the Heritage Foundation’s Project 2025 manifesto. In the chapter, Carr, seen as the front-runner to lead the FCC if former President Donald Trump wins a second term (see 2407120002), urged rolling back Communications Decency Act Section 230 protections for tech companies, deregulating broadband infrastructure and restricting Chinese companies. Trump has disavowed Project 2025 and its proposals.
Sens. Ron Wyden, D-Ore., and Rand Paul, R-Ky., remain opposed to the Kids Online Safety Act, and their opposition is preventing Senate Majority Leader Chuck Schumer, D-N.Y., from moving the bill by unanimous consent (see 2406200053).