Google maintains a search market monopoly by self-preferencing its own products to the detriment of smaller competitors who offer superior local search results, Yelp said in an antitrust lawsuit filed against Google on Wednesday. Yelp filed the lawsuit in the U.S. District Court for the Northern District of California. It claims Google has engaged in “numerous” anticompetitive practices, including stealing search data from Yelp, self-preferencing its own results and using algorithms to steer online traffic away from Yelp. In addition, the lawsuit claims that when Google tried to buy Yelp in 2009, it recognized the quality of Yelp results. When Yelp “rebuffed Google, Google began a years-long mission to stymie Yelp’s ability to reach consumers through Google’s dominant general search platform.” Google, in a statement, said Yelp’s claims are “meritless” and “not new.” Similar claims were “thrown out years ago by the FTC, and recently by the judge in the DOJ’s case,” Google said, referring to Judge Amit Mehta’s recent decision (see 2408050052). “On the other aspects of the decision to which Yelp refers, we are appealing. Google will vigorously defend against Yelp’s meritless claims.” Yelp CEO Jeremy Stoppelman on Wednesday claimed Google “manipulates its results to promote its own local search offerings above those of its rivals, regardless of the comparatively poorer quality of its own properties, exempting itself from the qualitative ranking system it uses for other sites.”
A state court allowed an Iowa lawsuit against TikTok that claims the social media company duped parents about children’s access to inappropriate content. The Iowa District Court for Polk County in Des Moines denied TikTok’s motion to dismiss the state’s Jan. 17 petition in a ruling this week. While the court also denied Iowa’s motion for preliminary injunction, Iowa Attorney General Brenna Bird (R) said in a Wednesday statement that the decision is “a big victory in our ongoing battle to defend Iowa’s children and parents against the Chinese Communist Party running TikTok. Parents deserve to know the truth about the severity and frequency of dangerous videos TikTok recommends to kids on the app.” Bird claimed TikTok violated the Iowa Consumer Fraud Act through misrepresentations, deceptions, false promises and unfair practices, which allowed it to get a 12+ rating on the Apple App Store despite containing content inappropriate for kids aged 13-17. “Considering the petition as a whole, the State has submitted a cognizable claim under the CFA,” wrote Judge Jeffrey Farrell. TikTok doesn’t get immunity from Section 230 of the Communications Decency Act because the state’s petition “addresses only the age ratings, not the content created by third parties,” the judge added. However, Farrell declined to preliminarily enjoin TikTok since the state hasn’t produced any evidence showing an Iowan viewed and was harmed by videos with offensive language or topics. The judge said, “The State presented no evidence of any form to show irreparable harm.” TikTok didn’t comment Wednesday.
TikTok engages in “expressive activity” when its algorithm curates content. Accordingly, the platform can’t claim Section 230 immunity from liability when that content harms users, the Third U.S. Circuit Court of Appeals ruled Tuesday (docket 22-3061). A three-judge panel revived a lawsuit from the mother of a 10-year-old TikTok user who unintentionally hanged herself after watching a “Blackout Challenge” video on the platform. The U.S. District Court for the Eastern District of Pennsylvania dismissed the case, holding TikTok was immune under Communications Decency Act Section 230. The Third Circuit reversed in part, vacated in part and remanded the case to the district court. Judge Patty Schwarz wrote the opinion, citing U.S. Supreme Court findings about “expressive activity” in Moody v. NetChoice (see 2402270072). SCOTUS found that “expressive activity includes presenting a curated compilation of speech originally created by others.” Schwarz noted the court’s holding that a platform algorithm reflects “editorial judgments” about compiling third-party speech and amounts to an “expressive product” that the First Amendment protects. This protected speech can also be considered “first-party” speech under Section 230, said Schwarz. According to the filing, 10-year-old Nylah Anderson viewed the Blackout Challenge on her “For You Page,” which TikTok curates for each individual user. Had Anderson viewed the challenge through TikTok’s search function, TikTok could have been viewed as more of a “repository of third-party content than an affirmative promoter of such content,” Schwarz wrote. Tawainna Anderson, the child’s mother who filed the suit, claims the company knew about the challenge, allowed users to post videos of themselves participating in it and promoted videos to children via an algorithm. The company didn’t comment.
California should “probably pass” legislation allowing the attorney general to pursue civil penalties against large AI developers if they cause “severe harm” to residents, X Chief Technology Officer Elon Musk said Tuesday. “This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill,” Musk said. “For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public.” Tech entrepreneur Andrew Ng spoke against SB-1047 in May, calling it a “ridiculous” regulation (see 2405300064). The bill would apply to companies training models with more than $100 million in computing power, and to incidents that cause at least $500 million in damage to critical infrastructure or mass casualties.
The 9th U.S. Circuit Court of Appeals’ decision to partially uphold an injunction against a California age-appropriate social media design law (see 2408160015) means similar legislation at the federal level is likely unconstitutional, a policy expert at the International Center for Law & Economics said Monday. Innovation policy scholar Ben Sperry argued that duty of care provisions in the Kids Online Safety Act, which the Senate passed last month 91-3 (see 2407300042), likely violate the First Amendment. The 9th Circuit found the Age-Appropriate Design Code Act’s (AB-2273) impact assessment requirement likely unconstitutional because it requires that platforms make judgments about what online content could harm children. Sperry argued that under KOSA, platforms would be incentivized to censor all but “the most benign speech” to avoid triggering children’s anxiety or to avoid “bullying” claims.
Texas’ social media age-restriction law violates the First Amendment because it restricts access to content in a way that abridges free speech, the Computer & Communications Industry Association and NetChoice said Friday in a lawsuit seeking to block the new measure (docket 1:24-cv-00849) (see 2407300030). In a filing with the U.S. District Court for the Western District of Texas, the trade associations requested a preliminary injunction against HB-18. Set to take effect Sept. 1, HB-18 requires that social media platforms obtain parental consent before allowing minors to use their services. CCIA Senior Vice President Stephanie Joyce said in a statement that Texas failed to justify the law’s “invasive and onerous” age-gate restrictions: “In keeping with the Supreme Court’s recent holding that Texas’s previous attempt to regulate speech likely violates the First Amendment, HB18 should be barred from becoming effective. Such attempts to dictate what users can access online are antithetical to a free society.”
Washington, D.C., can proceed with its price-fixing antitrust claim against Amazon, the District of Columbia Court of Appeals ruled Thursday. The ruling reversed a trial court’s dismissal of the case (docket 22-CV-0657). The district's then-Attorney General Karl Racine (D) filed the lawsuit in 2021, claiming Amazon used “restrictive contract provisions and agreements” to fix prices across the internet and prevent third-party sellers from lowering prices on other websites. In 2022, the Superior Court of the District of Columbia sided with Amazon, dismissing the case without providing specific reasons. The AG’s office described the dismissal as a “confusing oral ruling” that misapplied antitrust law. District officials claimed the trial court misconstrued aspects of a “restraint-of-trade claim” and “failed to accept the ... factual allegations as true,” according to the filing. In an opinion for the appeals court, Associate Judge Corinne Ann Beckwith wrote, “We hold that the District alleged sufficient facts to survive the motion to dismiss and therefore reverse the judgment of the Superior Court.” AG Brian Schwalb (D) said Thursday: “We will continue fighting to stop Amazon’s unfair and unlawful practices that have raised prices for District consumers and stifled innovation and choice across online retail.” Amazon, in a statement Thursday, said it disagrees with the city’s claims and looks forward to “presenting facts in court that demonstrate how good these policies are for consumers.” Amazon doesn’t “highlight or promote offers that are not competitively priced,” it said. “It’s part of our commitment to featuring low prices to earn and maintain customer trust, which we believe is the right decision for both consumers and sellers in the long run.”
Provisions in California’s age-appropriate social media design law likely violate the First Amendment, the 9th U.S. Circuit Court of Appeals ruled Friday in a victory for NetChoice (docket 23-2969) (see 2407170046). A three-judge panel found the Age-Appropriate Design Code Act’s (AB-2273) impact assessment requirement likely violates the First Amendment because it requires that platforms make judgments about what online content could harm children. The ruling, issued by Judge Milan Smith, affirms a district court decision enjoining enforcement of the law’s Data Protection Impact Assessment requirement. However, the court remanded the case to the district court for further consideration of other aspects of the law. It’s “unclear from the record” whether other challenged provisions “facially violate the First Amendment,” or whether the unconstitutional aspects can be severed from valid provisions of the law, the court said. NetChoice is “likely to succeed” in showing that the law’s requirement that “covered businesses opine on and mitigate the risk that children may be exposed to harmful or potentially harmful materials online facially violates the First Amendment,” Smith wrote. The U.S. District Court for the Northern District of California in September granted NetChoice's request for a preliminary injunction. The lower court ruled the state has “no right to enforce obligations that would essentially press private companies into service as government censors, thus violating the First Amendment by proxy.” California Attorney General Rob Bonta (D) appealed. NetChoice Litigation Center Director Chris Marchese called the decision a victory for free expression: “The court recognized that California’s government cannot commandeer private businesses to censor lawful content online or to restrict access to it.” Bonta’s office didn’t comment Friday.
The FTC was unanimous in finalizing a rule that will allow it to seek civil penalties against companies sharing fake online reviews, the agency announced Wednesday. Approved 5-0, the rule will help promote “fair, honest, and competitive” markets, Chair Lina Khan said. Amazon, the Computer & Communications Industry Association and the U.S. Chamber of Commerce previously warned the FTC about First Amendment and Section 230 risks associated with the draft proposal (see 2310030064). The rule takes effect 60 days after Federal Register publication. It allows the agency to seek civil penalties via its unfair and deceptive practices authority under the FTC Act. It bans the sale and purchase of fake social media followers and views and prohibits fake, AI-generated testimonials. The rule includes transparency requirements for reviews written by people with material connections to the businesses involved. Moreover, it bans companies from misrepresenting the independence of reviews. Businesses are also banned from “using unfounded or groundless legal threats, physical threats, intimidation, or certain false public accusations to prevent or remove a negative consumer review,” the agency said.
Elon Musk on Monday posted an explicit meme in response to warnings from an EU official about European content moderation obligations. Musk published the post on his platform, X, saying, “To be honest, I really wanted to respond with this Tropic Thunder meme, but I would NEVER do something so rude & irresponsible!” Earlier on Monday, European Commissioner for Internal Market and Services Thierry Breton, in a letter to Musk, urged him to be mindful of the platform’s obligations under the EU’s Digital Services Act as the billionaire and X prepared for that evening's live interview with Republican presidential nominee Donald Trump. “We are monitoring the potential risks in the EU associated with the dissemination of content that may incite violence, hate and racism in conjunction with major political -- or societal -- events around the world, including debates and interviews in the context of elections,” Breton wrote. X is already the subject of a European Commission investigation for possible DSA violations. The EC launched that inquiry in December, and on July 12 issued preliminary findings related to X’s “blue checkmarks” (see 2407120001). There are also two other probes, an EC spokesperson said at a briefing Tuesday. She noted the DSA doesn’t tackle specific comments but sets due diligence requirements that very large platforms active in the EU must follow. The EC intervenes in cases of systemic issues only. As such, she couldn't comment on Musk’s interview with Trump, the spokesperson said. Breton’s letter to Musk expressed general concerns and observations about the interview under the DSA and wasn’t meant to interfere in U.S. elections, she added.