Technologies are emerging to combat deepfakes, but rules might be needed, panelists said at a Tuesday webinar hosted by the Convention of National Associations of Electrical Engineers of Europe (EUREL). Deepfake technology enables some beneficial uses, but it's increasingly difficult to distinguish between real and fake content and people, said Sebastian Hallensleben, chairman of German EUREL member VDE e.V. One common argument is that AI fabrications aren't a problem because other AI systems can detect them, but as deepfakes become more sophisticated, there will be more countermeasures, causing a "detection arms race," Hallensleben said.

What's needed is a "game-changer" to show what's real online and what isn't, he said. He's working on "authentic pseudonyms," identifiers guaranteed to belong to a given physical person and to be singular in a given context. This could be done through restricted identification along the lines of citizens' ID cards; a second route is self-sovereign identity (SSI). If widely used, authentic pseudonyms would avoid the "authoritarian approach" to deepfakes, Hallensleben said.

SSI is a new paradigm for creating digital ID, said Technical University of Berlin professor Axel Küpper. The ID holder (a person) becomes her own identity provider and can decide where to store her identity documents and which services to use. The underlying infrastructure is a decentralized, tamper-proof distributed ledger. The question is how to use the technology to mitigate misuse of automated content creation, Küpper said. Many perspectives besides technology must be considered for a cross-border identification infrastructure, including regulation, governance, interoperability and social factors, said Tanja Pavleska, a researcher at the Jožef Stefan Institute Laboratory for Open Systems and Networks in Slovenia. Trust applies in all those contexts, she said.
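Hallensleben didn't detail the mechanism, but the "singular in a given context" property resembles sector-specific or pairwise pseudonymous identifiers used in eID schemes. A minimal sketch, assuming a per-person secret held by the ID holder (the function and key names here are hypothetical, not from the webinar):

```python
import hmac
import hashlib

def authentic_pseudonym(person_secret: bytes, context: str) -> str:
    # HMAC keyed by the holder's secret: the same person always gets the
    # same pseudonym within one context, but pseudonyms from different
    # contexts can't be linked without knowing the secret.
    return hmac.new(person_secret, context.encode(), hashlib.sha256).hexdigest()

secret = b"holder-private-key-material"  # hypothetical per-person secret
forum_id = authentic_pseudonym(secret, "forum.example")
shop_id = authentic_pseudonym(secret, "shop.example")
```

Under this sketch, a service sees a stable identifier for each person in its own context, while cross-context tracking stays infeasible; a real scheme would also need a trusted issuer or SSI credential attesting that the secret belongs to exactly one physical person.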
Asked whether the proposed EU AI Act should classify deepfakes as high-risk technology, Pavleska said such fakes aren't made by a single player or type of actor, so rules aimed at a single point might be difficult to apply. All panelists agreed the EU General Data Protection Regulation should be interpreted to cover voice and facial data.
Dugie Standeford, European Correspondent, Communications Daily and Privacy Daily, is a former lawyer. She joined Warren Communications News in 2000 to report on internet policy and regulation. In 2003 she moved to the U.K. and since then has covered European telecommunications issues. She previously covered the U.S. Occupational Safety and Health Administration and intellectual property law matters. She has a degree in psychology from Duke University and a law degree from the University of Tulsa College of Law.
The EU and U.S. may never agree on some tech regulation but must cooperate, panelists said Tuesday at a webcast hosted by EU40, the network of young members of the European Parliament. Both sides of the Atlantic are at a defining moment to meet the challenges of the digital and green economies and must work together, said Mariya Gabriel, European commissioner for innovation, research, culture, education and youth. The EU/U.S. policy agenda includes innovation, which the newly created Trade and Technology Council (TTC) is tackling, she said. "We need to set the global agenda together." Slovenian Minister for Digital Transformation Mark Boris Andrijanic urged "comprehensive, smart regulation" of digital markets, which can be achieved only by working closely with other trusted allies: The EU and U.S. "will never agree on every detail," but there's no alternative to partnership, he said.
EU governments generally back the European Commission's proposed AI act, said EU Council telecom officials Thursday after a virtual debate on the legislation. Everyone agreed there must be a systematic, unified approach to AI in the single market based on fundamental rights, said Boštjan Koritnik, public administration minister for Slovenia, which holds the current EU presidency. Officials want a horizontal regulatory framework that covers use of AI in all sectors. The vast majority of ministers support the EC's risk-based approach but said many issues need further discussion, such as the scope of the measure, law enforcement aspects and definitions of key terms, the Council said. The debate raised two key questions, said EC Internal Market Commissioner Thierry Breton: The need for a unified approach that generates trust and the necessity of balancing innovation against citizens' confidence in using AI. EU rules will also have to apply to AI producers outside the EU, he said. Rules must be proportionate, limited to what's necessary, adaptable to emerging risks and respectful of values, and must stimulate investment, he said. Governments must ensure access to the data on which AI depends, Breton added. Discussions will continue in the Council's telecom working party, with a compromise proposal expected in November.
Global digital technology rules should be compatible, not identical, speakers said at a Thursday Politico virtual event. There's discussion about the new Trade and Technology Council (TTC) (see 2109290006) because, in an interconnected world, what's relevant in one country becomes relevant in others, said former FCC Chairman Tom Wheeler, now Brookings Institution visiting fellow and Harvard Kennedy School senior fellow: The TTC effort must be to agree on consistent rules.
Stakeholders unveiled their wish lists for the EU-U.S. Trade and Technology Council before the TTC's first meeting Wednesday in Pittsburgh. The council planned to approve an agreement on the way forward at the meeting, the European Commission told us. The joint EU-U.S. TTC statement sets out five areas of joint work -- investment screening, export controls, AI, semiconductors and global trade challenges -- and establishes 10 working groups. The export control panel will hold a joint virtual event for stakeholders Oct. 27.

The American Chamber of Commerce for the EU set out priorities Tuesday for working groups it expects to be established. It seeks a "transparent and open stakeholder engagement mechanism" to ensure outcomes are supported by business. The European Consumer Organisation said consumer groups "support the voluntary exchange of best practices and information between regulators" as long as it doesn't weaken EU ambitions to better safeguard consumers. It said the EU recently tried to improve the transparency of its cooperation with third countries: "This is a positive process that should continue." The Information Technology Industry Council said the TTC's work "can be best supported by a successor agreement to the Privacy Shield." Chips "should top the EU-US partnership agenda," Intel blogged.
The O-RAN Alliance and two of its members said they resolved issues about possible ramifications of the U.S. decision to list three Chinese alliance members on the Commerce Department's entity list. Equipment vendors Nokia and Ericsson had halted activities with the alliance over concerns about possible penalties (see 2109030053). The alliance said Sept. 13 its board "approved changes to O-RAN participation documents and procedures." It's up to individual members "to make their own evaluation of these changes, [but] O-RAN is optimistic that the changes will address the concerns." The alliance didn't comment on what the changes were. Ericsson told us Tuesday it's now "satisfied" the alliance "found a solution that resolves the issue." Nokia said Wednesday it's "delighted" the alliance's work can now continue and will resume its technical contributions.
Two members of an open radio access network alliance have halted activities over concerns about possible ramifications of the U.S. decision to place three Chinese alliance members on the "entity list" of enterprises deemed security risks. Ericsson and Nokia responded that they remain committed to the project. Resolving the issue could require the O-RAN Alliance to throw out its Chinese members or have the U.S. grant an exception, we were told last week.
The tech sector and civil society continue to disagree on aspects of a proposed EU AI regulation, their comments showed. The draft Artificial Intelligence Act (AIA) seeks to create trust in AI technologies via a risk-based approach that categorizes them as prohibited, high-risk or limited risk (see 2104210003). The proposal got generally positive though mixed reviews when it emerged in April. A European Commission consultation that closed Friday showed public interest and consumer groups remain worried about the measure's potential impact on human rights, while technology companies are concerned the law is too broad and could be onerous.
Pass consumer data privacy legislation this term, Rep. Suzan DelBene, D-Wash., told a Friday Brookings Institution webinar. Data flows are "critical to our shared economic future" and nowhere more important than between the EU and U.S., she said. The European Court of Justice (ECJ) ruling in Schrems II (see 2007160002) left thousands of smaller companies that relied on trans-Atlantic data transfer mechanism Privacy Shield scrambling, she said: The growing patchwork of state privacy laws won't work and won't lead to a PS alternative. Current tools such as standard contractual clauses, binding corporate rules and recent European Data Protection Board guidance are helpful but don't "take away the need for a successor framework," said Workday Chief Privacy Officer Barbara Cosgrove.

Talks on a PS replacement are ongoing, said Sharon Bradford Franklin, a director of the Center for Democracy and Technology security and surveillance project: CDT has heard that one open issue is the extent to which the U.S. government can enact measures, through the executive branch or Congress, to address ECJ concerns. A comprehensive U.S. consumer data privacy law would be helpful, but surveillance laws must also change to benefit Europeans and Americans, she said. The big issue is individual redress, said lawyer Peter Swire. There's frustration on the U.S. side because the U.S. has a good system via Foreign Intelligence Surveillance Act courts, Swire said: "Get over it." He and other panelists said it might be possible to give Europeans an independent review and some pathway to redress in federal courts administratively, via an executive order on surveillance law. Most agreed any solution must ultimately become law.

The U.S. looks "really different" from the rest of the world with regard to privacy protection, and it's hard to make the case that it's a safe place for data, said Swire. The U.S. and EU are considering whether they can align on tech issues such as data governance and AI, and must get a handle on privacy law first because it underpins those areas, said Cameron Kerry, Brookings distinguished visiting fellow-Center for Technology Innovation. The idea of the recently created Trade and Technology Council is to bring like-minded democratic countries together, he said: The U.S. is "the outlier" because it lacks a privacy regime.
U.S. companies selling AI products into Europe will be subject to EU AI laws no matter where they're headquartered, European Commission Legal and Policy Officer Gabriele Mazzini said on a Thursday FCBA webinar. The legislative proposal, which the EC floated in April (see 2104210003), would affect anyone marketing AI into the EU, said Mazzini, of the Directorate-General for Communications Networks, Content and Technology. The EC made clear it wants to promote its vision of AI regulation globally, so similar policies may arise elsewhere, including in the U.S., said Verizon Senior Manager-EU Public Policy Marco Moragon. The proposal aims to address risks AI technologies pose to fundamental rights and to enforcement of consumer and other laws, said Mazzini. Protecting democratic rights and legal principles is a top priority for civil society, said Iverna McGowan, Center for Democracy and Technology Europe Office director: AI could disproportionately affect vulnerable people. Verizon operates in EU markets and wants a consistent, harmonized regime, said Moragon.

CDT believes risk-based and rights-based approaches to AI aren't mutually exclusive, said McGowan. She seeks a baseline against which to assess the technology's possible impact on human rights, saying a rights-based approach should start by consulting people about how AI services affect their real-life experiences. The proposal divides AI systems into three groups: prohibited, where there's no societal value in their use; higher risk, which may pose problems but can bring societal or economic benefits; and low risk, where no prior rules will be imposed but companies will be subject to transparency requirements, said Mazzini. Companies must self-assess risk, which may burden smaller firms, said Moragon. When the measure refers to users, it means purchasers of AI systems, not end users, said McGowan: Accountability rules should be clearer about what rights and redress end users get.
A European Parliament decision on which committees will have jurisdiction here isn't likely before September, Mazzini added.