Trade Law Daily is a Warren News publication.

Experts Outline Potential Pathways to BIS Export Controls on AI

The Bureau of Industry and Security could use its existing “catch-all controls” to tighten restrictions around exports of sensitive artificial intelligence models, eliminating the need to develop new regulations to address emerging AI export risks, researchers with Georgetown University's Center for Security and Emerging Technology said this week. The researchers said the catch-all controls -- which allow BIS to restrict exports if there is “knowledge” the item will be used in certain dangerous ways -- may be “sufficient” to “address the use of AI in more traditional national security realms.”


“Taking a step back to determine if our existing laws are sufficient will save us time in the long run and prevent regulatory overreach that could have negative impacts on the competitiveness of U.S. technology and innovation,” wrote Emily Weinstein, a CSET research fellow, and Kevin Wolf, a CSET nonresident senior fellow and an export control lawyer.

The report comes as BIS reportedly considers tightening restrictions on a broader set of artificial intelligence-related semiconductors (see 2306290048). Agency officials have also said they’re exploring export controls on broader AI technologies and applications (see 2210270047), and the CSET report specifically references potential restrictions over large language models that may have applications for foreign militaries.

Weinstein and Wolf, a former BIS official, wrote that they believe BIS’ “long-standing dual-use ‘catch-all’ controls” can “be used to address some of these threats.” They said BIS can use the controls to restrict an item if the exporter has knowledge the item will be used to develop or produce, or will be used in, nuclear energy or explosives; rocket systems, missiles and certain unmanned aerial vehicles; or chemical or biological weapons. They noted that the catch-all controls’ “knowledge” standard “also applies when there is a ‘high probability’ that an export will be for one of these end uses.”

BIS could apply those controls to exports of AI models in a situation where the agency believes there is a risk the export “could be used for, as an example, the development of a missile.” In that instance, “BIS has the authority to inform the foreign company that its planned actions involving otherwise uncontrolled technology and software require a BIS license, which would likely be denied.”

They said the agency could also restrict those exports by using its “U.S. Person” controls, which place license requirements on activities of “U.S. persons” if those activities support the development, production or use of nuclear explosive devices, missiles or chemical or biological weapons. “Thus, for example, if a U.S. person wanted to use an AI model in the development of, for instance, a novel biological weapon anywhere in the world, that activity would already be prohibited” by BIS’ U.S. Person controls.

Weinstein and Wolf also pointed to a provision that passed as part of the FY 2023 defense spending bill, which is intended to allow BIS to limit the provision of certain sensitive services to foreign intelligence agencies (see 2301060034 and 2212210032). “This new authority should be of interest to those thinking of how to address AI-related concerns because it can be applied even when the underlying AI-related technology or software is open source or otherwise so widely available that it cannot be controlled,” they said. “Using this new authority to control activities of U.S. persons could be an option for U.S. policymakers if and when they are unable to define in list-based controls clear and enforceable technical thresholds for AI systems of concern.”

Although the Biden administration hasn’t yet implemented the new authority, it “will likely be useful in controlling U.S. persons using AI technologies and applications, regardless of their source, to support the intelligence and surveillance activities of foreign governments of concern,” the report said.

A BIS official in March said the agency planned to publish a rule to implement the authority within a week (see 2303210037), but BIS hasn’t yet published it. An agency spokesperson hasn’t commented.