Defense Attys Must Prep For Imminent AI Crime Enforcement

April 4, 2024 - David M. Eskew / Jarrod Schaeffer and Scott Glicksman

This Article – authored by Jarrod Schaeffer and Scott Glicksman – was originally published on April 4, 2024 in Law360’s Expert Analysis Section. The Law360 version is available here.

Ever since ChatGPT burst onto the scene in November 2022, new tools and applications using artificial intelligence and adjacent technologies have proliferated across multiple industries. See, e.g., Bernard Marr, A Short History of ChatGPT and How We Got to Where We Are Today (Forbes, May 19, 2023), available here.

And while governments and regulators have started implementing frameworks and guardrails for use cases of these technologies, federal criminal enforcement related to or involving AI is still relatively rare. But that may soon change.

Many have commented on how AI might facilitate new kinds of crimes, as well as the use of AI by the U.S. Department of Justice itself to uncover, track and prosecute criminal activity. Some of those efforts will require time and deliberation, such as the evaluation envisioned by the DOJ’s recently launched Justice AI initiative.

But white collar practitioners should also expect to see federal criminal enforcement issues involving AI arise in the near term, including even in pending cases.

DOJ Mobilization Regarding AI

As part of Executive Order No. 14110 on the safe and secure development of AI, issued on Oct. 30, 2023, President Joe Biden directed federal agencies, including the DOJ, to evaluate potential uses and pitfalls of AI. See, e.g., 88 Fed. Reg. 210 at 75191, 75211–75212, §§ 7.1(b)(i)(A)–(C), (F), available here.

In a February speech at the University of Oxford, Deputy Attorney General Lisa Monaco signaled that federal criminal law enforcement officials had begun working to implement the president’s directives. Deputy Attorney General Lisa O. Monaco, Remarks at the University of Oxford on the Promise and Peril of AI (Dep’t of Justice Feb. 14, 2024), available here.

Calling AI “a double-edged sword” with perhaps “the sharpest blade yet,” Monaco extolled the technology’s “potential to be an indispensable tool to help identify, disrupt, and deter criminals, terrorists, and hostile nation-states,” while recognizing “that AI can lower the barriers to entry for criminals.” She went on to say that AI was “changing how crimes are committed and who commits them — creating new opportunities for wanna-be hackers and supercharging the threat posed by the most sophisticated cybercriminals.” Id.

To combat those threats, Monaco announced the Justice AI initiative, which, “[o]ver the next six months, … will convene individuals from across civil society, academia, science, and industry to draw on varied perspectives” in order “to understand and prepare for how AI will affect the Department’s mission and how to ensure [it] accelerate[s] AI’s potential for good while guarding against its risks.” Id.

That initiative is expected to provide its findings by the end of this year, and may build on prior work by the DOJ’s existing Disruptive Technology Strike Force.

But Main Justice officials are not the only ones who will have a hand in policing AI misuses. (This article focuses solely on federal criminal law enforcement efforts and does not address civil enforcement efforts undertaken by a variety of federal and state regulators.) The 94 U.S. attorney’s offices around the country also play important — and, in some cases, leading — roles in addressing new issues and trends in law enforcement. Prosecutors in those offices are unlikely to wait for the DOJ’s overall deliberative process to conclude — in fact, some have already charged cases that target crimes involving AI. See Press Release, Founder of Artificial Intelligence Start-Up Charged With Fraud (Dep’t of Justice, Aug. 15, 2023), available here; see also Press Release, Two Men Charged for Operating $25M Cryptocurrency Ponzi Scheme (Dep’t of Justice, Dec. 12, 2023), available here.

And Monaco’s remarks, combined with recent events, suggest that the DOJ is not asking them to wait.

Likely Areas of Interest for Federal Law Enforcement

Where should practitioners expect to see more immediate efforts targeting AI by federal prosecutors and law enforcement agencies? Considering Monaco’s recent remarks alongside prior clues from DOJ officials — and taking account of modern law enforcement practices and procedures — AI is likely to become an early focus in a few key areas.

First, prosecutors and agents will likely focus on how AI can facilitate the commission of familiar crimes, as well as how prosecutors can deploy existing tools to combat such misuses.

Since AI acts as a powerful force-multiplier for a wide range of activities, federal criminal enforcement tactics developed for traditional offenses may be readily adapted to cases where those offenses are made more serious or effective through AI.

This is where practitioners are most likely to first encounter these issues, whether in pending cases, ongoing compliance reviews or new investigations.

Second, prosecutors and agents will likely focus on areas where AI may enable new kinds of crimes that would not be possible otherwise, such as advanced AI-enabled cyberweapons and other sophisticated national security threats. See, e.g., Staying ahead of threat actors in the age of AI (Microsoft, Feb. 14, 2024), available here (describing how “[c]ybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent.”).

While fully realized examples of such conduct have yet to surface publicly, most industry professionals expect that it will soon confront law enforcement. See id. Combating these threats is likely to require the development of new law enforcement tools and the recruitment of additional personnel.

Third, prosecutors and agents will almost certainly explore how AI can be used to better uncover, track and prosecute all kinds of criminal activity.

As many attorneys are aware already from personal experience, AI can make aspects of their own practices more effective and efficient through advanced data processing, sophisticated pattern identification, and the automation of rote tasks. Those same benefits may be harnessed by prosecutors and federal agents, including through the use of AI-assisted document and evidence review tools, early versions of which have existed for years in various forms of less advanced technology-assisted review.

Traditional Offenses Utilizing AI

While prosecutors and agents are likely to eventually explore these and other areas, there are several reasons why their efforts may focus first on traditional offenses utilizing AI.

To begin with, focusing on how AI facilitates the commission of familiar offenses requires less expenditure of new resources, because prosecutors and agents can bring to bear the traditional investigatory tools and strategies that they use in other cases.

More drastic adaptations or paradigm shifts, on the other hand, may take longer, because AI is just as new for law enforcement as it is for society. That does not mean big changes will not come — just that they might take longer to have an impact.

More fundamentally, prosecutors are likely to be most comfortable addressing AI misuses through conventional legal frameworks. As Monaco has reiterated, “Fraud using AI is still fraud. Price fixing using AI is still price fixing. And manipulating markets using AI is still market manipulation.” Deputy Attorney General Lisa O. Monaco, Remarks at American Bar Association’s 39th National Institute on White Collar Crime (Dep’t of Justice Mar. 7, 2024), available here.

Prosecutors have a long history of repurposing existing statutes and enforcement tools to combat challenges arising from new technologies. For instance, in U.S. v. Chastain this past year, the U.S. Attorney’s Office for the Southern District of New York — which is often on the front lines of emerging issues — invoked the wire fraud statute, 18 U.S.C. § 1343, first enacted in the 1950s (66 Stat. 722, ch. 879, § 18(a) (July 16, 1952)), to prosecute fraud involving non-fungible tokens, which have a considerably more recent vintage. See Press Release, Former Employee of NFT Marketplace Sentenced to Prison in First-Ever Digital Asset Insider Trading Scheme (Dep’t of Justice, Aug. 22, 2023), available here; see also Sarah Cascone, Sotheby’s Is Selling the First NFT Ever Minted—and Bidding Starts at $100 (Artnet, May 7, 2021), available here (noting that the first-ever NFT was minted in 2014). More recently, in U.S. v. Austad, the Southern District of New York unsealed charges alleging, among other things, that the defendants “used artificial intelligence image generation tools” to advertise sales of stolen account credentials. See Press Release, Two More Men Charged With Hacking Fantasy Sports and Betting Website (Dep’t of Justice, Jan. 29, 2024), available here.

Investigatory and Compliance Considerations

Given the range of traditional offenses where AI may be particularly easy to misuse, practitioners should expect the same ingenuity in investigations and prosecutions going forward.

For example, because AI can be used to quickly generate and distribute cutting-edge deepfakes and other professional-looking content, it might be used to induce fraud victims to purchase nonexistent goods or services through convincing advertising, enable or amplify a scheme to generate false identification materials, facilitate a market manipulation scheme through the dissemination of forged company literature (see, e.g., Brian Fung, U.S. Senators Propose Tough Fines for AI-driven Securities Fraud or Market Manipulation (CNN, Dec. 19, 2023), available here), or sow confusion intended to disrupt the electoral process through AI-generated robocalls. See, e.g., Holly Ramer, Political Consultant Behind Fake Biden Robocalls Says He Was Trying to Highlight a Need for AI Rules (Assoc. Press, Feb. 24, 2024), available here.

In fact, it appears that prosecutors may already have launched new inquiries focused on AI in the context of traditional crimes. In January, for instance, Bloomberg Law reported that “[p]rosecutors have started subpoenaing pharmaceuticals and digital health companies to learn more about generative technology’s role in facilitating anti-kickback and false claims violations.” Ben Penn, DOJ’s Healthcare Probes of AI Tools Rooted in Purdue Pharma Case (Bloomberg, Jan. 29, 2024), available here.

Such developments necessitate additional considerations not only by those responding to federal investigative inquiries, but also by compliance professionals.

As to the former, practitioners responding to subpoenas and other investigatory demands should carefully consider the capabilities of clients’ AI tools, their internal controls or other relevant compliance protocols, and potential misuses that may have prompted an inquiry or stimulate further interest from prosecutors.

Those same considerations are also important for compliance departments and those who develop or utilize AI for valid purposes, as Monaco has explicitly cautioned that prosecutors will assess management of AI-related risks when considering future resolutions of compliance and enforcement matters. Monaco, Remarks at American Bar Association’s 39th National Institute on White Collar Crime (Dep’t of Justice Mar. 7, 2024), supra.

Implicit in that warning is the possibility that even AI created or utilized for proper purposes can be misused, and that the DOJ expects those who develop or use AI to take preventative measures.

In connection with appropriate investigations or compliance inquiries, practitioners should consider whether AI may have been utilized, regardless of whether the relevant conduct appears technologically sophisticated. Some uses of AI — such as text or code generation — may not be readily apparent, but nonetheless should be carefully evaluated. This is especially important as the public becomes increasingly conversant with widely available tools that have a variety of existing lawful uses, such as generative AI applications that create text and images.

Considerations for Plea Negotiations and Sentencing

The implications of this focus on AI misuses likely will also extend beyond the investigatory phase into plea negotiations and sentencing arguments, as prosecutors are likely to seek increased penalties that reflect any greater harm flowing from AI.

In February, Monaco observed that “[l]ike a firearm, AI can also enhance the danger of a crime,” and “[g]oing forward, where … prosecutors can seek stiffer sentences for offenses made significantly more dangerous by the misuse of AI — they will,” in addition to seeking reforms that provide additional penalties “if … existing sentencing enhancements don’t adequately address the harms caused by misuse of AI.” Monaco, Remarks at the University of Oxford on the Promise and Peril of AI, supra.

Monaco doubled down on this last month at the American Bar Association’s National Institute on White Collar Crime, emphasizing that “[w]here AI is deliberately misused to make a white-collar crime significantly more serious, our prosecutors will be seeking stiffer sentences — for individual and corporate defendants alike.” Monaco, Remarks at American Bar Association’s 39th National Institute on White Collar Crime (Dep’t of Justice Mar. 7, 2024), supra.

This is a logical place for prosecutors to deploy tools and practices targeting AI misuse. Novel theories may be easier to advance in this context, because prosecutors have a lower burden of proof, evidentiary rules apply less stringently than they do at trial, and many provisions of the U.S. Sentencing Guidelines are intended to encompass a broad range of flexible policy considerations.

And to the extent that plea negotiations incorporate agreements over what enhancements apply, litigation risks for prosecutors regarding new twists on certain enhancements may be significantly reduced.

Practitioners should carefully consider these issues in the context of plea negotiations, while recognizing that prosecutors likely have significant leverage with respect to some traditional guidelines enhancements.

For example, a familiar enhancement for offenses that involved sophisticated means (U.S.S.G. § 2B1.1(b)(10)(C)) has been construed broadly to apply not only where an offense relied on specialized computer knowledge, see United States v. Hatala, 552 F. App’x 28, 30 (2d Cir. 2014) (upholding enhancement where defendant “used his extensive knowledge of computer programming and database systems, as well as self-written codes, to bypass professionally-designed security systems”), but also where an offense involved the use of readily available software. See United States v. Calderon, 209 F. App’x 418, 419 (5th Cir. 2006) (rejecting, inter alia, argument that “printing checks using a computer program available for purchase by anyone at a local office supply store . . . did not constitute sophisticated means” because “[e]ven though certain aspects of [the] scheme were not sophisticated, the offense as a whole involved sophisticated means”). Prosecutors may seek to apply this enhancement in cases where an offense was facilitated by AI, even if the actual application used is generally commercially available.

Similarly, enhancements targeting the use of authentication features have been applied to items ranging from forged notary seals, see United States v. Sardariani, 754 F.3d 1118, 1122 (9th Cir. 2014), to voice verification data, see, e.g., United States v. Barrogo, 59 F.4th 440, 446 (9th Cir. 2023) (concluding that an “authentication feature” encompasses “non-physical” means of identification like biometric data, including “voice or retina information”) — all things for which sophisticated AI might generate passable forgeries. See, e.g., Press Release, Gartner Predicts 30% of Enterprises Will Consider Identity Verification and Authentication Solutions Unreliable in Isolation Due to AI-Generated Deepfakes by 2026 (Gartner, Inc., Feb. 1, 2024), available here. Prosecutors may seek to apply these enhancements where AI was used to create deepfakes that circumvent advanced identity authentication measures. Practitioners should carefully consider the application of such guidelines provisions and attempt to anticipate how prosecutors may retool other conventional arguments based on the particular facts of a case. And it is important to consider such issues early in the life of a case, so that practitioners are prepared during plea negotiations that may significantly affect later positions taken at sentencing.


As technologies and applications utilizing AI continue to proliferate and new tools are developed, white collar practitioners should expect to encounter AI in federal criminal enforcement matters sooner rather than later. Even as the DOJ deliberates on an overall approach to AI, prosecutors and agents are likely to forge ahead while repurposing traditional strategies and tools. And because an early focus by those actors is likely to be where the misuse of AI facilitates the commission of conventional offenses, practitioners should carefully consider how clients — even those using or developing AI for lawful purposes, or in existing cases otherwise involving only traditional offenses — may use AI, and the significance it could have for an investigation or prosecution.
