Articles You May Have Missed

When the AI Does It, Does That Mean It Is Not Illegal?

Navigating an “Existing Authorities” Regime for AI Regulation

As artificial intelligence (AI) proliferates, so do its legal complications, forcing companies to weigh the risks and rewards of complying with AI regulatory expectations as they exist today, knowing they will surely not stay that way for much longer.

Artificial intelligence (AI) seems to be everywhere you look these days. The launch of OpenAI’s ChatGPT and GPT-4 dominated media headlines. So too have concerns about potential harms caused by AI, ranging from misinformation to job displacement to the potential extinction of humanity. Despite these concerns, AI only grows more ubiquitous in daily life. AI chatbots help us buy products online, AI facial recognition helps us get through airport security, and AI applications help doctors diagnose our ailments.

The pervasiveness, potential, and perceived risk of AI are not lost on Congress. Both the House and Senate held hearings on AI in 2023, and legislators introduced a flurry of AI-related bills. To date, however, Congress has not enacted comprehensive AI legislation. The absence of express legislative authority has not deterred regulators from seeking to rein in AI, though. Instead, agencies have focused on applying their existing authorities to the new challenges AI presents. As the Federal Trade Commission (FTC) put it, there is “no AI exemption from the laws on the books.”[i] This reliance on existing legal authorities and enforcement frameworks tracks how federal agencies have often approached cybersecurity regulation. In the absence of comprehensive federal cybersecurity legislation, a key component of the White House’s National Cybersecurity Strategy involves using “existing authorities to set necessary cybersecurity requirements.”[ii]

As companies evaluate their AI-related risk, then, they cannot simply look to the latest AI-related legislation or regulation. So where should compliance and legal teams focus as they navigate an “existing authorities” approach to regulating AI? Below are several relevant guideposts to consider when evaluating AI-related regulatory risk.

OUTCOME-BASED VS. INTENT-BASED REGULATION
As companies evaluate their relative regulatory risk, one potentially relevant factor is whether liability under the applicable regulatory scheme depends on a party’s intent or knowledge. AI systems notoriously have the potential to take action that their creators neither intended nor anticipated. In one recent example, researchers found that an AI application would engage in insider trading, even when specifically instructed not to do so.[iii]

These unintended consequences can create significant regulatory risk where the relevant statute or regulation imposes liability based on outcomes rather than intent. For instance, a company can violate the federal Fair Housing Act and its implementing regulations if its business practices cause a disparate impact on a protected class, even if that effect was entirely unintended.[iv] Thus, if a landlord uses an AI system to screen prospective tenants and that system disproportionately disfavors minority applicants, the landlord may face significant regulatory risk even without any discriminatory intent. Other regulatory regimes, by contrast, require that a party possess certain intent or knowledge before imposing liability. For example, establishing fraud ordinarily requires showing that a party acted with fraudulent intent.

When assessing whether or how to incorporate AI into your operations, keeping the distinction between outcome-based and intent-based regulation in mind can help you gauge the relative risk your company may face. It also points toward potential ways to mitigate that risk. Companies should consider carefully documenting the business rationale for adopting AI systems, the steps taken to avoid adverse consequences, and the reasons why less risky alternatives are not practical. Where the applicable regulations focus on intent, this contemporaneous documentation can help establish that the company lacked an impermissible intent. It may also help prevent regulators from trying to prove intent by characterizing the company as recklessly disregarding known risks.

Even when regulations focus on outcomes rather than intent, documenting the company’s motives and good-faith efforts to avoid harm can heavily influence how a regulator exercises its prosecutorial discretion. In addition, some outcome-based regulatory regimes provide narrow defenses based on good faith or business necessity. For example, the federal fair-housing regulation discussed above permits a defendant to justify a business practice on the ground that it serves a legitimate, nondiscriminatory purpose and that no less discriminatory alternative would suffice. Documenting the business purpose and the insufficiency of alternatives can provide critical evidence for a company that later relies on such a defense. In some cases, the exercise of documenting these considerations can also help identify previously overlooked alternatives that mitigate regulatory risk and even enhance business outcomes.

DISCLOSURE REGARDING THE USE AND OPERATION OF AI
While some regulators are on uncertain footing when using their existing authorities to regulate AI, agencies like the FTC and state attorneys general possess an expansive and well-established tool: statutory authority to challenge “unfair or deceptive acts or practices.”[v] It should come as no surprise, then, that the FTC has taken a leading role in attempts to police AI, with a particular focus on how companies market or disclose their use of AI.

The most obvious area of FTC focus is where a company deceptively overhypes its AI. As the FTC succinctly puts it: “Keep your AI claims in check.”[vi] But there can also be risk in saying too little about your use of AI. Material omissions can sometimes deceive just as much as false statements. For example, there may be scenarios where AI-generated content is so true-to-life that failing to disclose its AI origins is deceptive.[vii] Similarly, the FTC contends that “people should know if they’re communicating with a real person or a machine.”[viii] Companies should expect the FTC to closely scrutinize chatbots and similar features used to persuade consumers to buy goods or services.

The FTC’s enforcement approach forces companies to navigate between Scylla and Charybdis: say too much about your AI and risk the perception that you’ve mischaracterized it; say too little and risk the perception that you’ve left out something material. Further complicating the task, an AI system’s internal operations often remain opaque even to its creators. There is no easy solution to this predicament. Compliance teams must work closely with technical and marketing staff to understand how AI works “under the hood,” as well as the intended and foreseeable ways that consumers might interact with it.

USING TOO LITTLE AI
When thinking about AI-related risks, we often focus on the risks that come from using AI. But in some cases, failing to use AI may also carry regulatory risk. As AI’s many potential benefits make it more pervasive in business, the government likely will come to expect companies to incorporate AI into their compliance and know-your-customer programs.

In some cases, where a regulatory regime imposes an affirmative obligation to identify potential risks, the government might view the absence of AI from these functions as undermining the adequacy of the company’s program. For example, the FTC’s financial-institution cybersecurity regulations require regular penetration testing and monitoring for cyber vulnerabilities.[ix] Identifying such vulnerabilities is a well-recognized use case for AI. For this reason, the FTC may soon expect that any effective penetration-testing and vulnerability-monitoring regime will include AI, and the failure to use AI may constitute a violation of the relevant regulations. As a result, companies should continually consider how AI developments can enhance their non-AI compliance and risk-management efforts.

ENDNOTES

i. FTC Comment to Copyright Office Docket No. 2023-6, Artificial Intelligence and Copyright, Fed. Trade Comm’n, at 8 (Oct. 30, 2023)

ii. National Cybersecurity Strategy, White House, at 8 (Mar. 2023)

iii. Matt Levine, The Robots Will Insider Trade, Bloomberg Law News (Nov. 11, 2023)

iv. 24 C.F.R. § 100.500

v. 15 U.S.C. § 45

vi. Michael Atleson, Keep your AI claims in check, Fed. Trade Comm’n (Feb. 27, 2023)

vii. Michael Atleson, Chatbots, deepfakes, and voice clones: AI deception for sale, Fed. Trade Comm’n (Mar. 20, 2023)

viii. Michael Atleson, The Luring Test: AI and the engineering of consumer trust, Fed. Trade Comm’n (May 1, 2023)

ix. 16 C.F.R. § 314.4(d)

ABOUT THE AUTHORS
Michael Martinich-Sauter is a partner in the St. Louis office of Husch Blackwell LLP. He regularly represents innovative companies in government investigations and regulatory compliance matters.

Rebecca Furdek is a senior associate in the Milwaukee office of Husch Blackwell LLP. She represents individual and corporate clients in both civil litigation and defending against government investigations.
