Is the Federal Trade Commission About to Crack Down on AI Claims?
New FTC blog post should be viewed as a warning
The U.S. Federal Trade Commission’s (FTC) mission is “To prevent business practices that are anticompetitive or deceptive or unfair to consumers; to enhance informed consumer choice and public understanding of the competitive process; and to accomplish this without unduly burdening legitimate business activity.”
The agency has a broad mandate and finite resources, which means it cannot actively pursue every potential area of wrongdoing within its scope at all times. Instead, it must choose where to employ its resources to maximize consumer benefit. These decisions can be driven by a variety of factors, including:
Tips about fraudulent or deceptive activity by a company or a particular industry segment.
The influence of politicians, lobbyists, and other political actors with a particular agenda.
The observation of popular trends.
Artificial intelligence fits all three of these categories today. That alone does not mean the FTC will pay close attention to AI. However, when the FTC says it is interested in a type of business, you should take note.
Keep Your AI Claims in Check
Michael Atleson, an attorney in the FTC’s Division of Advertising Practices, published a blog post late last month titled Keep Your AI Claims in Check. Atleson’s main point:
What exactly is “artificial intelligence” anyway? It’s an ambiguous term with many possible definitions. It often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations. But one thing is for sure: it’s a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.
In a 2021 FTC blog post, Elisa Jillson, an attorney in the Bureau of Consumer Protection, had issued a similarly clear warning specifically about AI product claims.
Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, your statements to business customers and consumers alike must be truthful, non-deceptive, and backed up by evidence. In a rush to embrace new technology, be careful not to overpromise what your algorithm can deliver.
Atleson goes on to highlight four areas the FTC will typically consider when evaluating a potential case:
Are you exaggerating what your AI product can do?
Are you promising that your AI product does something better than a non-AI product?
Are you aware of the risks?
Does the product actually use AI at all?
Have you read about or used an AI product in the past few months that appears to be violating one or more of these principles? So have I.
Regulation vs. Enforcement
Don’t mistake the FTC’s interest in AI for a new type of regulation. The ideas expressed by Atleson and Jillson are about enforcing existing laws that prohibit false marketing claims.
I am skeptical about new regulations that intend to curb the use of AI. Even well-intentioned regulations typically carry unintended consequences that can have severe adverse effects.
Regulators are at a significant disadvantage because they typically do not work with the technologies they oversee and cannot implement changes at the pace of innovation. As a result, they often regulate in a way that unintentionally favors the interests of specific influential actors in politics and business at the expense of others. In addition, the rules are often applied so late that they address a reality that no longer exists. So, regulators should use caution when imposing new rules on dynamic markets.
Enforcement of existing laws is a different matter. We all saw the onslaught of YouTube videos about getting rich overnight using ChatGPT and could immediately see misleading advertising in action. However, there are venture-funded technology companies that are also making over-the-top claims about what their generative AI can do for users. And it appears some are saying they employ AI for features that are simply rules-based procedures.
The Downside of Faking It
Misleading claims are the empty calories of marketing. They make you feel good in the moment and sometimes lead to short-term gains in sales or market positioning. However, they tend to be followed by a crash in activity and reputation that can be hard to recover from.
Making matters worse, this approach is too often justified as the fake-it-until-you-make-it strategy that some people believe is necessary to succeed in technology segments. The biggest problem is that false claims tend to become amplified, undermining belief in the value of even valid claims.
AI product vendors should take note of the FTC’s interest. The blog post signals that the agency is looking for examples it can prosecute to address wrongdoing and send a message to the industry. It also means that everyone else may soon benefit from more realistic claims and a little less hyperbole.
With that said, I am concerned that convictions of AI companies making false claims will be used as a pretext for new regulation. That is where the cascade of unintended consequences will gather momentum and could undermine the many benefits that AI is already delivering. Let’s hope the market corrects the bad-faith practices before the government machinery feels compelled to act.
If you would like to read a deeper analysis of this topic, I recommend you check out Lance Eliot’s recent column in Forbes.