
Bringing a law enforcement action takes a lot of time – many, many months of investigation, analysis, negotiation and process must occur before an agency acts. But blog posts can get out the door far more quickly. And the Federal Trade Commission (FTC) has been blogging fast and furiously – one could say obsessively – about the hot topic of generative artificial intelligence (AI).

It would be an understatement to say that there has been an explosion of discussion about and consumer uptake of a wide range of new and exciting AI tools. The change has been dramatic and quick, and it has not escaped the notice of regulators.

So let’s take a look at the FTC’s three recent AI blog posts discussing how the agency anticipates assessing whether AI tools are being marketed or used in ways that could violate the FTC Act.

Step one for the FTC is to lay the groundwork and emphasize that the same rules apply whether we are talking about AI or mere human decision-making. This is FTC 101, and companies are responsible for any claims that they are making about what their AI can or cannot do. How accurate is your AI? Is it really better than other products out there? Are you creating any risks? And, of course, if the FTC comes knocking, it’s not going to be sufficient to say that the technology “is a ‘black box’ you can’t understand or didn’t know how to test.”

Step two for the FTC is to go broader and make sure that companies making AI tools widely available are thinking through the ramifications – both good and potentially bad – of what they are doing. For this round, the focus is on what people are using your AI tools for and whether bad actors are putting them to deceptive or unlawful ends. And yes, the agency can have legal theories of enforcement if others use your tool to deceive, even if that was not your intent. The FTC sets forth a laundry list of possible harms:

Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.

And step three for the FTC is to go as broadly as possible and flag issues and concerns at the cutting edge of consumer protection, primarily by framing them as practices the agency could consider “unfair.” The blog broadly discusses AI tools that “can influence people’s beliefs, emotions, and behavior.” This, of course, is starting to sound quite a bit like the FTC’s somewhat obsessive focus on dark patterns, and the latest blog owns up to that. Among the areas where the FTC could arguably assert its unfairness authority: chatbots that confidently deliver questionable answers, or answers that consumers incorrectly perceive as neutral or impartial. The FTC specifically flags AI tools that could steer consumers into “harmful decisions in areas such as finances, health, education, housing, and employment.” The FTC also flags undisclosed advertising within a generative AI feature as another area of concern.

There is, of course, a big difference between somewhat grandiose statements in a trilogy of blog posts and actually bringing a law enforcement action based on those theories. Now, we have written quite a lot about the FTC pushing the bounds of its authority these days, and we are quite confident that there is extensive discussion within the agency about which theories to focus on and which targets to pursue. But if you are involved with marketing or providing new and exciting AI tools, take a close look at the FTC’s trilogy of blog entries on the topic. There is a lot to digest and many issues worth serious consideration.

And, of course, I did have to ask a certain AI tool when the FTC would bring its first case involving generative AI issues. The answer was a bit wishy-washy but accurate, stating:

Given the increasing prevalence of AI and the FTC’s stated focus on technology and data privacy, it is likely that the agency will continue to take actions related to AI in the future. However, it is impossible to predict when and what specific actions the FTC will take.