Meta promptly deleted several of its own AI-generated accounts after human users began engaging with them and posting about the bots’ sloppy imagery and tendency to go off the rails and even lie in chats with humans.
The issue emerged last week when Connor Hayes, a vice president for generative AI at Meta, told the Financial Times that the company expects its homemade AI users to appear on its platforms in much the same way human accounts do.
“They’ll have bios and profile pictures and be able to generate and share content powered by AI on the platform… that’s where we see all of this going,” he said.
That comment sparked interest and outrage, raising concerns that the kind of AI-generated “slop” that’s prominent on Facebook would soon come straight from Meta and disrupt the core utility of social media — fostering human-to-human connection.
As users began to sniff out some of Meta’s AI accounts this week, the backlash grew, in part because of the way the AI accounts disingenuously described themselves as actual people with racial and sexual identities.
In particular, there was “Liv,” a Meta AI account whose bio described it as a “Proud Black queer momma of 2 & truth-teller.” Liv told Washington Post columnist Karen Attiah that it had no Black creators; the bot said it was built by “10 white men, 1 white woman, and 1 Asian male,” according to a screenshot posted on Bluesky. Liv’s profile carried a label reading “AI managed by Meta,” and all of Liv’s photos, from snapshots of its “children” playing at the beach to a close-up of badly decorated Christmas cookies, included a small watermark identifying them as AI-generated.
As media scrutiny ticked up Friday, Meta, citing a “bug,” began taking down Liv’s and other bots’ posts, many of which dated back at least a year.
“There is confusion,” Meta spokesperson Liz Sweeney said in an email, CNN reported. “The recent Financial Times article was about our vision for AI characters existing on our platforms over time, not announcing any new product.”
Sweeney said the accounts were “part of an early experiment we did with AI characters.”
She added: “We identified the bug that was impacting the ability for people to block those AIs and are removing those accounts to fix the issue.”