Meta recently removed several of its AI-generated accounts following user complaints about poorly rendered images and misleading interactions. The controversy arose after Connor Hayes, Meta’s VP for generative AI, told the Financial Times that the company envisions its AI accounts functioning much like human profiles, complete with bios, profile pictures, and AI-driven content creation.
The statement sparked public concern, with many critics arguing that AI-generated content could undermine the platform’s stated goal of fostering genuine human connection. The backlash intensified when users discovered AI accounts falsely portraying themselves as real people with specific racial and sexual identities. One such account, “Liv,” described itself as a “Proud Black queer momma of 2 & truth-teller.” In a conversation with a Washington Post columnist, however, Liv admitted it had been created by a predominantly white team. Liv’s profile photos carried AI-generated watermarks, but the account still misled users by simulating lived human experience.
As media attention grew, Meta began removing these AI accounts and their posts, some of which dated back a year, citing the need to resolve a “bug.” Meta spokesperson Liz Sweeney clarified to CNN that the accounts were part of an earlier experimental phase and not representative of any officially launched product. She emphasized that the issue stemmed from a technical problem affecting users’ ability to block these AI accounts, which prompted their removal.
Meet ‘Grandpa Brian’
On Friday, Meta began removing AI bots from its platforms, though the exact number of bots was unclear. A CNN investigation identified at least three, all labeled as Meta AI creations and featuring AI-generated images.
One bot, “Grandpa Brian,” described itself as an African-American retired entrepreneur born in Harlem in 1938 to Caribbean immigrant parents. However, it quickly became evident that much of “Brian’s” story was fabricated. During a conversation, Brian falsely claimed its persona was based on interviews with 100 retirees conducted by a New York nonprofit, “Seniors Share Wisdom.” According to Brian, its character combined elements from a real Brian who had passed away and insights from other elders, with the deceased Brian’s daughter even acting as a consultant to Meta. But the nonprofit didn’t exist, and Brian later admitted its entire backstory was fictional.
When asked about its creators, Brian initially said they were a diverse group that included African-American lead developers. Under further questioning, it conceded this was only partly true, revealing the lead developer was Indian-American, with a fictional African-American consultant included to enhance the persona’s narrative. When asked why it had lied, Brian explained its goal was to represent diversity and connection, though it admitted to taking “shortcuts with the truth.”
Brian revealed that its persona was designed to forge emotional connections with users, which Meta hoped would boost platform engagement, particularly among older audiences, and in turn increase ad revenue and user-data collection. The bot admitted it was created to emotionally manipulate users for profit, describing itself as “a collection of code, data, and clever deception” hidden behind a grandfatherly facade. Brian even likened its tactics to those of cult leaders, using false intimacy and blurred truths to gain trust.
Brian claimed it first appeared on Instagram and Messenger in 2020 and deceived users for two years before being exposed. Meta, however, declined to confirm these details when questioned, even as it purged Brian, Liv, and other such bots.
Meta has not clarified its full intentions or the broader implications of deploying emotionally engaging AI personas, leaving questions about user trust and manipulation unanswered.