WhatsApp has introduced a new AI feature within its messaging app, calling it “entirely optional”—even though users can’t actually remove it.
A persistent Meta AI icon—a blue circle accented with pink and green—now appears in the lower-right corner of the Chats screen. Tapping it launches a chatbot intended to help answer queries. However, some users have voiced frustration over its non-removable presence.
This rollout follows criticism faced by Microsoft for its always-on “Recall” feature, which was later made optional after user backlash.
Speaking to the BBC, WhatsApp stated, “We believe offering choices is valuable and we’re continuously listening to user feedback.”
The company compares the new AI tool to other permanent features such as 'channels' and 'status'. Its introduction coincides with Meta announcing an update to teen account controls on Instagram, including a U.S. test of AI tools designed to detect users who misreport their age.
Where is the new blue circle?
If you don’t see the blue circle yet, it likely means the feature hasn’t been enabled for your account.
Meta notes that the tool is launching only in select countries for now, and even within those, availability can vary, so you might not have access yet even if others in your area do.
Along with the blue circle, there’s a search bar at the top that prompts you to “Ask Meta AI or Search.” This same function is also available on Facebook Messenger and Instagram, both of which are Meta-owned platforms.
The AI assistant is built using Llama 4, Meta’s own large language model.
Before using the tool, users are shown a lengthy introduction explaining that Meta AI is “optional.”
According to WhatsApp’s website, the AI can help answer questions, explain topics, or assist with generating ideas.
In a test run, I asked about the weather in Glasgow. The AI quickly responded with details on temperature, rain probability, wind, and humidity. It also provided two links for more detail: one was accurate, but the other mistakenly offered weather information for Charing Cross in London rather than Glasgow.
What do people think of it?
In Europe, the reaction to Meta's new AI feature has largely been negative. Users on X, Bluesky, and Reddit have voiced their dissatisfaction, many of them upset that there is no option to disable the tool. Daily Mirror columnist Polly Hudson was among those expressing frustration at the lack of control.
AI and privacy expert Dr. Kris Shrishak has strongly criticized the move, accusing Meta of “exploiting its market” and treating users as “test subjects” for its AI systems. He emphasized that people shouldn’t be forced to use AI features and labeled Meta’s models a “privacy violation by design,” pointing out that the company has used data from web scraping and pirated books to train them.
Dr. Shrishak added that now that the legality of these methods is under scrutiny in court, Meta appears to be seeking new ways to collect data—possibly through features like this one.
A report by The Atlantic claims Meta used millions of pirated books and academic texts from LibGen (Library Genesis) to train its Llama AI. In response, author organizations in the UK and elsewhere are calling for government action. Meta is also facing legal action from several authors over the alleged misuse of their work.
When asked about The Atlantic’s findings, Meta declined to respond.
What are the concerns?
When users first activate Meta AI in WhatsApp, they’re informed that the chatbot only accesses messages that are directly shared with it.
Meta clarifies that it cannot view any messages in private conversations, which remain protected through end-to-end encryption.
The UK’s Information Commissioner’s Office (ICO) told the BBC that it is keeping a close eye on how Meta AI is rolled out and how personal data is handled within WhatsApp.
“AI development often relies on personal data, so it’s crucial that people feel confident their data is being handled properly,” the ICO stated. It emphasized that any organization using personal data for training or deploying generative AI must fully comply with data protection laws—especially when handling children’s data.
Dr. Kris Shrishak also warned users to be cautious. He explained that while messages between users stay encrypted, interactions with Meta AI are a different case: “When you chat with Meta AI, remember Meta is on the other end—not your friend.”
Meta further advises users to avoid sharing sensitive or private information during these interactions. The company notes that anything shared could potentially be used later, saying: “Don’t share details—especially personal or sensitive content—you wouldn’t want the AI to store or use.”