Meta launches standalone AI app to challenge ChatGPT

Company News

by Finance News Network

Meta launches standalone AI app, betting on personalisation and social discovery to challenge ChatGPT

Meta Platforms has released a dedicated Meta AI app, marking its most aggressive move yet in the generative AI race and intensifying competition with OpenAI’s ChatGPT, Google’s Gemini, and Elon Musk’s Grok.

Unveiled at Meta’s inaugural LlamaCon developer event, the new app is built on Meta’s latest large language model, Llama 4, and aims to differentiate itself by drawing on years of user data from Facebook and Instagram to deliver highly personalised, context-aware responses.

“We’re launching the first version of the Meta AI app: the assistant that gets to know your preferences, remembers context and is personalized to you,” Meta said in its announcement.

A social twist on AI interaction

The most distinctive feature is the “Discover feed,” a social stream of public AI interactions. Users can see how others — including friends — are using Meta AI, remix popular prompts, and share their own, blending generative AI with the viral logic of platforms like Instagram and TikTok.

This social integration may also preview future directions for other players. OpenAI and X have both hinted at social feeds for AI assistants, but Meta is the first to implement it at scale.

Built-in voice features and full-duplex demo

The app puts voice interaction front and centre, with both standard voice functionality and a beta “full-duplex” mode, which enables more dynamic, natural-sounding conversations with overlapping speech and real-time responsiveness — similar to ChatGPT’s advanced voice mode.

The full-duplex feature is currently available in the US, Canada, Australia, and New Zealand, and Meta acknowledges that users may encounter inconsistencies as it continues to refine the technology.

Cross-platform continuity and smart glasses integration

Rather than starting from scratch, the Meta AI app replaces the existing Meta View app used to manage Ray-Ban Meta smart glasses. The new interface merges voice assistant capabilities with device management, allowing users to start a conversation on their glasses and continue it in the app or on the web — though not yet the other way around.

The app also supports bidirectional chat continuity across mobile and web, with enhancements for desktop users including better image generation controls and a document editor in limited testing, which allows users to generate or upload files for Meta AI to analyse.

Customisation and control

Meta AI can personalise results using data from connected Facebook and Instagram accounts, including liked content and profile details — but only if users opt in via the Meta Accounts Center. Users can also instruct the assistant to remember facts about them — such as dietary restrictions or hobbies — for future interactions.

Meta has emphasised user control: microphone usage is clearly indicated, voice features can be toggled on or off, and no AI-generated content is shared publicly unless explicitly chosen.

Strategic context

The app launch aligns with CEO Mark Zuckerberg’s ambition to reach 1 billion people with a highly intelligent, personalised AI assistant by the end of 2025. Meta AI already had 700 million monthly users as of January, integrated into the search bars of WhatsApp, Facebook, Messenger, and Instagram.

Meta has committed to spending up to US$65bn in 2025 on AI infrastructure, and a paid tier for advanced Meta AI capabilities is in the works, though it’s not expected to contribute meaningful revenue until 2026.

Meta reports its first-quarter results Wednesday, with investors closely watching for signs that these substantial AI investments are translating into commercial returns.

