US: Meta Platforms has officially launched Llama 4 Scout and Llama 4 Maverick, the latest iterations of its powerful large language model (LLM) series.
In a bold statement, the company described the new models as its ‘most advanced yet’ and ‘best in class for multimodality.’
The release is part of Meta’s aggressive push into artificial intelligence, aimed at positioning itself at the forefront of an increasingly competitive landscape shaped by the success of OpenAI’s ChatGPT.
Unlike traditional models limited to text, Meta's Llama 4 series is multimodal: it can process and integrate multiple data formats, including text, video, images, and audio, and even convert content between those formats. This capability opens the door to more fluid, humanlike interactions with AI across platforms.
"Today is the start of a new era of natively multimodal AI innovation. Today, we're introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality. Llama 4 Scout: 17B-active-parameter model…" — AI at Meta (@AIatMeta), April 5, 2025
Both Llama 4 Scout and Llama 4 Maverick will be released as open-source software, continuing Meta's strategy of fostering innovation and adoption across the global AI research and development community.
Meta also previewed Llama 4 Behemoth, a still-in-development model it calls “one of the smartest LLMs in the world.”
Designed to act as a teacher for future AI models, Behemoth is expected to bring unprecedented reasoning capabilities.
Meta’s release comes as major tech companies ramp up investments in artificial intelligence, racing to develop the most powerful and capable AI systems.
Meta plans to spend up to $65 billion this year on expanding its AI infrastructure. The massive outlay is driven in part by investor pressure to show tangible returns on the billions already poured into AI development.