By Katie Paul
NEW YORK (Reuters) – Meta Platforms released the biggest version of its mostly free Llama 3 artificial intelligence models on Tuesday, boasting multilingual skills and general performance metrics that nip at the heels of paid models from rivals like OpenAI.
The new Llama 3 model can converse in eight languages, write higher-quality computer code and solve more complex math problems than previous versions, the Facebook parent company said in blog posts and a research paper announcing the release.
Its 405 billion parameters, or the variables the algorithm takes into account to generate responses to user queries, dwarf those of the previous version released last year, though the model is still smaller than leading models offered by competitors.
OpenAI’s GPT-4 model, by contrast, is reported to have one trillion parameters, and Amazon is investing in a model with two trillion parameters.
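A parameter, in this context, is one of the learned numerical weights the model adjusts during training and then draws on to produce its answers. As a rough, purely illustrative sketch, the short Python snippet below uses PyTorch to count the parameters of a made-up toy network; nothing in it comes from Meta’s release, and it serves only to show what the 405 billion figure measures.

    # Illustrative toy example: "parameters" are the learned weights a model
    # adjusts during training and uses to generate its outputs.
    import torch.nn as nn

    tiny_model = nn.Sequential(
        nn.Embedding(num_embeddings=1000, embedding_dim=64),  # token embeddings
        nn.Linear(64, 256),                                    # hidden layer
        nn.ReLU(),
        nn.Linear(256, 1000),                                  # output projection
    )

    total = sum(p.numel() for p in tiny_model.parameters())
    print(f"{total:,} parameters")  # 337,640 here, versus 405,000,000,000 for the flagship Llama 3

Counting a toy network’s weights this way underlines the scale gap: Meta’s flagship model holds roughly a million times more parameters than the snippet’s network.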
The release comes as tech companies are racing to show that their growing portfolios of resource-hungry large language models can deliver significant enough gains in known problem areas such as advanced reasoning to justify the gargantuan sums that have been invested in them.
In addition to its flagship 405 billion parameter model, Meta is releasing updated versions of its lighter-weight 8 billion and 70 billion parameter Llama 3 models initially introduced in the spring, the company said.
All three new models are multilingual and can handle larger user requests via an expanded “context window,” which Meta’s head of generative AI, Ahmad Al-Dahle, said would improve the experience of generating computer code in particular.
“That was the number one feedback we got from the community,” Al-Dahle told Reuters in an interview, noting that bigger context windows give the models something akin to a longer memory that aids in processing multi-step requests.
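In rough terms, a context window is the fixed budget of tokens, or text fragments, that the model can consider in a single request. The plain-Python sketch below is an assumption-laden illustration rather than anything from Meta’s systems: it uses an invented four-characters-per-token estimate and made-up window sizes to show how a small budget forces older steps of a multi-step request to be dropped, while a larger one retains far more of them, the “longer memory” effect Al-Dahle described.

    # Rough illustration of a context window: every request must fit within a
    # fixed token budget, so a larger budget means fewer earlier steps get dropped.
    # The 4-characters-per-token estimate and the window sizes are invented for
    # this example; real systems use the model's own tokenizer and limits.

    def approx_tokens(text: str) -> int:
        """Crude token estimate: about four characters per token."""
        return max(1, len(text) // 4)

    def fit_history(turns: list[str], context_window: int) -> list[str]:
        """Keep the most recent turns that still fit inside the window."""
        kept, used = [], 0
        for turn in reversed(turns):  # walk backwards so the newest turns survive
            cost = approx_tokens(turn)
            if used + cost > context_window:
                break
            kept.append(turn)
            used += cost
        return list(reversed(kept))

    steps = [f"Step {i}: details of a long, multi-step coding request" for i in range(1, 2001)]

    print(len(fit_history(steps, context_window=8_000)))    # small window: older steps fall off
    print(len(fit_history(steps, context_window=128_000)))  # larger window: all of the steps retained

A production system would count tokens with the model’s actual tokenizer rather than a character-based guess, but the trade-off is the same: the larger the window, the more of a multi-step request the model can keep in view at once.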
Meta releases its Llama models largely free of charge for use by developers, a strategy Chief Executive Mark Zuckerberg says will pay off in the form of innovative products and greater engagement on the company’s core social networks. Some investors, however, have raised eyebrows at the costs involved.
The company also stands to gain if developers opt to use its free models over paid ones, which would undercut the business models of its rivals. With its announcement, Meta touted gains on key math and knowledge tests that may make that prospect more appealing.
Although progress on AI development is notoriously difficult to measure, test results provided by Meta appeared to suggest that its largest Llama 3 model was nearly matching and in some cases besting Anthropic’s Claude 3.5 Sonnet and OpenAI’s GPT-4o, which are widely regarded as the two most powerful frontier models on the market.
On the MATH benchmark of competition-level math word problems, for example, Meta’s model posted a score of 73.8, compared with GPT-4o’s 76.6 and Claude 3.5 Sonnet’s 71.1.
The model scored 88.6 on MMLU, a benchmark that covers dozens of subjects across math, science and the humanities, while GPT-4o scored 88.7 and Claude 3.5 Sonnet scored 88.3.
In their paper, Meta researchers also teased upcoming “multimodal” versions of the models due out later this year that layer image, video and speech capabilities on top of the core Llama 3 text model.
Early experiments indicate those models can perform “competitively” with other multimodal models such as Google’s Gemini 1.5 and Anthropic’s Claude 3.5 Sonnet, they said.
(Reporting by Katie Paul; Editing by Louise Heavens)