Mistral AI, a France-based startup, has introduced a new proprietary large language model (LLM) to an increasingly congested AI market, claiming that its new Mistral Large can outperform several key competitors. The company also announced a partnership with Microsoft that will make Mistral Large available through Azure AI Studio and Azure Machine Learning.

In a post on February 26, the Paris-based startup said that Mistral Large surpassed several major LLMs, trailing only GPT-4, on a “massive multitask language understanding” (MMLU) benchmark, and performed well on several math and coding tests.

However, Mistral Large’s performance was not compared with that of xAI’s Grok or Google’s Gemini Ultra, which were released in November 2023 and early February 2024, respectively. Cointelegraph has reached out to Mistral AI for comment.

The company’s co-founder and chief scientist, Guillaume Lample, claims Mistral Large is “vastly superior” to Mistral AI’s previous models. Mistral AI also introduced “Le Chat,” an AI chat interface built on top of its models, similar to how ChatGPT is built on GPT-3.5 and GPT-4.

The company, which received $487 million in funding in December from Nvidia, Salesforce, and Andreessen Horowitz, stated that Mistral Large is fluent in French, Spanish, German, and Italian.


While Mistral AI’s first model was released under an open-source license, Mistral Large is a closed, proprietary model akin to OpenAI’s most recent LLMs, a shift that disappointed some users on X.

While third-party AI chatbot ranking systems such as Chatbot Arena have not rated Mistral Large, its previous model, Mistral Medium, ranks sixth out of over 60 LLMs.

The pairwise ratings collected by Chatbot Arena are run through a Bradley-Terry model, which uses random (bootstrap) sampling to produce an “Elo” rating for each model, estimating how likely it is to beat another model in a head-to-head matchup.
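For illustration, the sketch below fits a Bradley-Terry model to a handful of made-up pairwise “battles” and converts the resulting strengths to an Elo-style scale. This is not Chatbot Arena’s actual code, and the model names and results are purely hypothetical.

```python
# Minimal Bradley-Terry fit over pairwise "battles", mapped to an Elo-style
# scale. Illustrative sketch only; the battle data below is made up.
import math
from collections import defaultdict

battles = [  # (winner, loser) pairs -- hypothetical results
    ("gpt-4", "mistral-medium"),
    ("mistral-medium", "llama-2-70b"),
    ("gpt-4", "llama-2-70b"),
    ("mistral-medium", "gpt-4"),
    ("mistral-medium", "llama-2-70b"),
    ("llama-2-70b", "mistral-medium"),
]

models = sorted({m for pair in battles for m in pair})
wins = defaultdict(int)    # total wins per model
games = defaultdict(int)   # games played between each ordered pair
for w, l in battles:
    wins[w] += 1
    games[(w, l)] += 1
    games[(l, w)] += 1

# Bradley-Terry strengths via the classic minorize-maximize iteration:
# p_i <- W_i / sum_j n_ij / (p_i + p_j)
strength = {m: 1.0 for m in models}
for _ in range(200):
    new = {}
    for i in models:
        denom = sum(
            games[(i, j)] / (strength[i] + strength[j])
            for j in models if j != i and games[(i, j)] > 0
        )
        new[i] = wins[i] / denom if denom > 0 else strength[i]
    total = sum(new.values())
    strength = {m: p / total for m, p in new.items()}  # normalize each pass

# Map strengths onto an Elo-like scale, anchoring the average model at 1000.
ratings = {m: 1000 + 400 * math.log10(strength[m] / (1 / len(models)))
           for m in models}
for m, r in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{m:>16}: {r:6.0f}")
```

The bootstrap step used by Chatbot Arena would repeat a fit like this over many resampled subsets of battles to attach confidence intervals to each rating.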

As noted above, the company also revealed a partnership with Microsoft to provide Mistral Large on Azure AI Studio and Azure Machine Learning.

“Microsoft’s trust in our model is a step forward in our journey,” Mistral AI stated about its commercial LLM.

Mistral AI will use Azure’s “supercomputing infrastructure” for training and scaling its models, and the two companies will also collaborate on AI research and development, according to a Feb. 26 announcement from Eric Boyd, Microsoft’s corporate vice president for the Azure AI Platform.

Mistral Large is only marginally less expensive than GPT-4 Turbo, costing $8 per million input tokens and $24 per million output tokens, compared with GPT-4 Turbo’s $10 and $30, respectively.
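As a rough illustration of those list prices, the snippet below compares the two models on a hypothetical workload; the token volumes are made up.

```python
# Quick cost comparison using the per-million-token prices quoted above.
PRICES = {                      # (input $, output $) per 1M tokens
    "mistral-large": (8.00, 24.00),
    "gpt-4-turbo": (10.00, 30.00),
}

# Hypothetical monthly usage: 1.5M input tokens, 0.5M output tokens.
input_tokens, output_tokens = 1_500_000, 500_000

for model, (in_price, out_price) in PRICES.items():
    cost = input_tokens / 1e6 * in_price + output_tokens / 1e6 * out_price
    print(f"{model}: ${cost:,.2f}")
# mistral-large: $24.00 vs. gpt-4-turbo: $30.00 -- about 20% cheaper,
# since both the input and output rates are 20% below GPT-4 Turbo's.
```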
