Apple recently unveiled OpenELM, a family of open-source language models designed to run directly on devices rather than on remote cloud servers. Announced on Hugging Face, the AI model-sharing platform, OpenELM comprises eight models in total, spanning a range of sizes and capabilities.

Source: Hugging Face

These deliberately compact models are optimized for efficient text-based tasks on devices like iPhones and iPads. Four of the models come pre-trained, while the other four are fine-tuned to follow instructions. They range in size from 270 million to 3 billion parameters, with higher parameter counts generally indicating greater capability and performance.

Apple’s foray into open-source AI models aligns with the strategies of its competitors. Microsoft recently introduced Phi-3 Mini, a small model designed to run on smartphones. Benchmarks reported by Apple show that OpenELM models, particularly the 450-million-parameter instruction-tuned variant, deliver strong performance on text generation tasks.

This move signals a potential shift toward on-device AI processing for iPhones and other Apple devices. Running language models directly on the device offers several advantages: it improves privacy by keeping user data local, and it works without an internet connection, making it useful in areas with limited or no connectivity.

The release of OpenELM also indicates Apple’s commitment to fostering collaboration and innovation in the field of AI research. By making these models open-source, Apple allows researchers and developers to explore and adapt them for various purposes, potentially leading to further advancements in on-device AI capabilities.