In the competitive landscape of artificial intelligence, Apple has made a strategic move by releasing eight small AI models, collectively known as OpenELM. These compact models are designed to run on-device and offline, making them well suited for smartphones.
OpenELM: Apple’s Answer to AI Language Models
Published on Hugging Face, the open-source AI community hub, the models come in 270 million, 450 million, 1.1 billion, and 3 billion parameter sizes, and each size can be downloaded in either a pre-trained or an instruction-tuned version.
The pre-trained models provide a base atop which users can fine-tune and develop. The instruction-tuned models are already tuned to follow instructions, making them better suited for conversations and interactions with end users, as the sketch below illustrates.
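As a rough sketch of what downloading and running one of these models looks like (assuming the Hugging Face transformers library, the model IDs from Apple's Hugging Face listing, and the Llama 2 tokenizer that Apple's model card points to, which is gated and requires access approval):

```python
# Minimal sketch: loading an instruction-tuned OpenELM with transformers.
# Model ID (apple/OpenELM-270M-Instruct) follows Apple's Hugging Face listing.
# OpenELM ships custom modeling code, hence trust_remote_code=True, and it
# reuses the Llama 2 tokenizer rather than providing its own.
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "apple/OpenELM-270M-Instruct",  # instruction-tuned 270M variant
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

inputs = tokenizer("Summarize this email in one sentence:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The pre-trained variants (e.g. apple/OpenELM-270M) load the same way but respond to raw text continuation rather than instructions.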
While Apple hasn’t suggested specific use cases for these models, they could power assistants that parse emails and texts, or provide intelligent suggestions based on that data. This approach is similar to the one taken by Google, which deployed its Gemini AI model on its Pixel smartphone lineup.
The models were trained on publicly available datasets, and Apple is sharing both the code for CoreNet (the library used to train OpenELM) and the “recipes” for its models. In other words, users can inspect how Apple built them.
Microsoft’s Phi-3: A Competitor on the Horizon
The Apple release comes shortly after Microsoft announced Phi-3, a family of small language models capable of running locally. Phi-3 Mini, a 3.8 billion parameter model trained on 3.3 trillion tokens, can nonetheless handle a 128K-token context window, putting it on par with GPT-4 and ahead of Llama-3 and Mistral Large in terms of context length.
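A comparable sketch for running Phi-3 Mini locally (assuming the transformers library and the 128K-context model ID from Microsoft's Hugging Face listing; trust_remote_code=True may be needed on transformers versions that predate native Phi-3 support):

```python
# Minimal sketch: running Phi-3 Mini locally with transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"  # 128K-context variant
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Phi-3 is instruction-tuned, so prompts go through the chat template.
messages = [{"role": "user", "content": "Draft a two-line reply declining a meeting."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=60)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```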
Being open source and lightweight, Phi-3 Mini could potentially replace traditional assistants like Apple’s Siri or Google’s Gemini for some tasks. Microsoft has already tested Phi-3 on an iPhone and reported satisfactory results and fast token generation.
The Future of AI in Apple Devices
While Apple has not yet integrated these new AI language model capabilities into its consumer devices, the upcoming iOS 18 update is rumored to include new AI features that use on-device processing to ensure user privacy.
Apple hardware has an advantage in local AI use: Apple silicon uses a unified memory architecture, in which the CPU and GPU share the same pool of RAM. This means a Mac with 32 GB of RAM (a common configuration) can devote most of that memory to running AI models, the way a discrete GPU would use its video RAM (VRAM). Windows devices, by comparison, are hamstrung by separate system RAM and GPU VRAM, so users often need to purchase a powerful GPU with 32 GB of VRAM to run the same models.
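A quick way to see this in practice (assuming PyTorch is installed): on Apple silicon, PyTorch exposes the GPU through the Metal Performance Shaders (MPS) backend, and tensors moved to that device are allocated from the same unified memory the CPU uses.

```python
# Check for Apple's MPS backend; tensors on the "mps" device live in
# unified memory shared between the CPU and GPU.
import torch

if torch.backends.mps.is_available():
    device = torch.device("mps")
    x = torch.randn(1024, 1024, device=device)  # allocated from unified memory
    print("MPS backend available; tensor on:", x.device)
else:
    print("MPS not available; falling back to CPU")
```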
However, Apple lags behind Windows and Linux in AI development. Most AI software is built around hardware designed and built by Nvidia, which Apple phased out in favor of its own chips. This means there is relatively little Apple-native AI development, and as a result, using AI on Apple products often requires translation layers or other workarounds.
In conclusion, the release of OpenELM by Apple and Phi-3 by Microsoft marks a significant milestone in the evolution of AI language models. As these tech giants continue to innovate, the future of AI looks promising.