MediaTek Optimizes Dimensity Chipsets for Microsoft’s Phi-3.5 AI Models

priyasng
Verified
Joined: Wed Sep 04, 2024 5:29 pm

MediaTek, a leading semiconductor company, announced on Monday that it has successfully optimized several of its Dimensity chipsets to support Microsoft’s Phi-3.5 small language models (SLMs). This collaboration aims to enhance the efficiency of on-device generative AI tasks by leveraging MediaTek's advanced neural processing units (NPUs).
A New Era for AI on Mobile Platforms
Microsoft introduced the Phi-3.5 series of SLMs in August, featuring three key models:
  • Phi-3.5 MoE (Mixture of Experts): A large-scale model built from 16 experts of 3.8 billion parameters each (roughly 42 billion in total), of which about 6.6 billion are active at any given time.
  • Phi-3.5 Mini: A compact 3.8-billion-parameter model with multilingual support.
  • Phi-3.5 Vision: Designed for tasks requiring multi-frame image understanding and reasoning.
These open-source AI models, available on Hugging Face, differ from typical conversational AI: they are instruct models, which respond to specific instructions from the user rather than open-ended chat.
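For illustration, here is a minimal sketch of what instruction-style prompting looks like through the Hugging Face transformers library. The model ID and generation settings are assumptions chosen for demonstration and are not tied to MediaTek's on-device tooling.

```python
# Minimal sketch: instruct-style prompting of a Phi-3.5 model via Hugging Face
# transformers. Model ID and settings are illustrative assumptions.
import torch
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3.5-mini-instruct",  # assumed Hugging Face repo name
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Instruct models expect an explicit instruction rather than open-ended chat.
messages = [
    {
        "role": "user",
        "content": "Summarize in one sentence: MediaTek has optimized its "
                   "Dimensity chipsets to run Phi-3.5 models on-device.",
    }
]

result = generator(messages, max_new_tokens=64, do_sample=False)
# The pipeline returns the full conversation; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```

On an actual Dimensity device, inference would go through MediaTek's own tooling rather than plain PyTorch; the snippet above only shows the instruct-style input format.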
MediaTek's Optimized Dimensity Chipsets
MediaTek revealed that its Dimensity 9400, Dimensity 9300, and Dimensity 8300 chipsets are now optimized for the Phi-3.5 models. The optimization lets these mobile platforms run generative AI inference efficiently on the device itself, reducing the need for cloud-based resources.
Key benefits of the optimization include:
  • Reduced Latency: Faster processing of AI tasks.
  • Lower Power Consumption: Improved energy efficiency.
  • Increased Throughput: Higher performance for complex AI operations.
By tailoring the models to each chipset's NPU architecture and memory access patterns, MediaTek aims to integrate the Phi-3.5 models seamlessly with its platforms.
Unlocking the Potential of Phi-3.5 Models
Among the Phi-3.5 series, the Phi-3.5 MoE has shown particularly strong results. It reportedly outperforms models such as Gemini 1.5 Flash and GPT-4o mini on the SQuALITY benchmark, which evaluates accuracy and readability in text summarization tasks.
The Phi-3.5 Vision and Phi-3.5 Mini bring their own strengths. While Phi-3.5 Vision excels in visual reasoning tasks, Phi-3.5 Mini offers compact versatility with multi-lingual support.
Developer Tools for Seamless AI Integration
Developers can access the Phi-3.5 models directly via Hugging Face or Microsoft’s Azure AI Model Catalogue. For those working with MediaTek’s platforms, the company’s NeuroPilot SDK provides additional tools for building optimized on-device AI applications. These resources let developers tap into Phi-3.5 while taking advantage of the NPUs in MediaTek's Dimensity chipsets.
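As a rough illustration of the Hugging Face route, the sketch below pulls the model weights for local experimentation. The repository IDs are assumptions based on Microsoft's public Hugging Face listings, and any MediaTek-specific NeuroPilot packaging steps are not shown here.

```python
# Minimal sketch: fetching Phi-3.5 weights from Hugging Face for local use.
# Repository IDs are assumptions based on Microsoft's public listings.
from huggingface_hub import snapshot_download

PHI_35_REPOS = [
    "microsoft/Phi-3.5-mini-instruct",
    "microsoft/Phi-3.5-MoE-instruct",
    "microsoft/Phi-3.5-vision-instruct",
]

for repo_id in PHI_35_REPOS:
    local_dir = snapshot_download(repo_id=repo_id)
    print(f"Downloaded {repo_id} to {local_dir}")
```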
A Leap Forward for On-Device AI
With this announcement, MediaTek continues to solidify its position as a leader in on-device AI innovation. By enabling seamless integration of Microsoft’s advanced Phi-3.5 models, the company is paving the way for more powerful and efficient mobile AI applications, ranging from text summarization to complex image reasoning tasks.
As generative AI continues to evolve, MediaTek’s collaboration with Microsoft highlights the growing importance of optimizing hardware and software to bring cutting-edge technology directly to users' hands.