Microsoft introduced a new lightweight artificial intelligence model on Tuesday, aiming to broaden its customer base with more affordable options. The new model, named Phi-3-mini, is part of Microsoft's strategy to focus on technology that is expected to transform various industries and work processes.
Phi-3-mini is the first of three small language models (SLMs) that Microsoft plans to release. Sébastien Bubeck, Microsoft’s vice president of GenAI research, highlighted its cost-effectiveness, saying, “Phi-3 is not slightly cheaper, it’s dramatically cheaper, we’re talking about a 10x cost difference compared to the other models out there with similar capabilities.”
SLMs like Phi-3-mini are designed to handle simpler tasks. Microsoft positions them as user-friendly solutions suitable for companies with limited resources.
Phi-3-mini is immediately available in the AI model catalog of Azure, Microsoft's cloud computing platform. It can also be found on the machine learning model platform Hugging Face and on Ollama, a framework for running models on a local machine.
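For illustration, the following is a minimal sketch of loading the model through the Hugging Face transformers library, assuming a model identifier such as microsoft/Phi-3-mini-4k-instruct; the exact identifier and loading options may differ from what Microsoft publishes.

```python
# Minimal sketch: loading Phi-3-mini from Hugging Face with the transformers library.
# The model identifier below is an assumption for illustration.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True may be required on older transformers versions.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Summarize the benefits of small language models in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On Ollama, the model would be pulled and run with a single command (for example, a tag along the lines of `ollama run phi3`, assumed here rather than taken from the announcement).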
The SLM is also compatible with Nvidia's software tool, Nvidia Inference Microservices (NIM). It has been optimized to run efficiently on Nvidia's graphics processing units (GPUs).
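NIM services generally expose an OpenAI-compatible HTTP API. The sketch below shows how such an endpoint might be queried from Python; the base URL, port, and model name are assumptions about a particular local deployment, not details from the announcement.

```python
# Minimal sketch: querying a locally deployed NIM container through its
# OpenAI-compatible API. Endpoint URL and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # assumed local endpoint

response = client.chat.completions.create(
    model="microsoft/phi-3-mini-4k-instruct",  # assumed model name exposed by the service
    messages=[{"role": "user", "content": "What tasks suit a small language model?"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```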
Last week, Microsoft invested $1.5 billion in G42, an AI firm based in the UAE. This investment follows Microsoft's collaboration with French startup Mistral AI to offer their models on its Azure cloud computing platform.
The launch of Phi-3-mini underscores Microsoft's push to offer cost-effective AI to a broader set of customers, alongside its wider investments and partnerships across the AI sector.