Meta, the parent company of Facebook, has unveiled an upgraded version of its code generation model, Code Llama, marking a significant milestone in AI development. The latest iteration boasts 70 billion parameters, making it the largest version to date, surpassing previous releases that ranged from 7 billion to 34 billion parameters. The new release includes three variations: a base version, a version fine-tuned for Python coding, and an instruct-tuned version optimized for natural language inputs.
Described by Meta as "one of the highest performing open models available today," the new Code Llama models are open-source and available under the same licensing terms as their predecessors, allowing for commercial applications. However, access to the models requires completion of a form.
Meta CEO Mark Zuckerberg expressed pride in the advancements of the new Code Llama, noting that its innovations will be incorporated into future iterations such as Llama 3 and upcoming Meta models.
The original version of Code Llama debuted in August, with Meta touting the latest release as the most performant base for fine-tuning code generation models. Meta envisions the new models as a foundation for further development within the AI community.
In terms of performance, the new versions of Code Llama outperform both their predecessors and rival code generation systems such as StarCoder, OpenAI's Codex, and PaLM-Coder. The Instruct version notably achieved a score of 67.8 on the HumanEval benchmark under the Pass@1 metric, surpassing GPT-4's reported score by 0.8 points.
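For context, Pass@1 measures the fraction of HumanEval problems a model solves with its generated samples, under the assumption that only one sample may be submitted per problem. A minimal sketch of the standard unbiased pass@k estimator commonly used for this benchmark (Meta's exact evaluation harness is not described in the announcement, so this is illustrative rather than their method):

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one
    of k samples, drawn without replacement from n total generations
    (of which c are correct), passes the unit tests.

    Computed as 1 - C(n - c, k) / C(n, k).
    """
    if n - c < k:
        # Fewer incorrect samples than k draws: at least one draw
        # is guaranteed to be a correct sample.
        return 1.0
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)

# Example: 10 generations per problem, 5 correct, submitting 1 sample.
print(pass_at_k(10, 5, 1))  # 0.5
```

A benchmark score like 67.8 Pass@1 is then the average of this per-problem estimate (with k = 1) across all 164 HumanEval problems, expressed as a percentage.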
Meta's release of the enhanced Code Llama model represents a notable step forward in AI capabilities, offering promising opportunities for further innovation and development within the coding and AI communities.