Alibaba-backed Moonshot AI on January 27 announced the release of Kimi K2.5, its latest open-source AI model. The new model adds vision-based coding capabilities and a self-directed agent swarm system. These features are designed to improve efficiency for both consumers and enterprise knowledge workers.
The company said Kimi K2.5 processes text and visual inputs through a single multimodal architecture, allowing one system to handle reasoning, coding, visual understanding, and autonomous task execution.
The model is available via the Kimi app, Kimi.com, APIs, and Kimi Code, with both Instant and Thinking modes supported.
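For developers, API access follows the OpenAI-compatible pattern Moonshot has used for earlier Kimi models. The sketch below is illustrative only: the base URL, the `kimi-k2.5` model identifier, and how Thinking mode is selected are assumptions that should be checked against the platform documentation.

```python
from openai import OpenAI

# Minimal sketch, assuming Moonshot's OpenAI-compatible endpoint.
# The base URL and model identifier are assumptions; consult the
# official platform docs for exact values.
client = OpenAI(
    api_key="MOONSHOT_API_KEY",  # use an environment variable in practice
    base_url="https://api.moonshot.ai/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="kimi-k2.5",  # hypothetical id; Thinking mode may use a separate id
    messages=[
        {"role": "user", "content": "Outline a three-slide deck on Q3 results."}
    ],
)
print(response.choices[0].message.content)
```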
“We aim to turn your ideas into products and your data into insights, while minimising technical friction,” said Zhilin Yang, founder and CEO of Kimi, in a release video during the launch.
Moonshot AI said Kimi K2.5 achieves new best results across agentic, coding, and vision benchmarks, including Humanity’s Last Exam, BrowseComp, and DeepSearchQA. The company added that the model delivers performance comparable to leading systems while using significantly less compute.
The launch builds on the Kimi K2 series, which focuses on reasoning and agentic capabilities. According to the company, consumer monetisation has grown rapidly, with global paid users increasing at an average month-on-month rate of more than 170% between September and November 2025. Over the same period, API revenue increased fourfold following the release of Kimi K2 Thinking.
A key addition in Kimi K2.5 is its agent swarm capability, which allows the model to create and coordinate up to 100 sub-agents working in parallel across as many as 1,500 steps. Moonshot AI said this approach enables faster completion of complex, multi-step tasks without predefined roles or workflows. Swarm Mode is currently available on Kimi.com for Allegretto-tier users and above.
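Moonshot AI has not published the swarm’s internals, but the coordination pattern it describes, a coordinator fanning subtasks out to parallel workers and merging the results, can be sketched generically. The snippet below is a minimal illustration using Python’s asyncio; `run_subagent` is a hypothetical stand-in for a model call, not Moonshot’s actual API.

```python
import asyncio

async def run_subagent(task: str) -> str:
    # Hypothetical stand-in: a real system would call the model API
    # with the subtask prompt and return its answer.
    await asyncio.sleep(0.1)  # placeholder for model latency
    return f"result for: {task}"

async def swarm(tasks: list[str], max_parallel: int = 100) -> list[str]:
    sem = asyncio.Semaphore(max_parallel)  # cap concurrent sub-agents

    async def bounded(task: str) -> str:
        async with sem:
            return await run_subagent(task)

    # Fan out all subtasks, then gather results for the coordinator to merge.
    return await asyncio.gather(*(bounded(t) for t in tasks))

if __name__ == "__main__":
    subtasks = [f"subtask {i}" for i in range(10)]
    print(asyncio.run(swarm(subtasks)))
```

The semaphore mirrors the reported cap of 100 concurrent sub-agents; the step budget and how subtasks are generated and merged are left to the coordinator model in Moonshot’s description.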
Moonshot AI said the model is also its strongest open-source system to date for coding with vision. By reasoning directly over images and video, Kimi K2.5 can support tasks such as image-to-code replication, visual debugging, and front-end development using screenshots or screen recordings. The company said this allows users to translate visual designs into functional websites and applications.
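In OpenAI-compatible multimodal APIs, an image is typically passed as a base64 data URL inside the message content. The sketch below assumes Kimi K2.5 accepts that format for image-to-code tasks; the endpoint and model identifier are again assumptions, not confirmed details from the release.

```python
import base64
from openai import OpenAI

client = OpenAI(
    api_key="MOONSHOT_API_KEY",
    base_url="https://api.moonshot.ai/v1",  # assumed endpoint
)

# Encode a local design mockup as a base64 data URL.
with open("mockup.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="kimi-k2.5",  # hypothetical identifier
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            {"type": "text",
             "text": "Replicate this design as a single HTML/CSS page."},
        ],
    }],
)
print(response.choices[0].message.content)
```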
“We didn’t just want Kimi to code; we wanted it to have an eye for design,” Yang said.
Kimi K2.5 also supports full-stack development through Kimi Code, which integrates with terminals and IDEs, including VS Code, Cursor, and Zed, and accepts multimodal inputs such as images and videos.
Moonshot AI said its focus on efficiency stems from limited access to large-scale compute resources. “That forced us to focus on fundamental research and efficiency, developing frontier models with 1% of the resources used by major US labs,” said Yutong Zhang, president of Moonshot AI, speaking at the World Economic Forum in Davos last week.
The company said Kimi K2.5 is designed to automate tasks commonly handled by knowledge workers, including financial modelling, document formatting, and presentation creation. According to Moonshot AI, tasks such as merging large reports or converting long research papers into presentation decks can now be completed in minutes through conversational interaction.
Kimi K2.5 is also available via third-party platforms, including NVIDIA’s build.nvidia.com and Fireworks. Moonshot AI said it plans to continue expanding its work on agentic intelligence, with a focus on improving real-world task completion under time and resource constraints.