Cursor has launched version 2.0, with a new multi-agent interface and its first proprietary AI model, Composer, designed for low-latency coding tasks.
Cursor 2.0 reorganises the IDE around agents rather than files, allowing users to run up to eight coding agents simultaneously.
Each agent operates in an isolated environment, using git worktrees or remote machines to prevent file conflicts.
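Cursor has not published the implementation details, but the general idea behind worktree-based isolation is easy to sketch: each agent gets its own checkout of the repository on its own branch, so edits never collide in a shared working directory. The following minimal illustration uses standard git commands from Python; the branch names, paths, and helper function are hypothetical, not Cursor's actual layout.

```python
import subprocess
from pathlib import Path

def create_agent_worktree(repo: Path, agent_id: int) -> Path:
    """Give one agent its own working directory and branch.

    Illustrative sketch only; directory and branch naming are assumptions.
    """
    worktree_dir = repo.parent / f"agent-{agent_id}"
    branch = f"agent/{agent_id}"
    # `git worktree add -b <branch> <path>` creates a new branch and checks
    # it out into a separate directory that shares the same object store,
    # so agents can edit files without clobbering each other's changes.
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(worktree_dir)],
        check=True,
    )
    return worktree_dir

# Eight agents, eight isolated checkouts of the same repository.
# worktrees = [create_agent_worktree(Path("my-repo"), i) for i in range(8)]
```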
The update also enhances agent collaboration and review processes, making it easier to monitor edits across files and test code within the interface.
“With Cursor 2.0, we’re making it simple to run many agents in parallel without them interfering with one another,” said Cursor.
Sandboxed terminals and a native browser-based testing tool are also now generally available.
Enterprises now have new administrative controls for sandboxing, which improve cloud agent reliability and user activity auditing.
Cursor 2.0 improves language server protocol (LSP) performance for Python and TypeScript by dynamically increasing memory limits for larger projects.
In addition, the interface now supports voice control, shareable team commands, and improved prompt management, reflecting Cursor’s move towards team-wide automation rather than individual code editing.
A detailed changelog is available on Cursor's website.
Alongside this, Cursor also released Composer, a mixture-of-experts (MoE) coding model trained via reinforcement learning (RL).
The company describes it as “a frontier model that is 4x faster than similarly intelligent models.”
“The model is built for low-latency agentic coding in Cursor, completing most turns in under 30 seconds. Early testers found the ability to iterate quickly with the model delightful and trust the model for multi-step coding tasks,” said Cursor.
Composer was trained in real-world environments, with access to tools such as semantic search, terminal commands, and file editing to support agentic workflows.
The company built an internal benchmark, Cursor Bench, to measure a model’s usefulness to developers, evaluating code quality, correctness, and adherence to existing abstractions.
During RL, Composer learned to optimise for speed by minimising redundant responses and parallelising tool use.
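Cursor has not described the mechanism in detail, but the pattern is straightforward to illustrate: tool calls that do not depend on each other, such as a semantic search and a test run, can be dispatched concurrently instead of one after another, so total latency approaches that of the slowest call. Below is a hedged sketch using Python's asyncio; the tool functions are placeholders, not Cursor's API.

```python
import asyncio

# Placeholder tools; a real agent would call semantic search, a terminal,
# file editing, etc. The sleeps stand in for real latency.
async def semantic_search(query: str) -> str:
    await asyncio.sleep(0.5)
    return f"results for {query!r}"

async def run_terminal(cmd: str) -> str:
    await asyncio.sleep(0.5)
    return f"output of {cmd!r}"

async def gather_context() -> list[str]:
    # Independent tool calls are awaited together, so the wall-clock cost
    # is roughly the slowest call rather than the sum of all calls.
    return await asyncio.gather(
        semantic_search("where is the auth middleware defined?"),
        run_terminal("pytest -q tests/test_auth.py"),
    )

print(asyncio.run(gather_context()))
```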
It was trained using custom infrastructure built on PyTorch and Ray, scaled across thousands of NVIDIA GPUs with MXFP8 kernels for low-precision efficiency.
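The training stack is not public beyond those details, but the general shape of such a setup can be sketched with Ray's actor API, in which rollout workers are pinned to individual GPUs and run environments in parallel while a driver process collects their results. The worker class and method below are illustrative assumptions, not Cursor's code.

```python
import ray

ray.init()  # connect to a local or cluster Ray runtime

@ray.remote(num_gpus=1)  # Ray schedules one worker per available GPU
class RolloutWorker:
    """Illustrative worker: a real RL setup would hold a model replica and
    an agentic coding environment; this placeholder just echoes the task."""

    def rollout(self, task: str) -> dict:
        # Placeholder for generating one trajectory (tool calls, edits,
        # rewards) for a coding task.
        return {"task": task, "reward": 0.0}

# Scale out by creating more workers; requires a cluster with GPUs.
workers = [RolloutWorker.remote() for _ in range(4)]
tasks = [f"task-{i}" for i in range(4)]
trajectories = ray.get([w.rollout.remote(t) for w, t in zip(workers, tasks)])
```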
“We trained Composer to make efficient choices in tool use and maximise parallelism whenever possible,” said Cursor.
The company positions Composer as optimised for agentic coding, combining long-context understanding with the responsiveness needed for interactive development.
Though models like GPT-5 and Sonnet 4.5 outperform it on some benchmarks, Cursor claims Composer offers the fastest interactive experience among current “fast frontier” coding models.
“Cursor builds tools for software engineering, and we make heavy use of the tools we develop. A motivation of Composer development has been developing an agent we would reach for in our own work,” said the company.
“In recent weeks, we have found that many of our colleagues were using Composer for their day-to-day software development. With this release, we hope that you also find it to be a valuable tool.”