Ramp Builds AI Model Better than Claude Opus for Navigating Spreadsheets


Fintech company Ramp and AI infrastructure company Prime Intellect have developed Fast Ask, an AI model designed to navigate and retrieve data from business spreadsheets. 

It is based on the open-weights Qwen3.5-35B-A3B architecture and processes queries faster, and with greater exact-match accuracy, than Anthropic’s Claude Opus 4.6 model. 

“A spreadsheet agent is only as good as the information it retrieves,” the developers from Ramp wrote in a post on X, explaining that standard models frequently fail to extract necessary data on their first attempt. 

“Given a question like ‘What was the revenue from March to May?’, Fast Ask navigates the workbook, reads the relevant ranges, and returns a compact answer for the main agent to use,” Ramp stated. 
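The workflow Ramp describes can be pictured as a subagent with a very small tool surface: list the workbook's tabs, read a cell range, and compute a compact answer. The sketch below is illustrative only; the tool names, data structures, and the two-tool interface are assumptions, not Ramp's actual API.

```python
# Hypothetical sketch of a minimal spreadsheet-retrieval tool interface.
# Function names and the Workbook structure are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Workbook:
    # sheet name -> 2D grid of cell values
    sheets: dict[str, list[list[str]]]

def list_tabs(wb: Workbook) -> list[str]:
    """Tool 1: enumerate sheet names so the agent can pick a tab."""
    return list(wb.sheets)

def read_range(wb: Workbook, sheet: str,
               r0: int, r1: int, c0: int, c1: int) -> list[list[str]]:
    """Tool 2: read a rectangular cell range from one sheet."""
    return [row[c0:c1] for row in wb.sheets[sheet][r0:r1]]

# A query like "revenue from March to May" would drive a few tool calls:
wb = Workbook(sheets={"Revenue": [["Month", "USD"],
                                  ["March", "100"],
                                  ["April", "120"],
                                  ["May", "90"]]})
tabs = list_tabs(wb)                       # which tab looks relevant?
rows = read_range(wb, "Revenue", 1, 4, 0, 2)
answer = sum(int(usd) for _, usd in rows)  # compact answer for the main agent
```

The point of the narrow interface is that the subagent returns only the final value, so the main agent never has to page through raw sheet data itself.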

Prior to developing the new model, Ramp observed high exploration overhead in its production environment. “The main agent spent 17.8% of all tool calls opening tabs, reading ranges, and filtering irrelevant sheets before it had the data needed to answer,” the company stated. This frequent data scanning wastes processing tokens on irrelevant rows and increases the risk that the model will anchor on decoy information.

To resolve this inefficiency, Ramp collaborated with Prime Intellect to apply reinforcement learning (RL) post-training to a Qwen model with 3 billion active parameters. 

“This was a good fit for RL because spreadsheet retrieval is repeated often, latency sensitive, and has clean feedback,” Ramp wrote. “The model either returns the right cent amount, date, invoice ID, yes/no, or row reference, or it does not. That let us optimise the retrieval policy directly with deterministic rewards.”
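The "clean feedback" Ramp describes amounts to a binary, deterministic reward: the returned value either exactly matches the gold answer or it does not. A minimal sketch of such a reward function might look like the following; the normalization rules are assumptions, not Ramp's published implementation.

```python
# Illustrative deterministic reward of the kind Ramp describes: the answer
# (cent amount, date, invoice ID, yes/no, or row reference) is either an
# exact match or it is not. The normalization below is an assumption.
def exact_match_reward(prediction: str, gold: str) -> float:
    """Return 1.0 for an exact match after trivial normalization, else 0.0."""
    def norm(s: str) -> str:
        return s.strip().lower().lstrip("$").replace(",", "")
    return 1.0 if norm(prediction) == norm(gold) else 0.0

exact_match_reward("$1,234.56", "1234.56")  # exact match -> 1.0
exact_match_reward("1234.57", "1234.56")    # off by a cent -> 0.0, no partial credit
```

Because the reward is deterministic, every rollout can be scored automatically, which is what lets the retrieval policy be optimised directly.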

The engineers trained Fast Ask across 100 steps using synthetic business datasets designed to mimic real-world financial workflows, including revenue rollups and invoice reconciliation. 

The training environment incorporated adversarial elements such as decoy sheets and ambiguous identifiers to build robust navigation policies.
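One way to build such an adversarial environment is to generate each synthetic task with one authoritative sheet plus decoy tabs holding near-miss values under similar labels, so the policy is penalised for anchoring on the wrong sheet. The generator below is a hypothetical sketch; all names and value ranges are invented for illustration.

```python
# Hypothetical generator for an adversarial training task: one sheet holds
# the true answer, decoy sheets hold near-miss values under similar names.
import random

def make_task(seed: int) -> dict:
    rng = random.Random(seed)
    true_total = rng.randint(1_000, 9_999)
    sheets = {"Revenue 2024": [["Q1 Total", str(true_total)]]}
    # Decoy tabs with plausible names and slightly perturbed values
    for i in range(3):
        decoy = true_total + rng.randint(-500, 500)
        sheets[f"Revenue 2024 (draft {i})"] = [["Q1 Total", str(decoy)]]
    return {
        "question": "What was the Q1 total in the final 2024 revenue sheet?",
        "sheets": sheets,
        "gold": str(true_total),  # target for the deterministic reward
    }

task = make_task(0)
```

Pairing generated tasks like this with an exact-match reward gives the trainer an unlimited supply of scoreable episodes without exposing real customer data.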

The resulting Fast Ask system increased exact-match task completion accuracy by 10 percentage points compared to its base model. 

During evaluation, the subagent outperformed Claude Opus by four percentage points while operating at the much lower latency profile of Claude Haiku 4.5. The developers attributed this success chiefly to how the training environment was designed rather than to sheer computing power. 

“The important work was not in model architecture or scale but in environment design: the right tasks, a minimal tool interface, and a reward function grounded in how the product actually works in production,” Ramp reported.

This is one of the many internal AI advances Ramp has made over the past year. The company has deployed a suite of automated tools, including procurement agents and systems designed to detect AI-generated fake invoices. 

Ramp has also built Inspect, an internal background coding agent that autonomously writes, tests, verifies, and submits code changes. Ramp says Inspect now contributes a substantial share of merged pull requests internally. In March, Ramp deployed an agentic AI system to continuously monitor, triage, and propose fixes for its Ramp Sheets software, an AI-native spreadsheet editor.


Staff Writer
The AI & Data Insider team works with a staff of in-house writers and industry experts.
