MongoDB Releases New Offerings for Enterprise AI

MongoDB has unveiled a series of new AI-focused capabilities to help enterprises deploy and manage AI agents in production environments, positioning its database platform as a unified infrastructure layer for enterprise AI applications.

Announced at the company’s MongoDB.local London 2026 event, the new offerings include automated embedding generation, persistent agent memory, enhanced performance capabilities, and expanded deployment flexibility across cloud and on-premises environments.

The company said enterprises have historically relied on multiple disconnected systems to manage vector search, memory, embeddings and operational databases for AI applications, creating complexity and scalability challenges. MongoDB is now attempting to consolidate these functions into a single platform.

“The hardest part of running agents in production isn’t the model. It’s the data layer underneath it,” said CJ Desai, President and Chief Executive Officer of MongoDB, in a statement. “To trust an agent at scale, it has to retrieve the right context, hold memory across sessions, and operate at machine speed, wherever the enterprise needs it.”

The announcement comes as enterprises increasingly move beyond AI experimentation toward deploying AI agents for customer service, workflow automation, and enterprise decision-making. Industry demand is now shifting toward infrastructure that can reliably support real-time retrieval, memory retention and high-volume inference in production environments.

Among the key announcements was Automated Voyage AI Embeddings for MongoDB Vector Search, currently in public preview. The feature automatically generates embeddings whenever data is written or updated, allowing AI agents to retrieve more accurate and up-to-date contextual information without requiring enterprises to build separate embedding pipelines.

MongoDB said the capability significantly reduces the engineering overhead traditionally associated with semantic search infrastructure. Its Voyage AI embedding models currently rank first on the Retrieval Embedding Benchmark, according to the company.
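
The announcement does not include code, but the pattern being streamlined is familiar: today, an application typically computes an embedding client-side and passes it into an Atlas `$vectorSearch` aggregation stage, whereas automated embeddings would generate that vector server-side as data is written. A minimal sketch of the query side, with hypothetical index and field names:

```python
# Illustrative sketch of building a MongoDB Atlas $vectorSearch aggregation
# stage. The index name ("vector_index") and embedding field ("embedding")
# are assumptions for illustration. With automated embeddings, the
# queryVector would no longer need to be computed by a separate pipeline.

def build_vector_search_stage(query_vector, index="vector_index",
                              path="embedding", limit=5):
    """Return a $vectorSearch stage suitable for a pymongo aggregate() call."""
    return {
        "$vectorSearch": {
            "index": index,                # name of the Atlas Vector Search index
            "path": path,                  # document field holding the embedding
            "queryVector": query_vector,   # embedding of the user's query
            "numCandidates": limit * 20,   # wider candidate pool improves recall
            "limit": limit,                # number of documents to return
        }
    }

stage = build_vector_search_stage([0.12, -0.03, 0.88])
```

In a real application this stage would be passed to `collection.aggregate([stage])` against an Atlas cluster; the sketch only shows the query's shape.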

The company also announced the general availability of LangGraph.js Long-Term Memory Store, which enables persistent cross-conversation memory for AI agents built using JavaScript and TypeScript.
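
The idea behind a long-term memory store is that facts an agent learns are keyed by a stable namespace (for example, user and agent identifiers) rather than by conversation thread, so they survive across sessions. The following is a simplified stand-in for that pattern, not the LangGraph.js API itself; a production deployment would back the store with a MongoDB collection instead of an in-process dictionary:

```python
# Simplified illustration of cross-conversation agent memory. Values are
# namespaced by (user, agent), so a brand-new session can recover what a
# previous conversation stored. This is NOT the LangGraph.js API, only the
# pattern it implements.

class LongTermMemoryStore:
    def __init__(self):
        self._data = {}  # {namespace_tuple: {key: value}}

    def put(self, namespace: tuple, key: str, value: dict) -> None:
        """Persist a memory under a stable namespace."""
        self._data.setdefault(namespace, {})[key] = value

    def get(self, namespace: tuple, key: str):
        """Retrieve a memory; returns None if nothing was stored."""
        return self._data.get(namespace, {}).get(key)

store = LongTermMemoryStore()

# Conversation 1: the agent learns a user preference and stores it.
store.put(("user-42", "support-agent"), "prefs", {"language": "en"})

# Conversation 2, a separate session: the same namespace recovers the memory.
prefs = store.get(("user-42", "support-agent"), "prefs")
```

The key design point is that the namespace is independent of any single conversation, which is what distinguishes long-term memory from the per-thread state agents already keep.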

“When AI tools and agents produce a wrong answer, the instinct is to blame the model,” said Pablo Stern, CPO of AI and Emerging Products at MongoDB. “But the data platform is what enables the agent with the right context and memory to act correctly.”

MongoDB additionally introduced version 8.3 of its database platform, which it claims delivers up to 45% higher read throughput, 35% higher write throughput, and 30% faster complex operations compared with MongoDB 8.0, without requiring application-level code changes.

“The requirements of enterprises running AI at scale are what we build for. MongoDB 8.3 makes agent workloads faster and cheaper to run on infrastructure customers already have,” said Ben Cefalo, CPO of Core Products.

MongoDB said its platform supports deployment across AWS, Google Cloud, Microsoft Azure, hybrid environments and on-premises infrastructure.

As part of the update, MongoDB announced the general availability of cross-region connectivity support for AWS PrivateLink, enabling database traffic between MongoDB Atlas clusters across AWS regions to remain entirely within AWS’ private network infrastructure.

Staff Writer
The AI & Data Insider team works with a staff of in-house writers and industry experts.
