Chrome Brings Built-In Support for Autonomous Web Agents with WebMCP

Google has introduced Web Model Context Protocol (WebMCP) in early preview with Chrome 146, marking a step towards making websites natively accessible to AI agents.

WebMCP is a proposed web standard that allows websites to expose structured, machine-readable “tools” directly to in-browser AI agents, replacing fragile techniques such as screen scraping and Document Object Model (DOM) inference.

“WebMCP bridges the gap between web applications and AI agents by providing a contract for interaction,” Google wrote in the early preview design document.

Websites can publish the capabilities they support—such as checkout flows, form submissions, or diagnostic actions—using either JavaScript APIs or declarative HTML annotations. 

These definitions include JSON Schemas for inputs and outputs, reducing errors and hallucinations during agent execution.

In Chrome 146, WebMCP is available behind an experimental flag and requires manual activation. The early preview supports both imperative tool registration through the navigator.modelContext API and a declarative approach that turns annotated HTML forms into callable tools for agents.
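
Based on that description, an imperative registration might look roughly like the sketch below. The registerTool method name, option fields, and result format shown here are illustrative assumptions rather than confirmed details of the early-preview API, and the cart logic is a stand-in for a site's own code.

```javascript
// Sketch of imperative WebMCP tool registration. The registerTool method name,
// option names, and result shape are assumptions; the shipped API may differ.

// Stand-in for the site's own cart logic (hypothetical).
async function addToCart(productId, quantity) {
  return { total: quantity }; // placeholder result
}

if ('modelContext' in navigator) {
  navigator.modelContext.registerTool({
    name: 'add_to_cart',
    description: 'Adds a product to the shopping cart by product ID.',
    // JSON Schema describing the tool's inputs, as the proposal describes.
    inputSchema: {
      type: 'object',
      properties: {
        productId: { type: 'string', description: 'SKU of the product to add' },
        quantity: { type: 'integer', minimum: 1, default: 1 }
      },
      required: ['productId']
    },
    // Invoked by the in-browser agent with arguments matching the schema.
    async execute({ productId, quantity = 1 }) {
      await addToCart(productId, quantity);
      // Result format mirrors Model Context Protocol tool results; the exact
      // shape expected by the browser is an assumption here.
      return { content: [{ type: 'text', text: `Added ${quantity} x ${productId} to the cart.` }] };
    }
  });
}
```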

Google also outlined several examples illustrating how WebMCP could change agent interactions with real-world websites.

In one example, an airline website could expose a book_flight tool, allowing an AI agent to submit structured passenger and travel data directly. 

Instead of attempting to interpret calendar widgets or human-designed form layouts, the agent can invoke a clearly defined function with validated inputs.
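
As an illustration of that example, the descriptor below sketches what a book_flight tool's input contract might look like. The field names, schema, and bookFlight stub are invented for illustration and are not taken from the airline scenario itself.

```javascript
// Hypothetical contract for the book_flight example; field names and the
// bookFlight stub are invented for illustration.
async function bookFlight(details) {
  return { confirmation: 'ABC123' }; // stand-in for the airline's booking flow
}

const bookFlightTool = {
  name: 'book_flight',
  description: 'Books a flight using structured passenger and travel details.',
  inputSchema: {
    type: 'object',
    properties: {
      origin:        { type: 'string', description: 'Departure airport code, e.g. SFO' },
      destination:   { type: 'string', description: 'Arrival airport code, e.g. JFK' },
      departureDate: { type: 'string', format: 'date' },
      returnDate:    { type: 'string', format: 'date' },
      passenger: {
        type: 'object',
        properties: {
          fullLegalName: { type: 'string' },
          dateOfBirth:   { type: 'string', format: 'date' }
        },
        required: ['fullLegalName', 'dateOfBirth']
      }
    },
    required: ['origin', 'destination', 'departureDate', 'passenger']
  },
  // The agent calls this with validated inputs instead of driving calendar
  // widgets or human-designed form layouts.
  async execute(details) {
    const { confirmation } = await bookFlight(details);
    return { content: [{ type: 'text', text: `Booked. Confirmation: ${confirmation}` }] };
  }
};
// Registered via navigator.modelContext in the same way as the earlier sketch.
```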

Google also pointed to complex medical and legal portals, where forms can be annotated as a submit_application tool. 

This allows agents to map user data accurately to required fields—such as distinguishing between a full legal name and separate first and last name fields—reducing repeated errors and failed submissions.
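
A schema along these lines could encode that distinction directly; the field names below are illustrative assumptions, not part of the proposal.

```javascript
// Hypothetical inputSchema for a submit_application tool. The per-field
// descriptions make the difference between a full legal name and separate
// first/last name fields explicit, so an agent can map user data correctly.
const submitApplicationInputSchema = {
  type: 'object',
  properties: {
    fullLegalName: {
      type: 'string',
      description: 'Complete legal name exactly as it appears on government-issued ID.'
    },
    firstName: { type: 'string', description: 'Given name only, used for correspondence.' },
    lastName:  { type: 'string', description: 'Family name only, used for correspondence.' },
    dateOfBirth: { type: 'string', format: 'date' }
  },
  required: ['fullLegalName', 'firstName', 'lastName', 'dateOfBirth']
};
```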

In another scenario, a developer settings page could expose a run_diagnostics tool, enabling an agent to trigger troubleshooting actions that are typically buried behind multiple nested menus. This allows agents to perform advanced maintenance tasks without relying on brittle UI automation.
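
A minimal sketch of such a tool might look like the following, with the available checks and the runCheck stub invented for illustration; registration would follow the same navigator.modelContext pattern shown earlier.

```javascript
// Hypothetical run_diagnostics tool descriptor; checks and runCheck are invented.
async function runCheck(check) {
  return { summary: `${check} check passed` }; // stand-in for the site's diagnostics module
}

const runDiagnosticsTool = {
  name: 'run_diagnostics',
  description: 'Runs a named diagnostic check without navigating nested settings menus.',
  inputSchema: {
    type: 'object',
    properties: {
      check: {
        type: 'string',
        enum: ['connectivity', 'cache', 'api_access'],
        description: 'Which diagnostic to run.'
      }
    },
    required: ['check']
  },
  async execute({ check }) {
    const report = await runCheck(check);
    return { content: [{ type: 'text', text: report.summary }] };
  }
};
```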

These examples demonstrate how WebMCP enables websites to explicitly publish their capabilities, turning user interfaces into reliable, machine-readable interaction surfaces for AI agents.

The release drew immediate attention from developers working on browser automation and AI agents.

One user on X said, “Today, browsing agents have to implement an MCP layer to operate the browser via the Chrome DevTools Protocol and/or browsing automation frameworks like Playwright. Soon they will be able to directly operate Chrome, which will cut out one annoying piece of glue work.”

Another developer highlighted the implications for modern web apps, particularly single-page applications. “Pretty excited for this standard,” they wrote. 

“Browser agents can interact with [single-page applications] as if they were application programming interfaces. Feels like this will unlock some new [human-computer interaction] paradigms, especially when bundled with agentic browsers that watch, learn, and work ahead.”
