
WebMCP: why making your site agent-ready will reshape AI visibility

Written on 22/3/2026
Modified on 23/3/2026
3 min read
Thibaut Legrand

Key points of the article

  • AI agents currently navigate the web like blind users: screenshots, DOM parsing, approximate clicks. WebMCP changes that.
  • WebMCP lets a website expose structured actions directly to agents, without scraping or visual interpretation.
  • It is a W3C standard co-developed by Google and Microsoft, available as a preview in Chrome 146 since February 2026.
  • WebMCP is not classic MCP: it runs entirely client-side in the browser, with no dedicated server.
  • Two implementation paths: a declarative API (adding attributes to existing HTML forms) and an imperative API (registering tools via JavaScript).
  • The question will no longer be "can a bot read your page?" but "can an agent complete a task on your site?"
  • It is still a draft under active standardisation: stable support on Chrome and Edge is expected for the second half of 2026.

AI agents are everywhere. They answer questions, compare offers, fill out forms, and book services. To do all of this, they need to interact with websites that were never designed with them in mind.

That is where the problem starts.

Today, when an AI agent visits a page, it makes do with what it has: it takes screenshots, parses the DOM, guesses where to click, waits for the page to load, and tries again if it fails. It is slow, fragile, and breaks at the slightest layout change.

WebMCP is an attempt to fix this at the source. Not by making agents smarter, but by giving websites a way to make themselves explicitly legible to agents.

Understanding the problem: how agents navigate today

Without WebMCP, the agent guesses:
  • It takes a screenshot of the page
  • Sends the image to a vision model
  • Infers where to click and what to type
  • Retries if the action fails
  • Breaks at the slightest layout change
Slow, token-heavy, non-deterministic.

With WebMCP, the site exposes its actions:
  • The site declares its available tools
  • The agent reads the structured schemas
  • Calls each tool with the right parameters
  • Receives structured data directly
  • Works regardless of the site's visual design
Fast, reliable, independent of the visual interface.

Until 10 February 2026, an AI agent had only two ways to interact with a website.

The first is visual: the agent captures a screenshot of the page, sends it to a vision model, and tries to infer where to click, what to type, how to navigate. It is the most common method. It is also the most token-intensive, the slowest, and the least reliable.

The second is semantic: the agent parses the raw HTML, explores the accessibility tree, and tries to identify interactive elements to trigger events. More precise than screenshots, but still indirect, still dependent on the DOM structure.

Both methods share the same fundamental flaw: the agent must interpret what the site can do, rather than the site explaining it directly. That ambiguity generates errors, latency, and non-deterministic behaviour. The same agent, on the same page, can succeed nine times and fail on the tenth because an element shifted by a few pixels.

For demos, that is acceptable. For production, it is a blocker.

What WebMCP is

WebMCP is a browser-native JavaScript API that lets developers expose their website functionality as "tools" directly callable by AI agents.

Concretely: instead of waiting for an agent to guess how to book a flight on your site, you tell it explicitly: "here is the book_flight action, here are the parameters it accepts, here is what it returns". The agent no longer has to interpret the interface; it simply calls the tool.
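As a sketch only: a tool declaration of that kind might look like the object below. The book_flight name comes from the example above, but the field names follow the general MCP-style shape, not a finalised WebMCP schema, and may differ from what the spec eventually standardises.

```javascript
// Illustrative tool descriptor for the hypothetical book_flight action.
// Field names (description, inputSchema) follow MCP conventions and are
// assumptions, not the final WebMCP API.
const bookFlightTool = {
  name: "book_flight",
  description: "Book a flight for the current user.",
  inputSchema: {
    type: "object",
    properties: {
      from: { type: "string", description: "Departure airport (IATA code)" },
      to: { type: "string", description: "Arrival airport (IATA code)" },
      date: { type: "string", description: "Departure date, YYYY-MM-DD" },
    },
    required: ["from", "to", "date"],
  },
};
```

The descriptor carries everything the agent needs to call the action correctly: an identifier, a natural language description, and a machine-readable schema of the expected parameters.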

It is co-developed by Google and Microsoft, under the W3C Web Machine Learning Community Group. The official specification was published on 10 February 2026 as a draft Community Group Report. Chrome 146 is the first browser to offer a preview.

The analogy that comes up consistently is responsive design: when mobile arrived, most teams did not rebuild everything from scratch. They added breakpoints, and the site was mobile-ready. WebMCP follows the same logic: annotate your existing forms, register your key actions, and the site becomes agent-ready without a full rebuild.

WebMCP vs MCP: they are not the same thing

The confusion is common and worth clarifying.

MCP (Model Context Protocol), developed by Anthropic in late 2024, is a client-server protocol. An agent connects to a remote MCP server that exposes tools. Communication goes over the network, with an architecture separate from the user interface.

WebMCP runs entirely in the browser. No separate server, no dedicated network communication. Tools are defined in the page's own JavaScript, via the navigator.modelContext API. The browser handles the translation between the declared tools and the agent calling them.

As Patrick Brosset, an engineer on the Microsoft Edge team involved in the spec, clarifies in his article published on 23 February 2026:

"WebMCP shares MCP's API surface and conceptual model, but it is not strictly MCP."

The browser is the intermediary between the page and the agent, not an external server.

Classic MCP: client-server protocol, developed by Anthropic (Nov. 2024)

  AI agent
    ↕ network connection (JSON-RPC)
  Remote MCP server
    ↕ internal API
  Backend data / tools

  • Server hosted separately
  • Communication outside the browser
  • Ideal for backend tools and APIs

WebMCP: browser-native protocol, W3C, Google & Microsoft (Feb. 2026)

  AI agent
    ↕ navigator.modelContext
  Browser (intermediary)
    ↕ client-side JavaScript
  Tools declared in the page

  • No additional server required
  • Shares the existing user session
  • Local processing, better privacy

This architectural choice has practical consequences:

  • Tools share the existing user session in the browser (authentication, cookies, state)
  • Processing stays local, which is better for privacy
  • A single codebase covers both the human interface and the agent integration
  • But it requires an active browsing context — no pure headless mode at this stage

Two implementation approaches

WebMCP offers two ways to make a site agent-ready, complementary depending on the use case.

The declarative API: start from your existing forms

This is the simplest entry point. If you already have a booking, contact, or search form, you can expose it to AI agents by adding a few HTML attributes:

  • toolname: the tool's identifier
  • tooldescription: a natural language description of what the form does
  • toolparamdescription: the description of each field
  • toolautosubmit: if present, the agent can submit the form without an explicit human gesture

Without toolautosubmit, the browser pauses the action and waits for user confirmation. This is a permission-first approach that keeps the human in the loop for sensitive actions.
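As a sketch under those assumptions, annotating an existing form might look like the fragment below. The attribute names (toolname, tooldescription, toolparamdescription) are the ones described above, but their exact spelling and placement may change as the draft evolves.

```html
<!-- Sketch of the declarative API: an existing contact form annotated
     for agents. Attribute names follow the draft described in this
     article and may change before the spec stabilises. -->
<form action="/contact" method="post"
      toolname="send_contact_message"
      tooldescription="Send a message to the support team">
  <label for="email">Your email address</label>
  <input id="email" name="email" type="email" required
         toolparamdescription="Email address to reply to">

  <label for="message">Your message</label>
  <textarea id="message" name="message" required
            toolparamdescription="Body of the message"></textarea>

  <!-- No toolautosubmit attribute: the browser pauses and asks the
       user to confirm before the agent submits this form. -->
  <button type="submit">Send</button>
</form>
```

Note that the form keeps working exactly as before for human visitors; the annotations only add a machine-readable layer on top.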

A notable secondary benefit: WebMCP encourages you to improve the label and aria-description attributes on your forms, since Chrome uses them to build tool parameter descriptions. Sites with solid baseline accessibility have already done most of the work.

The imperative API: register tools in JavaScript

For more complex interactions, dynamic workflows, or structured data returns, the imperative API lets you register tools directly via navigator.modelContext.registerTool().

Each tool is defined by a name, a natural language description, a JSON schema describing the expected parameters, and an execute function that returns a structured result.

The point: replace 20 UI actions with a single tool call. An agent looking for a product no longer needs to type text into a field, wait for the render, parse the results, filter by category, and parse again. It calls search_products with the right parameters and gets clean JSON back.
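A minimal sketch of that pattern, assuming an MCP-style tool shape: the search_products name comes from the example above, but the schema fields, return format, and in-memory catalogue are illustrative assumptions, not the final spec.

```javascript
// Sketch of the imperative WebMCP API as described in this article.
// The exact registerTool() signature is still evolving in the draft.

// Hypothetical in-memory catalogue standing in for a real backend.
const CATALOGUE = [
  { id: 1, name: "Trail shoes", category: "shoes", price: 89 },
  { id: 2, name: "Running socks", category: "accessories", price: 12 },
  { id: 3, name: "Road shoes", category: "shoes", price: 120 },
];

// Plain function the tool delegates to, so the same logic can also
// serve the human-facing UI (one codebase, two consumers).
function searchProducts({ query, category } = {}) {
  return CATALOGUE.filter(
    (p) =>
      (!category || p.category === category) &&
      (!query || p.name.toLowerCase().includes(query.toLowerCase()))
  );
}

const searchTool = {
  name: "search_products",
  description: "Search the product catalogue by keyword and category.",
  inputSchema: {
    type: "object",
    properties: {
      query: { type: "string", description: "Free-text search terms" },
      category: { type: "string", description: "Optional category filter" },
    },
  },
  async execute(params) {
    // Return structured JSON instead of rendered HTML: one tool call
    // replaces the type-wait-parse loop described above.
    return { results: searchProducts(params) };
  },
};

// Feature-detect: the API only exists in browsers shipping the preview.
if (typeof navigator !== "undefined" && navigator.modelContext) {
  navigator.modelContext.registerTool(searchTool);
}
```

The feature-detection guard matters: on browsers without the preview, the page simply skips registration and keeps working for human visitors.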

What this changes for your visibility

This is the question that directly concerns marketing teams and acquisition managers.

Until now, AI visibility has played out on two levels. First, citation visibility: is your content referenced in ChatGPT, Perplexity, or Gemini responses? That is what GEO and content optimisation for AI citations covers. Second, retrieval visibility: are your pages among the sources the model consults when building its responses? That is where RAG and retrieval mechanisms come in.

WebMCP adds a third level: action visibility. Can your site be used by an agent acting on behalf of a user?

The distinction matters. Being cited in an AI response is passive visibility. Being the site on which the agent executes the user's requested action is active visibility. In a context where agents are starting to handle purchases, bookings, and support requests, the question of whether an agent can complete a task on your site is going to carry increasing weight.

As Thatware's analysis on WebMCP and LLM SEO puts it: the question will no longer be "can a bot read your page?" but "can an agent execute a task on it?". That distinction reshapes SEO in meaningful ways.

Level 1: citation visibility (GEO / AEO). "Is your brand mentioned in AI responses?" The AI cites your content in its responses. You are a reference source, but the user is not yet interacting with your site. Passive visibility.

Level 2: retrieval visibility (RAG). "Is your content consulted in real time to build responses?" The model visits your pages at generation time. Your structure and structured data directly influence what it retains. Content visibility.

Level 3: action visibility (WebMCP). "Can an agent complete a task on your site on behalf of a user?" The agent interacts directly with your site via structured tools. Booking, purchase, support: your site executes. Active visibility.

The connection to other layers of the agentic web

WebMCP does not work in isolation. It is part of a wider ecosystem that is taking shape.

RAG (Retrieval-Augmented Generation) is the layer that allows LLMs to retrieve external information at the moment of generating a response. It is what enables Perplexity to read your pages in real time when building its answers.

WebMCP operates downstream: once the agent has identified your site as the right destination, it needs a reliable way to interact with it. RAG helps it find you. WebMCP lets it act once it is there.

There is also a related project, agenticweb.md, a machine-readable discovery standard that would let agents discover the tools available on a site before even visiting it. Still at the proposal stage, but it complements WebMCP's logic by addressing discoverability, which is one of the current standard's key limitations.

Where standardisation stands and what the current limits are

WebMCP is still early. That needs to be stated clearly.

The spec published by the W3C Web Machine Learning Community Group is a draft Community Group Report, not a formal W3C recommendation. It is evolving: between the initial February 2026 publication and March 2026, the provideContext and clearContext methods were already removed from the API. Teams starting to experiment now should expect adjustments.

Browser support is currently limited to Chrome 146 Canary, behind a flag. Edge support is expected but not yet formalised. Firefox and Safari have not announced plans. Stable support on Chrome and Edge is anticipated for the second half of 2026.

Among the current technical limitations identified in the spec and from early adopters:

  • No native discoverability: an agent cannot know what tools a site offers without visiting it first
  • No built-in state synchronisation: no native mechanism to keep UI state and application state consistent
  • The security section of the spec is currently empty: open questions remain around exposure to CSRF, XSS attacks, and risks from combining WebMCP with other emerging APIs such as the Prompt API

On security, Silicon.fr notes in its analysis of the standard that existing attack vectors could apply in WebMCP-specific ways, and that there are currently far more questions than answers. WebMCP delegates access arbitration to the browser, which ensures backwards compatibility across versions, but does not yet resolve all risks associated with agents executing actions on behalf of users.

  • W3C status: draft Community Group Report. Not yet a formal W3C recommendation; published 10 February 2026 and still actively evolving.
  • Chrome: preview behind a flag. Available in Chrome 146 Canary via chrome://flags; not yet in the Stable, Beta, or Dev channels.
  • Edge (Microsoft): planned. Microsoft co-develops the spec at the W3C; support is announced but the date is not yet formalised.
  • Firefox: not planned. No support announcement at this stage.
  • Safari: not planned. No support announcement at this stage.
  • Stable support: expected in the second half of 2026 on Chrome and Edge.
  • Security section: empty. The spec acknowledges risks (CSRF, XSS, prompt injection) but responses are still being worked out.
  • Production use: not recommended. The API is still evolving; methods were already removed between February and March 2026.

What to take away now

WebMCP is not a standard to implement in production today. But it is a clear signal about where the web is heading.

Responsive design taught us that waiting for a standard to fully stabilise before paying attention to it often means starting several steps behind. Teams that experimented with responsive design in 2011-2012 did not have to rebuild everything in 2015 when Google started penalising non-mobile-friendly sites.

The logic is similar here. Sites that have identified their key forms, structured their main actions, and started thinking in terms of "exposable tools" will be better positioned when WebMCP reaches stable support.

Concretely, what you can do now:

  • Identify the 3 to 5 most repeated actions on your site (search, booking, contact, purchase) and think about how they would be described to an agent
  • Improve your form accessibility: correct labels, ARIA descriptions, explicitly named fields. WebMCP rewards good accessibility practices already in place
  • Follow the spec evolution via the W3C Web Machine Learning Community Group GitHub and Chrome announcements
  • Test in Chrome Canary with the WebMCP flag enabled if you want to start experimenting
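For that last step, a one-line check in the browser console tells you whether the preview API is exposed. The property name follows the draft described above and may change:

```javascript
// Quick console check: does this browser expose the WebMCP preview API?
const hasWebMCP = typeof navigator !== "undefined" && "modelContext" in navigator;
console.log(hasWebMCP ? "WebMCP API detected" : "WebMCP API not available");
```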

It is not urgent. But now is the right time to understand what is coming.

Frequently asked questions

What is WebMCP?

WebMCP is a browser-native JavaScript API that lets developers expose their website functionality as structured tools, directly callable by AI agents. It was co-developed by Google and Microsoft under the W3C Web Machine Learning Community Group, and has been available as a preview in Chrome 146 since February 2026.

How is WebMCP different from MCP?

MCP (Model Context Protocol by Anthropic) is a client-server protocol: tools are hosted on a remote server and the agent connects to it over the network. WebMCP runs entirely in the browser, client-side, with no dedicated server. The browser itself handles the translation between the tools declared by the page and the agent calling them.

Does WebMCP replace SEO or GEO?

No. WebMCP adds a third level of visibility, action visibility, alongside SEO (being found on Google) and GEO (being cited in AI answers). It does not replace either one: it complements them by letting agents interact directly with your site.

Can I use WebMCP today?

WebMCP is available as a preview in Chrome 146 Canary behind an experimental flag. The spec is still a W3C draft and is actively evolving: methods have already been removed between February and March 2026. Stable support on Chrome and Edge is expected for the second half of 2026.

Do I need a developer to implement it?

The declarative API (via HTML attributes added to existing forms) is accessible to non-developer profiles with some guidance. The imperative API, which uses JavaScript and tool registration via navigator.modelContext, requires a technical profile.

Which sites benefit most?

Primarily sites with repeated transactional flows: e-commerce, booking, customer support, content search. These are the use cases where an agent can complete an entire task on behalf of a user, such as booking a flight, creating a support ticket, or finding a product, which is exactly what WebMCP enables.

Thibaut Legrand
Co-founder - Vydera