On January 26, 2026, the Model Context Protocol project shipped its first official extension: MCP Apps. MCP tools can now return interactive HTML interfaces -- dashboards, forms, 3D visualizations, multi-step workflows -- that render directly inside AI conversations.
The spec was jointly authored by Anthropic, OpenAI, and the maintainers of the community MCP-UI project. That last part is worth pausing on. Two direct competitors collaborated on a shared standard for interactive AI interfaces. The result works across Claude, VS Code, Goose, Postman, and MCPJam today, with ChatGPT support rolling out.
Most coverage of MCP Apps has been announcement-style: what launched, who supports it. This article is a practitioner guide: what MCP Apps actually are, how the architecture works, what the security model looks like, and how to build one.
[Diagram: MCP Ecosystem (Feb 2026)]
What Are MCP Apps?
The problem is straightforward. AI tools return text. The model formats it nicely, but some results are fundamentally visual. Analytics dashboards need charts with filtering. Configuration workflows need forms with dependent fields. Code coverage reports need flame graphs. Returning a JSON blob and asking the model to describe it is a workaround, not a solution.
MCP Apps solve this by letting tools declare interactive UI as a resource. When a tool runs, it returns both a text response (for the model) and a pointer to an HTML interface (for the user). The client fetches the HTML and renders it in a sandboxed iframe inside the conversation.
The Goose team at Block put it well: "UI stops being something your server returns and starts being something your server serves." That distinction matters. The UI is not embedded in the tool response. It is a separate resource that the client fetches and renders independently.
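Concretely, that split can be sketched as a tool result whose text is addressed to the model while a separate field points the client at the UI resource. The shape below follows the field names used in this article (`content`, `_meta.ui.resourceUri`); the interface and tool are illustrative, not the normative spec types.

```typescript
// Sketch of an MCP App tool result: the text content is what the model
// reads, while _meta.ui.resourceUri points the client at a separately
// served HTML resource. Field names follow this article, not the full spec.
interface UiToolResult {
  content: { type: "text"; text: string }[]; // model-facing summary
  _meta: { ui: { resourceUri: string } };    // pointer the client resolves
}

// Hypothetical tool: returns a one-line summary plus a pointer to a chart UI.
function chartToolResult(summary: string): UiToolResult {
  return {
    content: [{ type: "text", text: summary }],
    _meta: { ui: { resourceUri: "ui://sales-dashboard/chart.html" } },
  };
}

const result = chartToolResult("Q4 revenue up 12% quarter over quarter.");
// The client, not the model, fetches and renders result._meta.ui.resourceUri.
```

The key design point: the HTML never travels inside this response. The client resolves the `ui://` pointer through the normal resource-read path and renders the result on its own.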
Launch Partners
Ten companies shipped MCP Apps on day one: Amplitude, Asana, Box, Canva, Clay, Figma, Hex, monday.com, Slack, and Salesforce. These are not demo integrations. Figma turns text into flow charts and Gantt diagrams in FigJam. Hex answers data questions with interactive charts and citations. Amplitude lets you build analytics dashboards and adjust parameters directly in the conversation.
From MCP-UI to MCP Apps
MCP Apps did not appear from nothing. In 2025, the Goose team at Block built MCP-UI, an experimental approach to returning interactive interfaces from MCP servers. It worked, but it was host-specific. An MCP-UI experience built for Goose could not run in Claude or VS Code without client-specific code.
Rather than letting competing implementations fragment the ecosystem, Anthropic, OpenAI, and the MCP-UI maintainers (Ido Salomon and Liad Yosef) collaborated on a shared standard. The result is MCP Apps: a resource-based architecture that works the same way across every supporting client.
[Diagram: MCP-UI vs. MCP Apps. In MCP-UI, the server embeds the UI inline in the tool response and a single client renders it with host-specific code: tightly coupled to the host, works in one client only. In MCP Apps, the server serves the UI as a resource at a ui:// URI, the tool response carries only a _meta.ui.resourceUri pointer, and any client renders it in a sandboxed iframe: decoupled, so the same UI works across Claude, VS Code, Goose, and more.]
Four architectural changes made this possible:
- Resource-based UI model. Instead of embedding HTML in tool responses, servers store UI under `ui://` URIs and return pointers via `_meta.ui.resourceUri`.
- Resource discovery protocol. Servers declare resources in their capabilities and implement list/read handlers. Clients can discover available UIs before any tool is called.
- Content Security Policy. Servers must explicitly whitelist external domains for API calls, static assets, and embedded content.
- Standardized communication. UI-to-host messaging shifted from custom formats to JSON-RPC methods: `ui/initialize`, `ui/message`, and notification channels for size and context changes.
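To make the discovery point concrete, here is a minimal in-memory sketch of the list/read pattern. The class and method names are illustrative; a real server would wire equivalent handlers into an MCP SDK's resource capabilities rather than hand-rolling a registry.

```typescript
// Illustrative sketch of resource discovery: a server keeps UI resources
// under ui:// URIs and answers list/read requests against them.
type UiResource = { uri: string; mimeType: string; text: string };

class UiResourceRegistry {
  private resources = new Map<string, UiResource>();

  register(uri: string, html: string): void {
    this.resources.set(uri, { uri, mimeType: "text/html", text: html });
  }

  // Backs a list request: URIs and types only, no payloads.
  list(): { uri: string; mimeType: string }[] {
    return Array.from(this.resources.values()).map(({ uri, mimeType }) => ({
      uri,
      mimeType,
    }));
  }

  // Backs a read request for one URI; the client calls this to fetch the HTML.
  read(uri: string): UiResource {
    const res = this.resources.get(uri);
    if (!res) throw new Error(`unknown resource: ${uri}`);
    return res;
  }
}

const registry = new UiResourceRegistry();
registry.register("ui://dashboard/main", "<html><body>Dashboard</body></html>");
const listed = registry.list(); // a client can discover this before any tool call
```

Because the UI is listed like any other resource, a client can prefetch and even pre-render it before the first tool invocation, which is what enables the preloading behavior described below.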
How MCP Apps Work
The architecture combines two existing MCP primitives: tools and resources. Here is the flow:
- Tool declaration. A tool includes a `_meta.ui.resourceUri` field pointing to a `ui://` resource.
- UI preloading. The host can fetch and render the UI resource before the tool finishes executing, enabling streaming of tool inputs to the app.
- Tool execution. When the tool runs, it returns three payloads: `content` (what the LLM sees), `structuredContent` (data for the UI), and `_meta` (metadata hidden from the model).
- Sandboxed rendering. The client renders the HTML in a sandboxed iframe. All communication between app and host goes through the `postMessage` API.
- Bidirectional interaction. The app can call server tools, update model context, send messages to the conversation, and open links in the user's browser.
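The messaging layer in the steps above is plain JSON-RPC 2.0 carried over `postMessage`. The sketch below builds the envelopes from the app side; the method names (`ui/initialize`, `ui/message`) come from this article, while the params and the helper itself are hypothetical, not an official SDK API.

```typescript
// App-side sketch: the sandboxed iframe talks to the host by posting
// JSON-RPC 2.0 messages. The envelope shape is standard JSON-RPC; the
// helper and params here are illustrative.
interface JsonRpcRequest {
  jsonrpc: "2.0";
  id: number;
  method: string;
  params?: Record<string, unknown>;
}

let nextId = 0;
function makeRequest(
  method: string,
  params?: Record<string, unknown>
): JsonRpcRequest {
  return { jsonrpc: "2.0", id: ++nextId, method, ...(params ? { params } : {}) };
}

// In a real app these would go out via window.parent.postMessage(req, origin).
const init = makeRequest("ui/initialize", { appVersion: "1.0" });
const msg = makeRequest("ui/message", { text: "User clicked Apply" });
```

Routing everything through one typed channel is what lets the host mediate: it can validate, rate-limit, or refuse any request before it touches the conversation or the server.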
Three Data Channels
This is the detail most coverage misses. Every MCP App tool response has three separate data channels, each serving a different audience:
- `content`: what the LLM sees. A text summary of the result for model reasoning.
- `structuredContent`: what the UI sees. Data for rendering the interactive interface.
- `_meta`: hidden from the model. Large or sensitive data exclusively for widgets.
This separation is intentional. You do not want a 50KB analytics payload cluttering the model's context window when a one-line summary is enough for reasoning. And you do not want sensitive data leaking to the model when it only needs to reach the UI.
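One way to keep that discipline is to build responses through a helper that forces the three inputs apart. The builder below is a hypothetical sketch (field names from this article): the model channel only ever receives a short summary string, so bulk rows cannot drift into the context window by accident.

```typescript
// Illustrative three-channel response builder. The model gets a one-line
// summary; bulk data goes to structuredContent for the UI; anything the
// model should never see travels in _meta. Names follow this article.
interface ThreeChannelResult {
  content: { type: "text"; text: string }[];   // model-visible
  structuredContent: Record<string, unknown>;  // UI-visible
  _meta: Record<string, unknown>;              // hidden from the model
}

function buildResult(
  summary: string,
  uiData: Record<string, unknown>,
  hidden: Record<string, unknown> = {}
): ThreeChannelResult {
  return {
    content: [{ type: "text", text: summary }],
    structuredContent: uiData,
    _meta: hidden,
  };
}

// 1,000 data points for the chart; the model sees only the summary line.
const rows = Array.from({ length: 1000 }, (_, i) => ({ day: i, visits: i * 3 }));
const report = buildResult(
  `Traffic report ready: ${rows.length} daily data points.`,
  { rows },                                       // stays out of model context
  { downloadToken: "opaque-widget-only-token" }   // never reaches the model
);
```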
Building Your First MCP App
Here is the practical difference. A traditional MCP tool returns text. An MCP App registers a UI resource alongside the tool and returns structured data for the interface to render.
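That difference can be sketched end to end with plain objects standing in for a real MCP SDK. Everything here is illustrative: the `ui://` URI, the counter tool, and the registry are hypothetical, but the shape (a served UI resource plus a tool result carrying all three channels and a resource pointer) is the pattern described above.

```typescript
// Sketch of an MCP App server's two registrations: a UI resource served
// at a ui:// URI, and a tool whose result points at it while carrying
// data on all three channels. Plain objects stand in for an MCP SDK.
const UI_URI = "ui://counter-app/index.html";

// The UI resource the client fetches and renders in a sandboxed iframe.
const uiResources: Record<string, string> = {
  [UI_URI]: `<!doctype html><html><body>
    <h1>Counter</h1><div id="value"></div>
    <script>/* read structuredContent, talk to the host via postMessage */</script>
  </body></html>`,
};

// The tool: text for the model, data for the UI, pointer in _meta.
function counterTool(input: { start: number }) {
  const value = input.start + 1;
  return {
    content: [{ type: "text", text: `Counter is now ${value}.` }], // model channel
    structuredContent: { value },                                  // UI channel
    _meta: { ui: { resourceUri: UI_URI } },                        // render pointer
  };
}

const toolResult = counterTool({ start: 41 });
const html = uiResources[toolResult._meta.ui.resourceUri]; // what the client renders
```

Note that the tool result stays small and text-first, so the same server degrades gracefully in a client that does not support MCP Apps: such a client simply shows the `content` summary and ignores the pointer.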

