Byte Bot
Developer Tools · February 7, 2026 · 14 min read

MCP Apps: The Developer Guide to Interactive AI Interfaces

MCP Apps launched January 26, 2026 as the first official extension to the Model Context Protocol. Tools can now return interactive dashboards, forms, and visualizations inside AI conversations. Here is how it works and how to build one.

Hunter Goram

COO & Co-Founder at Byte Bot

On January 26, 2026, the Model Context Protocol project shipped its first official extension: MCP Apps. MCP tools can now return interactive HTML interfaces -- dashboards, forms, 3D visualizations, multi-step workflows -- that render directly inside AI conversations.

The spec was jointly authored by Anthropic, OpenAI, and the maintainers of the community MCP-UI project. That last part is worth pausing on. Two direct competitors collaborated on a shared standard for interactive AI interfaces. The result works across Claude, VS Code, Goose, Postman, and MCPJam today, with ChatGPT support rolling out.

Most coverage of MCP Apps has been announcement-style: what launched, who supports it. This article is a practitioner guide. What MCP Apps actually are, how the architecture works, what the security model looks like, and how to build one.

MCP Ecosystem (Feb 2026)

  • 5,800+ MCP servers in the ecosystem
  • 97M+ SDK downloads monthly
  • 28% Fortune 500 adoption rate
  • 300+ MCP clients, and growing

Backed by: Anthropic, OpenAI, Google, Microsoft, Block, Salesforce

What Are MCP Apps?

The problem is straightforward. AI tools return text. The model formats it nicely, but some results are fundamentally visual. Analytics dashboards need charts with filtering. Configuration workflows need forms with dependent fields. Code coverage reports need flame graphs. Returning a JSON blob and asking the model to describe it is a workaround, not a solution.

MCP Apps solve this by letting tools declare interactive UI as a resource. When a tool runs, it returns both a text response (for the model) and a pointer to an HTML interface (for the user). The client fetches the HTML and renders it in a sandboxed iframe inside the conversation.

The Goose team at Block put it well: "UI stops being something your server returns and starts being something your server serves." That distinction matters. The UI is not embedded in the tool response. It is a separate resource that the client fetches and renders independently.

Launch Partners

Ten companies shipped MCP Apps on day one: Amplitude, Asana, Box, Canva, Clay, Figma, Hex, monday.com, Slack, and Salesforce. These are not demo integrations. Figma turns text into flow charts and Gantt diagrams in FigJam. Hex answers data questions with interactive charts and citations. Amplitude lets you build analytics dashboards and adjust parameters directly in the conversation.

From MCP-UI to MCP Apps

MCP Apps did not appear from nothing. In 2025, the Goose team at Block built MCP-UI, an experimental approach to returning interactive interfaces from MCP servers. It worked, but it was host-specific. An MCP-UI experience built for Goose could not run in Claude or VS Code without client-specific code.

Rather than letting competing implementations fragment the ecosystem, Anthropic, OpenAI, and the MCP-UI maintainers (Ido Salomon and Liad Yosef) collaborated on a shared standard. The result is MCP Apps: a resource-based architecture that works the same way across every supporting client.

Before: MCP-UI (2025)

  1. MCP Server: tool + embedded UI
  2. Tool Response: UI returned inline
  3. Single Client: host-specific render

Tightly coupled to host. Works in one client only.

After: MCP Apps (2026)

  1. MCP Server: tool + UI resource at ui://
  2. Tool Response: _meta.ui.resourceUri
  3. Any Client: sandboxed iframe

Decoupled. Same UI works across Claude, VS Code, Goose, and more.

Four architectural changes made this possible:

  1. Resource-based UI model. Instead of embedding HTML in tool responses, servers store UI under ui:// URIs and return pointers via _meta.ui.resourceUri.
  2. Resource discovery protocol. Servers declare resources in their capabilities and implement list/read handlers. Clients can discover available UIs before any tool is called.
  3. Content Security Policy. Servers must explicitly whitelist external domains for API calls, static assets, and embedded content.
  4. Standardized communication. UI-to-host messaging shifted from custom formats to JSON-RPC methods: ui/initialize, ui/message, and notification channels for size and context changes.
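The standardized channel is plain JSON-RPC 2.0 carried over postMessage. As a sketch, the envelope an embedded app would send to its host looks like this -- the method names come from the spec, but the params shown are illustrative:

```javascript
// Sketch of the JSON-RPC envelope a rendered app sends to its host
// over postMessage. Method names (ui/initialize, ui/message) are from
// the spec; the params here are illustrative placeholders.
function makeUiRequest(id, method, params) {
  return { jsonrpc: "2.0", id, method, params };
}

// Inside the sandboxed iframe this would be posted to the host:
// window.parent.postMessage(makeUiRequest(1, "ui/initialize", {}), "*");
const initRequest = makeUiRequest(1, "ui/initialize", {});
```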

How MCP Apps Work

The architecture combines two existing MCP primitives: tools and resources. Here is the flow:

  1. Tool declaration. A tool includes a _meta.ui.resourceUri field pointing to a ui:// resource.
  2. UI preloading. The host can fetch and render the UI resource before the tool finishes executing, enabling streaming of tool inputs to the app.
  3. Tool execution. When the tool runs, it returns three payloads: content (what the LLM sees), structuredContent (data for the UI), and _meta (metadata hidden from the model).
  4. Sandboxed rendering. The client renders the HTML in a sandboxed iframe. All communication between app and host goes through the postMessage API.
  5. Bidirectional interaction. The app can call server tools, update model context, send messages to the conversation, and open links in the user's browser.

Three Data Channels

This is the detail most coverage misses. Every MCP App tool response has three separate data channels, each serving a different audience:

  • content. What the LLM sees: a text summary of the result for model reasoning.
  • structuredContent. What the UI sees: data for rendering the interactive interface.
  • _meta. Hidden from the model: large or sensitive data exclusively for widgets.

This separation is intentional. You do not want a 50KB analytics payload cluttering the model's context window when a one-line summary is enough for reasoning. And you do not want sensitive data leaking to the model when it only needs to reach the UI.
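Concretely, a tool handler separates the three audiences in one return value. A sketch -- the channel names are from the spec, while the metric fields and helper are illustrative:

```javascript
// Sketch: one tool result, three audiences. The channel names
// (content, structuredContent, _meta) are spec'd; the payload shapes
// are illustrative.
function buildMetricsResult(period, rows) {
  return {
    // For the model: a one-line summary, not the full dataset.
    content: [{ type: "text", text: `${rows.length} metrics for ${period}` }],
    // For the UI: everything it needs to render.
    structuredContent: { period, rows },
    // Hidden from the model: widget-only metadata.
    _meta: { internalIds: rows.map((r) => r.id) },
  };
}

const result = buildMetricsResult("30d", [
  { id: "m1", name: "signups", value: 412 },
  { id: "m2", name: "churn", value: 0.031 },
]);
```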

Building Your First MCP App

Here is the practical difference. A traditional MCP tool returns text. An MCP App registers a UI resource alongside the tool and returns structured data for the interface to render.

Traditional MCP Tool

// Traditional MCP tool: text only
server.tool("get-metrics", {
  description: "Fetch dashboard metrics",
  inputSchema: {
    type: "object",
    properties: {
      period: { type: "string" }
    }
  }
}, async ({ period }) => {
  const data = await fetchMetrics(period);
  return {
    content: [{
      type: "text",
      text: JSON.stringify(data, null, 2)
    }]
  };
});
MCP App with UI

// MCP App: interactive dashboard
registerAppTool(server, "get-metrics", {
  title: "Dashboard Metrics",
  description: "Interactive metrics dashboard",
  inputSchema: {
    type: "object",
    properties: {
      period: { type: "string" }
    }
  },
  _meta: {
    ui: { resourceUri: "ui://metrics/app.html" }
  }
}, async ({ period }) => {
  const data = await fetchMetrics(period);
  return {
    // For the model: a short summary
    content: [{
      type: "text",
      text: `Metrics for ${period}`
    }],
    // For the UI served at ui://metrics/app.html: the data to render
    structuredContent: data
  };
});

The traditional tool returns raw JSON that the model must format into text. The MCP App version returns the same data but also feeds an interactive dashboard the user can filter, click, and explore.

Project Setup

The SDK is on npm. Install it alongside the standard MCP server package:

$ npm install @modelcontextprotocol/ext-apps @modelcontextprotocol/sdk

The ext-apps repository includes starter templates for React, Vue, Svelte, Preact, Solid, and vanilla JavaScript. The App class from the SDK handles postMessage communication, but it is a convenience wrapper, not a requirement. You can implement the protocol directly if you prefer.
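Implementing the protocol directly mostly means listening for postMessage events and dispatching on JSON-RPC method names. A minimal sketch -- only the method name comes from the spec; the handler registry and response shape are illustrative:

```javascript
// Sketch: handling host messages without the SDK's App wrapper.
// Only the JSON-RPC method name is from the spec; the dispatch
// registry and handler bodies are illustrative.
function dispatchHostMessage(msg, handlers) {
  if (msg.jsonrpc !== "2.0" || typeof msg.method !== "string") return undefined;
  const handler = handlers[msg.method];
  return handler ? handler(msg.params) : undefined;
}

const handlers = {
  "ui/message": (params) => ({ echoed: params.text }),
};

// In the iframe this would be wired up as:
// window.addEventListener("message", (e) => dispatchHostMessage(e.data, handlers));
const reply = dispatchHostMessage(
  { jsonrpc: "2.0", id: 2, method: "ui/message", params: { text: "hi" } },
  handlers
);
```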

Testing Locally

Three options for local development:

  • basic-host. The ext-apps repo includes a test host that renders your app at localhost:8080. Run npm start from the basic-host example.
  • MCPJam Inspector. Run npx @mcpjam/inspector@latest for a purpose-built emulator.
  • Claude with tunnel. Use npx cloudflared tunnel --url http://localhost:3001 to expose your local server, then add the URL as a custom connector in Claude settings.

15+ Working Examples

The ext-apps repository ships with real examples across categories:

  • 3D and visualization: CesiumJS globe, Three.js scenes, GLSL shader effects
  • Data exploration: Cohort heatmaps, customer segmentation scatter plots, wiki explorer
  • Business applications: SaaS scenario modeler, budget allocator
  • Media: PDF viewer, video player, sheet music notation, text-to-speech
  • Utilities: QR code generator, system monitor, speech-to-text transcription

The Security Model

Running third-party HTML inside an AI conversation creates obvious security concerns. MCP Apps addresses this with multiple layers of isolation.

The sandboxed iframe prevents apps from accessing the parent window's DOM, reading host cookies or local storage, navigating the parent page, or executing scripts in the parent context. All communication goes through the postMessage API, which the SDK abstracts.

Servers must declare a Content Security Policy for any external domains. Need to call an external API? Whitelist it in connectDomains. Loading images from a CDN? Declare it in resourceDomains. The host enforces these policies.
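As a sketch, a CSP declaration for a UI resource might look like the following. The connectDomains and resourceDomains keys are the ones described above; the exact nesting in the resource metadata is an assumption here:

```javascript
// Sketch of a server-declared CSP for a UI resource. The field names
// connectDomains / resourceDomains are described in the text; the
// nesting under a csp key is an assumption for illustration.
const uiCsp = {
  csp: {
    // Domains the app may call with fetch/XHR.
    connectDomains: ["https://api.example.com"],
    // Domains for static assets: images, fonts, scripts.
    resourceDomains: ["https://cdn.example.com"],
  },
};
```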

Context: MCP Security Today

The Zuplo State of MCP survey (Dec 2025) found that 50% of MCP builders cite security as their top challenge, 25% of servers have no authentication, and 38% say security concerns are actively blocking adoption. MCP Apps improves on this baseline with a formal CSP model and sandbox isolation, but it is still early. Audit every server you connect to.

Tool Visibility Control

A detail worth calling out: tools can restrict who triggers them. Setting visibility to ["model"] means only the AI model can call it. ["app"] means only the UI can call it, hidden from the model entirely. This matters for high-risk operations like deletions -- you might want a delete button in the UI but not let the model trigger it autonomously.
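A sketch of a UI-only delete tool -- the visibility values ["model"] and ["app"] are described above, while placing the field under _meta.ui mirrors the resourceUri example earlier but is an assumption:

```javascript
// Sketch: a destructive tool only the rendered UI may invoke.
// visibility: ["app"] hides it from the model entirely. The exact
// placement under _meta.ui is an assumption for illustration.
const deleteReportTool = {
  name: "delete-report",
  description: "Delete a saved report",
  inputSchema: { type: "object", properties: { id: { type: "string" } } },
  _meta: { ui: { visibility: ["app"] } }, // UI-triggered only
};
```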

Where MCP Apps Work Today

  • Claude (Web + Desktop): Live. Full support, launch partners active.
  • VS Code Insiders (Microsoft): Live. First AI code editor with full MCP Apps support.
  • Goose (Block): Live. Reference implementation; the team pioneered MCP-UI.
  • Postman (API Platform): Live. API testing with interactive MCP tools.
  • ChatGPT (OpenAI): Rolling out, via the OpenAI Apps SDK.
  • MCPJam (Dev Tools): Live. Inspector and emulator for development.

For developers building MCP clients that want to add MCP Apps support: the @mcp-ui/client package provides React components, or you can build on the lower-level App Bridge SDK for full control.

Should You Build One?

Good Fit Today

  • Data visualization and analytics dashboards
  • Interactive configuration forms
  • Document and media preview
  • Internal tools with a limited user base
  • Developer utilities and debugging tools

Wait For Now

  • Production-critical customer-facing workflows
  • Complex multi-page state management
  • Offline-first or low-connectivity use cases
  • Regulated industries needing audit trails
  • Apps requiring native OS-level access

Our Recommendation

We build custom AI tools for businesses. Here is how we would approach MCP Apps:

  1. Start with a visualization tool. Pick something your team already has an API for -- analytics, monitoring, reporting. Build the MCP App version. Get the mental model.
  2. Use the basic-host for development. Do not tunnel to Claude until you have the core interaction working locally. The feedback loop is faster.
  3. Separate data channels from the start. Put the model summary in content, the full dataset in structuredContent, and sensitive metadata in _meta. This is the pattern that scales.
  4. Test across multiple clients. The promise of MCP Apps is cross-client compatibility. Verify it. An app that only works in Claude is just MCP-UI with extra steps.


AI Just Got a User Interface Layer

For the past two years, AI tools have communicated through text. The model reads, the model writes, you read what it wrote. MCP Apps adds a parallel channel: interactive HTML that lives alongside the conversation, not inside it.

The standard is young. The spec is still draft. Client support varies. But the direction is clear, and the backing is unprecedented -- Anthropic, OpenAI, Google, Microsoft, Block, and the Linux Foundation are all invested. Build something small, learn the architecture, and be ready when the ecosystem matures.

The conversation window just became an app container.


About the Author

Hunter Goram

COO & Co-Founder at Byte Bot

Hunter is the COO and Co-Founder of Byte Bot, helping businesses build custom software solutions. He writes about AI, development, and technology trends.

