A product leader's guide to MCP

Published on April 8, 2025

Anthropic introduced the Model Context Protocol (MCP), essentially creating a "USB-C for AI" that standardizes how large language models connect to external tools and data sources. And while this might sound like yet another developer framework, it represents an evolution in how AI can be integrated into products and services.

As we all move from impressive AI demos to embedded enterprise applications, the challenge has always been connecting models to the systems where real data and services live. Every company has been building custom, one-off integrations that don't scale or generalize. Well, MCP aims to solve this fragmentation problem once and for all.

The problem MCP solves

Even the most powerful language models like Claude and GPT-4 have been limited: trapped behind information barriers, with no direct access to fresh data or proprietary systems, and no way to take meaningful actions without custom integration work.

This has led to our current state where AI features feel impressive but limited. Your customer support AI might generate wonderful responses but can't access your latest product documentation. Your coding assistant doesn't know about your internal APIs. Your dashboard AI can analyze data but can't run queries to fetch updated information.

MCP tackles this problem head-on by creating a standardized way for AI to safely interact with external systems, whether that's databases, APIs, documentation, or tools.

How MCP works

At its core, MCP follows a client-server architecture specifically designed for AI systems:

MCP Servers wrap specific data sources or capabilities (like a database, Slack integration, or web browsing tool) and expose them through a consistent interface. Each server knows how to handle requests and return results in a standardized format.

MCP Clients (within host applications) maintain connections to these servers and relay the AI model's requests. The client can run anywhere an LLM is being used, whether that's a chat interface, code IDE, or other custom application.

The protocol defines standard formats for messages, requests, and responses, enabling any client to communicate with any server. This is similar to how HTTP allows any web browser to communicate with any website.
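
Under the hood, these messages are JSON-RPC 2.0. As a rough sketch of the wire format (the tool name and arguments are invented for illustration), a client asking a server to run a tool, and the server's reply, look like this:

// Hypothetical client-to-server request asking the server to run a tool
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "query_database",                      // illustrative tool name
    arguments: { sql: "SELECT count(*) FROM orders" }
  }
};

// The server's reply echoes the request id and returns structured content
const toolCallResponse = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "42" }]
  }
};

Any client that can produce the first shape can talk to any server that produces the second, which is exactly the browser-and-website dynamic described above.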

The key elements that make MCP powerful for product teams include:

  1. Resources (Read-only context): Pieces of data a server can securely expose to an AI (file contents, database records, etc.). Resources are typically controlled by the application or user, ensuring sensitive data isn't accessed unless authorized.

  2. Tools (Invokable actions): Functions an AI can request a server to perform, like "send an email," "query a database," or "browse a URL." Each tool is defined with a name, description, and input schema that constrains what the AI can do.

  3. Prompts (Your reusable workflows): Predefined templates or scripts that guide the AI through multi-step interactions. These make complex sequences reusable across sessions.

  4. Roots (Context boundaries): Scope limitations that define where servers should operate, helping to sandbox the AI's access and keep it focused on relevant data.

The beauty of this approach is its modularity: you can add, remove, or update tools without changing the underlying AI model or other components. It's also platform-agnostic, meaning it works with Claude, GPT-4, open-source models, or any other AI system that implements the protocol.
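
To make the first of those elements concrete, here's a minimal sketch of a server exposing a single read-only resource with the TypeScript SDK. The connector name, URI scheme, and changelog content are invented for the example:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "docs-connector", version: "1.0.0" },
  { capabilities: { resources: {} } }
);

// Advertise which resources exist (read-only context, not actions)
server.setRequestHandler(ListResourcesRequestSchema, async () => ({
  resources: [{
    uri: "docs://product/changelog",   // invented URI scheme
    name: "Product changelog",
    mimeType: "text/markdown"
  }]
}));

// Return the contents when the host application requests them
server.setRequestHandler(ReadResourceRequestSchema, async (request) => ({
  contents: [{
    uri: request.params.uri,
    mimeType: "text/markdown",
    text: "## v2.4\n- Added CSV export"
  }]
}));

Tools follow the same pattern with their own request schemas; a fuller example appears in the getting-started section below.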

MCP vs. other approaches

To understand why MCP represents an evolution in AI integration, let's compare it to existing approaches:

OpenAI's Function Calling: While powerful, function calls are proprietary to OpenAI's ecosystem. They're defined within your application code and aren't easily portable or reusable across different AI models. OpenAI's plugins were a step toward standardization but remained limited to their platform.

LangChain Framework: It’s a popular choice for building AI applications and offers similar capabilities but as a programming framework rather than a protocol. It's excellent for prototyping but doesn't solve the cross-application interoperability problem. Two apps using LangChain don't automatically share tools unless you physically share code.

AutoGPT-style agents: These early autonomous agent experiments demonstrated the potential of tool-using AI but lacked the standardization and security controls we need in production environments. They were more proof-of-concept than production-ready architecture.

MCP distinguishes itself by being:

  • An open protocol anyone can implement (not tied to one vendor)
  • Security-focused by design (with fine-grained access control)
  • Modular and extensible (tools are standalone services)
  • Cross-platform (works with any AI model that supports it)

What MCP means for product leaders

For product managers and tech leaders, MCP represents a step forward in how AI features can be designed and implemented. Let’s look at some examples:

Embedded intelligence with live data

With MCP, your AI features can access up-to-date information without users manually uploading files or copying data. Imagine a customer support chatbot that can query your billing database in real-time and then email transaction histories via an email connector, all within a single conversation.

This changes your product design approach since AI features become action-oriented and context-aware, truly solving user problems end-to-end rather than simply providing information.

Multi-tool workflows

MCP makes it natural to design experiences where AI orchestrates multiple tools to accomplish complex tasks. Instead of building hardcoded automation workflows, you can think in terms of outcomes and trust the AI to chain the right tools together.

For example, a productivity assistant might take a request like "Analyze last quarter's sales data, generate a summary presentation, and share it with my team." Behind the scenes, the AI could fetch data from your CRM, generate visualizations, create slides, and post to Slack - all through separate MCP connectors, without you having to code that specific sequence.

Security, compliance, and control

For enterprise applications, MCP offers granular security control. Instead of giving an AI broad system access, you expose only specific actions through MCP servers. The model can only call those explicitly defined functions.

This containment aligns with the principle of least privilege and makes security reviews more straightforward. Your security team can audit the MCP server code (which is usually simple and focused) and approve it, knowing the AI cannot exceed those boundaries.

The human-in-the-loop options (like requiring user approval for certain actions) provide additional safety guardrails. If an AI attempts an unusual operation, the system can require explicit confirmation before proceeding.
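
What that guardrail can look like in practice is a thin gate in the host application, in front of the MCP client. The sketch below is hypothetical, not an SDK feature: confirmWithUser stands in for whatever confirmation UI your product already has, and the set of sensitive tools is an invented allow-list:

// Hypothetical host-side guardrail: require explicit user approval
// before forwarding sensitive tool calls to an MCP server.
const SENSITIVE_TOOLS = new Set(["send_email", "delete_record"]);

async function callToolWithApproval(
  client: { callTool(name: string, args: object): Promise<unknown> },
  name: string,
  args: object,
  confirmWithUser: (message: string) => Promise<boolean>
): Promise<unknown> {
  if (SENSITIVE_TOOLS.has(name)) {
    const approved = await confirmWithUser(
      `The assistant wants to run "${name}" with ${JSON.stringify(args)}. Allow?`
    );
    if (!approved) {
      // Tell the model the action was declined rather than failing silently
      return { content: [{ type: "text", text: "Action cancelled by the user." }] };
    }
  }
  return client.callTool(name, args);
}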

Vendor flexibility

MCP also decouples your tool ecosystem from any specific AI provider. Your team can switch between models (perhaps starting with Anthropic's Claude but later moving to a fine-tuned open-source model) without losing your integrations.

This reduces your vendor lock-in and gives you leverage to choose the AI that best fits your needs, knowing your connector layer remains compatible. It also simplifies supporting multiple models simultaneously for different use cases.

Development speed via an open ecosystem

Building with MCP means leveraging community-built connectors rather than reinventing the wheel for each project. Need your AI to browse web pages? Plug in the existing Puppeteer MCP server. Want GitHub integration? Just use the GitHub MCP server.

As the community grows, we're seeing a rich library of MCP servers emerge (from database connectors to third-party API wrappers). Zapier's MCP Beta alone provides access to over 7,000 apps and 30,000 actions through a single integration.

This marketplace effect accelerates development, allowing your team to focus on building unique features rather than rebuilding common integrations.

Getting started with MCP servers

Okay, let’s talk actions now. If you're considering bringing MCP into your product strategy, here's a practical roadmap you can adapt to your needs.

1. Have you identified your integration points?

Start by mapping where your users would benefit from AI having access to data or tools. Common starting points include:

  • Internal knowledge bases and documentation
  • Customer/user data systems
  • Communication tools (email, chat, etc.)
  • Project management systems
  • Code repositories or development environments

2. Have you evaluated existing connectors?

Check the growing ecosystem of open-source MCP servers to see what's already built. Anthropic maintains a repository of connectors, and community contributions are expanding this library rapidly.

For standard services like Google Workspace, Slack, GitHub, and databases, you may find production-ready connectors you can deploy immediately.
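
As an illustration of how little wiring a prebuilt connector can need: Claude Desktop, for example, launches MCP servers listed in its claude_desktop_config.json file. A hedged sketch (the access token is a placeholder, and package names may change as the ecosystem evolves):

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "<your-token>" }
    }
  }
}

With that entry in place, the host spawns the connector and the AI can use the GitHub tools it exposes without you writing any integration code.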

3. Time to build your first custom connector

For systems unique to your organization, you'll want to create custom MCP servers. The TypeScript and Python SDKs make this surprisingly straightforward:

  • Define the resources your connector will expose (what data can be read)
  • Specify the tools it will provide (what actions can be performed)
  • Implement the handlers for those tools with appropriate security checks
  • Deploy the server in your environment (locally, in your cloud, etc.)

Here's a simplified example of what an MCP server might look like in TypeScript, using the official SDK (databaseClient below is a placeholder for your own data access layer):

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { ListToolsRequestSchema, CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-product-connector", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Define a tool to fetch user information
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [{
    name: "get_user_profile",
    description: "Fetch a user's profile information",
    inputSchema: {
      type: "object",
      properties: {
        userId: { type: "string", description: "ID of the user to fetch" }
      },
      required: ["userId"]
    }
  }]
}));

// Implement the tool
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "get_user_profile") {
    // Add authentication and validation here; databaseClient is a
    // placeholder for your own data access layer
    const { userId } = request.params.arguments as { userId: string };
    const userProfile = await databaseClient.fetchUser(userId);
    return { content: [{ type: "text", text: JSON.stringify(userProfile) }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// Connect over stdio so an MCP client (like Claude Desktop) can talk to it
const transport = new StdioServerTransport();
await server.connect(transport);

4. Don’t skimp on the AI experience, design it

With your connectors in place, design how the AI will interact with users and when it should access tools:

  • Will tool use be automatic or require user approval?
  • What conversational patterns will trigger tool use?
  • How will you communicate to users what actions the AI is taking?
  • What error handling is needed when tools fail? (one approach is sketched after this list)
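
On that last question, MCP lets a tool report failure as part of its result (via an isError flag) so the model can read what went wrong and adjust, rather than the whole exchange breaking. A minimal sketch, assuming a server set up as in the earlier example; generateReport stands in for your real tool logic:

import { CallToolRequestSchema } from "@modelcontextprotocol/sdk/types.js";

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  try {
    // generateReport is a hypothetical stand-in for your tool's work
    const report = await generateReport(request.params.arguments);
    return { content: [{ type: "text", text: report }] };
  } catch (err) {
    // Surface the failure to the model instead of crashing the request
    return {
      isError: true,
      content: [{ type: "text", text: `Report generation failed: ${err}` }]
    };
  }
});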

5. Start small, then expand

Begin with focused, high-value use cases where tool access clearly enhances the user experience (the “user” being your own team, if you’re working on internal tools, or your customers, if you’re working on your product). As you gain confidence in your implementation, gradually expand the range of tools available to your AI.

💡 Consider a phased approach:

  • Phase 1: Read-only access to non-sensitive resources
  • Phase 2: Interactive tools with user approval
  • Phase 3: More autonomous operation for trusted workflows

MCP in action

Several early adopters are already demonstrating MCP's potential. Anthropic's Claude Desktop app, for instance, uses MCP to let Claude access files on your computer without uploading them to the cloud, enabling secure, private interactions with local documents.

Zapier is another early bird. Their implementation turns their entire automation platform into MCP tools, instantly giving AI access to thousands of apps through one integration.

Replit's Ghostwriter, the AI assistant in its cloud IDE, uses MCP to run code, read files, or search documentation, enabling natural language commands like "Summarize this video and save to summary.txt."

And companies like Block (Square) are using MCP to connect AI to internal knowledge bases, allowing their AI assistants to retrieve relevant information without broad system access. Something you almost certainly want from a financial services provider.

The future of AI integration?

As MCP and similar approaches gain traction, we're likely to see several transformative developments. First of all, expect formal directories of MCP servers to emerge, where you can discover and deploy connectors for popular services.

AI will also maintain coherent context as it moves between different tools and datasets, creating better experiences in our apps. Some tools might even wrap other AI models with specific skills, creating networks of collaborating, specialized AIs.

And when it comes to advanced AIs, they might learn to identify capability gaps and propose or even implement new connectors, accelerating ecosystem growth. 

I mean, who knows where this will end? The more reliably AI performs tasks across systems, the more we will see new product categories like "AI Ops" assistants or "AI Project Managers" that coordinate work across multiple tools and reshape how we use computers.

Preparing your strategy

The Model Context Protocol (MCP) breathes new life into how AI is integrated into products and services. It provides a secure, modular, and standardized way for AI to interact with the world, and it lays the groundwork for the next level of usefulness for language models.

For product leaders, this means 📣

  1. Rethinking AI features - move beyond "AI answers questions" to "AI accomplishes tasks"

  2. Planning for modularity - design systems where components can be swapped or upgraded independently

  3. Prioritizing security by design - use MCP's containment model to implement least-privilege access (!)

  4. Building for an ecosystem - consider how your tools might be shared or reused across applications

  5. Focusing on outcomes - let AI handle the mechanics while you design for user goals

The companies that embrace this architectural change early will gain an advantage. They'll deliver more capable AI features faster, with better security and flexibility than competitors who continue building custom, siloed integrations.

MCP signals the maturation of AI from impressive technology to a practical, embedded tool. The question for product teams is no longer "how do we add AI?" but "what problems can our AI solve now that it can truly interact with our systems?" That change in perspective opens up entirely new product possibilities, and the time to start exploring them is now.
