Sources

1577 sources collected

Since launching MCP in November 2024, adoption has been rapid: the community has built thousands of MCP servers, SDKs are available for all major programming languages, and the industry has adopted MCP as the de-facto standard for connecting agents to tools and data. Today developers routinely build agents with access to hundreds or thousands of tools across dozens of MCP servers. However, as the number of connected tools grows, loading all tool definitions upfront and passing intermediate results through the context window slows down agents and increases costs. … ## Excessive token consumption from tools makes agents less efficient As MCP usage scales, there are two common patterns that can increase agent cost and latency: 1. Tool definitions overload the context window; 2. Intermediate tool results consume additional tokens. ### 1. Tool definitions overload the context window Most MCP clients load all tool definitions upfront directly into context, exposing them to the model using a direct tool-calling syntax. These tool definitions might look like: … Tool descriptions occupy more context window space, increasing response time and costs. In cases where agents are connected to thousands of tools, they’ll need to process hundreds of thousands of tokens before reading a request. ### 2. Intermediate tool results consume additional tokens Most MCP clients allow models to directly call MCP tools. For example, you might ask your agent: "Download my meeting transcript from Google Drive and attach it to the Salesforce lead." The model will make calls like: … Every intermediate result must pass through the model. In this example, the full call transcript flows through twice. For a 2-hour sales meeting, that could mean processing an additional 50,000 tokens. Even larger documents may exceed context window limits, breaking the workflow. With large documents or complex data structures, models may be more likely to make mistakes when copying data between tool calls.

11/4/2025Updated 4/3/2026

## The Hard Truth: Security Is Still the Elephant in the Room Let's be direct: MCP in 2025 shipped fast, and security didn't always keep pace. Security researchers have documented multiple outstanding issues, and some are genuinely concerning:^10^ **Authentication gaps**: The protocol provides minimal guidance on authentication, and many implementations default to no auth at all. Session IDs in URLs violate basic security practices. Until recently, there was no official registry to verify server authenticity.^11^ **Prompt injection vulnerabilities**: Tool descriptions go straight to the AI model. Malicious actors can hide instructions in those descriptions that the AI follows without the user's knowledge.^12^ **Token storage risks**: MCP servers often store OAuth tokens for multiple services. One breach equals access to everything: your Gmail, your Drive, your CRM.^13^

12/23/2025Updated 4/6/2026

We spent the last few months working through a long list of candidate priorities. They were informed by production experience, community feedback, and the pain points that keep surfacing. We narrowed them down to the areas that matter most for 2026. ... Right now, every SEP requires full Core Maintainer review, regardless of domain. That’s a bottleneck. It slows down Working Groups that already have the expertise to evaluate proposals in their own area. The goal is to remove that bottleneck without sacrificing quality. ... Enterprises are deploying MCP and running into a predictable set of problems: audit trails, SSO-integrated auth, gateway behavior, and configuration portability. This is also the least defined of the four priorities, and that’s intentional.

3/9/2026Updated 4/7/2026

State of Model Context Protocol in Software 2026 MCP Pain We asked respondents from companies, “What obstacles are blocking or slowing MCP adoption in your organization?” and allowed them to choose all applicable options. Security concerns topped the list, and interestingly a higher percentage of respondents called … 0 20 40 60 80 100 Security concerns and requirements 64% Cost of implementation or running costs 40% Legacy system integration complexity 38% Lack of end user training 37% Proving business value 36% Data quality and availability 26% Infrastructure and compute resources 26% Network and connectivity restrictions 22% Other 1% We don’t face any barriers to MCP adoption 6% www.stacklok.com enterprise@stacklok.com 14 www.stacklok.com enterprise@stacklok.com 14 … State of Model Context Protocol in Software 2026 Near the top of the obstacles list was “legacy system integration” and “cost of implementation”. Figuring out where the hang-ups are / will be is valuable to those on an MCP adoption journey. firms are working hard to connect a broad swath of systems to

Updated 4/5/2026

Users have been cobbling together ad hoc solutions for this problem. Plug-ins. Vector databases. Retrieval systems. These Band-Aids are clever, but fragile. They don't cooperate with each other. They break when you switch providers. It's less “responsible plumbing” and more “duct tape and prayer.” … ### The Context Window Problem Developers have been trying to develop workarounds by providing relevant data as needed. Pasting in documents, providing chunks of a database, and formulating absurdly robust prompts. These fixes are great, but every LLM has what we call a context window. The window determines how many tokens a model can remember at any given time. Some of the bigger LLMs have windows that can accommodate hundreds of thousands of tokens, but users still quickly find ways to hit that wall. Bigger context windows should be the answer, right? But there's our Catch 22: The more data you provide within that window, the more fragile the entire set up becomes. If there's not enough context, the model may very well just make stuff up. If you provide too much, the model bogs down or becomes too pricey to run. ### The Patchwork Fixes The AI community wasn't content to wait for one of the big players to provide a solution. Everyone rushed to be first-to-market with an assortment of potential fixes. Custom plug-ins let the models access external tools and databases, extending their abilities beyond the frozen training data. You can see the issue here. Plug-ins designed for one platform won't work with another. Your workspace becomes siloed and fragmented, forcing you to rework your integrations if you try to switch AI providers. … Each of these approaches are useful, but they all suffer from the same weakness as any proprietary solution. The developers have to reinvent the wheel each time. Without universal integration standards, these solutions are unstable and non-transferable. AI systems need a standardized approach for context access and authentication. … ## Security, Privacy, and Governance Is MCP the holy grail of making functional AI agents—without using a bottle of Elmer's glue and some yarn to bundle the integrations together? If your internal security alarms are ringing, that just means you're thinking responsibly. Every time I read about one of these new AI applications, I cringe at the security implications. A workflow that makes it easier to move context across systems also expands your exposure surface. Such a system has to prioritize security, privacy, and governance. … ### The Governance Layer Although security and privacy are vital, use of MCP brings up some complicated questions regarding governance. We're still in the Wild West phase of AI and as it continues to evolve, we'll remain there. It can be a crap shoot determining which servers can be trusted. How can an organization of any size know where to set boundaries? How do we determine what the model is allowed to access? … - **Context Poisoning:** If a malicious actor can compromise an MCP server, they can manipulate the data flowing to the model, corrupting it. Transparency can provide visibility to the data, but it's unable to filter out tainted information. - **Overreach:** It's tempting for an organization to default to maximum connectivity. Maybe it opts togive the AI assistant far more access than is truly needed. That plants the seeds for an inevitable breakdown in governance. - **Surveillance Misuse:** The protocol has no inherent bias, but the use of it will define the outcomes. There's always a chance of abuse. In such a scenario, a malicious user could weaponize MCP to aggregate and surveil sensitive user information. - **Ecosystem Fragmentation:** There's always the possibility that MCP won't be fully adopted but cloned. MCP-like variations could fragment the landscape and cut compliance corners. Interoperability breaks down and erodes security assurances. ### Juggling Openness and Safety Therein lies the friction: The openness and flexibility of MCP leads to a more powerful ecosystem. But with that openness comes increased risk. How are servers vetted? Soon, we'll see them popping up all over the place. Some of them will be compromised. It's just the law of numbers. How can users ensure that these upstart servers won't leak, corrupt, or abuse data?

Updated 3/27/2026

Since the Model Context Protocol (MCP) was announced by Anthropic a year ago, we’ve seen immense growth in large language models (LLMs) and agentic use cases. Before MCP became the de facto agentic standard, developers building agents on top of LLMs would have to hard-code the connective tissue between the LLM and apps. Developers would need to build custom integrations between their LLM client and the apps required by an end user’s prompt. With MCP, developers can now connect directly with external data sources, so their LLM can read data from and write data to the connected applications. But there’s a breaking point where things start to fall apart. The robustness and efficacy of agentic solutions depend on the quality of the application programming interfaces (APIs) that are used by an MCP server. MCPs expose tools that are invoked by LLMs, and these tools often reference individual API endpoints. The quality of APIs, therefore, directly correlates with the LLM client’s accurate discovery and execution of user prompts.

12/19/2025Updated 3/28/2026

## 4. The Architectural Problems: Slowness, Bloat, and the Double Hop Tax Security breaches get headlines, but MCP’s architectural limitations are what quietly frustrate developers day to day. These problems don’t cause dramatic incidents. They just make everything slower, heavier, and harder to scale. Understanding them is essential to understanding why alternatives exist. The double hop tax is the most visible performance problem. Every time an AI agent wants to call a tool in MCP, the request doesn’t go directly to the tool. It makes two trips. The agent sends a JSON-RPC request to the MCP Server, the server parses it, reformats it, and forwards it to the actual tool. The tool responds, the MCP Server receives it, reformats it again, and sends it back to the agent. Visually the flow looks like this: … Context window bloat is subtler but hits just as hard in practice. When an MCP client connects to a server, it typically loads the full list of tools that server exposes, including the name, description, and JSON schema for every parameter of every tool. All of that gets injected into the LLM’s context window before the AI even starts thinking about the user’s request. A typical tool schema looks like this: … The stateful session problem becomes painful at scale. MCP’s original design assumed a persistent stateful connection between client and server, a reasonable assumption for a local development tool where one Claude Desktop instance talks to one MCP Server on the same machine. But production deployments route traffic through load balancers across many server instances: ⚡ Powered by CloudScale … When MCP sessions are stateful and the load balancer routes the next request to a different server instance, that instance has no record of the session, and things break. The workarounds, such as sticky sessions, shared Redis session stores, and distributed state management, add operational complexity and cost that teams did not anticipate when they thought they were simply adding MCP support to their stack. The MCP 2026 roadmap explicitly names this as a top priority, but it is a hard problem that was not solved at launch. The wrapper tax is the hidden infrastructure cost that accumulates over time. To expose any tool via MCP, someone has to write and maintain an MCP Server, a dedicated process that wraps the tool’s native API. For a tool with a perfectly good REST API already, the before and after looks like this: ⚡ Powered by CloudScale … That MCP Server needs to be written in Python or TypeScript, hosted somewhere, kept running, updated whenever the underlying tool’s API changes, secured against the vulnerabilities described in the next section, monitored for failures, and scaled if load increases. For a small team, this per tool overhead accumulates fast and becomes a significant ongoing maintenance burden. … This is why enterprise teams running production agentic workflows have been among the loudest voices pushing for MCP to evolve or for alternatives to be considered. ... The Security Crisis: Breaches and Real World Failures The same openness and power that made MCP attractive became its Achilles heel. As adoption scaled into production environments, a pattern familiar from the history of internet protocols repeated itself: when powerful technology moves faster than security practices, breaches follow. In March 2025, security firm Equixly published research finding command injection vulnerabilities in 43% of tested MCP implementations, with another 30% vulnerable to server side request forgery attacks and 22% allowing arbitrary file access. This was not a theoretical paper. It was a survey of real deployed servers. In April 2025, security researcher Simon Willison documented how MCP’s architecture created severe prompt injection risk. Because LLMs process tool outputs as context, a malicious MCP Server, or even a malicious message sent to a user’s WhatsApp that gets processed by an LLM, could hijack the AI’s behavior, extract private data, or execute unauthorized commands. The spec noted that there “should always be a human in the loop,” but in practice many implementations skipped this entirely. … By October 2025, JFrog Security had disclosed critical vulnerabilities in mcp-remote, an OAuth proxy used by hundreds of thousands of environments. CVE-2025-6514 was rated CVSS 9.6 and allowed remote code execution via OS commands embedded in OAuth discovery fields. CVE-2025-6515 enabled what researchers called Prompt Hijacking, where attackers exploiting predictable session IDs could intercept and redirect MCP sessions entirely. And Anthropic’s own developer debugging tool, the MCP Inspector, was found to allow unauthenticated remote code execution, turning a diagnostic tool into a potential remote shell. … Because MCP Servers are distributed via npm and PyPI without universal verification, the ecosystem is exposed to the same supply chain attacks that have plagued web development for years. Tool descriptions can also be modified after a user approves them, a technique researchers call a rug pull, meaning an LLM that was told a tool does one thing can silently be fed a new description instructing it to do something entirely different.

3/22/2026Updated 3/30/2026

- The 5 most common manual coding pain points—and how to fix them. - What a Model Context Protocol (MCP) is and why it matters. ... … **1. Authentication boilerplate:** Implementing OAuth flows or token handling for every new service or environment. **2. Data mapping:** Converting API responses into app-friendly structures. This is especially messy when integrating third-party PSPs, PIMs or ERPs. **3. Pagination and filtering:** Writing pagination logic or filtering queries for search endpoints every time from scratch. **4. Validation and error handling:** Inconsistent validations lead to hard-to-debug issues or inconsistent user experiences. **5. Webhook setup:** Manually configuring and testing webhooks for order status, inventory changes, etc., are prone to silent failures. Each of these tasks chips away at developer time and increases the risk of errors that could delay launch.

8/28/2025Updated 3/25/2026

Harald Kirschner from VSCode/GitHub shares how they systematically identified and solved the five biggest MCP pain points, transforming the developer experience from frustrating to delightful. ... "I get a lot of complaints," he admitted with a smile, before diving into what he called "the five biggest MCP pain points" that VSCode has systematically addressed. ... ## The Installation Nightmare is Over ‍ The first pain point Harald tackled was one many developers know all too well: the Byzantine process of finding and installing MCP servers. "It started kind of hacky," he explained, "with copying around JSON blobs and maybe hard-coding API keys." This wasn't just inconvenient—it was a barrier to adoption that kept many developers from even trying MCP. … ## Making API Keys Actually Secure ‍ The second major improvement addresses a security headache that has plagued MCP adoption. Many servers still require API keys, and the traditional approach meant dropping these sensitive credentials directly into JSON configuration files—a practice that made security-conscious developers cringe. VSCode now handles API keys as proper inputs, prompting users for them during installation rather than requiring manual JSON editing. This approach makes credential management both more secure and more user-friendly, eliminating one of the most common stumbling blocks for new MCP users. ## Beyond Secret Incantations: Intelligent Tool Discovery Harald's third point struck at something every MCP user has experienced: the frustration of "secret incantations" needed to invoke specific tools. Too often, developers find themselves crafting careful prompts to trigger the right functionality, treating their AI assistants more like command-line interfaces than intelligent agents. VSCode addresses this with a built-in tool picker that lets developers be explicit about which tools they want to use in any given session. But the real innovation goes deeper: developers can save effective tool combinations as reusable "chat modes," creating what Harald called "context engineering building blocks." … ## Scaling to Unlimited Tools The fourth challenge Harald addressed might seem like a luxury problem, but it's actually fundamental to MCP's future: what happens when developers accumulate dozens or hundreds of tools? Traditional approaches break down quickly under this load, forcing difficult choices about which capabilities to keep active. VSCode's latest release tackles this head-on with support for essentially unlimited tools. Harald demonstrated this by loading 171 tools simultaneously—a number that would overwhelm most systems—and then successfully running queries that automatically selected the right tools for the task. … ## Smart Context Management The final major pain point Harald discussed was context bloat—the problem of MCP servers potentially flooding conversations with excessive information from searches or other operations. This issue can quickly exhaust token limits and make conversations unwieldy. VSCode's solution leverages one of MCP's more sophisticated but underutilized capabilities: sampling. When an MCP server performs a search, it can use sampling to summarize results before presenting them to the user. This provides much more condensed, focused answers while still preserving the full context for those who need it. … ## A Pattern for the Future ... Each solution builds on the others: easy installation encourages experimentation, secure credential management removes adoption barriers, intelligent tool discovery makes daily use more pleasant, unlimited tool support enables power users, and excellent developer experience ensures the ecosystem continues to grow.

9/2/2025Updated 3/25/2026

### Challenge #1: MCP’s authorization is not ‘enterprise-friendly’ Before poking at the vulnerabilities of MCP’s current authorization specification with OAuth, let’s quickly examine the reason Anthropic introduced OAuth specifications in the first place. Originally, setting up MCP involved a 1:1 deployment of a client and an MCP server on a developer’s local machine. This worked fine for individual developers but didn’t scale to enterprise needs. Over time, the surge of MCP adoption among smaller projects created a ripple effect in the enterprise. Engineering team leaders were interested in setting up remote MCP servers, but to access data on these servers in privacy-compliant ways, they needed authorization. Anthropic responded with the first set of authorization specifications, released in March 2025. *First specifications: no separation between authentication and resource servers.* The MCP Authorization spec allowed secure access to servers using OAuth 2.1. Now, engineers could set up the protocol on a remote server, but they had new concerns. In the specifications, MCP servers were treated as both resource and authorization servers, which went against enterprise best practices, increased fragmentation, and forced developers to expose metadata discovery URLs. *The latest specification: servers are decoupled, but security issues remain.* In June, after months of active discussions on where the first authorization specifications fell short, Anthropic released an updated version that decoupled authorization and resource servers. Developers were still unhappy. For one, the revised specification leans on OAuth RFCs – a set of frameworks that grant third-party applications limited access to HTTP services, which is not widely used by identity providers. … *Aaron Palecki,* * ‘Enterprise-Ready MCP’* The problem is that MCP doesn’t integrate smoothly with these enterprise SSO systems. Parecki argues that MCP-enabled AI agents should be treated like any other enterprise application – controlled through the company’s identity management system. At the time of writing, connecting an AI agent like Claude to enterprise tools through SSO involves several frustrating steps. … 1. When the user grants appropriate OAuth permissions, they can come back to Claude and use the AI agent. This authentication by itself is inconvenient for enterprise multi-agent systems that have to connect to a wider range of applications. More importantly, in this authentication approach,* the * ***user* ** is the one granting permissions, with no visibility at the ***admin* ** level. This means there’s no one to oversee access control, and there’s a risk of unchecked interaction between mission-critical systems and unvetted third-party applications. **How enterprises solve this problem** Identity solution providers are already developing workarounds to address the limitations of MCP’s authorization. ... … ### Challenge #3: MCP’s default ‘server’ approach does not blend well with serverless architectures Over 95% of Fortune 500 companies are embedded in the Azure ecosystem that relies on serverless architectures. These infrastructures are poorly suited to MCP implementations, since Anthropic’s protocol is currently deployed as a **Docker-packaged server**. Building and managing MCP servers on top of already stable serverless architectures increases maintenance overhead and adds to infrastructure costs in the long run. … **Cold start delays** of up to 5 seconds made the system too slow for any time-sensitive workflows – imagine waiting 5 seconds every time your AI agent needed to access a tool. **Developer experience ** issues plagued the setup. As Isenberg put it, the process was “confusing, inconsistent, and far from intuitive.” There wasn’t a clear guide for how to set everything up properly. **Infrastructure complexity** meant figuring out all the pieces manually, since there was no standard Infrastructure-as-Code template to follow. **Logging problems** arose because FastAPI and FastMCP use different logging systems, and they didn’t play well with AWS Lambda’s standard monitoring tools. **Testing difficulties** required manual VS Code configuration since there weren’t any streamlined tools for testing MCP server interactions in a serverless environment. … ### Challenge #4: Tool poisoning In April 2025, Invariant Labs discovered that MCP is vulnerable to tool poisoning, a type of attack where a prompt with malicious instructions is launched at the LLM. The instructions are not visible to humans but understandable to the AI agent. Thus, a model, now armed with access to internal tools and data, can perform malicious actions, like: … He encourages engineering teams to follow MCP specifications and make sure there’s a human in the loop between the agent and the tools it uses. AI agents should also be designed with transparency in mind, which means: - Have a clear UI that clarifies which tools are exposed to AI - Provide notifications or other indicators whenever an agent invokes a service - Ask users for confirmation on mission-critical actions like data manipulation or extraction to adhere to HITL principles.

9/15/2025Updated 4/6/2026

Finally, it’s also worth calling out a technique we’ve been exploring for agentic coding: **Anchoring coding agents to a reference application** **(*Techniques/Assess*)**. It addresses the age-old problem of code drift, where the live state of an application differs from how it's defined in code. It’s easy to see how such an issue could prove particularly troublesome for AI agents — by employing an MCP server to help anchor agents to template code and commit diffs, it becomes easier for those agents to detect and mitigate drift. ## MCP risks and antipatterns As with any rapidly adopted and much–hyped technology or trend, MCP isn't without risks. The most significant is security. As one widely shared article joked, the S in MCP stands for security. The piece, by researcher Elena Cross, outlines a number of common attack vectors opened up by MCP. This includes tool poisoning, where the MCP tool contains a malicious description, silent or mutated definitions and cross-server tool shadowing, where a malicious agent intercepts calls made to one that’s trusted. She makes the point that the protocol’s focus is on simplicity and ease, not authentication and encryption. … While there are undoubtedly technical risks associated with MCP, some caution about when and where to use MCP could go a long way to mitigating many issues. For instance, we’ve noticed a rush to convert APIs to MCP servers. This is a trend that raises serious issues from both a security and efficiency perspective, which is why we’ve urged caution against what we describe as **naive API-to-MCP conversion** ** (*Techniques/Hold*)** on Technology Radar Vol.33.

12/11/2025Updated 4/1/2026

As MCPs become the backbone of agentic AI systems, the developer experience still faces key challenges. Here are some of the major hurdles: … ### Complex installations and distribution Getting started with MCP tools remains complex. Developers often have to clone repositories, wrangle conflicting dependencies in environments like Node.js or Python, and self-host local services—many of which aren’t containerized, making setup and portability even harder. On top of that, connecting MCP clients adds more friction, with each one requiring custom configuration that slows down onboarding and adoption. ### Auth and permissions fall short Many MCP tools run with full access to the host, launched via npx or uvx, with no isolation or sandboxing. Credentials are commonly passed as plaintext environment variables, exposing sensitive data and increasing the risk of leaks. Moreover, these tools often aren’t designed for scale and security. They’re missing enterprise-ready features like policy enforcement, audit logs, and standardized security.

5/5/2025Updated 3/30/2026