Sources
1577 sources collected
commercetools.com
Developer MCP: Shifting Dev Work From Execution to Strategy. The 5 most common manual coding pain points—and how to fix them. What a Model Context Protocol (MCP) is and why it matters. …
**1. Authentication boilerplate:** Implementing OAuth flows or token handling for every new service or environment.
**2. Data mapping:** Converting API responses into app-friendly structures. This is especially messy when integrating third-party PSPs, PIMs, or ERPs.
**3. Pagination and filtering:** Writing pagination or filtering logic for search endpoints from scratch every time.
**4. Validation and error handling:** Inconsistent validation leads to hard-to-debug issues and inconsistent user experiences.
**5. Webhook setup:** Manually configuring and testing webhooks for order status, inventory changes, etc., is prone to silent failures.
Each of these tasks chips away at developer time and increases the risk of errors that could delay launch. (A sketch of this boilerplate follows below.)
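To make these pain points concrete, here is a minimal Python sketch of the kind of boilerplate the excerpt describes. The `/orders` endpoint, `nextCursor` field, and response shape are hypothetical, not any particular vendor's API.

```python
import requests

def fetch_all_orders(base_url: str, token: str) -> list[dict]:
    """Hypothetical cursor-paginated fetch: the boilerplate every new
    integration tends to re-implement (auth header, paging loop, mapping)."""
    orders, cursor = [], None
    while True:
        params = {"limit": 100}
        if cursor:
            params["cursor"] = cursor
        resp = requests.get(
            f"{base_url}/orders",
            headers={"Authorization": f"Bearer {token}"},  # pain point 1: auth boilerplate
            params=params,
            timeout=10,
        )
        resp.raise_for_status()  # pain point 4: error handling, often inconsistent
        payload = resp.json()
        # pain point 2: mapping the provider's shape into an app-friendly one
        orders.extend({"id": o["id"], "total": o["totalPrice"]} for o in payload["results"])
        cursor = payload.get("nextCursor")  # pain point 3: pagination, rewritten per API
        if not cursor:
            return orders
```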
fastmcp.mintlify.app
FastMCP Updates
FastMCP 2.13 “Cache Me If You Can” represents a fundamental maturation of the framework. After months of community feedback on authentication and state management, this release delivers the infrastructure FastMCP needs to handle production workloads: persistent storage, response caching, and pragmatic OAuth improvements that reflect real-world deployment challenges. 💾 … The new consent screen prevents confused-deputy and authorization-bypass attacks discovered in earlier versions, while the OAuth proxy now issues its own tokens with automatic key derivation. RFC 7662 token introspection support enables enterprise auth flows, and path-prefix mounting enables OAuth-protected servers to integrate into existing web applications. ...
⚡ **Response Caching Middleware** dramatically improves performance for expensive operations, while **Server lifespans** provide proper initialization and cleanup hooks that run once per server instance instead of once per client session.
✨ **Developer experience improvements** include Pydantic input validation, icon support, RFC 6570 query parameters for resource templates, improved Context API methods, and async file/directory resources. ...
**Sampling API Fallback** tackles adoption challenges by letting servers generate completions server-side when clients don’t support the feature, encouraging innovation while maintaining compatibility. ...
**Elicitation Support** enables dynamic server-client communication and “human-in-the-loop” workflows, allowing servers to request additional information during execution.
📊 **Output Schemas** provide structured outputs for tools, making results more predictable and easier to parse programmatically.
🛠️ **Enhanced HTTP Routing** with OpenAPI extensions support and configurable algorithms for more flexible API integration.
This release includes a breaking change to ... FastMCP 2.9 ... …
*MCP Middleware* brings a flexible middleware system for intercepting and controlling server operations - think authentication, logging, rate limiting, and custom business logic without touching core protocol code.
✨ *Server-side type conversion* for prompts solves a major developer pain point: while MCP requires string arguments, your functions can now work with native Python types like lists and dictionaries, with automatic conversion handling the complexity.
These features transform FastMCP from a simple protocol implementation into a powerful framework for building sophisticated MCP applications. Combined with the new
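As a rough illustration of the structured-output feature mentioned above, here is a minimal FastMCP server sketch. The server name, tool, and dataclass are invented, and the exact schema-generation behavior may vary between FastMCP versions.

```python
# Minimal FastMCP server sketch (all names are illustrative).
# A typed return value is the mechanism FastMCP uses to derive a
# structured output schema for the tool.
from dataclasses import dataclass
from fastmcp import FastMCP

mcp = FastMCP("inventory-demo")

@dataclass
class StockLevel:
    sku: str
    available: int

@mcp.tool
def check_stock(sku: str) -> StockLevel:
    """Return the stock level for a SKU (stubbed data for the sketch)."""
    return StockLevel(sku=sku, available=42)

if __name__ == "__main__":
    mcp.run()  # defaults to the stdio transport
```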
gofastmcp.com
FastMCP Updates
Two community-contributed fixes: auth headers from MCP transport no longer leak through to downstream OpenAPI APIs, and background task workers now correctly receive the originating request ID. Plus a new docs example for context-aware tool factories. ...
First patch after 3.0 — mostly smoothing out rough edges discovered in the wild. The big ones: middleware state that wasn’t surviving the trip to tool handlers now does, `Tool.from_tool()` accepts callables again, OpenAPI schemas with circular references no longer crash discovery, and decorator overloads now return the correct types in function mode. … ...
FastMCP 3 RC1 means we believe the API is stable. Beta 2 drew a wave of real-world adoption — production deployments, migration reports, integration testing — and the feedback overwhelmingly confirmed that the architecture works. This release closes gaps that surfaced under load: auth flows that needed to be async, background tasks that needed reliable notification delivery, and APIs still carrying beta-era naming. If nothing unexpected surfaces, this is what 3.0.0 looks like.
🚨 **Breaking Changes** — The `ui=` parameter is now `app=` with a unified `AppConfig` class, and 16 `FastMCP()` constructor kwargs have been removed after months of deprecation warnings.
🔐 **Auth Improvements** — Async `auth=` checks, Static Client Registration for servers without DCR, and declarative Azure OBO flows via dependency injection. ...
v2.14.4 backported `dereference_refs()` but never wired it into the tool schema pipeline — `$ref` and `$defs` were still sent to MCP clients. Now fixed: schemas are fully inlined before reaching clients. FastMCP 2.14.5 ...
Sometimes five seconds just isn’t enough. This release fixes an HTTP transport bug that was cutting connections short, along with OAuth and Redis fixes, better ASGI support, and CLI update notifications so you never miss a beat.
⏱️ **HTTP transport timeout fix** restores MCP’s 30-second default connect timeout, which was incorrectly defaulting to 5 seconds.
🔧 **Infrastructure fixes** including OAuth token storage TTL, Redis key prefixing for ACL isolation, and ContextVar propagation for ASGI-mounted servers with background tasks. … ...
FastMCP 2.14 begins adopting the MCP 2025-11-25 specification, introducing protocol-native background tasks that enable long-running operations to report progress without blocking clients.
⏳ **Background Tasks (SEP-1686)** let you add `task=True` to any async tool decorator. Powered by Docket for enterprise task scheduling—in-memory backends work out of the box, Redis enables persistence and horizontal scaling. …
## FastMCP 2.12.5: Safety Pin
Pins the MCP SDK version below 1.17 to ensure the `.well-known` payload appears in the expected location when using FastMCP auth providers with composite applications. FastMCP 2.12.4 Releases September 26, 2025 ...
Hotfix for streamable-http transport validation in fastmcp.json configuration files, resolving a parsing error when CLI arguments were merged against the configuration spec. FastMCP 2.12.1 Releases September 3, 2025 …
🛠️ **Enhanced HTTP Routing** with OpenAPI extensions support and configurable algorithms for more flexible API integration. This release includes a breaking change to `client.call_tool()` return signatures but significantly expands the interaction capabilities of MCP servers. FastMCP 2.9 ... June 23, 2025
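The `task=True` kwarg comes straight from the 2.14 release notes quoted above; everything else in this sketch (server name, tool body) is an illustrative assumption rather than FastMCP's documented example, and it presumes a FastMCP 2.14+ install.

```python
# Sketch of the SEP-1686 background-task pattern described in the notes.
import asyncio
from fastmcp import FastMCP

mcp = FastMCP("report-server")

@mcp.tool(task=True)  # run as a protocol-native background task
async def generate_report(customer_id: str) -> str:
    """Long-running job: clients can track progress instead of blocking."""
    await asyncio.sleep(30)  # stand-in for real work
    return f"report for {customer_id} ready"

if __name__ == "__main__":
    mcp.run()
```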
www.youtube.com
Your MCP Server is Bad (and you should feel bad) | Jeremiah Lowin, AI Engineer Code Summit 2025. Jeremiah Lowin breaks down why most MCP servers miss the mark and what to do about it. As the creator of FastMCP, Jeremiah has seen every way people build MCP servers. Most of them aren't great. But here's the thing - it's not because people are bad at building. It's because we're designing for the wrong user. This talk is about agentic product design. Agents aren't humans. They're expensive at discovery, slow at iteration, and limited on context. So why are we building interfaces for them like they're just magical humans who can handle anything? Recorded at AI Engineer Code Summit 2025. ... … It enumerates every single tool and every single description on that server. So discovery is actually really expensive for agents. It consumes a lot of tokens. Next, iteration. Same idea. If you're a human developer and you're writing code against an API, you can iterate really … There's a big asterisk on that because client implementations in the MCP space right now are not amazing and they do some things that are themselves not compliant with the MCP spec. Maybe at the end we'll get into that. It's not directly relevant to now except that all we can do is try to build the best servers we can subject to the limitations of the clients that will use … the middle, because they can do it, but it's expensive and slow and annoying and hard to debug and stochastic. And so if you can avoid that, please do. If you can't, there are times when you don't know the algorithm and you don't know how to write the code and it's not programmatic. … but it's still going to be hard. There was until very recently, and there may still be, a bug (maybe it's not a bug, because no one seems to fix it) in Claude Desktop where all structured arguments, like object arguments, would be sent as a string, and this created a real problem, because we do not want to support automatic string conversion to object … It gets what it sees as information about the fact that it didn't succeed in what it was attempting to do. And so if you just allow Python, in FastMCP's case, or whatever your tool of choice is, to raise, for example, an empty ValueError or a cryptic MCP error with an integer code, that's the information that … interesting strategies that I don't want to wholeheartedly endorse but I will mention, where, for example, if you do have a complex API, because you can't get away from that, then instead of documenting every possibility in the docstring that documents the entire tool, you might actually document how to recover from the most common failures. … Who cares? One of the problems is that there are clients that are not compliant with the spec. Claude Desktop is one of them. I've mentioned it a few times. I have a history with Claude Desktop. Claude Desktop hashes all of the tools it receives on first contact and puts them in a SQLite database, and it doesn't care what you
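One way to apply the talk's advice about error messages, sketched with FastMCP's `ToolError` (its user-facing error type); the ticket tool and its recovery hints are invented for the example.

```python
# Fail with messages an agent can act on, not bare exceptions.
from fastmcp import FastMCP
from fastmcp.exceptions import ToolError

mcp = FastMCP("tickets")

VALID_PRIORITIES = {"low", "normal", "high"}

@mcp.tool
def create_ticket(title: str, priority: str = "normal") -> str:
    if not title.strip():
        # Instead of raising ValueError(""), tell the agent how to recover.
        raise ToolError("title must be a non-empty string; retry with a short summary of the issue")
    if priority not in VALID_PRIORITIES:
        raise ToolError(f"unknown priority {priority!r}; use one of {sorted(VALID_PRIORITIES)}")
    return f"created ticket: {title} ({priority})"
```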
www.prefect.io
Building a Knowledge Work Stack with FastMCP
I was good at this work but was constantly fighting my tools. Every presentation started from scratch because last quarter's deck got lost in someone's email. Every spreadsheet had five versions scattered across shared drives (FINAL, FINAL_v2, FINAL_ACTUALLY_FINAL). Strategic documents just sat there as frozen artifacts. No way to see how we got there, why we decided anything. The real pain was context switching and consistency. I'd spend my morning in Salesforce tracking progress, then export to Excel for analysis, copy insights into Word for documentation, paste tables into PowerPoint for presentations, upload everything to SharePoint for "version control," and then email the whole mess to stakeholders with links going every which way. Each tool was an island that required its own mental map to get around. The integration & context layer was me, manually copying and pasting, trying to keep it all synchronized in my head. Then I learned how to code. The more code I wrote, the more I questioned everything about knowledge work. Why don't we version control strategic decisions? Why is copy-paste our integration layer? How much context am I losing jumping between platforms all day, all week, all month? …
This isn't anyone's fault. These tools were built when documents were the atomic unit of knowledge work. They've been incrementally improved, but the fundamental model stayed the same: create isolated artifacts, store them in silos, manually integrate. To further complicate things, countless other tools promise to solve the context problem, but most just add another layer of complexity. You're often not solving the context problem, just adding to it.
blog.modelcontextprotocol.io
The 2026 MCP Roadmap | Model Context Protocol Blog
We spent the last few months working through a long list of candidate priorities. They were informed by production experience, community feedback, and the pain points that keep surfacing. We narrowed them down to the areas that matter most for 2026. ... Right now, every SEP requires full Core Maintainer review, regardless of domain. That’s a bottleneck. It slows down Working Groups that already have the expertise to evaluate proposals in their own area. The goal is to remove that bottleneck without sacrificing quality. ... Enterprises are deploying MCP and running into a predictable set of problems: audit trails, SSO-integrated auth, gateway behavior, and configuration portability. This is also the least defined of the four priorities, and that’s intentional.
1) The tool explosion problem is real: the MCP standard exposes an overwhelming number of connectable tools. LLMs struggle to select and use so many tools effectively, and no model can be proficient in every professional domain; this is not a problem solved by parameter count.
2) Documentation description gap: there is a huge disconnect between technical documentation and AI understanding. Most API docs are written for humans, not for AI, and lack semantic descriptions.
3) Weakness of the dual-interface architecture: as middleware between the LLM and data sources, MCP must handle upstream requests and transform downstream data. This architectural design is inherently flawed: when data sources proliferate, unified processing logic becomes almost impossible.
4) Vastly different return structures: the lack of standards produces data-format chaos. This is not a simple engineering issue but the result of an industry-wide absence of collaboration, and fixing it will take time.
5) Context window limitations: however quickly token limits grow, information overload persists. MCP spewing out piles of JSON occupies a large share of the context, crowding out reasoning capacity.
6) Nested structure flattening: complex object structures lose their hierarchical relationships in text descriptions, making it difficult for AI to reconstruct data correlations (illustrated in the sketch after this list).
7) Difficulty of multi-MCP server connections: "The biggest challenge is that it is complex to chain MCPs together." This difficulty is not unfounded. Although MCP is a unified standard protocol, real-world server implementations differ: one handles files, one connects to APIs, one operates databases. When AI needs to collaborate across servers to complete complex tasks, it's as difficult as trying to force Lego, wooden building blocks, and magnetic tiles to connect.
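A small sketch of points 5 and 6: the same record as the nested JSON an MCP server might return versus the flattened text an agent often ends up reasoning over. Every field name here is invented.

```python
# Illustration only: nested tool output vs. its flat text rendering.
import json

order = {
    "id": "ord_91",
    "customer": {"name": "Ada", "tier": "gold"},
    "lines": [
        {"sku": "A-1", "qty": 2, "unit": {"price": 9.5, "currency": "EUR"}},
        {"sku": "B-7", "qty": 1, "unit": {"price": 120.0, "currency": "EUR"}},
    ],
}

print(json.dumps(order, indent=2))  # verbose: burns context-window tokens
print("ord_91 Ada gold A-1 2 9.5 EUR B-7 1 120.0 EUR")  # flat: hierarchy is gone
```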
asjes.dev
The Type Safety Challenge
Six months later, the API provider (hopefully unintentionally) updates their schema. The `name` field is deprecated in favor of separate `first_name` and `last_name` fields. Your code breaks. You shake your fist in anger, update the SDK, fix your code, test everything, and deploy. But there's another layer to this problem. As Frank Fiegel points out, traditional HTTP APIs suffer from "combinatorial chaos" - data scattered across URL paths, headers, query parameters, and request bodies. This makes them particularly hard for AI agents to use reliably. …
## The SDK provider's burden
The pain isn't just felt by developers—SDK providers face their own set of challenges that make the current system unsustainable. When a new version is released, particularly a major version, providers essentially have to beg developers to upgrade. This creates a frustrating dynamic where providers want to innovate and improve their APIs but are held back by the friction of SDK adoption. Without a complex API versioning system, any breaking change means potentially nuking integrations that use older versions of the SDK. This is especially painful in languages with strong type safety, where minor schema changes can cause compilation failures across entire codebases.
Consider the maintenance burden: a popular API provider might need to maintain SDKs for JavaScript, Python, Ruby, PHP, Go, Java, C#, and more. Each language has its own conventions, package managers, and release cycles. When the API changes, that's potentially 8+ SDKs that need updating, testing, and coordinated releases. Auto-generating SDKs from an OpenAPI spec can help with development, but you still have to deal with the one thing out of your control: user adoption. SDK providers often find themselves supporting legacy versions for years because large enterprise customers can't easily upgrade. This fragments the ecosystem and slows innovation. The result? Many API providers either:
- Move extremely slowly to avoid breaking changes
- Implement complex versioning schemes that add overhead
- Accept that a significant portion of their user base will always be on outdated SDKs
…
- **Credential exposure**: LLMs sometimes include sensitive data in their reasoning process. There's a risk that API keys or other credentials could be logged or leaked through the LLM's output.
- **Unvalidated operations**: Unlike traditional SDKs, where operations are explicit, natural language instructions could be misinterpreted in dangerous ways: …
- **Self-healing gone wrong**: The automatic healing mechanism could potentially "fix" API calls in ways that bypass intended security restrictions or change the operation's scope.
These security concerns would need to be thoroughly addressed through:
- Strict input sanitization and validation (i.e. guardrails)
- Sandboxed execution environments
- Audit logging of all LLM-generated requests
- Rate limiting and anomaly detection
- Clear boundaries around which operations are allowed
- Human review processes for sensitive operations
…
## The cons: honest challenges
- **Security complexity**: Introduces new attack vectors around prompt injection, credential exposure, and unvalidated operations that don't exist with traditional SDKs.
- **Reliability concerns**: Even with structured output, LLMs can misinterpret intent. MCP's pre-built tools are inherently more reliable.
- **Self-healing limitations**: Can handle schema changes but not semantic API changes. May incorrectly "heal" when the real issue is bad user input.
- **Architectural complexity**: Adding an LLM layer introduces latency and complexity that SDKs avoid.
- **Trust and auditing**: When the system makes automatic decisions, developers need comprehensive logging and review capabilities.
- **Not AI-agent optimized**: This is designed for human developers, while the ecosystem is moving toward AI agents that work better with MCP.
Since MCP launched in November 2024, adoption has been rapid: the community has built thousands of MCP servers, SDKs are available for all major programming languages, and the industry has adopted MCP as the de-facto standard for connecting agents to tools and data. Today developers routinely build agents with access to hundreds or thousands of tools across dozens of MCP servers. However, as the number of connected tools grows, loading all tool definitions upfront and passing intermediate results through the context window slows down agents and increases costs. …
## Excessive token consumption from tools makes agents less efficient
As MCP usage scales, there are two common patterns that can increase agent cost and latency: 1. Tool definitions overload the context window; 2. Intermediate tool results consume additional tokens.
### 1. Tool definitions overload the context window
Most MCP clients load all tool definitions upfront directly into context, exposing them to the model using a direct tool-calling syntax. These tool definitions might look like: …
Tool descriptions occupy more context-window space, increasing response time and costs. In cases where agents are connected to thousands of tools, they’ll need to process hundreds of thousands of tokens before reading a request.
### 2. Intermediate tool results consume additional tokens
Most MCP clients allow models to directly call MCP tools. For example, you might ask your agent: "Download my meeting transcript from Google Drive and attach it to the Salesforce lead." The model will make calls like: …
Every intermediate result must pass through the model. In this example, the full call transcript flows through twice. For a 2-hour sales meeting, that could mean processing an additional 50,000 tokens. Even larger documents may exceed context window limits, breaking the workflow. With large documents or complex data structures, models may be more likely to make mistakes when copying data between tool calls.
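As a hedged illustration of pattern 1, here is what one such upfront tool definition could look like. The tool name, description, and schema are hypothetical stand-ins, not the excerpt's elided example.

```python
# Hypothetical tool definition of the kind most MCP clients load upfront.
# Multiply a block like this by hundreds or thousands of tools and the
# context-window overhead described above follows directly.
gdrive_get_document = {
    "name": "gdrive.getDocument",
    "description": (
        "Retrieves a document from Google Drive by ID. Returns the full "
        "document body plus metadata such as owner, MIME type, and share "
        "settings. Use this before any operation that needs file contents."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {"documentId": {"type": "string"}},
        "required": ["documentId"],
    },
}
```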
www.toddpigram.com
Introducing Docker MCP Catalog and Toolkit - Cloudy Journey
As MCPs become the backbone of agentic AI systems, the developer experience still faces key challenges. Here are some of the major hurdles: …
### Complex installations and distribution
Getting started with MCP tools remains complex. Developers often have to clone repositories, wrangle conflicting dependencies in environments like Node.js or Python, and self-host local services—many of which aren’t containerized, making setup and portability even harder. On top of that, connecting MCP clients adds more friction, with each one requiring custom configuration that slows down onboarding and adoption.
### Auth and permissions fall short
Many MCP tools run with full access to the host, launched via npx or uvx, with no isolation or sandboxing. Credentials are commonly passed as plaintext environment variables, exposing sensitive data and increasing the risk of leaks. Moreover, these tools often aren’t designed for scale and security. They’re missing enterprise-ready features like policy enforcement, audit logs, and standardized security.
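To illustrate the launch pattern being criticized, here is the typical client-configuration shape, built as a Python dict for clarity; the server name, package, and key are invented.

```python
# The unsandboxed npx launch with a plaintext credential, as most MCP
# client configs express it (all values here are hypothetical).
import json

client_config = {
    "mcpServers": {
        "crm": {
            "command": "npx",                       # runs unsandboxed on the host
            "args": ["-y", "@example/crm-mcp"],     # hypothetical package
            "env": {"CRM_API_KEY": "sk-live-..."},  # plaintext secret in config
        }
    }
}

print(json.dumps(client_config, indent=2))
```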
At first glance, the Model Context Protocol (MCP) promises to be a breakthrough in AI integration, but the reality is more convoluted. Imagine diving into a Python SDK that feels more like navigating a maze than using a straightforward tool. Inside, you find layers of wrappers and accessors that seem unnecessary for tasks that could be handled with a few lines of simple JSON. Instead of embracing the simplicity of Python, MCP appears to indulge in a quest for complexity that distracts rather than delivers the practical solutions developers crave.
One glaring criticism of MCP is its insistence on setting up new servers just to tap into established APIs. Consider this: having to build an entire framework to access a tool that's already functional. This raises serious questions about efficiency and practicality. Why not let the Large Language Model (LLM) interact directly with existing APIs? That could save developers not just time but countless headaches. In a landscape that has thrived on REST and Swagger integration, adding extra layers feels misguided and frustrating, leaving many to wonder why we would complicate something so fundamentally simple.
Security is paramount in today's technological landscape, and unfortunately MCP's current approach leaves much to be desired. Imagine being asked to expose your servers to LLMs without any solid assurances regarding safety. With the surge of data breaches and privacy violations, it is imperative that MCP prioritize strong safeguards. Until a robust security framework is in place, trusting MCP feels like rolling the dice.
While MCP aims to position itself as a universal interface for large language models, its heavy reliance on stateful connections creates an unnecessary hurdle. Most modern APIs thrive in stateless environments, such as AWS Lambda, precisely because they are efficient, scalable, and cost-effective. If MCP assumes that developers have abundant local resources and dedicated servers, it overlooks the reality many developers face. This disconnect raises a critical question: how can we adopt new technologies when they don't align with our current practices and infrastructure?
Moreover, MCP tends to overwhelm developers with a plethora of options, which clutters the model context. Picture trying to sift through an overstuffed backpack: it's a frustrating ordeal, and essential items get lost in the chaos. This clutter can produce unexpected and erratic behavior from the model itself, wasting tokens and destroying focus. A more streamlined approach, with clear prioritization, would simplify interactions and boost overall efficiency, letting developers focus on what really matters.
www.codemag.com
MCP: Building the Bridge Between AI and the Real World
Users have been cobbling together ad hoc solutions for this problem. Plug-ins. Vector databases. Retrieval systems. These Band-Aids are clever, but fragile. They don't cooperate with each other. They break when you switch providers. It's less “responsible plumbing” and more “duct tape and prayer.” …
### The Context Window Problem
Developers have been devising workarounds by providing relevant data as needed: pasting in documents, providing chunks of a database, and formulating absurdly elaborate prompts. These fixes help, but every LLM has what we call a context window. The window determines how many tokens a model can attend to at any given time. Some of the bigger LLMs have windows that accommodate hundreds of thousands of tokens, but users still quickly find ways to hit that wall. Bigger context windows should be the answer, right? But there's our Catch-22: the more data you provide within that window, the more fragile the entire setup becomes. If there's not enough context, the model may very well just make stuff up. If you provide too much, the model bogs down or becomes too pricey to run.
### The Patchwork Fixes
The AI community wasn't content to wait for one of the big players to provide a solution. Everyone rushed to be first to market with an assortment of potential fixes. Custom plug-ins let models access external tools and databases, extending their abilities beyond their frozen training data. You can see the issue here: plug-ins designed for one platform won't work with another. Your workspace becomes siloed and fragmented, forcing you to rework your integrations if you switch AI providers. …
Each of these approaches is useful, but they all suffer from the same weakness as any proprietary solution: developers have to reinvent the wheel each time. Without universal integration standards, these solutions are unstable and non-transferable. AI systems need a standardized approach to context access and authentication. …
## Security, Privacy, and Governance
Is MCP the holy grail of building functional AI agents—without using a bottle of Elmer's glue and some yarn to bundle the integrations together? If your internal security alarms are ringing, that just means you're thinking responsibly. Every time I read about one of these new AI applications, I cringe at the security implications. A workflow that makes it easier to move context across systems also expands your exposure surface. Such a system has to prioritize security, privacy, and governance. …
### The Governance Layer
Although security and privacy are vital, MCP raises complicated questions about governance. We're still in the Wild West phase of AI, and as it continues to evolve, we'll remain there for a while. Determining which servers can be trusted is a crapshoot. How can an organization of any size know where to set boundaries? How do we determine what the model is allowed to access? …
- **Context Poisoning:** If a malicious actor can compromise an MCP server, they can manipulate the data flowing to the model, corrupting it. Transparency can provide visibility into the data, but it can't filter out tainted information.
- **Overreach:** It's tempting for an organization to default to maximum connectivity, giving the AI assistant far more access than it truly needs. That plants the seeds for an inevitable breakdown in governance.
- **Surveillance Misuse:** The protocol has no inherent bias, but its use will define the outcomes. There's always a chance of abuse: a malicious user could weaponize MCP to aggregate and surveil sensitive user information.
- **Ecosystem Fragmentation:** There's always the possibility that MCP won't be fully adopted so much as cloned. MCP-like variations could fragment the landscape and cut compliance corners. Interoperability breaks down, eroding security assurances.
### Juggling Openness and Safety
Therein lies the friction: the openness and flexibility of MCP make for a more powerful ecosystem, but with that openness comes increased risk. How are servers vetted? Soon we'll see them popping up all over the place. Some of them will be compromised; it's just the law of numbers. How can users ensure that these upstart servers won't leak, corrupt, or abuse data?