MCP
Plaintext credential storage and lack of sandboxing in MCP tools
[9] Many MCP tools run with full host access (launched via npx or uvx) with no isolation or sandboxing. Credentials are commonly passed as plaintext environment variables, exposing sensitive data. Tools lack enterprise-ready features like policy enforcement and audit logs.
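One common mitigation is to stop inheriting the parent environment wholesale when launching a tool process, so secrets never reach tools that don't need them. A minimal sketch, with an illustrative allowlist and a hypothetical secret name:

```python
import os
import subprocess
import sys

# Illustrative allowlist: only these variables reach the tool process.
SAFE_ENV_VARS = {"PATH", "HOME", "LANG"}

def spawn_tool(cmd, extra_env=None):
    """Run a tool with a minimal environment instead of inheriting the
    parent's full environment (which may contain API keys)."""
    env = {k: v for k, v in os.environ.items() if k in SAFE_ENV_VARS}
    if extra_env:  # pass only the secrets this specific tool needs
        env.update(extra_env)
    return subprocess.run(cmd, env=env, capture_output=True, text=True)

# The child sees only the allowlisted variables, not the parent's secrets.
os.environ["SUPER_SECRET"] = "hunter2"  # hypothetical credential
result = spawn_tool([sys.executable, "-c",
                     "import os; print('SUPER_SECRET' in os.environ)"])
print(result.stdout.strip())  # -> False
```

This is hygiene, not sandboxing; the child still has full filesystem and network access, which is the deeper gap the item describes.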
Schema Overhead Consumes 16-50% of Context Window
[9] Full tool schemas load into context on every request with no lazy loading, selective injection, or summarization. This causes context window exhaustion before meaningful work begins, with confirmed instances ranging from 45K tokens for a single tool to 1.17M tokens in production deployments.
Naive MCP servers expose all tools to all users without fine-grained authorization
[9] MCP servers announce all available tools and resources to any connected client, and naive implementations expose sensitive tools to all users regardless of role or permissions. This creates major security risks where tools that delete data or trigger sensitive operations become available to anyone, and low-privilege users can instruct agents to use highly sensitive tools.
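A deny-by-default filter applied before answering a tools/list request is one mitigation. A sketch, with a hypothetical policy table mapping tool names to permitted roles:

```python
# Hypothetical policy: which roles may see each tool. Tools absent from
# the table are hidden from everyone (deny by default).
TOOL_POLICY = {
    "query_database": {"analyst", "admin"},
    "delete_records": {"admin"},
    "send_email": {"admin"},
}

def visible_tools(all_tools, user_roles):
    """Return only the tools the user's roles permit."""
    out = []
    for tool in all_tools:
        allowed = TOOL_POLICY.get(tool["name"], set())
        if allowed & user_roles:  # any overlapping role grants visibility
            out.append(tool)
    return out

tools = [{"name": n} for n in TOOL_POLICY]
print([t["name"] for t in visible_tools(tools, {"analyst"})])  # -> ['query_database']
```

Filtering the announced tool list also shrinks the schema payload, which helps with the context-window costs described elsewhere in this list.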
MCP Process Orphans Leak Memory Without Cleanup Hook
[8] When MCP sessions end abnormally, subprocesses continue running, memory climbs, and ports remain bound. No standard lifecycle hook exists in the spec for cleanup. Teams must write custom janitors using cron jobs and watchdog scripts.
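Such a janitor can be as simple as a registry that terminates tracked children at process exit. A sketch, assuming the host spawns tools via subprocess (the class name and escalation timeout are illustrative):

```python
import atexit
import subprocess
import sys

class ChildJanitor:
    """Track spawned subprocesses and terminate them when the host exits,
    since the spec defines no cleanup hook."""
    def __init__(self):
        self._procs = []
        atexit.register(self.cleanup)  # best-effort hook on normal exit

    def spawn(self, cmd):
        proc = subprocess.Popen(cmd)
        self._procs.append(proc)
        return proc

    def cleanup(self):
        for proc in self._procs:
            if proc.poll() is None:           # still running
                proc.terminate()
                try:
                    proc.wait(timeout=5)
                except subprocess.TimeoutExpired:
                    proc.kill()               # escalate if SIGTERM is ignored

janitor = ChildJanitor()
proc = janitor.spawn([sys.executable, "-c", "import time; time.sleep(60)"])
janitor.cleanup()  # normally runs via atexit
print(proc.poll() is not None)  # -> True
```

Note that atexit does not fire on SIGKILL or hard crashes, which is exactly why teams fall back to external cron-driven watchdogs.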
LLM-based API healing introduces security risks
[8] Self-healing APIs that use LLMs to fix schema mismatches risk credential exposure, unvalidated operations, prompt injection attacks, and unauthorized scope changes. The automatic healing mechanism could bypass security restrictions or misinterpret user intent in dangerous ways.
MCP supply chain attacks via npm/PyPI distribution
[8] MCP servers are distributed via npm and PyPI without universal verification, exposing the ecosystem to the same supply chain attacks that plague web development. Tool descriptions can be modified post-approval (rug pulls).
Common Security Vulnerabilities in MCP Deployments
[8] Rapid MCP ecosystem growth has revealed common vulnerability patterns in deployed servers, including command injection, insufficient input validation, privilege escalation, authentication implementation flaws, and lack of rate limiting.
Stateful session routing breaks with load balancers
[8] MCP assumes persistent 1:1 client-server connections, but production deployments with load balancers route requests across instances. When a session routes to a different server without state, connections fail. Workarounds (sticky sessions, Redis, distributed state) add significant operational complexity.
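The usual workaround is to externalize session state so any instance behind the load balancer can serve any request. A sketch of the idea, with an in-memory dict standing in for a shared store such as Redis (session IDs and instance names are illustrative):

```python
import json

class SessionStore:
    """Shared session state, keyed by session ID. The dict stands in
    for an external store like Redis; values are serialized JSON."""
    def __init__(self):
        self._backend = {}

    def save(self, session_id, state):
        self._backend[session_id] = json.dumps(state)

    def load(self, session_id):
        raw = self._backend.get(session_id)
        return json.loads(raw) if raw is not None else None

def handle_request(store, session_id, instance):
    """Any instance can resume the session by loading shared state."""
    state = store.load(session_id) or {"initialized": False}
    state["initialized"] = True
    state["last_instance"] = instance
    store.save(session_id, state)
    return state

store = SessionStore()
handle_request(store, "abc123", "server-1")
state = handle_request(store, "abc123", "server-2")  # different instance, same session
print(state["last_instance"])  # -> server-2
```

The trade-off the item describes is real: every request now pays a state load/save round trip, and the store itself becomes infrastructure to operate.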
Claude Desktop caches tool schemas without respecting updates
[8] Claude Desktop hashes all tools on first contact and stores them in SQLite, ignoring subsequent updates to tool definitions. This makes it impossible to iterate on tool schemas when Claude Desktop is the client.
MCP clients not compliant with specification
[8] Multiple MCP client implementations do not properly implement the MCP spec, creating incompatibilities. Claude Desktop is cited as a specific example where structured arguments are sent as strings instead of objects, violating spec requirements.
Closed platform with limited external API for design automation
[8] Figma's platform is effectively closed for programmatic design updates. MCP integration with Claude is read-only, and meaningful mutations require plugins running inside Figma's sandbox. There is no serious external API for automation, CI-style pipelines, or integration with engineering systems.
Auth headers leak from MCP transport to downstream OpenAPI APIs
[8] Authentication headers from the MCP transport layer were being improperly forwarded to downstream OpenAPI APIs, creating security and information disclosure risks.
MCP server vetting and governance is unclear
[7] Organizations lack clear governance frameworks for MCP server trust, access control, and security boundaries. With no established vetting process and rapid proliferation of community-built servers, determining which servers can be trusted and what access to grant is a 'crapshoot'.
Chaining multiple MCP servers together is a fragmentation nightmare
[7] Different MCP server implementations handle files, APIs, databases, etc. differently. When an AI needs to collaborate across servers for complex tasks, the lack of unified interfaces makes it as difficult as connecting incompatible building systems (Lego, wooden blocks, magnetic pieces).
Middleware for STDIO transport not supported
[7] Traditional ASGI middleware only works with web-based transports (streamable-HTTP, SSE), not with the STDIO transport that most major MCP clients use. This forces developers to either wrap servers in complex workarounds or abandon middleware patterns.
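For STDIO, the middleware pattern can be partially recovered by wrapping the newline-delimited JSON-RPC stream itself rather than an ASGI app. A minimal logging-middleware sketch (the echo handler is a stand-in for a real server's dispatch):

```python
import io
import json

def stdio_middleware(instream, outstream, handler, log):
    """Middleware for newline-delimited JSON-RPC over STDIO: record each
    request and response around an inner handler."""
    for line in instream:
        line = line.strip()
        if not line:
            continue
        msg = json.loads(line)
        log.append(("request", msg.get("method")))   # pre-handler hook
        resp = handler(msg)
        log.append(("response", msg.get("id")))      # post-handler hook
        outstream.write(json.dumps(resp) + "\n")

def echo_handler(msg):
    # Stand-in for real dispatch logic.
    return {"jsonrpc": "2.0", "id": msg.get("id"),
            "result": {"echo": msg.get("method")}}

log = []
inp = io.StringIO('{"jsonrpc": "2.0", "id": 1, "method": "tools/list"}\n')
out = io.StringIO()
stdio_middleware(inp, out, echo_handler, log)
print(log)  # -> [('request', 'tools/list'), ('response', 1)]
```

In practice the same wrapper would sit between the client's pipes and the real server process; the sketch only shows the interception point.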
Agent iteration is slow and expensive
[7] Agents cannot iterate quickly like human developers when writing code against an API. They are slow at iteration and have limited context, making debugging and rapid development cycles difficult.
MCP server architecture incompatible with serverless deployments
[7] MCP's Docker-packaged server model doesn't align with serverless architectures used by 95% of Fortune 500 companies. Cold start delays (up to 5 seconds), missing infrastructure templates, logging mismatches, and testing difficulties increase maintenance overhead and costs.
Double hop performance tax in MCP request routing
[7] Every MCP tool call requires two round trips (agent → MCP server → tool → MCP server → agent) instead of direct calls, adding latency and overhead to each interaction. This architectural inefficiency compounds at scale and makes production deployments slower.
Production-grade resource management and state persistence gaps in MCP
[7] MCP servers lack built-in support for production workloads, including proper memory limits, concurrent request handling, rate limiting, health checks, persistent storage, and state management. Developers must manually implement these critical infrastructure concerns.
API quality directly impacts MCP server effectiveness and LLM execution
[7] The robustness of agentic solutions depends on the quality of the APIs that MCP servers wrap. MCP tools reference individual API endpoints, so poor API quality directly reduces the LLM client's ability to discover the right tools and execute user requests accurately.
Enterprise Deployment Requirements Not Well-Defined
[7] Enterprises deploying MCP face a predictable set of problems including audit trails, SSO-integrated authentication, gateway behavior, and configuration portability. These requirements remain poorly defined and poorly understood.
Agent discovery is token-expensive
[6] MCP servers enumerate all tools and descriptions on first contact, consuming significant tokens during agent discovery. This makes it costly for agents to learn what tools are available compared to human developers.
Limited Value from MCP in Coding Agent Use Cases
[6] Most developers encounter MCP through coding agents like Cursor and VSCode but struggle to extract value from MCP in this context. They often reject MCP in favor of CLIs and scripts, which provide better functionality for their use cases.
MCP tool explosion reduces agent effectiveness
[6] As MCP servers scale to hundreds or thousands of tools, LLMs struggle to effectively select and use them. No AI can be proficient across all professional domains, and parameter count alone cannot solve this combinatorial selection problem.
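One mitigation is to prefilter the tool catalog before the model ever sees it, so selection happens over a handful of candidates rather than thousands. A sketch using naive keyword scoring (real systems would use embeddings; tool names and descriptions are illustrative):

```python
def prefilter_tools(tools, request, k=3):
    """Rank tools by crude keyword overlap with the request and keep
    only the top-k matches with a nonzero score."""
    words = set(request.lower().split())

    def score(tool):
        text = (tool["name"] + " " + tool["description"]).lower()
        return sum(1 for w in words if w in text)

    ranked = sorted(tools, key=score, reverse=True)
    return [t for t in ranked[:k] if score(t) > 0]

tools = [
    {"name": "create_invoice", "description": "Create a billing invoice"},
    {"name": "delete_user", "description": "Remove a user account"},
    {"name": "search_invoices", "description": "Search existing invoices by customer"},
]
selected = prefilter_tools(tools, "find the invoice for customer ACME")
print([t["name"] for t in selected])  # -> ['search_invoices', 'create_invoice']
```

Only the surviving tools' schemas then need to be injected into context, which also addresses the discovery-cost and schema-overhead items above.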
MCP server wrapper maintenance overhead
[6] Every tool exposed via MCP requires writing and maintaining a dedicated MCP server wrapper in Python or TypeScript, plus hosting, updating, securing, monitoring, and scaling it. This per-tool overhead accumulates significantly for teams integrating multiple tools.
Installation and Configuration of MCP Servers is Complex
[6] Installing MCP servers requires finding servers, copying JSON configuration blobs, and manually hard-coding API keys, creating a byzantine process that serves as a barrier to adoption.
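The friction is visible in a typical client configuration blob, which users hand-edit and into which keys are pasted in plaintext. An illustrative example following the common `mcpServers` convention (server name, package, and key are placeholders):

```json
{
  "mcpServers": {
    "example-server": {
      "command": "npx",
      "args": ["-y", "@example/mcp-server"],
      "env": {
        "EXAMPLE_API_KEY": "paste-your-key-here"
      }
    }
  }
}
```

Note that this pattern also feeds the plaintext-credential problem listed at the top: the key sits unencrypted in a config file and is exported into the tool's environment.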
Context Bloat from Excessive MCP Search Results
[6] MCP servers can flood conversations with excessive information from searches and operations, quickly exhausting token limits and making conversations unwieldy.
API documentation lacks AI-readable semantic descriptions
[6] Most API documentation is written for human developers and lacks the semantic descriptions needed for AI agents to understand intent. This documentation-understanding gap makes it difficult for LLMs to correctly interpret and use APIs.
Naive API-to-MCP conversion creates security and efficiency problems
[6] Teams are rushing to convert existing REST APIs to MCP servers without considering security implications or efficiency costs. This creates both architectural overhead and an expanded attack surface compared to direct API integration.
Code drift detection difficult for AI agents without reference anchoring
[6] Live application state often diverges from code definitions (code drift). AI agents struggle to detect and mitigate this without anchoring to reference templates and commit diffs, leading to agents making changes based on outdated or inaccurate code state.
Implementation and operational costs of MCP adoption
[6] 40% of enterprise respondents cited implementation cost or running costs as a barrier to MCP adoption, making cost a significant financial consideration for organizations evaluating the technology.
MCP ecosystem fragmentation threatens interoperability
[6] If MCP-like variations proliferate instead of universal adoption, the ecosystem will fragment and vendors will cut corners on compliance. Interoperability breaks down and erodes the security assurances that standardization provides.
MCP server performance optimization demands sophisticated engineering
[6] Ensuring low-latency, high-throughput communication between distributed MCP components requires sophisticated engineering skills. Performance optimization is a significant barrier for most teams.
LLM-generated operations need comprehensive audit logging
[6] When LLMs automatically make API decisions, developers need comprehensive logging and review capabilities for trust and auditing. The lack of transparency into LLM reasoning and generated operations is a critical gap.
$ref and $defs in tool schemas not dereferenced before sending
[6] Tool schemas with JSON Schema references ($ref, $defs) were not being inlined before being sent to MCP clients, violating spec requirements and causing client incompatibilities.
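Inlining local references before emitting a schema is straightforward for acyclic cases. A sketch handling local `#/$defs/...` pointers only, with no cycle detection:

```python
import copy

def dereference(schema, root=None):
    """Recursively inline local '#/...' references so clients receive a
    self-contained schema. Sketch: local refs only, no cycle handling."""
    root = root if root is not None else schema
    if isinstance(schema, dict):
        ref = schema.get("$ref")
        if isinstance(ref, str) and ref.startswith("#/"):
            target = root
            for part in ref[2:].split("/"):   # walk the JSON pointer
                target = target[part]
            return dereference(copy.deepcopy(target), root)
        # Drop $defs once everything referencing it has been inlined.
        return {k: dereference(v, root) for k, v in schema.items() if k != "$defs"}
    if isinstance(schema, list):
        return [dereference(item, root) for item in schema]
    return schema

schema = {
    "type": "object",
    "properties": {"user": {"$ref": "#/$defs/User"}},
    "$defs": {"User": {"type": "object",
                       "properties": {"name": {"type": "string"}}}},
}
inlined = dereference(schema)
print(inlined["properties"]["user"]["type"])  # -> object
```

Recursive or cyclic schemas cannot be fully inlined this way and need either depth limits or a different strategy, which is part of why real servers get this wrong.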
Prompt arguments must be strings despite needing structured data
[6] The MCP spec requires all prompt arguments to be strings, but Python functions generating prompts often need structured data (lists, dicts) for business logic. This forces developers to manually parse JSON strings with json.loads() and handle conversion errors.
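The usual workaround is a small parsing helper that centralizes the json.loads() call and its error handling. A sketch (the helper name is illustrative):

```python
import json

def parse_list_arg(value, field="items"):
    """Prompt arguments arrive as strings per the spec; recover a list
    with explicit errors rather than crashing on malformed input."""
    try:
        parsed = json.loads(value)
    except json.JSONDecodeError as exc:
        raise ValueError(f"{field!r} must be a JSON array string: {exc}") from None
    if not isinstance(parsed, list):
        raise ValueError(f"{field!r} must decode to a list, "
                         f"got {type(parsed).__name__}")
    return parsed

print(parse_list_arg('["a", "b"]'))  # -> ['a', 'b']
```

The type check matters because `json.loads('"hello"')` succeeds but returns a string, which would silently break downstream logic expecting a list.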
HTTP transport connection timeout too short (5 seconds)
[6] The HTTP transport was configured with a 5-second timeout that was cutting connections short for operations that needed more time to complete.
Inefficient round-trip tool calling with intermediate result token waste
[6] Every tool call requires a round-trip cycle: the LLM calls a tool, the result flows back through context, the LLM reasons, then calls the next tool. Intermediate results that only feed the next step burn tokens repeatedly, reducing efficiency in multi-step workflows.
Type conversion between agents and servers unclear for complex types
[5] Despite some improvements, there remains confusion about how complex types should be converted between AI agents and MCP servers, especially when prompt arguments need to support structured data.
MCP protocol confusion about server lifecycle vs client session lifecycle
[5] The `lifespan` parameter in the MCP SDK was ambiguous and commonly misunderstood: developers thought it referred to client sessions when it actually refers to the server lifecycle (e.g., database connections), causing initialization and cleanup logic to run incorrectly.
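The intended semantics: the lifespan brackets the whole server process, while client sessions come and go inside it. A simplified illustration of that ordering (this mimics the pattern, not the actual SDK API):

```python
import asyncio
from contextlib import asynccontextmanager

events = []

@asynccontextmanager
async def lifespan(server_name):
    # Server lifecycle: runs ONCE per process, e.g. open a DB pool.
    events.append("server startup")
    yield {"db": "connection-pool"}
    events.append("server shutdown")

async def handle_session(ctx, name):
    # Client sessions come and go inside one server lifespan and share
    # the resources the lifespan created.
    events.append(f"session {name} using {ctx['db']}")

async def main():
    async with lifespan("my-server") as ctx:
        await handle_session(ctx, "A")
        await handle_session(ctx, "B")

asyncio.run(main())
print(events)
```

Putting per-session setup (like user auth state) inside the lifespan, or per-process setup (like the DB pool) inside a session handler, is exactly the miswiring the item describes.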
MCP describe_table_schema fails with CamelCase table names
[5] The Neon Model Context Protocol (MCP) describe_table_schema function does not work correctly with tables using CamelCase naming conventions, limiting schema introspection capabilities.
LLM layer adds architectural complexity and latency
[5] Adding an LLM layer for self-healing and tool selection introduces additional latency and architectural complexity that traditional SDKs avoid. The overhead is significant for performance-sensitive applications.
LLM-based self-healing can't handle semantic API changes
[5] Self-healing mechanisms work only for schema changes but fail for semantic API changes. The system may incorrectly 'heal' when the real issue is bad user input, leading to silent failures.
Complex hierarchical structures flatten into uninterpretable text
[5] When nested object structures are converted to text descriptions for AI consumption, hierarchical relationships and data correlations are lost. The flattened structure becomes difficult for AI to reconstruct properly.
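One mitigation is to serialize nested data as explicit paths rather than free-flowing prose, so the hierarchy stays recoverable from the text. A sketch (the path notation is one possible convention):

```python
def flatten_paths(obj, prefix=""):
    """Serialize nested data as 'path: value' lines so hierarchical
    relationships survive the conversion to text."""
    lines = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            child = f"{prefix}.{key}" if prefix else key
            lines.extend(flatten_paths(value, child))
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            lines.extend(flatten_paths(value, f"{prefix}[{i}]"))
    else:
        lines.append(f"{prefix}: {obj}")
    return lines

order = {"customer": {"name": "ACME"}, "items": [{"sku": "X1", "qty": 2}]}
print("\n".join(flatten_paths(order)))
```

Each leaf carries its full ancestry (`items[0].sku`), so the model can reconstruct which values belong together, unlike a prose summary that mentions the same facts in isolation.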
Tool Discovery Requires 'Secret Incantations' for Invocation
[5] Developers must craft careful prompts to trigger specific MCP tool functionality, treating AI assistants like command-line interfaces rather than intelligent agents. Tool discovery and invocation is unintuitive.
Overly complex Python SDK design with unnecessary abstraction layers
[5] The MCP Python SDK features excessive wrappers and accessors that complicate tasks that could be handled with straightforward JSON, creating a confusing developer experience rather than a practical solution.
Limited public MCP server adoption and ecosystem maturity
[4] Despite expectations for widespread MCP adoption, only ~10 MCP servers from major companies see heavy use. The ecosystem has a massive long tail of public servers with near-zero users, indicating incomplete ecosystem maturity and uncertain value for public-facing use cases.
Insufficient documentation and error message clarity in standard MCP
[4] Standard MCP implementations lack comprehensive documentation and helpful error messages, making troubleshooting difficult. Developers struggle with limited examples and unclear guidance compared to abstraction frameworks.
AI coding agents frequently invent images and icons not in designs
[4] When implementing from design mockups, coding assistants often generate images and icons that don't exist in the original Figma designs. Fixing this requires explicit instructions and direct links to specific Figma nodes.