MCP
Schema Overhead Consumes 16-50% of Context Window
[9] Full tool schemas load into context on every request with no lazy loading, selective injection, or summarization. This causes context window exhaustion before meaningful work begins, with confirmed instances ranging from 45K tokens for a single tool to 1.17M tokens in production deployments.
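A back-of-envelope sketch of the problem (the tool shapes, counts, and chars-per-token ratio below are illustrative assumptions, not measurements):

```python
import json

# Hypothetical tool definitions, shaped like MCP tool schemas.
TOOLS = [
    {
        "name": f"tool_{i}",
        "description": "Query the data store and return matching records. " * 4,
        "inputSchema": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search expression"},
                "limit": {"type": "integer", "description": "Max results to return"},
            },
            "required": ["query"],
        },
    }
    for i in range(50)
]

def schema_token_estimate(tools, chars_per_token=4):
    """Rough token count: serialized JSON length / ~4 chars per token."""
    return len(json.dumps(tools)) // chars_per_token

def context_fraction(tools, window=200_000):
    """Fraction of the context window consumed before any user content."""
    return schema_token_estimate(tools) / window

tokens = schema_token_estimate(TOOLS)
print(f"{len(TOOLS)} tools ~ {tokens} tokens "
      f"({context_fraction(TOOLS):.1%} of a 200K window)")
```

Every one of those tokens is paid on every request, whether or not the tool is used.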
Plaintext credential storage and lack of sandboxing in MCP tools
[9] Many MCP tools run with full host access (launched via npx or uvx) with no isolation or sandboxing. Credentials are commonly passed as plaintext environment variables, exposing sensitive data. Tools lack enterprise-ready features like policy enforcement and audit logs.
LLM-based API healing introduces security risks
[8] Self-healing APIs that use LLMs to fix schema mismatches risk credential exposure, unvalidated operations, prompt injection attacks, and unauthorized scope changes. The automatic healing mechanism could bypass security restrictions or misinterpret user intent in dangerous ways.
Common Security Vulnerabilities in MCP Deployments
[8] Rapid MCP ecosystem growth has revealed common vulnerability patterns in deployed servers, including command injection, insufficient input validation, privilege escalation, authentication implementation flaws, and lack of rate limiting.
Stateful session routing breaks with load balancers
[8] MCP assumes persistent 1:1 client-server connections, but production deployments with load balancers route requests across instances. When a session routes to a different server without state, connections fail. Workarounds (sticky sessions, Redis, distributed state) add significant operational complexity.
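The usual fix is to externalize session state so any instance can serve any request. A minimal sketch (class names are invented, and the in-memory store stands in for Redis or another shared store):

```python
import uuid

class SharedSessionStore:
    """Stand-in for Redis: any instance can load any session's state."""
    def __init__(self):
        self._sessions = {}
    def save(self, sid, state):
        self._sessions[sid] = state
    def load(self, sid):
        return self._sessions.get(sid)

class McpInstance:
    """A hypothetical stateless MCP server instance behind a load balancer."""
    def __init__(self, name, store):
        self.name, self.store = name, store
    def open_session(self):
        sid = str(uuid.uuid4())
        self.store.save(sid, {"initialized": True, "opened_by": self.name})
        return sid
    def handle(self, sid, request):
        state = self.store.load(sid)
        if state is None:
            # With per-instance state, this is where cross-instance
            # routing fails; the shared store avoids it.
            raise RuntimeError(f"unknown session {sid}")
        return f"{self.name} handled {request} for session opened by {state['opened_by']}"

store = SharedSessionStore()
a, b = McpInstance("instance-a", store), McpInstance("instance-b", store)
sid = a.open_session()              # load balancer sent the init to instance-a
print(b.handle(sid, "tools/list"))  # ...and a later request to instance-b
```

The cost is exactly the operational complexity the item describes: a shared store is now part of the critical path.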
MCP supply chain attacks via npm/PyPI distribution
[8] MCP servers are distributed via npm and PyPI without universal verification, exposing the ecosystem to the same supply chain attacks that plague web development. Tool descriptions can be modified post-approval (rug pulls).
MCP Process Orphans Leak Memory Without Cleanup Hook
[8] When MCP sessions end abnormally, subprocesses continue running, memory climbs, and ports remain bound. No standard lifecycle hook exists in the spec for cleanup. Teams must write custom janitors using cron jobs and watchdog scripts.
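Absent a spec-level hook, one stopgap is an in-process janitor registered with `atexit` (the sleeping subprocess below is a stand-in for a real MCP tool process):

```python
import atexit
import subprocess
import sys

_children = []  # registry of subprocesses spawned for MCP sessions

def spawn_tool_process():
    """Launch a stand-in long-running MCP tool subprocess."""
    proc = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(300)"]
    )
    _children.append(proc)
    return proc

def reap_children():
    """Janitor: terminate any child still running when the host exits."""
    for proc in _children:
        if proc.poll() is None:        # still running -> would become an orphan
            proc.terminate()
            proc.wait(timeout=5)

# No MCP lifecycle hook exists, so hook the interpreter's exit instead.
atexit.register(reap_children)

p = spawn_tool_process()
reap_children()                        # run eagerly here to show the effect
print("child exit code:", p.poll())    # non-None: the child was cleaned up
```

This only covers clean interpreter exits; crashes and SIGKILL still leave orphans, which is why teams end up adding external watchdogs anyway.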
Chaining multiple MCP servers together is a fragmentation nightmare
[7] Different MCP server implementations handle files, APIs, databases, and other resources differently. When an AI agent needs to coordinate across servers for complex tasks, the lack of unified interfaces makes integration as awkward as connecting incompatible building systems (Lego, wooden blocks, magnetic tiles).
Double hop performance tax in MCP request routing
[7] Every MCP tool call is routed through an intermediary (agent → MCP server → tool → MCP server → agent) rather than calling the tool directly, doubling the network hops and adding latency and overhead to each interaction. This architectural inefficiency compounds at scale and slows production deployments.
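A rough latency model of the tax (the call count and round-trip time are made-up figures): each tool call pays at least one extra round trip to the intermediary, so the overhead grows linearly with the number of calls.

```python
def routing_overhead_ms(tool_calls: int, hop_rtt_ms: float) -> float:
    """Extra latency from the double hop: every call crosses the
    agent -> MCP server leg and back, on top of the tool's own work."""
    return tool_calls * hop_rtt_ms

# Hypothetical agent session: 40 tool calls, 25 ms round trip to the MCP server.
print(routing_overhead_ms(40, 25), "ms of pure routing overhead")
```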
MCP server architecture incompatible with serverless deployments
[7] MCP's Docker-packaged server model doesn't align with serverless architectures used by 95% of Fortune 500 companies. Cold start delays (up to 5 seconds), missing infrastructure templates, logging mismatches, and testing difficulties increase maintenance overhead and costs.
Enterprise Deployment Requirements Not Well-Defined
[7] Enterprises deploying MCP face a predictable set of problems, including audit trails, SSO-integrated authentication, gateway behavior, and configuration portability. These requirements remain poorly defined and poorly understood.
MCP server vetting and governance is unclear
[7] Organizations lack clear governance frameworks for MCP server trust, access control, and security boundaries. With no established vetting process and a rapid proliferation of community-built servers, determining which servers can be trusted and what access to grant is a 'crapshoot'.
API quality directly impacts MCP server effectiveness and LLM execution
[7] The robustness of agentic solutions depends on the quality of the APIs their MCP servers wrap. MCP tools reference individual API endpoints, so poor API quality directly reduces the LLM client's ability to discover the right operations and execute user prompts accurately.
LLM-generated operations need comprehensive audit logging
[6] When LLMs automatically make API decisions, developers need comprehensive logging and review capabilities for trust and auditing. The lack of transparency into LLM reasoning and generated operations is a critical gap.
Limited Value from MCP in Coding Agent Use Cases
[6] Most developers encounter MCP through coding agents like Cursor and VS Code but struggle to extract value from it in that context. They often reject MCP in favor of CLIs and scripts, which serve their use cases better.
MCP tool explosion reduces agent effectiveness
[6] As MCP servers scale to hundreds or thousands of tools, LLMs struggle to effectively select and use them. No AI can be proficient across all professional domains, and parameter count alone cannot solve this combinatorial selection problem.
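One common mitigation is to pre-filter the tool list before it ever reaches the model. A naive keyword-overlap sketch (the tool names and scoring scheme are illustrative; production systems typically use embedding similarity instead):

```python
def prefilter_tools(tools, user_prompt, max_tools=5):
    """Score each tool by keyword overlap with the prompt and keep the
    top few, so the model chooses among a handful instead of hundreds."""
    prompt_words = set(user_prompt.lower().split())

    def score(tool):
        desc_words = set(tool["description"].lower().split())
        return len(prompt_words & desc_words)

    ranked = sorted(tools, key=score, reverse=True)
    return [t for t in ranked[:max_tools] if score(t) > 0]

tools = [
    {"name": "create_invoice", "description": "create an invoice for a customer"},
    {"name": "send_email", "description": "send an email message"},
    {"name": "resize_image", "description": "resize an image file"},
]
selected = prefilter_tools(tools, "please create an invoice for ACME")
print([t["name"] for t in selected])  # the invoice tool ranks first
```

Pre-filtering also shrinks the schema-overhead problem above, since only the surviving tools' schemas need to enter the context.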
Installation and Configuration of MCP Servers is Complex
[6] Installing MCP servers requires finding servers, copying JSON configuration blobs, and manually hard-coding API keys, creating a byzantine process that serves as a barrier to adoption.
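The configuration blob in question typically looks like the sketch below (the shape follows the convention used by clients such as Claude Desktop; the server package and token are placeholders). Note that the API key sits in plaintext, which also feeds the credential-handling concerns above:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "<paste token here>"
      }
    }
  }
}
```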
API documentation lacks AI-readable semantic descriptions
[6] Most API documentation is written for human developers and lacks semantic descriptions needed for AI agents to understand intent. This documentation-understanding gap makes it difficult for LLMs to correctly interpret and use APIs.
MCP server wrapper maintenance overhead
[6] Every tool exposed via MCP requires writing and maintaining a dedicated MCP server wrapper in Python or TypeScript, plus hosting, updating, securing, monitoring, and scaling it. This per-tool overhead accumulates significantly for teams integrating multiple tools.
Context Bloat from Excessive MCP Search Results
[6] MCP servers can flood conversations with excessive information from searches and operations, quickly exhausting token limits and making conversations unwieldy.
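A common defense is to cap tool results before they enter the conversation. A minimal sketch (the chars-per-token ratio and the budget are rough assumptions; a real implementation would use the model's tokenizer):

```python
def truncate_result(text, max_tokens=500, chars_per_token=4):
    """Cap a tool result before it enters the conversation,
    approximating tokens as characters / 4."""
    budget = max_tokens * chars_per_token
    if len(text) <= budget:
        return text
    omitted = len(text) - budget
    # Tell the model what was cut so it can narrow the query instead
    # of assuming it saw everything.
    return text[:budget] + (
        f"\n[...truncated {omitted} characters; refine the query to see more]"
    )

huge = "row\n" * 50_000           # a search that returned far too much
trimmed = truncate_result(huge)
print(len(huge), "->", len(trimmed), "chars")
```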
Naive API-to-MCP conversion creates security and efficiency problems
[6] Teams are rushing to convert existing REST APIs to MCP servers without considering security implications or efficiency costs. This creates both architectural overhead and an expanded attack surface compared to direct API integration.
Code drift detection difficult for AI agents without reference anchoring
[6] Live application state often diverges from code definitions (code drift). AI agents struggle to detect and mitigate this without anchoring to reference templates and commit diffs, leading to agents making changes based on outdated or inaccurate code state.
Implementation and operational costs of MCP adoption
[6] 40% of enterprise respondents cited implementation or running costs as a barrier to MCP adoption, making cost a significant consideration for organizations evaluating the technology.
MCP ecosystem fragmentation threatens interoperability
[6] If MCP-like variations proliferate instead of universal adoption, the ecosystem will fragment and vendors will cut corners on compliance. Interoperability breaks down, eroding the security assurances that standardization provides.
MCP server performance optimization demands sophisticated engineering
[6] Ensuring low-latency, high-throughput communication between distributed MCP components requires sophisticated engineering skills. Performance optimization is a significant barrier for most teams.
MCP describe_table_schema fails with CamelCase table names
[5] The describe_table_schema function in the Neon Model Context Protocol (MCP) server does not work correctly with tables that use CamelCase naming, limiting schema introspection capabilities.
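A plausible root cause, since Neon is Postgres-based: Postgres folds unquoted identifiers to lowercase, so `MyTable` written without double quotes is looked up as `mytable`. A sketch of defensive quoting (the helper names are hypothetical, not Neon's API):

```python
def quote_ident(name: str) -> str:
    """Quote a Postgres identifier so CamelCase names survive.
    Unquoted identifiers are folded to lowercase by the server."""
    return '"' + name.replace('"', '""') + '"'

def describe_table_sql(table: str) -> str:
    # information_schema stores the name exactly as created, so the
    # comparison must use the original case. (Use bound parameters
    # in real code; the f-string here is only for illustration.)
    return (
        "SELECT column_name, data_type FROM information_schema.columns "
        f"WHERE table_name = '{table}'"
    )

print(quote_ident("UserAccounts"))       # "UserAccounts"
print(describe_table_sql("UserAccounts"))
```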
LLM layer adds architectural complexity and latency
[5] Adding an LLM layer for self-healing and tool selection introduces additional latency and architectural complexity that traditional SDKs avoid. The overhead is significant for performance-sensitive applications.
Tool Discovery Requires 'Secret Incantations' for Invocation
[5] Developers must craft careful prompts to trigger specific MCP tool functionality, treating AI assistants like command-line interfaces rather than intelligent agents. Tool discovery and invocation is unintuitive.
Complex hierarchical structures flatten into uninterpretable text
[5] When nested object structures are converted to text descriptions for AI consumption, hierarchical relationships and data correlations are lost. The flattened structure becomes difficult for AI to reconstruct properly.
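A small demonstration of the failure mode (the data and the flattening scheme are illustrative): once a nested order is flattened to `path: value` lines, nothing ties each `sku` to its `qty`.

```python
order = {
    "customer": {"name": "ACME", "contact": {"email": "ops@acme.test"}},
    "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}],
}

def flatten_to_text(obj, prefix=""):
    """Naive flattening of nested data into 'path: value' lines."""
    lines = []
    if isinstance(obj, dict):
        for k, v in obj.items():
            lines += flatten_to_text(v, f"{prefix}{k}.")
    elif isinstance(obj, list):
        for v in obj:
            lines += flatten_to_text(v, prefix)  # index dropped: correlation lost
    else:
        lines.append(f"{prefix.rstrip('.')}: {obj}")
    return lines

flat = "\n".join(flatten_to_text(order))
print(flat)
# Both skus and both qtys appear, but "items.sku" occurs twice with
# nothing linking A1 to qty 2 except line adjacency.
```

Serializing the structure as JSON instead (e.g. `json.dumps(order)`) keeps those correlations explicit for the model.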
LLM-based self-healing can't handle semantic API changes
[5] Self-healing mechanisms work only for schema changes but fail for semantic API changes. The system may incorrectly 'heal' when the real issue is bad user input, leading to silent failures.
Overly complex Python SDK design with unnecessary abstraction layers
[5] The MCP Python SDK layers on excessive wrappers and accessors that complicate tasks plain JSON would handle directly, producing a confusing developer experience rather than a practical one.
AI coding agents frequently invent images and icons not in designs
[4] When implementing from design mockups, coding assistants often generate images and icons that don't exist in the original Figma designs. Fixing this requires explicit instructions and direct links to specific Figma nodes.