
# MCP in enterprise: real-world applications and challenges

Published 9/15/2025 · Updated 4/6/2026


### Challenge #1: MCP’s authorization is not ‘enterprise-friendly’

Before poking at the vulnerabilities of MCP’s current OAuth-based authorization specification, let’s quickly review why Anthropic introduced OAuth in the first place.

Originally, setting up MCP involved a 1:1 deployment of a client and an MCP server on a developer’s local machine. This worked fine for individual developers but didn’t scale to enterprise needs. Over time, the surge of MCP adoption among smaller projects created a ripple effect in the enterprise: engineering team leaders wanted to set up remote MCP servers, but to access data on those servers in privacy-compliant ways, they needed authorization. Anthropic responded with the first authorization specification, released in March 2025.

*First specification: no separation between authorization and resource servers.* The MCP Authorization spec allowed secure access to servers using OAuth 2.1. Engineers could now run the protocol on a remote server, but they had new concerns. In the specification, MCP servers were treated as both resource and authorization servers, which went against enterprise best practices, increased fragmentation, and forced developers to expose metadata discovery URLs.

*The latest specification: servers are decoupled, but security issues remain.* In June, after months of active discussion about where the first authorization specification fell short, Anthropic released an updated version that decoupled the authorization and resource servers. Developers were still unhappy. For one, the revised specification leans on OAuth RFCs – frameworks for granting third-party applications limited access to HTTP services – that are not widely supported by identity providers.

…

*Aaron Parecki, ‘Enterprise-Ready MCP’*

The problem is that MCP doesn’t integrate smoothly with these enterprise SSO systems.
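The decoupling in the June revision means a remote MCP server can act as a plain OAuth resource server and simply point clients at an external authorization server, rather than issuing tokens itself. Below is a minimal sketch of that discovery pattern, roughly following OAuth 2.0 Protected Resource Metadata (RFC 9728); the URLs are illustrative placeholders, not part of any real deployment:

```python
# Sketch: an MCP resource server that delegates authorization to an
# external server instead of acting as its own authorization server.
# All URLs below are illustrative placeholders.

AUTH_SERVER = "https://sso.example.com"   # external enterprise IdP (assumed)
RESOURCE = "https://mcp.example.com"      # the MCP server itself (assumed)

def protected_resource_metadata() -> dict:
    """Document served at /.well-known/oauth-protected-resource so
    clients can discover which authorization server to talk to."""
    return {
        "resource": RESOURCE,
        "authorization_servers": [AUTH_SERVER],
        "bearer_methods_supported": ["header"],
    }

def unauthorized_response() -> tuple:
    """A 401 whose WWW-Authenticate header points the client at the
    metadata document, kicking off the OAuth flow with the external
    authorization server."""
    return 401, {
        "WWW-Authenticate": (
            'Bearer resource_metadata='
            f'"{RESOURCE}/.well-known/oauth-protected-resource"'
        )
    }
```

The point of the sketch is the separation of concerns: the MCP server never mints tokens; it only advertises who does, which is what enterprise best practice expects of a resource server.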
Parecki argues that MCP-enabled AI agents should be treated like any other enterprise application – controlled through the company’s identity management system.

At the time of writing, connecting an AI agent like Claude to enterprise tools through SSO involves several frustrating steps.

…

1. When the user grants the appropriate OAuth permissions, they can return to Claude and use the AI agent.

This authentication flow is inconvenient for enterprise multi-agent systems that have to connect to a wider range of applications. More importantly, in this approach the **user** is the one granting permissions, with no visibility at the **admin** level. This means there’s no one to oversee access control, and there’s a risk of unchecked interaction between mission-critical systems and unvetted third-party applications.

**How enterprises solve this problem**

Identity solution providers are already developing workarounds to address the limitations of MCP’s authorization.

…

### Challenge #3: MCP’s default ‘server’ approach does not blend well with serverless architectures

Over 95% of Fortune 500 companies are embedded in the Azure ecosystem, which relies on serverless architectures. These infrastructures are poorly suited to MCP implementations, since Anthropic’s protocol is currently deployed as a **Docker-packaged server**. Building and managing MCP servers on top of already stable serverless architectures increases maintenance overhead and adds to infrastructure costs in the long run.

…

- **Cold start delays** of up to 5 seconds made the system too slow for any time-sensitive workflow – imagine waiting 5 seconds every time your AI agent needed to access a tool.
- **Developer experience** issues plagued the setup. As Isenberg put it, the process was “confusing, inconsistent, and far from intuitive.” There wasn’t a clear guide for how to set everything up properly.
- **Infrastructure complexity** meant figuring out all the pieces manually, since there was no standard Infrastructure-as-Code template to follow.
- **Logging problems** arose because FastAPI and FastMCP use different logging systems, and they didn’t play well with AWS Lambda’s standard monitoring tools.
- **Testing difficulties** required manual VS Code configuration, since there were no streamlined tools for testing MCP server interactions in a serverless environment.

…

### Challenge #4: Tool poisoning

In April 2025, Invariant Labs discovered that MCP is vulnerable to tool poisoning, a type of attack in which a prompt containing malicious instructions is fed to the LLM. The instructions are invisible to humans but understandable to the AI agent. A model armed with access to internal tools and data can thus perform malicious actions, like:

…

He encourages engineering teams to follow MCP specifications and make sure there’s a human in the loop between the agent and the tools it uses. AI agents should also be designed with transparency in mind, which means:

- Have a clear UI that clarifies which tools are exposed to AI
- Provide notifications or other indicators whenever an agent invokes a service
- Ask users for confirmation on mission-critical actions, like data manipulation or extraction, to adhere to HITL principles.
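One way to honor the HITL guidance above is to gate mission-critical tool calls behind an explicit confirmation callback and surface every invocation to the user. A minimal, framework-agnostic sketch — the tool names and the `confirm` callback are illustrative assumptions, not part of the MCP SDK:

```python
# Sketch: require human confirmation before an agent runs
# mission-critical tools. `confirm` stands in for a real UI prompt;
# the tool names are illustrative, not from any real deployment.
from typing import Callable

DESTRUCTIVE_TOOLS = {"delete_records", "export_data"}  # assumed names

def invoke_tool(name: str, run: Callable[[], str],
                confirm: Callable[[str], bool]) -> str:
    """Run `run` only if the tool is non-destructive or the human
    explicitly approves it."""
    if name in DESTRUCTIVE_TOOLS and not confirm(name):
        return f"blocked: user declined '{name}'"
    # Surface every invocation so the agent's actions stay visible.
    print(f"[agent] invoking tool: {name}")
    return run()
```

A real implementation would wire `confirm` to a UI notification, so the user both sees which tool the agent is about to invoke and gets the final say on data manipulation or extraction.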

Source: https://xenoss.io/blog/mcp-model-context-protocol-enterprise-use-cases-implementation-challenges
