asjes.dev
The Type Safety Challenge
Excerpt
Six months later, the API provider (hopefully unintentionally) updates their schema. The `name` field is deprecated in favor of separate `first_name` and `last_name` fields. Your code breaks. You shake your fist in anger, update the SDK, fix your code, test everything, and deploy.

But there's another layer to this problem. As Frank Fiegel points out, traditional HTTP APIs suffer from "combinatorial chaos": data scattered across URL paths, headers, query parameters, and request bodies. This makes them particularly hard for AI agents to use reliably.

…

## The SDK provider's burden

The pain isn't just felt by developers. SDK providers face their own set of challenges that make the current system unsustainable.

When a new version is released, particularly a major version, providers essentially have to beg developers to upgrade. This creates a frustrating dynamic: providers want to innovate and improve their APIs, but are held back by the friction of SDK adoption. Without a complex API versioning system, any breaking change can nuke integrations that use older versions of the SDK. This is especially painful in languages with strong type safety, where minor schema changes can cause compilation failures across entire codebases.

Consider the maintenance burden: a popular API provider might need to maintain SDKs for JavaScript, Python, Ruby, PHP, Go, Java, C#, and more. Each language has its own conventions, package managers, and release cycles. When the API changes, that's potentially 8+ SDKs that need updating, testing, and coordinated releases. Auto-generating SDKs from an OpenAPI spec can help with development, but one thing remains out of your control: user adoption.

SDK providers often find themselves supporting legacy versions for years because large enterprise customers can't easily upgrade. This fragments the ecosystem and slows innovation.

The result?
Many API providers either:

- Move extremely slowly to avoid breaking changes
- Implement complex versioning schemes that add overhead
- Accept that a significant portion of their user base will always be on outdated SDKs

…

- **Credential exposure**: LLMs sometimes include sensitive data in their reasoning process. There's a risk that API keys or other credentials could be logged or leaked through the LLM's output.
- **Unvalidated operations**: Unlike traditional SDKs, where operations are explicit, natural language instructions could be misinterpreted in dangerous ways:

…

- **Self-healing gone wrong**: The automatic healing mechanism could potentially "fix" API calls in ways that bypass intended security restrictions or change the operation's scope.

These security concerns would need to be thoroughly addressed through:

- Strict input sanitization and validation (i.e. guardrails)
- Sandboxed execution environments
- Audit logging of all LLM-generated requests
- Rate limiting and anomaly detection
- Clear boundaries around which operations are allowed
- Human review processes for sensitive operations

…

## The cons: honest challenges

- **Security complexity**: Introduces new attack vectors around prompt injection, credential exposure, and unvalidated operations that don't exist with traditional SDKs.
- **Reliability concerns**: Even with structured output, LLMs can misinterpret intent. MCP's pre-built tools are inherently more reliable.
- **Self-healing limitations**: Can handle schema changes but not semantic API changes. May incorrectly "heal" when the real issue is bad user input.
- **Architectural complexity**: Adding an LLM layer introduces latency and complexity that SDKs avoid.
- **Trust and auditing**: When the system makes automatic decisions, developers need comprehensive logging and review capabilities.
- **Not AI-agent optimized**: This is designed for human developers, while the ecosystem is moving toward AI agents that work better with MCP.
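The first two mitigations above (guardrails plus audit logging) can be sketched in a few lines. This is a minimal illustration, not a real library: `ALLOWED`, `authorize`, and `auditLog` are hypothetical names, and a production version would also need sandboxing, rate limiting, and human review as listed.

```typescript
// Hypothetical shape of an LLM-generated API operation.
type Operation = { method: string; path: string };

// Explicit allow-list: the only operations the LLM layer may execute.
const ALLOWED: ReadonlyArray<Operation> = [
  { method: "GET", path: "/users" },
  { method: "POST", path: "/users" },
];

// Every attempted operation is recorded, allowed or not,
// so developers can review what the LLM tried to do.
const auditLog: Operation[] = [];

function authorize(op: Operation): boolean {
  auditLog.push(op);
  return ALLOWED.some(
    (a) => a.method === op.method && a.path === op.path
  );
}
```

The key design choice is that the allow-list is enforced *outside* the LLM: even a successful prompt injection can only request operations the host code already permits.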
Related Pain Points
LLM-based API healing introduces security risks
Self-healing APIs that use LLMs to fix schema mismatches risk credential exposure, unvalidated operations, prompt injection attacks, and unauthorized scope changes. The automatic healing mechanism could bypass security restrictions or misinterpret user intent in dangerous ways.
SDK maintenance burden across multiple languages
API providers must maintain SDKs for 8+ languages (JavaScript, Python, Ruby, PHP, Go, Java, C#, etc.), each with different conventions and release cycles. When APIs change, coordinating updates, testing, and releases across all SDKs becomes unsustainable, forcing providers to move slowly or maintain legacy versions for years.
SDK adoption friction prevents API innovation
API providers struggle to push major version updates because users resist upgrading SDKs. Breaking changes fragment the ecosystem as large enterprises remain on outdated versions indefinitely, forcing providers to either move extremely slowly, implement complex versioning schemes, or accept perpetual legacy support.
Schema changes break downstream code without notice
When API providers deprecate fields (e.g., replacing `name` with `first_name` and `last_name`), dependent code breaks immediately. Developers must update SDKs, fix code, test, and redeploy: a reactive cycle that causes unplanned downtime and rework.
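A hedged TypeScript sketch of this kind of break: the `UserV1`/`UserV2` interfaces below are hypothetical stand-ins for the SDK's generated types before and after the schema change, and `displayName` shows a defensive accessor that tolerates both shapes during a migration window.

```typescript
// Hypothetical response shape before the schema change.
interface UserV1 {
  id: string;
  name: string;
}

// Hypothetical response shape after `name` is split into two fields.
interface UserV2 {
  id: string;
  first_name: string;
  last_name: string;
}

// Accepting the union and narrowing with the `in` operator lets
// existing call sites survive the deploy instead of breaking.
function displayName(user: UserV1 | UserV2): string {
  return "name" in user
    ? user.name
    : `${user.first_name} ${user.last_name}`;
}
```

Code typed only against `UserV1` would fail to compile (or silently render `undefined`) the moment the SDK regenerates its types, which is exactly the reactive cycle described above.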
LLM-generated operations need comprehensive audit logging
When LLMs automatically make API decisions, developers need comprehensive logging and review capabilities for trust and auditing. The lack of transparency into LLM reasoning and generated operations is a critical gap.
API design mismatch with AI agent adoption
89% of developers use generative AI daily, but only 24% design APIs with AI agents in mind. APIs are still optimized for human consumers, causing a widening gap as agent adoption outpaces API modernization.
API documentation lacks AI-readable semantic descriptions
Most API documentation is written for human developers and lacks the semantic descriptions needed for AI agents to understand intent. This documentation-understanding gap makes it difficult for LLMs to correctly interpret and use APIs.
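One way to narrow this gap is to treat `summary` and `description` fields in the API spec as agent-facing documentation that states intent, preconditions, and side effects rather than just parameter types. A hypothetical OpenAPI fragment (operation name and fields are illustrative, not from any real API):

```yaml
paths:
  /users:
    post:
      operationId: createUser
      summary: Create a user account
      # Semantic description written for an agent: what the call does,
      # when it fails, and what it does NOT do.
      description: >
        Creates a new user. Requires a unique email; responds 409 if the
        email is already registered. Does not send a welcome email.
      requestBody:
        required: true
        content:
          application/json:
            schema:
              type: object
              required: [email, first_name, last_name]
              properties:
                email: { type: string, format: email }
                first_name: { type: string }
                last_name: { type: string }
```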
LLM layer adds architectural complexity and latency
Adding an LLM layer for self-healing and tool selection introduces additional latency and architectural complexity that traditional SDKs avoid. The overhead is significant for performance-sensitive applications.
LLM-based self-healing can't handle semantic API changes
Self-healing mechanisms work only for schema changes but fail for semantic API changes. The system may incorrectly "heal" when the real issue is bad user input, leading to silent failures.