www.tspi.at

When Simplicity Dies in a Container: Why Docker May Be Overkill on a Single Server

6/25/2025 (Updated 6/30/2025)

Excerpt

Docker promises a world where services are reproducible and portable, where deployment becomes as easy as pulling an image. But for a single server hosting a few third-party applications, it frequently adds layers of indirection that obscure more than they illuminate. What used to be a straightforward rc.init or systemd service becomes a tangle of …

Routine operations grow baroque. Want to restart a crashed app? You’ll first have to remember which container it was in, which network alias it used, and where its persistent data is actually stored. Even debugging - normally a matter of checking logs and inspecting running processes - becomes a hunt through Docker’s CLI flags, container IDs, and maybe shelling into the container just to see if a config file exists. What Docker saves in “write-once” deployment, it consumes in everyday friction.

…

Despite Docker’s reputation for portability, it actually introduces new limitations when it comes to cross-platform compatibility. It is only natively supported on a limited range of Linux distributions, and even there it often expects a fairly standard userland and a kernel with specific cgroup and namespace features. On other operating systems, Docker doesn’t run natively - it runs inside a virtual machine, which adds overhead and fragility and further reduces transparency. On Windows and macOS, for example, Docker typically relies on a bundled Linux VM running under a hypervisor such as Hyper-V. This adds layers of indirection and disconnects you from the actual OS. On FreeBSD, Docker is virtually unusable, since most container images depend on Linux-specific kernel features and expect a GNU/Linux userland. The illusion of “universal containers” quickly collapses when you try to deploy services on anything but mainstream Linux environments.

…

Moreover, image updates rarely follow sane distribution policies.
Unlike APT- or RPM-based package management, with clear changelogs and signed repositories, Docker images are often updated silently. You discover a breaking change only after redeploying - or worse, after something stops working. And even when nothing breaks immediately, the maintenance burden lingers in another form: you usually cannot update libraries or components inside the container with your system’s own package manager. The entire stack is frozen inside a Docker image, often built on outdated base layers, obsolete dependencies, and even vulnerable software versions. If you are not building the containers yourself as a deliberate deployment tool, but instead treat them as magical shrink-wrapped applications, you’re likely inheriting a mess of stale and mismatched code that you cannot easily inspect, audit, or upgrade. Of course there are solutions to this problem - you can build your images to be rootless, for example - but then we are back in the regime of building your own images.

Systemd, journald, logrotate, user management - none of these integrate naturally with Docker. You end up building wrappers for things the OS already knows how to do. Want an app to start after the network is up? You’ll have to script it yourself. Want unified logging? Now you need to aggregate container stdout and bind-mount log directories, often inconsistently across containers. Backups and restores - once the realm of simple tarballs or database dumps - become brittle when data lives partly inside volumes, partly on the host, and partly inside temporary layers that disappear on container restart. For a small team, this is operational debt with no upside.

When only one person understands the container layout, the rest of the team is left helpless. Docker’s toolchain is its own world, complete with its own terminology, pitfalls, and culture. Many sysadmins with years of experience managing traditional Linux systems find themselves fumbling through container logs, debugging bridge networks, and deciphering mismatched volume paths.

…

Moreover, when relying on third-party Docker images, containerized systems drift out of your control. You trust layers of caching and CI pipelines that you don’t own. You rely on upstream authors to think of your edge cases, to maintain timely updates, and to resolve compatibility issues. You lose touch with the operating system underneath - the very thing that’s supposed to provide stability. When you build your own containers, by contrast, you retain this control and transparency - but few users who pull prebuilt images take the time to verify what exactly they’re inheriting.

…

It’s also a strong sign of deeper architectural problems when applications are only available as Docker images. This often signals a chaotic and undisciplined development environment, where reproducibility and maintainability have been outsourced to a container in order to mask systemic design flaws. Instead of offering proper packages, respecting dependency constraints, and maintaining a clean installation path, developers might rely on uncontrolled, bloated dependency chains - often pulled ad hoc from sources like

Source URL

https://www.tspi.at/2025/06/25/dockercomplex.html

Related Pain Points

Persistent Storage and Stateful Application Limitations

7

Docker's native volume management lacks comprehensive enterprise-grade stateful operations. Data integrity guarantees, backups, encryption at rest, and cross-host replication cannot be reliably accomplished using only Docker volume commands. Organizations must adopt complex external orchestration systems like Kubernetes to meet production stateful workload requirements.

storage, Docker, Kubernetes
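To make the contrast concrete: the host-level backup the article calls “simple tarballs” really is a few commands when state lives in one plain directory. A minimal sketch, assuming a hypothetical app directory (no Docker volumes involved):

```shell
#!/bin/sh
# Host-level backup/restore sketch; paths are hypothetical stand-ins.
set -eu

workdir=$(mktemp -d)                # stand-in for e.g. /var/lib/myapp
mkdir -p "$workdir/app/data"
echo "state v1" > "$workdir/app/data/state.txt"

# Backup: one archive, no helper container, no volume driver.
tar -C "$workdir" -czf "$workdir/app-backup.tar.gz" app

# Restore into a fresh location and verify the state survived.
mkdir "$workdir/restore"
tar -C "$workdir/restore" -xzf "$workdir/app-backup.tar.gz"
cat "$workdir/restore/app/data/state.txt"   # prints: state v1
```

With a named Docker volume, the same data would typically have to be exported through a helper container, or located under Docker’s internal storage directory, before a tarball like this could even be made.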

Security vulnerabilities in base Docker images

7

Outdated packages and CVEs in Docker images are not automatically detected. Requires manual scanning and image updates, with no built-in vulnerability management.

security, Docker
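One mitigation the article alludes to - building images yourself rather than pulling shrink-wrapped ones - keeps the distribution’s package manager and CVE process in the loop. A minimal sketch; the base image tag and the `myapp` package name are placeholders, not a real recipe:

```dockerfile
# Hypothetical self-built image: packages come from the distribution's
# signed repositories, so rebuilding the image picks up security updates.
FROM debian:bookworm-slim

RUN apt-get update \
 && apt-get upgrade -y \
 && apt-get install -y --no-install-recommends myapp \
 && rm -rf /var/lib/apt/lists/*

USER nobody
ENTRYPOINT ["/usr/bin/myapp"]
```

Rebuilding such an image on a schedule replaces silent upstream image updates with your distribution’s changelogs and security advisories.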

Systemd and OS integration incompatibility with Docker

6

Systemd, journald, logrotate, and OS-level user management do not integrate naturally with Docker. Developers must build custom wrappers for functionality the OS already provides (e.g., starting apps after network is up, unified logging, backups). This creates operational overhead with no upside.

compatibility, Docker, systemd
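For comparison, the OS-native version of “start after the network is up” that the article says you must script around Docker is a declarative one-liner in a unit file. A minimal sketch; the service name and paths are hypothetical:

```ini
# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=My third-party app
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp --config /etc/myapp.conf
Restart=on-failure
User=myapp

[Install]
WantedBy=multi-user.target
```

journald then captures stdout/stderr automatically, and `systemctl restart myapp` covers the crash-recovery case with no container lookup.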

Docker Desktop Performance Degradation on Windows and macOS

6

Docker Desktop runs Linux containers inside a virtual machine on Windows and macOS, resulting in slow performance, excessive CPU consumption, and battery drain during heavy builds and container orchestration. Native Linux performance is significantly better, creating cross-platform friction.

performance, Docker

Debugging multi-service container builds is complex

6

Debugging across multiple services in containerized applications is difficult and time-consuming, requiring navigation of Docker CLI flags, container IDs, and manual shell access to inspect configuration.

dx, Docker

Docker toolchain terminology and culture creates learning barriers for sysadmins

4

Docker's toolchain is its own world with its own terminology, pitfalls, and culture. Experienced sysadmins accustomed to traditional Linux systems struggle with container logs, bridge networks, volume paths, and container orchestration concepts.

dx, Docker

Docker-only availability signals architectural and design problems

4

Applications available only as Docker images often indicate deeper architectural problems—chaotic development environments masking systemic design flaws through containerization instead of offering proper packages, respecting dependency constraints, and maintaining clean installation paths.

architecture, Docker