Sources
www.youtube.com
Docker Desktop 4.50: Free Debugging Tools, AI Enhancements, and Kubernetes Integration Explained

This update introduces free debugging tools, deeper IDE integration (VSCode, Cursor), improved Kubernetes conversion, and AI-native enhancements like Model Context Protocol (MCP) support. ... Learn how Docker addresses common pain points like debugging multi-service builds and simplifies local-to-Kubernetes transitions. We'll also compare Docker Desktop to alternatives like Podman Desktop and GitHub Codespaces, highlighting its unique strengths. ... The update focuses on addressing common pain points in the development process, especially when it comes to debugging and AI integration. By offering a free debugging tool, Docker aims to simplify the often complex task of debugging container builds across multiple services. This is a big deal, as it could significantly speed up development cycles and boost productivity.
With **over 13 billion container downloads per month** and a market projected to reach $993 million by 2025, Docker has become as essential as knowing how to code itself. But here’s the shocking truth: **90% of developers are using Docker wrong**. If you’re one of the thousands searching “Docker vs Kubernetes,” struggling with container networking, or wondering why your containers work locally but fail in production, this guide is about to change everything. …

### The “It Works on My Machine” Problem is FINALLY Solved

Remember that famous developer excuse? Docker containers have made it extinct. But most developers still don’t understand WHY. **The Science:** Docker containers bundle your application code with ALL dependencies, libraries, and configurations. This means your AI model, microservice, or web app behaves identically across:

- Your MacBook
- Your team’s Windows machines
- Production servers in AWS
- Edge devices running your IoT applications

…

### Mistake #2: Not Using .dockerignore

```
node_modules
.git
*.log
.DS_Store
```

### Mistake #3: Rebuilding Everything Every Time

Use Docker layer caching and multi-stage builds!

### Mistake #4: Ignoring Security

- Always scan images for vulnerabilities
- Use official base images
- Keep images updated
- Implement least-privilege principles
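The layer-caching and multi-stage advice in Mistake #3 is easiest to see in a Dockerfile. A minimal sketch, assuming a Node.js app with a `package.json` and a `build` script; the file layout and image tags are illustrative assumptions, not from the source:

```
# Stage 1: build with the full toolchain
FROM node:20 AS build
WORKDIR /app
# Copy manifests first: this layer is cached and only rebuilt
# when package.json changes, not on every source edit
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: ship only the runtime artifacts
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
# Run as the unprivileged user the base image provides (Mistake #4)
USER node
CMD ["node", "dist/index.js"]
```

Combined with the `.dockerignore` from Mistake #2, the `COPY . .` step stays small and the cached dependency layer survives unrelated source edits.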
news.ycombinator.com
Docker for Desktop went from being complete garbage to a ...

1. We were all spending 6-8 hours per week trying to bend our host OS to work like Linux. Containers, package managers, compatibility layers, different autocompletes, etc... all do create work. All the adaptations to avoid running Linux also often don't work identically to the real thing (i.e. VM CPU & memory allocation vs. Linux container CPU & memory allocation, installing Postgres on MacOS vs Linux), so there are lots of little learning curves. … KronisLV on June 6, 2022: This is probably the most important point and something that I can wholeheartedly agree with! On Windows, you get the occasional issues with bind mounts or file permissions (e.g. wanting to use a container that has SSH keys in it, but software complaining about 77X access, instead of 700, which you literally cannot change), or even networking. … Running boring plain vanilla Ubuntu LTS with the ("curse you, FreeDesktop.org!") Gnome Shell and a few extensions that probably drive the minimalists on the Gnome Project bonkers (also running a lot of web apps like Gmail and Keep as PWAs). Or at least I hope they do.
Docker promises a world where services are reproducible and portable, where deployment becomes as easy as pulling an image. But for a single server hosting a few third-party applications, it frequently adds layers of indirection that obscure more than they illuminate. What used to be a straightforward rc.init or systemd service becomes a tangle of … Routine operations grow baroque. Want to restart a crashed app? You’ll first have to remember which container it was in, which network alias it used, and where its persistent data is actually stored. Even debugging - normally a matter of checking logs and inspecting running processes - becomes a hunt through Docker’s CLI flags, container IDs, and maybe shelling into the container just to see if a config file exists. What Docker saves in “write-once” deployment, it consumes in everyday friction. … Despite Docker’s reputation for portability, it actually introduces new limitations when it comes to cross-platform compatibility. It is only natively supported on a limited range of Linux distributions, and even there, it often expects a fairly standard userland and a kernel with specific cgroup and namespace features. On other operating systems, Docker doesn’t run natively - it runs inside a virtual machine, which adds overhead, fragility, and further reduces transparency. For example, on Windows and macOS, Docker typically relies on a bundled Linux VM running under, for example, Hyper-V. This adds layers of indirection and disconnects from the actual OS. On FreeBSD, Docker is virtually unusable, since most container images depend on Linux-specific features in the kernel and expect a GNU/Linux userland. The illusion of “universal containers” quickly collapses when you try to deploy services on anything but mainstream Linux environments. … Moreover, image updates rarely follow sane distribution policies. Unlike APT or RPM-based package management with clear changelogs and signed repositories, Docker images are often updated silently. You discover a breaking change only after redeploying, or worse, after something stops working. And even when nothing breaks immediately, the maintenance burden lingers in another form: you usually cannot update libraries or components inside the container using your system’s own package manager. The entire stack is frozen inside a Docker image, often built with outdated base layers, obsolete dependencies, and even vulnerable software versions. If you are not building the containers yourself as a deliberate deployment tool, but instead treat them as magical shrink-wrapped applications, you’re likely inheriting a mess of stale and mismatched code that you cannot easily inspect, audit, or upgrade. Of course there are solutions to this problem - you can build your images to be rootless, for example. But then we are back in the regime of building your own images. Systemd, journald, logrotate, user management - none of these integrate naturally with Docker. You end up building wrappers for things the OS already knows how to do. Want an app to start after the network is up? You’ll have to script it yourself. Want to apply unified logging? Now you need to aggregate container stdout and bind-mount log directories, often inconsistently across containers. Backups and restores - once the realm of simple tarballs or database dumps - become brittle when data lives partly inside volumes, partly on the host, and partly inside temporary layers that disappear on container restart.
For a small team, this is operational debt with no upside. When only one person understands the container layout, the rest of the team is left helpless. Docker’s toolchain is its own world, complete with its own terminology, pitfalls, and culture. Many sysadmins with years of experience managing traditional Linux systems find themselves fumbling through container logs, debugging bridge networks, and deciphering mismatched volume paths. … Moreover, when relying on third-party Docker images, containerized systems drift from your control. You trust layers of caching and CI pipelines that you don’t own. You rely on upstream authors to think of your edge cases, to maintain timely updates, and to resolve compatibility issues. You lose touch with the operating system underneath - the very thing that’s supposed to provide stability. In contrast, when you build your own containers, you can retain this control and transparency, but few users who pull prebuilt images take the time to verify what exactly they’re inheriting. … It’s also a strong sign of deeper architectural problems when applications are only available as Docker images. This often signals a chaotic and undisciplined development environment, where reproducibility and maintainability have been outsourced to a container in order to mask systemic design flaws. Instead of offering proper packages, respecting dependency constraints, and maintaining a clean installation path, developers might rely on uncontrolled, bloated dependency chains - often pulled ad hoc from sources like …
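The "everyday friction" described above maps onto a familiar bit of CLI archaeology. A hedged sketch of that hunt; the container name `legacy-app` is invented for illustration:

```
# Which container was the app in again, and on which network?
docker ps --all --format '{{.Names}}\t{{.Status}}\t{{.Networks}}'

# Check its logs (the name "legacy-app" is hypothetical)
docker logs --tail 100 legacy-app

# Where is its persistent data actually stored?
docker inspect --format '{{json .Mounts}}' legacy-app

# Shell in just to confirm a config file exists
docker exec -it legacy-app sh -c 'ls -l /etc/app.conf'

# Finally, restart it
docker restart legacy-app
```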
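Likewise, "start after the network is up" ends up scripted by hand. A minimal systemd wrapper unit as one possible sketch, assuming a pre-created container named `legacy-app` (the unit and container names are illustrative):

```
# /etc/systemd/system/legacy-app.service
[Unit]
Description=Wrapper so a Docker container behaves like a normal service
Wants=network-online.target
After=network-online.target docker.service
Requires=docker.service

[Service]
# Attach to the existing container instead of recreating it
ExecStart=/usr/bin/docker start -a legacy-app
ExecStop=/usr/bin/docker stop legacy-app
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

This is exactly the kind of glue the author objects to: the OS already knows how to supervise services, and the wrapper exists only to bridge Docker into that machinery.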
## 3. Local Development Pain Points — and a New Alternative

Docker’s “heaviness” is particularly frustrating in **local development**. Spinning up a simple PHP or Node project often means downloading massive images, waiting for builds to finish, configuring ports, and finally hearing your laptop fans scream — all while productivity takes a hit. Some developers go back to manual setups with Homebrew or apt, but quickly fall into the old traps of **version conflicts** and **dependency mismatches**.
www.siriusopensource.com
What are the Problems with Docker

## 1. Architectural Flaws and System-Level Security Exposure

The fundamental design of the Docker Engine, characterized by its centralized daemon and shared kernel, introduces high-severity security and stability risks that are difficult to mitigate without external tooling or architectural shifts. …

### Shared Kernel Isolation Weakness

Docker containers rely on Linux kernel features (namespaces and cgroups) for isolation, which differs fundamentally from the hardware virtualization provided by Virtual Machines (VMs). This architectural constraint means containers **share the host’s kernel**. This weakness creates a **false sense of isolation** among development teams. If a vulnerability exists within the underlying host kernel, all running containers inherit that vulnerability. Therefore, container security is critically dependent on rigorous and timely updating of the host kernel and the Docker Engine itself to mitigate known container escape vulnerabilities.

### Resource Contention and Cascading Host Crashes

By default, Docker containers operate without explicit resource constraints and can consume all memory or CPU the host kernel scheduler allows. While simple, this poses a profound operational risk. …

### Secret Exposure and the Immutability Trap

Exposed secrets (passwords, API keys) are among the most common, high-risk mistakes. This often occurs when credentials are hardcoded into Dockerfiles (e.g., via ENV or ARG) or copied into an image layer. …

### Image Bloat Increases Cost and Attack Surface

Oversized container images, which can easily grow to 1.5 gigabytes, create "operational drag" by slowing down build processes, increasing bandwidth consumption during deployment, and dramatically **enlarging the attack surface** due to unnecessary libraries. Optimization is not the default setting and requires developer discipline. The most effective path to combat bloat is the **multi-stage build** methodology, which separates compilation stages from the clean runtime stage, carrying forward only the essential binaries. Furthermore, modern tooling like BuildKit must be used, as the older Docker Engine builder processes *all* stages of a Dockerfile, even if they are irrelevant to the final target, slowing down complex builds. …

### Docker Desktop Licensing Compliance and OPEX

A major strategic risk is the licensing policy change for Docker Desktop implemented in 2021, which bundles the essential tools (Engine, CLI, Compose). Docker Desktop is **no longer free for commercial use** in larger organizations. Paid subscriptions (Pro, Team, or Business) are mandatory for organizations that exceed **either** of two thresholds:

- Annual Revenue greater than **$10 million**.
- Employee Count greater than **250**.

This structure transforms Docker Desktop into a significant, mandatory operating expense (OPEX) for growing or established companies, introducing financial risk and procurement friction, even if the tool is only used for internal development. Using the product commercially beyond these limits constitutes a violation of the Docker Subscription Service Agreement, compounding governance and legal risk. Organizations must conduct a rigorous, organization-wide audit to ensure compliance. …

### Challenges with Persistent Storage and Stateful Applications

Containerization emphasizes ephemerality: file changes inside a container's writable layer are deleted when the instance is deleted.
While Docker provides volumes for data survival, it lacks the comprehensive management layer necessary for enterprise-grade stateful operations. Ensuring data integrity, guaranteed backups, configuring data encryption at rest, and replicating storage consistently across multiple hosts **cannot be reliably accomplished using only native Docker volume commands**. This volume management paradox means Docker is suitable only for simple, ephemeral workloads as a stand-alone solution. Organizations requiring high availability or data integrity must adopt external, complex orchestration systems, such as Kubernetes (using Persistent Volumes).

### Monitoring, Logging, and Debugging Limitations

Docker provides basic telemetry (e.g., docker stats) for development diagnostics. However, this is fundamentally insufficient for production environments, which require centralized visibility, long-term historical data retention, compliance auditing, and monitoring across hundreds of distributed containers. While Docker collects container logs, its native functionality cannot effectively search, back up, or share these logs for governance and compliance. This creates an **observability debt**, mandating significant investment in separate, third-party centralized logging and robust external monitoring platforms to achieve production readiness.

### Networking and IP Address Management (IPAM) Conflicts

Docker’s default bridge networking relies on Network Address Translation (NAT) to route traffic. This mandated NAT layer introduces **inherent overhead and latency**, making the default unsuitable for low-latency or high-throughput applications. Engineers must transition to more complex network drivers (e.g., macvlan). A frequent friction point is the non-deterministic allocation of IP ranges by Docker’s default IPAM, often allocating /16 networks in the 172.x.x.x range. This frequently **clashes with existing internal enterprise networks or VPN subnets**. Resolving these IPAM conflicts requires centralized administrative effort, often forcing configuration changes outside the standard application definition via the global Docker daemon configuration (e.g., modifying daemon.json).
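Two of the risks above, unbounded resource consumption and secrets baked into image layers, have standard mitigations worth sketching. A hedged example; the image name, memory budget, and secret id are illustrative assumptions, not values from the source:

```
# Cap memory and CPU so one container cannot starve the host
docker run --memory=512m --memory-swap=512m --cpus=1.5 my-service:latest

# Pass a secret at build time with BuildKit instead of ENV/ARG;
# it is mounted only for the matching RUN step and is never
# written into an image layer:
docker build --secret id=api_token,src=./token.txt .

# The Dockerfile side reads it transiently:
#   RUN --mount=type=secret,id=api_token \
#       sh -c 'TOKEN=$(cat /run/secrets/api_token) ./fetch-deps.sh'
```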
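The IPAM clash at the end of this excerpt is typically resolved in exactly the daemon-wide config the author mentions. A minimal `daemon.json` sketch, assuming `10.200.0.0/16` happens to be free on your network (the range is an illustrative choice):

```
{
  "default-address-pools": [
    { "base": "10.200.0.0/16", "size": 24 }
  ]
}
```

After editing `/etc/docker/daemon.json` the daemon must be restarted, and networks created before the change keep their old ranges until they are removed and recreated.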
quantum5.ca
Docker considered harmful - Quantum

This may seem extreme, but fundamentally, this boils down to several things: 1. The Docker daemon’s complete overreach; 2. Docker’s lack of UID isolation by default; 3. Docker’s lack of … … it’s quite likely for the container to be running as the user you are logged in as right now! Isn’t that comforting? You can turn on UID namespaces, but the process is super painful and doing so wipes out the entire Docker state, requiring *all* images and containers to be recreated. It can also only have one UID namespace for all containers running under the same Docker daemon, which isn’t what I’d consider sufficient isolation between containers.
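For reference, the UID-namespace switch the author calls painful is a single daemon setting; a minimal sketch of `/etc/docker/daemon.json`:

```
{
  "userns-remap": "default"
}
```

`"default"` tells the daemon to create and use a `dockremap` user for the remapping. As the post notes, flipping this effectively orphans all existing images and containers (Docker keys its on-disk state to the remapped user), and every container under the daemon shares that one remapping, which is the "one UID namespace per daemon" limitation being criticized.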
www.youtube.com
#1 Top 5 Problems Developers Face Without Docker | Why Docker is Essential for Modern Development

Facing challenges in development without Docker? In this video, we’ll discuss the top 5 problems developers face without Docker, including environment inconsistencies, dependency conflicts, and scaling difficulties. Learn how Docker simplifies development, testing, and deployment, making your workflow seamless.
Firstly, I apologise for the rant. ... Since then, I haven't been able to dedicate much time to solving *any* of the issues I've outlined in that thread, but what I will say is that docker has caused me nothing but pain, and I have realised zero benefits from attempting to utilise it. Right from the start, the syntax for docker, docker-compose, and Dockerfiles is confusing and full of edge cases which no one explains to you in the hype of actually discussing it:

- These 'images' you build grow to insane sizes unless you carefully construct and regiment your `RUN`, `COPY`, and other commands.
- Docker complains to you about leaving empty lines in multi-line RUN commands (which is itself, as I see it, basically a hack to get around something called a "layer limit"), even if it contains a comment (which is not an empty line), and does not provide a friendly explanation on how to solve this issue.
- There's basically no good distinction between bind mounts and volumes, and the syntax is even more confusing: declaring a `volumes` entry in a docker-compose.yml? You have no good idea if you're creating a volume or a bind mount.
- Tutorials & documentation tend to either assume you're a power user who knows this sort of thing, or are so trivial they don't accurately represent a real-world solution, and are therefore basically useless.

I've suffered endless permissions issues trying to run portions of my application, such as being unable to write to log files, or do trivial things like clearing a cache—that I have tried a dozen different ways of fixing with zero success. Then, when I run some things from within the docker container, such as tests, they can take an excruciatingly long time to run—only then did I discover that this is yet another docker issue. The whole point of docker is to abstract away the host OS and containerise things, and it can't even do that. … `docker container exec -it php sh`. Docker-sync, kubernetes, docker-compose, images, containers. It's legitimately too much. I'm not a dev-ops or infrastructure guy. I just want to write code and have my app work. I don't have the money to employ anyone to solve this for me (I'm not even employing myself yet). … One problem is that you are using docker for Mac. Docker is hot trash outside of Linux, because on other platforms it has to run on a virtual machine instead of being a simple container. If you are working on a project with just yourself, I don't really recommend using docker in general. It's just another layer of complexity. Docker is only really useful if you have a team… This has struck me as messed up ever since I started using Linux; other devs use Macbooks and companies seem to force it and mandate it for all developers (business people use Thinkpads or whatever Windows-equipped laptops are around) and yet we end up deploying our software onto Linux servers. All the user-facing stuff is in HTML/CSS, and it would make more sense to run an emulator for Mac or Windows on top of Linux to make sure the frontend stuff looks good in different browsers. … Unless you're on something like NixOS or GuixSD, there is simply no guarantee something that works on your system now will work elsewhere, or even a few days later. Something like Docker is useful. Bare LXC isn't portable and is rather difficult to deal with. Maybe Vagrant is a better alternative? … Lots of good information in this post. However this bit: From personal experience with H2, MSSQL, PCF, and docker, be picky about H2.
H2 is great for prototyping and initial development. However, I've inevitably run into times where syntax differences between H2 and the production MSSQL required writing a different query for each environment. In and of itself this is not a big deal, but over the lifetime of the app it grows and becomes more overhead. So I recommend ditching H2 as soon as you can: get a copy of whatever the prod DB is running locally. … That's how I feel about any software development, to be honest. The difference is how often development gets interrupted and how much yak-shaving needs to be done. Docker is just yet another complicated bit of machinery that slows down dev once in a while for me (in a previous company it slowed down development a lot). … TBF, your critiques are valid coming from someone who uses Docker for Mac exclusively. But most of this seems like you're just not willing to learn the lingo/research solutions. That's not to say that Docker is fantastic, it definitely has stuff to improve on, but a lot of your issues seem like non-issues to me. Docker isn't meant to be a quick and effortless solution to every coder's problem, it's a toolset all on its own.
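On the bind-mount vs. volume confusion quoted a few excerpts above: in Compose the distinction hangs on one syntactic rule, which is easy to miss. A minimal sketch; the service and volume names are invented for illustration:

```
services:
  php:
    image: php:8.2-fpm
    volumes:
      # Left side starts with ./ or / -> a bind mount from the host
      - ./src:/var/www/html
      # Left side is a bare name -> a named volume managed by Docker
      - app-cache:/var/www/cache

# Named volumes must also be declared at the top level
volumes:
  app-cache:
```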
With Ubuntu, Canonical has had notable success in convincing people to switch from other platforms, but potential Ubuntu users are still running into trouble in several areas. Having spent some time on Canonical's forums, I've identified 10 points that seem to be common sticking points for new users -- that is, problems that have the potential to prevent a new user from adopting Ubuntu in the long term. These problems span the entire Ubuntu experience, but they all have two things in common: they are all serious enough to evoke the dreaded "I tried Linux but it didn't work" excuse, and they are all solvable. Ubuntu is still bad at properly detecting and setting up the display. Once it's gone wrong, there isn't much you can do from the GUI setup tool -- it either lies about your screen settings or offers inappropriate screen modes. Anyone for 640x480@52Hz on a 19-inch CRT? This is probably the most frequently reported complaint on the beginner forum. Other operating systems can set up the screen, so why can't Linux? From the user perspective, the solution involves some research and the editing of the xorg.conf config file. This is bad, because if the user makes a single mistake -- presuming the typical user is resourceful enough to make it this far -- it's all too easy to render the whole Ubuntu setup unusable. This problem is so widely acknowledged as a weakness of Ubuntu that I was surprised that Ubuntu 8.04 was still getting it wrong.
blendit.bsd.cafe
What are the problems with Ubuntu? - BlendIT - BSD Cafe

In my personal opinion: 1- Snap packages. Don't like them for their closed-source backend, don't like them for how Canonical has been sneaking them into the system of users who were originally trying to install a deb. 2- Modern Ubuntu simply has no real benefit compared to other distros. Nowadays it's just another Gnome and Debian-based distro; I see no reason to use it over Debian itself, or Fedora, Solus, or any other Ubuntu derivative that simply does better than "vanilla" Ubuntu, such as Pop!_OS or Linux Mint. … It's been more than 15 years since I used Ubuntu but from that point I really could feel that what @merci3@lemmy.world says is true - it no longer offered any real benefit compared to Fedora, Solus, Mint or whatever distro targeted at people getting into Linux. You won't find many people saying that Ubuntu really stands out from its similars about something. … But as previously stated, my personal opinion is that modern Ubuntu adds nothing compared to other desktop distros; its DE is just Gnome with extensions built in. The Snap store is not very well optimized and there was no reason to have it as default over gnome-software, which is more feature-complete. Nowadays, for my use, I only see Ubuntu as Debian with a more modern installer. … One of the real problems is their dual-license policy for their open-source projects, which grants Ubuntu a full license and the power to close an open-source project if they want. This is decidedly against the GPL spirit, but can be done with dual licensing. Another problem is the "not made here" mentality, which undermined Wayland for instance. … JustVik (8 months ago): Snap. :) @AusatKeyboardPremi@lemmy.world (8 months ago): Most of the criticism I have seen online stems from how Canonical (the company behind Ubuntu) plays fast and loose with the FLOSS ethos. The earliest controversy I can recall was the inclusion of the 'Amazon shopping lens' in its Unity desktop environment. There may have been earlier issues, but this one made mainstream headlines in the early 2010s. More recently, the push for Snap (its application bundle format), which relies on proprietary server-side components, has invited criticism. … A more serious problem was Mir. Mir was an alternative to Wayland, because Canonical was not happy with Wayland and they didn't want to implement what Ubuntu tried to do on phones. But that meant the programs and protocols to support were now X11, Wayland, and Mir. And related to it, the focus on a mobile user interface on the desktop (Mir+Unity) was something a lot of desktop fans didn't like at that time.
news.tuxmachines.org
Ubuntu Loses Features and Breaks Itself Because Canonical Hired ...

- ##### The New Stack ☛ Ubuntu 25.10 Scraps X11 for Wayland: A Solid Step Forward

[Ed: Ubuntu dropping many features that only X has is not good news; it also loses compatibility with a lot of software; decisions are made by the blind.] ... Ubuntu’s decision to switch to Rust-based coreutils in 25.10 hasn’t been the smoothest ride, as the latest — albeit now resolved — bug underscores. The distro’s developers are bullish on the security and stability benefits that “oxidising” Ubuntu’s package set with Rust-based tools provides. In 25.10, it plumbed in Rust-based replacements for sudo and coreutils. …

- ##### Bug in Coreutils' Rust Implementation Briefly Downed Ubuntu 25.10's Automatic Upgrade System

> Ubuntu 25.10 was released earlier this month, bringing with it many improvements like a recent kernel with extended hardware support, GNOME 49 with lockscreen media controls, new default core apps, and the removal of the X11 session.
> It also included a major new Rust component called sudo-rs, which replaces the traditional sudo with a Rust-based alternative. But, so far, we have seen a major bug breaking flatpaks on this release, and while it was quickly fixed, another one was recently caught.