Sources
1577 sources collected
{ts:456} kind of awful, because when you start on a codebase right now, in year one, the people who work there define the first years to {ts:464} come, and when they might go away, or other people come, or the mindset changes, then suddenly the codebase looks different. {ts:472} You see this salad you have created: this old thing, this new thing, this thing in between. And this is {ts:479} what I do like in React: you have a clear paradigm, UI equals a function of state, and you have components, you have hooks, {ts:488} you have state management, and you have some libraries like TanStack Query, which is actually a hook or a set of hooks which {ts:494} communicates with the outside world. You have a clear process for what is going on, and as soon as you have this structure
**Version Compatibility:** One of the main challenges in dependency management is ensuring that all the dependencies used in a project are compatible with each other. Python libraries and packages are constantly being updated, which can lead to version conflicts and compatibility issues.

**Dependency Conflicts:** Another common challenge is dealing with dependency conflicts. In some cases, different dependencies may rely on the same package but require different versions. Resolving these conflicts can be time-consuming and tedious.

**Dependency Resolution:** Python developers often struggle with resolving dependencies efficiently. Manually managing dependencies can be error-prone, and automated tools may not always provide accurate results.

**Dependency Updates:** Keeping dependencies up to date is essential for security and performance reasons. However, constantly updating dependencies can introduce new bugs and issues that need to be addressed.

…

### 1. Compatibility Issues

One of the biggest challenges faced by Python developers is compatibility issues. Python has two major versions in use today: Python 2 and Python 3. While Python 2 is still widely used, Python 3 is the future of the language. The transition from Python 2 to Python 3 has been slow, and many libraries and frameworks still do not fully support Python 3. This can create compatibility issues when trying to run code written in Python 2 on a system that only supports Python 3.

…

### 3. Dependency Management

Dependency management is another challenge that Python developers often face. Python has a rich ecosystem of third-party libraries and frameworks, which can make dependency management complex. Keeping track of which versions of libraries are compatible with each other, resolving version conflicts, and ensuring that dependencies are up to date can be a time-consuming and error-prone process.
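The conflict-detection problem described above can at least be probed programmatically; as a minimal sketch, the standard-library `importlib.metadata` module (Python 3.8+) reports what is actually installed. The package names queried here are arbitrary examples:

```python
from importlib import metadata
from typing import Optional

def installed_version(package: str) -> Optional[str]:
    """Return the installed version of a distribution, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

# Example: probe a couple of arbitrary package names.
for name in ("pip", "surely-not-installed"):
    print(f"{name}: {installed_version(name)}")
```

A check like this can be run in CI to verify that the versions a project expects are the versions actually present in the environment.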
To address dependency management challenges, Python developers can use tools like pip, virtualenv, and Anaconda to manage dependencies. ...

…

### 1. Speed

One of the main challenges faced by Python developers is the language's perceived lack of speed. Python is an interpreted language, which means that it is generally slower than compiled languages like C or C++. This can be a significant issue for developers working on performance-critical applications, such as real-time systems or large-scale data processing. To overcome this challenge, Python developers can utilize tools like Cython, which allows them to compile Python code into C extensions. ...

### 2. Memory Management

Another challenge that Python developers face is memory management. Python uses automatic memory management, which means that developers do not have direct control over memory allocation and deallocation. While this can be convenient for developers, it can also lead to issues with memory leaks and inefficient memory usage. To address this challenge, Python developers can use tools like the Python Memory Profiler to identify and optimize memory-intensive parts of their code. By carefully managing memory usage and avoiding unnecessary allocations, developers can improve the performance of their Python applications.

### 3. Scalability

Scalability is another key challenge faced by Python developers. Python is often criticized for its limited scalability, particularly when it comes to multi-threading and parallel processing. This can be a major issue for developers working on high-performance applications that require efficient utilization of multiple cores. To improve scalability in Python applications, developers can leverage libraries like asyncio and multiprocessing, which provide tools for asynchronous programming and parallel execution. By utilizing these libraries effectively, developers can take advantage of multi-core processors and improve the scalability of their Python applications.
… Dude, debugging in Python can be a nightmare sometimes. The dynamic nature of the language can make it tough to trace through code and track down those pesky bugs. Gotta have a strong debugger in your toolbox! A big issue for Python devs is dealing with the Global Interpreter Lock (GIL). This restriction can limit the performance of multi-threaded programs, forcing developers to explore alternative solutions like multiprocessing. … Python developers face a lot of challenges in their day-to-day work. One of the biggest challenges is dealing with the different versions of Python. The compatibility issues between Python 2 and 3 can really be a pain in the neck. Working with large codebases can be a challenge for Python developers. Keeping track of all the different modules and functions can get overwhelming, especially if the codebase is poorly organized. Another challenge for Python developers is dealing with performance issues. Python is not known for its speed, so optimizing code for performance can be a real headache. One challenge that many Python developers face is integrating their code with other languages or external APIs. It's not always easy to get Python to play nicely with other technologies. Debugging can be a real challenge for Python developers. Sometimes the errors can be cryptic and hard to track down, especially in large codebases.
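To make the GIL point concrete: threads still help for I/O-bound work, because blocking calls release the GIL while they wait. A small sketch using `concurrent.futures`, with `time.sleep` standing in for a network call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(task_id: int) -> int:
    """Simulate an I/O-bound call; sleeping releases the GIL."""
    time.sleep(0.2)
    return task_id

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=5) as pool:
    # Five 0.2 s waits overlap, so wall-clock time stays near 0.2 s.
    results = list(pool.map(fake_io, range(5)))
elapsed = time.perf_counter() - start

print(results, f"{elapsed:.2f}s")
```

For CPU-bound work the same pattern gives no speedup under the GIL, which is when the multiprocessing route becomes the usual answer.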
# The Issue with Docker in the Current Landscape

## 1. Changes in Docker Desktop Licensing and Cost

Docker's decision to put Docker Desktop behind a paid subscription for larger organizations was one of the most visible turning points. While individuals and small projects could keep using it for free, companies discovered they had to pay for something once free, and not always any better than the newer alternatives. This move not only frustrated many users but also led developers to examine their reliance on Docker more closely. Open-source proponents and cost-conscious teams began to question whether Docker's value justified the additional expense.

## 2. Performance Issues, Particularly on Windows and macOS

Docker runs well on Linux. Docker Desktop has long been a hassle for macOS and Windows users, though. It emulates Linux containers using virtual machines, and particularly during heavy builds or multi-container orchestration this results in slow performance, excessive CPU consumption, and battery drain. By contrast, newer solutions like Lima (used under the hood by Finch) offer more efficient virtualization tailored for developers, improving performance without the complexity and bloat of Docker Desktop.

## 3. Security Risk: The Root Daemon Problem

Docker's reliance on a daemon running as root is among its most criticized architectural choices. This central service manages containers and requires elevated privileges, enlarging the potential attack surface in production environments. Although Docker has evolved over time with features like user namespaces and rootless mode, security-conscious organizations often prefer alternatives designed from the ground up with security in mind, like Podman, which operates entirely without a daemon and can run as a non-root user.

…

## 5. Vendor Lock-In Fear

Developers are also cautious about investing too deeply in Docker's proprietary tools.
Though widely adopted, even the Dockerfile syntax is not governed by an open standard the way the OCI image and runtime specifications are. Especially when open standards promise more flexibility and long-term stability, developers prefer not to be tied to a single toolchain.
## Future Trends in React Development: Key Challenges Developers Will Face

Emphasizing component reusability and performance optimization is crucial as applications become increasingly complex. Developers should prioritize the use of hooks and the latest features such as Suspense. According to a 2025 developer survey, 65% of respondents indicated that rendering efficiency significantly impacts their choice of libraries.

…

|Challenge|Statistical Insight|
|--|--|
|Complexity in State Management|72% prefer Context API for ease of use.|
|Performance Optimization|SSR can yield 30% faster load times.|
|Testing Difficulties|54% face challenges with testing complexity.|
|Security Vulnerabilities|60% of apps experience breaches.|

…

Performance considerations are vital. A report from 2025 showed that 70% of large-scale applications experienced latency issues tied to state updates, making it crucial to select a solution that optimizes both state management and render times. React Query is another remarkable tool, with a 40% rise in its adoption.
www.podcastworld.io
[Summary] 805: We React to State of React Survey | Syntax - Tasty Web Development Treats

## React API challenges

Despite improvements in React 19, developers still face challenges with specific APIs like forward refs, memoization, and the Context API, leading to potential frustration and consideration of alternative frameworks. React 19 eliminates the need for forward refs and automatically handles memoization to alleviate some pain points. A key takeaway from the State of React 2023 survey is that developers continue to struggle with these APIs; the most commonly cited pain points are forward refs, memoization, and the Context API. Forward refs were a significant issue for developers trying to bridge the gap between vanilla APIs and React components. Memoization, which allows components to skip re-rendering when their props are unchanged, added unnecessary mental overhead and complexity for some developers. Lastly, the Context API presented challenges with managing state and handling component updates. However, there is some good news: React 19 eliminates the need for forward refs, and the new compiler will handle memoization automatically. ... React development can come with its fair share of challenges, particularly when it comes to managing state, optimizing renders, and dealing with new APIs. One common issue is double rendering, which can occur when a component loads data and then re-renders after setting it. This can lead to unnecessary re-renders and confusion. Another issue is Strict Mode in React, which runs components twice during development to catch side effects, resulting in console logs appearing twice and added complexity. useEffect and dependency arrays are also sources of frustration for developers, as they require careful management to ensure components re-render as intended.
State management libraries, such as Zustand, can help simplify these challenges by tracking values instead of renders. New APIs, like React Server Components, also present learning curves and require careful consideration to optimize performance. Overall, React development requires a strong understanding of its unique challenges and best practices to build efficient and effective applications.
blog.isquaredsoftware.com
React and the Community in 2025, Part 1: Development History ...

**the React community has had a growing sense of frustrations and disagreements on where React is headed, how it's developed, and the recommended approaches for using React, as well as the interactions between the React team and the community**. This in turn overlaps with dozens of different arguments that have ricocheted around the React and general web dev communities in the last few years, as well as specific technical concerns with how React works, comparisons with other similar JS frameworks, and how web apps should be built going forward. What makes this a lot worse is that everyone is arguing and debating a different subset of these concerns, and **many of the stated concerns are either false or outright FUD**. Unfortunately, the arguments and fears have also been exacerbated by **numerous self-inflicted missteps, communications problems, and lack of developer relations work from the React team itself**. All of these intertwine heavily. … ... Most of the React apps I've worked on have been internal tools with limited distribution, and mostly "desktop-style apps in a browser" - in other words, true SPAs without even any routing or CRUD-type behavior. I did work on … *get* to choose the exact combination of tools you need for your project... but you also *have* to choose a combination of tools. That leads to decision fatigue, variations in project codebases, and constant changes in what tools are commonly used. Overall, **both the React library and the React team intentionally stayed unopinionated**. They didn't want to play favorites with specific tools in the ecosystem, their time and attention was focused on building React itself, and they still viewed the scope of React as being somewhat narrow. ... *should* use a framework to write React apps - they have routing, data fetching, and build capabilities built in". This also tied into the work to build RSCs.
As part of this, the "Start a New React Project" page specifically warned *against* using React without a framework. ... That said, by 2015 React was most commonly used for client-side SPA app architectures. As with everything, these had tradeoffs. They made it easier to generate the page contents (it's all React components), had faster user interactions (show a different component on click or route change instead of a full page refresh), and enabled richer app experiences. It didn't matter what the backend was (JS, Java, PHP, .NET, Python) - just expose a JSON API and fetch data. However, they also were slower to load the initial page bundle, and client-side routing could lead to uncanny interactions vs native browser behavior. … *felt* particularly native to React. This has led to a general mindset shift in the React ecosystem. There's more of a push for SSR-based architectures to improve page loading experiences and minimize the amount of JS needed for a page, as well as removing the need to use data fetching libraries on the client side. The React team has argued loudly against "waterfalls" in data fetching in order to improve page loading performance, and even client-side routers like React Router and TanStack Router offer ways to prefetch data at the route/page level rather than triggering fetches nested deep in the component tree.
blog.isquaredsoftware.com
The State of React and the Community in 2025 - Mark's Dev Blog

However, I've observed and experienced that **the React community has had a growing sense of frustrations and disagreements on where React is headed, how it's developed, and the recommended approaches for using React, as well as the interactions between the React team and the community**. This in turn overlaps with dozens of different arguments that have ricocheted around the React and general web dev communities in the last few years, as well as specific technical concerns with how React works, comparisons with other similar JS frameworks, and how web apps should be built going forward. … The flexibility and variety of ecosystem options has been both a strength and a weakness. You *get* to choose the exact combination of tools you need for your project... but you also *have* to choose a combination of tools. That leads to decision fatigue, variations in project codebases, and constant changes in what tools are commonly used. … This ties into several other related points of concern: - Next is recommended first in the React docs, and the Next App Router is also mentioned as the main example under "Which features make up the React team's full-stack architecture vision?" - Next is still the only production implementation of RSCs - React team members have been quoted as saying that "This Next release is the real React 18" … ### Concern: React Only Works with Next 🔗︎ I've seen multiple comments online with people saying, either seriously or wonderingly, that "React only works with Next now". This is easily refuted. **Even just looking at the "Start a New React Project" page shows other frameworks that are *not* Next**, as well as the somewhat infamous "Can I use React without a framework?" section. … It's also worth noting that many of the features in React 19 and 19.1 are client-only.
If anything, the community has over-estimated the amount of effort put into server-side functionality, and missed the amount of effort put into client-side features. … Every truly efficient React setup was custom, different, and unachievable with Create React App. > These user experience problems are not specific to Create React App. They are not even specific to React. For example, apps created from the Vite homepage templates for Preact, Vue, Lit, and Svelte suffer from all of the same problems. These problems are inherent to purely client-side apps with no static site generation (SSG) or server-side rendering (SSR). > If you build entire apps with React, being able to use SSG/SSR is important. The lack of support for them in Create React App is glaring. But it's not the only area where Create React App is behind … I've also had conversations with the React team where they directly told me that they have heard many of the external complaints about React apps having bad loading times and overall poor performance. So, the frameworks emphasis is a direct response to that, with the goal of getting more apps to have decent performance by default. Based on that, we can summarize the React team's stance as: … - Frameworks add many additional features and functionality, but that's also added complexity to learn, making them less suitable for beginners that are just trying to get a handle on how to use React at all. 
- The added complexity can also be a trap that leads to confusion, such as accidentally using Context or hooks in Server Components (which throws errors) - Many companies may not be running JS backends, and may even have rules and restrictions against that - Frameworks with server functionality do require specific hosting to run, whereas a pure SPA can be trivially hosted anywhere that serves static HTML and JS (including Github Pages and Amazon S3) - While the *need* to pick and choose your libraries has often been a source of frustration for React users, it does enable customizing projects to meet your specific needs. Opinionated frameworks remove the need to make most of those decisions, but can also limit your ability to customize behavior later. … (SPA routers & loaders are a mess & underserved!) > It's not about React or Vite. It's the ecosystem. It's painful to realize that React won't encourage the traditional "non-framework" as strongly as the "new ways". The "React without a framework" section is tucked away in the docs and depressing. > As a non-Node backend company we see those docs as a sign that we don't align with React's primary direction anymore.
www.iankduncan.com
GitHub Actions Is Slowly Killing Your Engineering Team - Ian DuncanYou click the step that failed. The page hitches. You scroll. There is a pause, a held breath, and then the logs appear, slowly, like a manuscript being revealed one line at a time to a supplicant who has not yet proven worthy. That’s three or four clicks just to see the error, and every one of them loads a new page with its own loading spinner, and none of them are fast. You are navigating a bureaucracy. You are filling out forms at the DMV of CI. And then the log viewer itself. I have used every CI system known to man, and the GitHub Actions log viewer is the only one that has *crashed my browser*. Not once. Repeatedly. Reliably. Open a long build log, try to search for an error, and Chrome will look you in the eye and die. This is the log viewer for the most popular CI system in the world. This is the tool you are expected to use to understand why your build failed. It cannot survive contact with its own output. … Or a different run. Or a page you don’t recognize. The back button in the GitHub Actions UI is a roulette wheel. You will land somewhere. It will not be where you wanted to go. You will click the back button again. You will land somewhere else. Eventually you give up and type the PR URL from memory or go find it in your browser history, which is now 80% GitHub Actions URLs, a fact that will haunt you when you look at it later. … ## ”But the Marketplace!” Ah yes, the GitHub Actions Marketplace. The npm of CI. A bazaar of community-maintained actions of varying quality, most of which are shell scripts with a `Dockerfile` and a dream. Every time you type `uses: some-stranger/cool-action@v2`, you’re handing a stranger access to your repo, your secrets, and your build environment. Yes, you can pin to a SHA. Nobody does. 
And even if you do, you’re still running opaque code you didn’t write and probably haven’t read, in a context where it has access to your … *can* bring your own runners to GitHub Actions. Self-hosted runners exist. You can set up your own machines, install Nix, configure your environment exactly how you want it. And this does solve the compute problem. Your builds will be faster. Your caches will be warm. But you’ll still be writing GitHub Actions YAML. You’ll still be fighting the expression syntax and the permissions model and the marketplace and the log viewer that crashes your browser. You’ve upgraded the engine but you’re still driving the car that catches fire when you turn on the radio. … `workflow_call` trigger was. That person was happy. The `GITHUB_TOKEN` permissions model is a maze. `permissions: write-all` is a hammer, fine-grained permissions are a puzzle, and the interaction between repository settings, workflow settings, and job-level settings will make you want to lie down on the floor. I once spent an entire day on token permissions. I will never get that day back. It’s gone. I could have learned to paint. I could have called my mother. I could have mass-tested a new CI system. Anything.
news.ycombinator.com
I'll think twice before using GitHub Actions again - Hacker News

The reason it gets unbearably messy is because most people google "how to do x in github actions" (e.g. send a slack message) and there is a way, and it's almost always worse than scripting it yourself. SOLAR_FIELDS on Jan 21, 2025 Without tooling like this any sufficiently complex system is guaranteed to evolve into a spaghetti mess, because no sane way exists to maintain such a system at scale without proper tooling, which one would need to hand-roll themselves against a giant, ever-changing, mostly undocumented, black-box proprietary system (GitHub Actions). Someone tried to do this; the project is called "act". The results are described by the author in the article as "subpar". … It is somewhat heavy on configuration, but it just moves the complexity from CI configuration to NX configuration (which is nicer IMO). Our CI pipelines are super fast if you don't hit one of our slow-compiling parts of the codebase. … I do have to say that our NX configuration is quite long, though, but I feel that once you start using NX it is just too tempting to split your project up into individual cacheable steps even if said steps are very fast to run and produce no artifacts. Although you don't have to. For example, we have separate steps for linting, TypeScript type-checking, code formatting, and unit testing for each unique project in our mono-repo. In practice they could all be the same step because they all get invalidated at the same time (basically on any file change).
… hinkley on Jan 20, 2025 The only functionality a CI tool should be providing is: - starting and running an environment to build shit in - accurately tracking success or failure - accurate association of builds with artifacts - telemetry (either their own or integration) and audit trails - correlation with project planning software - scheduled builds - build chaining … chubot on Jan 21, 2025 Especially for Github Actions, which is stateless. If you want to reuse computation within their VMs (i.e. not do a fresh build / test / whatever), you can't rely on Just or Make A problem with Make is that it literally shells out, and the syntax collides. For example, the PID in Make is $$$$, because it's $$ in shell, and then you have to escape $ as $$ with Make. … and it feels like fighting against the flow when you're trying to make it reusable across many repos akdev1l on Jan 20, 2025 necovek on Jan 21, 2025 I've rarely seen a feedback loop with containers that's not longer than 10s only due to containerization itself, and that breaks the "golden" 10s rule (see https://www.nngroup.com/articles/response-times-3-important-...). … Why would it be slow? It needs to be rebuilt? (on a fast moving project with mid-sized or large team, you'll get dependency or Dockerfile changes frequently) It needs to restart a bunch of dependent services? Container itself is slow to initialize? Caching of Docker layers is tricky, silly (you re-arrange a single command line and poof, it's invalidated, including all the layers after) and hard to make the most of. 
… Stateful virtualenvs with no way to check if they're clean (or undo mistakes), no locking of version resolution (much less deterministic resolution), only one-way pip freeze that only works for leaf projects (and poorly even then), no consistency/standards about how the project management works or even basic things like the directory layout, no structured unit tests, no way to manage any of this stuff because all the python tooling is written in python so it needs a python environment to run so even if you try to isolate pieces you always have bootstrap problems... and most frustrating of all, a community that's ok with all this and tries to gaslight you that the problems aren't actually problems. … (I could rant for ages about Azure DevOps and how broken and unloved it is from Microsoft's side. ... It seems to me that a big part of the problem here (which I have also seen/experienced) is that there's no one specific thing that something like GitHub Actions is uniquely suited for. Instead, people want "a bunch of stuff to happen" when somebody pushes a commit, and they imagine that the best way to trigger all of that is to have an incredibly complex - and also bespoke - system on the other end that does all of it.
news.ycombinator.com
The Pain That Is GitHub Actions

All of that is a lot more than what a local dev would want, deploying to their own private test instance, probably with a bunch of API keys that are read-only or able to write only to other areas meant for validation. ... To me, personally, the GitHub Actions CVE from August 2024 was the final nail in the coffin. I blogged about it in more technical detail [1] and guess what was the reason the tj-actions were compromised last week? Yep, you guessed right: the same attack surface that GitHub refuses to fix, a year later. … On one side, you got 50 plugins with CVEs but you can't update them because you need to find a slot that works for all development teams to have a week or two to fix their pipelines again, and on the other side you got a Jenkins instance for each project, which lessens the coordination effort but you gotta worry about dozens of Jenkins instances. Oh, and that doesn't include the fact many old pipelines aren't written in Groovy or, in fact, in any code at all but only in Jenkins's UI... … > What does the resulting YAML look like? ... Agreed. GitHub Actions, or any remote CI runner for that matter, makes the problem even worse. The whole cycle of having to push CI code, wait 10 minutes while praying for it to work, still getting an error, trying to figure out the mistake, fixing one subtle syntax error, then pushing the code again in the hope that that works is just a terrible workflow. Massive waste of time. … I don't understand what problem you could possibly be experiencing. What exactly do you find hard about running commands in, say, GitLab CICD? Iterating a GitHub Actions workflow is a gigantic pain in the ass. Capturing all of the important logic in a script/makefile/whatever means I can iterate it locally way faster, and then all I need GitHub to do is provision an environment and call my scripts in the order I require.
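One way to follow the "capture the logic in a script" advice from the comment above is a plain task-runner script that CI merely invokes. This is a hypothetical sketch (the step names and commands are illustrative, not from any real pipeline):

```python
import subprocess
import sys

# Steps the CI (or a developer) can invoke with `python ci.py <step>`.
# The commands here are illustrative placeholders.
STEPS = {
    "lint": [sys.executable, "-c", "print('lint ok')"],
    "test": [sys.executable, "-c", "print('tests ok')"],
}

def run_step(name: str) -> int:
    """Run one named step and return its exit code (2 for unknown steps)."""
    if name not in STEPS:
        print(f"unknown step: {name}", file=sys.stderr)
        return 2
    return subprocess.run(STEPS[name]).returncode

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(run_step(sys.argv[1]))
```

The workflow YAML then shrinks to a single `run: python ci.py test` line, and the identical command can be iterated locally without pushing a commit.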
www.analyticsinsight.net
What are the Common Pain Points of Python Programming?

Beginners sometimes run across minor obstacles when using Python, which gives them the impression that the language is difficult. The problems that new users typically run into are: it's challenging to decide between pip and pipenv, given the abundance of package managers; and understanding the benefits of using virtualenv. You could find it completely absurd to have stringent indentation restrictions in a language's grammar if you're coming from C/C++ or Java, but with time, you'll come to value them.
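On the virtualenv point above: the standard-library `venv` module is enough to create an isolated environment, no third-party tool required. A minimal sketch:

```python
import tempfile
import venv
from pathlib import Path

# Create an isolated environment in a throwaway directory.
target = Path(tempfile.mkdtemp()) / "demo-env"
venv.create(target, with_pip=False)  # with_pip=False keeps creation fast

# Every venv carries a pyvenv.cfg marker file.
print((target / "pyvenv.cfg").exists())
```

In day-to-day use the same thing is typically done from the shell with `python -m venv .venv`, followed by activating the environment before installing packages.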
As my experience grew, I understood that this was a common pain point among Python developers. No matter whom I spoke with – colleagues, strangers at a conference, or developers on web forums and mailing lists – I saw similar struggles. ... Here's how to identify and fix five common issues in your Python development setup. I experienced them all myself, and in some cases helped others through them as a colleague and team lead. If you can avoid these issues, you'll become a happier and more effective Python developer.

### 1. Don't waste time doing the compiler's job

When developer brains do what computer brains can do much better, that's usually a costly mistake. One example is programmers spending time hunting bugs that could be spotted just as well by automated tools. For some reason, maybe because of Python's dynamic nature and its earlier status as a "scripting" language, it's still rare to see it used with static code analysis tools and linters. … Yet keeping that focus costs mental energy that we might then lack in other areas of our work: we get tired a little quicker in the afternoon, or introduce a tiny extra bug with our latest commit. In my experience, even small forced pauses and delays add up. Switching files in a slow editor or jumping between apps on a slow computer is frustrating. We can even apply this at a microscopic level and look at editor typing latencies. I believe these micro-delays add up, too. They cost us productivity and cause frustration. …

### 4. Don't work with an unpleasant editing environment

Working with tools that I don't enjoy crushes my productivity. You might know the feeling: some tools are so frustrating to work with that they sap your energy and motivation. What's the most important tool that you work with every day as a developer? For me, it's my code editor. For some developers it might be their email client or a team chat app—but let's hope that a large part of your day is spent writing code.
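On "not doing the compiler's job" by hand: even without a full linter, the standard-library `ast` module catches syntax errors before any code runs. A tiny illustration (the checked snippets are made up):

```python
import ast

def check_syntax(source: str) -> str:
    """Parse source without executing it; report 'ok' or the error."""
    try:
        ast.parse(source)
        return "ok"
    except SyntaxError as exc:
        return f"syntax error: {exc.msg}"

print(check_syntax("x = 1 + 2"))               # ok
print(check_syntax("def broken(:\n    pass"))  # reports a syntax error
```

Dedicated tools such as flake8, pylint, or mypy go much further, catching undefined names, type mismatches, and style issues the same way: mechanically, before a human ever has to hunt for them.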