news.ycombinator.com
Nginx: try_files Is Evil Too (2024)
Excerpt
kilburn 82 days ago

- A more complicated nginx configuration. This is no light matter. You can see in the comments that even the author got bugs in their first try. For instance, introducing an HSTS header now means you have to remember to do it in all those locations.
- Running a few regexes per request. This is probably still significantly cheaper than the stat calls, but I can't tell by how much (and the author hasn't checked either).
- Returning the default 404 page instead of the CMS's for any URL in the defined "static prefixes". This is actually the biggest change, both in user-visible behavior and in performance (particularly if a crazy crawler starts checking non-existing URLs in bulk or similar). The article doesn't even mention this.

The performance gains for regular accesses are purely speculative because the author didn't make any effort to try and quantify them.

...

Plus, I've found that it's nice to have api.myapp.com and myapp.com as separate bits of config, so that the ambiguity doesn't exist for anything that's reverse proxied, and to have as much of the static assets (for example, for a SPA) separate from all of that. Of course it becomes a bit more tricky for server-side rendering or the likes of Ruby on Rails, Laravel, Django etc. that try to have everything in a single deployment.

…

- Resources that are dynamically generated are served by API endpoints, therefore known locations with predictable parameters.
- Everything else must be static files.

And definitely no dynamic script as the fallback rule; it's too wasteful in an era of crawlers that ignore robots.txt and automated vulnerability scanners. A backend must be resilient.

...

Yeah, there's a slight Go tax in latency, but almost every comparison online benchmarks a fairly optimized, often cache-configured nginx or Apache config against the most basic Caddy config possible.
Even worse, most are just testing HTTP/1 speeds using near zero-size files. Who cares how many theoretical connections it supports; let's talk about how many users it supports on real-world content without grinding to a halt. With a few more lines of config, a more production-intended Caddy config trades punches evenly. At least in my real-world testing I found little meaningful improvement using nginx; worse, it would grind to a halt under loads during which Caddy, while bogged down, would at least still be responsive.

DoctorOW 80 days ago

...

> - Running a few regexes per request.
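The "production intended" Caddy config the commenter alludes to might look something like the sketch below. This is a hypothetical example, not the commenter's actual config: the domain, paths, and asset prefixes are assumptions, and it only illustrates the kind of tuning (compression, long-lived asset caching, pre-compressed file serving) that the basic single-line Caddyfile in most benchmarks omits.

```caddyfile
# Hypothetical production-leaning Caddyfile (domain and paths are placeholders).
myapp.com {
	root * /srv/myapp/public

	# Compress responses on the fly, as a tuned nginx config typically would.
	encode zstd gzip

	# Long-lived caching for fingerprinted static assets.
	@static path /assets/* /static/*
	header @static Cache-Control "public, max-age=31536000, immutable"

	# Serve pre-compressed .zst/.gz variants from disk when they exist.
	file_server {
		precompressed zstd gzip
	}
}
```

With roughly this much config on both sides, the comparison becomes about real-world content rather than theoretical connection counts.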
Related Pain Points
Complex nginx configuration increases maintenance burden and bug risk
More complicated nginx configurations require remembering to apply changes consistently across multiple locations (e.g., HSTS headers), and even experienced authors introduce bugs in their first attempts. Configuration changes propagate across the entire setup, creating a large surface area for errors.
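One common way to contain this duplication is a shared snippet file included in every location. The sketch below is an assumption (the snippet path, server name, and upstream are placeholders, not from the thread); the key point is nginx's inheritance rule: `add_header` directives are inherited from the enclosing level only if a block declares no `add_header` of its own, so the snippet must be included (or the header repeated) in every location that sets any header at all.

```nginx
# snippets/security-headers.conf (hypothetical path), defined once:
#   add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

server {
    listen 443 ssl;
    server_name example.com;  # placeholder

    location ^~ /static/ {
        # Any add_header here would suppress inherited headers,
        # so pull in the shared set explicitly.
        include snippets/security-headers.conf;
        try_files $uri =404;
    }

    location / {
        include snippets/security-headers.conf;
        proxy_pass http://127.0.0.1:8000;  # placeholder upstream
    }
}
```

The snippet reduces the surface area for the "forgot HSTS in one location" class of bug, though each new location still has to remember the `include`.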
Nginx configuration changes have speculative performance gains without quantification
Performance optimizations in nginx configurations (like static asset prefix handling) are often implemented without actual benchmarking or measurement. The actual performance impact is unknown, making it difficult to justify configuration complexity or understand whether changes provide meaningful benefits.
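The commenter's open question ("probably still significantly cheaper than the stat calls, but I can't tell by how much") is at least roughly measurable. The micro-benchmark below is a hedged sketch: it uses Python's `re` and `os.stat` as stand-ins, with a made-up prefix regex, so it approximates the relative cost of a regex match versus a filesystem stat syscall rather than nginx's actual internals.

```python
import os
import re
import tempfile
import timeit

# Hypothetical "static prefixes" regex, similar in spirit to what such an
# nginx config might run per request (an assumption, not the article's regex).
static_re = re.compile(r"^/(?:assets|static|media|css|js)/")

uri = "/assets/css/site.css"

# A real file to stat, standing in for nginx's try_files disk probe.
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()

N = 100_000
regex_t = timeit.timeit(lambda: static_re.match(uri), number=N)
stat_t = timeit.timeit(lambda: os.stat(tmp.name), number=N)

print(f"regex match: {regex_t:.4f}s per {N} calls")
print(f"os.stat:     {stat_t:.4f}s per {N} calls")

os.unlink(tmp.name)
```

Numbers will vary by machine and filesystem cache state, but even a crude run like this would have let the article state the trade-off quantitatively instead of speculatively.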