Hacker News
Things engineers believe about Web development (birtles.blog)
155 points by jnord 4 months ago | 249 comments



The refrain against "we should go back to MPAs with server-rendered HTML" is often "well, what about Figma and Photoshop?", which, of course, yes, those don't really work in the MPA, server-rendered HTML model.

The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites because it sounds sexier and cooler to work on something complex than something simple. The phrase becomes "well, what about Figma and Photoshop (and my mostly CRUD SaaS)?"

I think a valuable insight that the MPA / minimal-JS crowd brings to the table is that you shouldn't strive for cool and complicated tools, you should strive for the simplest tool possible, and, even further, you should strive to make solutions that require the simplest tools possible whenever you can.


This is motte-and-bailey argumentation in my opinion.

The motte: SPAs are a good way to write highly complex applications in the browser, like Photoshop and Figma, to compete with desktop apps.

The bailey: SPAs are a good way to write most web applications.

If you attack the bailey, proponents retreat to the motte, which is hard to disagree with. With the motte successfully defended, proponents return to the bailey, which is beneficial for those enthusiastic about SPAs but much harder to defend.

The only way to tease this issue apart is to stick to specifics and avoid casting SPAs or MPAs as universally good or bad. Show me the use-case and we can decide which route is best.


Or even avoid discussing SPAs or MPAs entirely.

At the end of the day, we're talking about whether a specific interaction (or a set of interactions) can be handled over the network or not.

If you need the interaction to fully resolve (as in the state is updated and the success or failure of the interaction is visible to the user) within 800ms or so, then it shouldn't be performed over the network.

For interactive editors like Figma, you often have interactions based on key repeats, which usually fire at 50-200ms intervals. So client-side rendering is really the only feasible option.


> If you need the interaction to fully resolve (as in the state is updated and the success or failure of the interaction is visible to the user) within 800ms or so, then it shouldn't be performed over the network.

Most real-world SPA sites perform a lot more roundtrips over the network than the MPA equivalent, not fewer. And every roundtrip adds yet another 800ms to your update latency, plus the risk that some random network failure will break the SPA state update and force you to reload it from scratch.


Those who do not remember Lotus Notes are doomed to reinvent it.


Strangely enough, this dichotomy seems to exist only for the web platform.

Everywhere else (desktop, mobile etc) the model is SPAs.

The only reason people distinguish it for the web is because of legacy: HTML + DOM, i.e. documents.

Documents don't generally require programmers, even if you're using LaTeX.

Both models can coexist. I believe that SPAs somewhat supersede MPAs and that an MPA can sometimes be a simplification for a specific kind of app, a website being simply an app that has been broken apart and is sent piece by piece.


Oh my, may I remind folks about SDI vs MDI https://en.m.wikipedia.org/wiki/Multiple-document_interface

Or the window-is-application/process (Windows, Linux DEs) vs window-is-document-and-application-is-independent (MacOS) models

Or spatial navigation vs file browser.


> Strangely enough, this dichotomy seems to exist only for the web platform.

> Everywhere else (desktop, mobile etc) the model is SPAs.

It exists for CLIs too, where some projects provide a collection of single-purpose programs (e.g. imagemagick) and others provide a single program which can do many things (e.g. git)


git is a facade and "git add" actually calls "git-add". On Windows this means separate exes, git-add.exe, git-commit.exe, git-update.exe, etc. But all these exes are actually identical. So git is multiple copies of a single program which can do many things!


No not really.

Yes there are two approaches, but you can switch between the two with 98% of the code intact.


and you have identified the problem. Web dev tooling, despite the extreme churn, is still absolutely poo-poo, to use the technical term.

Of course, mostly because very few of the oh-new-shiny-woah-such-modern tools even attempt to solve the problem of backend-frontend state sync. Sure, maybe you get a piece of the puzzle (e.g. a library that conveniently rerenders on state change, React; or one that you can wire up with all the fancy observables, Angular; or one that's super simple, sleek, and even has magical runes that help with rerendering only the things affected by the state change, Svelte ... and maybe on top of these you get a state manager library, and then you still end up writing a thousand mutators/reducers in Redux by hand).

So we are still nowhere near a nice end-to-end full-stack tool that helps model both backend and frontend changes and then helps to design and implement an efficient API between them. (Because, obviously, it seems that this is not obvious to most people. Hence we get solutions like exposing your DB as a REST API, shipping your DB via WASM SQLite, and so on.) That said, even those might be better than one more frontend-only state management lib.


I think you're discounting (1) the fact that there is only one front-end runtime but any number of backend runtimes, and (2) that the application is distributed across a network.

Those are the reasons why SPAs and MPAs are so different.


> Everywhere else (desktop, mobile etc) the model is SPAs.

> The only reason people distinguish it for web is because of legacy: html + DOM, i.e. Documents.

There’s another key difference: the other model has preinstalled software and most functionality is on trusted clients. If I email you a Word document, you don’t have to download Word from the Office servers to open it, which creates very different trade-offs. There’s also a big difference in trust: I don’t need to ask whether I want each web page I open to have access to the data on my computer.

The web’s big selling point was the immediacy of being able to open anything quickly and not needing to trust the remote server to run native code on your system (even in the 90s we knew that was a bad idea), so it’s not surprising that there’s so much gravity to its core model. The addition of more app-like abilities is really useful, but it’s led to a certain amount of app-envy where people often pick the cool technical challenge without asking whether they’re working on an app which needs it.


To me, the core of this entire issue is complexity and where it belongs.

My view is that you should perform as much processing as possible in the backend. This allows you to send as little data as possible over the network and allows your application code to make more sense because all your domain logic is centered in one self-contained app.

Now this self-contained app is just a software library; it doesn't do anything on its own. But you can throw a CLI on top of it and use it through the terminal. You can put a web API on top of it and use it from an SPA. You can use it as a backend for a server-side rendered app.
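Sketched in code, that layering might look something like this (a minimal sketch in Node; Express and all the names here are my own assumptions for illustration, not anything prescribed above):

    // orders.js -- all domain logic lives here, UI-agnostic
    export function summarizeOrders(orders) {
      const total = orders.reduce((sum, order) => sum + order.amount, 0);
      return { count: orders.length, total };
    }

    // cli.js -- thin terminal adapter over the same library
    import { summarizeOrders } from "./orders.js";
    const orders = JSON.parse(process.argv[2] ?? "[]");
    console.log(summarizeOrders(orders));

    // server.js -- thin HTTP adapter; an SPA or an SSR app calls this
    import express from "express";
    import { summarizeOrders } from "./orders.js";
    const app = express();
    app.use(express.json());
    app.post("/api/orders/summary", (req, res) => {
      res.json(summarizeOrders(req.body)); // same logic, different transport
    });
    app.listen(3000);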

The ideal, in my eyes, is that the frontend only concerns itself with actually displaying things. It doesn't get a big list of data, filter and transform the data, then display it. It asks the backend for exactly what it needs, the backend provides exactly that, it displays it, and that's the end of it.

Now you can have a website and a mobile app that are both trivial to develop and both use the same backend - if you fix a bug in the backend you've fixed it on both frontends.

I realize this may not always be possible or practical, but I think it is both of those things more often than not.


> beneficial for those enthusiastic about SPAs

It's almost as if some developers choose technologies based on "what'll look good on my CV"


Oh, here we go.

When we start to even think about debate fallacies when comparing engineering methodologies, it's completely clear we already lost the game. In fact, we also lost the meta game, and probably some 2 or 3 outer meta layers of it.

So yes, we should design software for the specifics of the function it will provide. Do not let people evade their competency by talking in generalities.


A fun metaphor. A SPA-inclined team/consultancy/department will retreat to their motte when necessary. They'll live to fight another day. Given a chance, they'll return to the bailey, advocating for SPAs under a relaxed standard.

Using this metaphor can imply significant disingenuity: a lack of honesty about one's true belief.


> Using this metaphor can imply significant disingenuity: a lack of honesty about one's true belief.

I actually disagree. I think this is the natural state of people, and they come by it honestly. We make decisions emotionally and then justify them rationally. It's just the way we are.

You could maybe say it's a lack of honesty about one's true belief to themselves. But even then it's hard to fault somebody for lack of awareness for something that is very subtle.

Honestly I think pointing out this human tendency and calling it out with examples like this is the best way to combat it. Once people become aware of it, they are more likely to fight it internally.


Two things. First, yes, people can deceive themselves. To the extent this is true, I take your point; it isn’t a matter of honesty in the usual sense; it is perhaps better stated in terms of self-inconsistency, i.e. having internal contradictions in one’s beliefs.

Second, people can and do lie about this kind of thing. I’m talking about conscious deception. Motives vary; they range from “I’ll pick my battles” to “this is good for my income stream” to “these other people don’t know it yet, but I’m right, and they’ll thank me later” and others.


> … I think pointing out this human tendency and calling it out with examples like this is the best way to combat it. Once people become aware of it, they are more likely to fight it internally.

Sometimes that works. Sometimes it just causes people to dig in deeper.


That’s not why. In my experience, applications accumulate interactivity over time. At some point, they hit a threshold where you (as a developer— not an end user) wish you had gone with an interactive development model.

Also, for me, the statically typed, component-based approach to UI development that I get with Preact is my favorite way to build UIs. I’ve used Rails, PHP, ASP (the og), ASP.NET, ASP.NET MVC, along with old-school native windows development in VB6, C# Winforms, whatever garbage Microsoft came up with after Winforms (I forget what it was called), and probably other stacks I’m forgetting. VB6 and C# Winforms were the peak of my productivity. But for the web, the UI model of Preact is my favorite.


Styling with WPF (the thing after Winforms) was so confusing, at least for someone coming from CSS.


WPF. That was it. Yeah. It was terrible.


I don't see why you cannot add interactivity later on. Frameworks like VueJS provide an easy way to deliver interactive widgets on a subset of rendered pages of a traditional website. If you need an API on the backend, you need to write that API one way or another anyway.

This way people who are just looking for some information on a website can visit, get it, and leave, without having to enable intrusive JS blobs, while those in it for the interactive things on the website can get their preferred experience as well. Instead many websites are developed with only the second group in mind, often intentionally forcing you to run their code on your computer, or not delivering useful information at all.
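For example, delivering one interactive widget on an otherwise server-rendered page with Vue 3 can be this small (a sketch; the element id and markup are made up for illustration):

    // widget.js -- progressively enhance one element of a server-rendered page;
    // assumes the HTML already contains <div id="vote-widget"></div>
    // the esm-browser build ships the template compiler, so inline templates work
    import { createApp } from "https://unpkg.com/vue@3/dist/vue.esm-browser.js";

    createApp({
      data: () => ({ count: 0 }),
      template: `<button @click="count++">Upvotes: {{ count }}</button>`,
    }).mount("#vote-widget"); // everything outside this element stays plain HTML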


Agree, too many believe in the silver bullet that solves all their problems. Different problems require different solutions; it's kind of simple, but hard to realize when you're deep in the woods.

If you want to build a vector editor in the browser then yes, probably you want to leverage webassembly, canvas, webgl or whatever you fancy.

But if you're building a more basic application (like most CRUD SaaS actually are) then you probably don't want to over-complicate it, as you instead want to be able to change and iterate quickly, so the simplest tools and solutions give you the most velocity for changes, until you've figured out the best way forward.

The trouble is recognizing where on the scale of "Figma <> news.ycombinator.com" you should be, and it's hard to identify exactly where the line gets drawn where you can justify upfront technical innovation over tried-and-tested approaches.


I think there's a similar dynamic as behind "nobody got fired for choosing Oracle" - it's safer to choose a more complex, but also more flexible technology.

If you're a tech lead, your worst nightmare is when you have to say - "this is very difficult to do with the current stack. When we were choosing this stack, we assumed you'll never want these things". You're not going to extract a binding promise from product/business that they will never want a certain class of things - you can only guess, and then hope, that the product will remain a dumb CRUD.


This exactly. And with modern approaches (though not without a fair amount of effort), you can achieve an MPA style with SPA features via "isomorphic" JS (SSR).


The converse is also true, you can add a lot more interactivity than you used to in a server rendered HTML world with stuff like LiveView in Phoenix or Hotwire in the Rails world.

I think a good heuristic is looking at whether the UX of your app feels more multi-page or single page, that should be a pretty big factor in your decision.


Particularly with the web though, you're very rarely completely locked into one front end technology. It's 100% reasonable to say "this particular complex interaction should use React" without needing to port the entire application. I'm sure even Photoshop and Figma could build their account management or settings pages with MPA if they wanted to - I don't use them and have no idea if they do, but "some parts of my application require complex tools" doesn't mean "all of my app requires complex tools"


Absolutely, but it's a matter of tradeoffs and where you place them. It's still plenty common to use an MPA framework like Rails or Django for the management or CMS portion of your solution, while using an SPA framework for your front presentation. It's much more acceptable to say "doing that for this staff workflow is hard to do with our stack" than "doing this for our customers is hard to do with our stack".


Sure! I think it's just a question of scale and knowing your problem. If it's "the entire customer-facing area needs a large amount of interactivity" then going with a SPA makes sense. If it's "this particular UI element on the customer app needs a large amount of interactivity", that's easy to build as an isolated component regardless of the technology of the rest of the app.

One thing that I'll always view as a smell though is "we don't need X now, but we might need X in the future, so we should adopt this more complex technology just in case". We've learned that lesson MANY times as developers, it's not any less true for front-end technologies.


From a brief look at my logs and history, and generally my estimate, 95%, or dare I say 99%, of my traffic could be MPAs. Currently the only sites I visit regularly that are JS SPAs are Feedly, YouTube, Discourse forums and Twitter. And apart from Twitter the others could have been MPAs and still be perfectly fine (although YouTube is debatable). I'd like to think 80-90% of the web population's browsing usage doesn't deviate from mine that much.

The thing about JS SPAs is that they are hard to get 100% right. Even the simplest thing. And this goes back to the topic of web development and computing. The modern web is designed by Google for Google. Making things easier for 98% of the web simply isn't their thing. And that is not just on the web but in everything else they do as well. And since no one gets fired for using what Google uses, we then end up with additional tools to solve the complexity of what Google uses.

Depending on how you count it, we are now fast coming up on 20 years of Google dominance on the web. And there hasn't been a single day I didn't wish for an alternative to compete with them. I know this sounds stupid. But maybe I should start another Yahoo.


"Currently the only site I go regularly that are JS SPAs are ...."

Are you sure about that? Well-made SPAs don't look like SPAs. They navigate seamlessly and instantly because they're not redownloading and parsing all their header and footer HTML, reconstructing a brand new DOM, loading and reinterpreting CSS, and bootstrapping a new JavaScript runtime on every click.

Look at https://react.dev/learn, click around the documentation pages - do you think that's an SPA or an MPA?

Open up your network tab, and you'll find out what's happening: when you hover over a nav header, it starts preloading a JSON file containing the content for that page. When you click on a navigation link, that content is loaded into the React DOM and some more prefetches of JSON content for likely next pages are kicked off. Your browser navigation history is updated, but you are still in the same original page context you started in. It is insanely snappy to interact with.
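That hover-prefetch pattern is easy to hand-roll, by the way. A rough sketch of the idea (not react.dev's actual code; the ".json" convention, the main element, and the data shape are all invented for illustration):

    // prefetch page data on hover, reuse it on click -- same-page "navigation"
    const cache = new Map();

    function pageData(url) {
      if (!cache.has(url)) {
        cache.set(url, fetch(url + ".json").then((r) => r.json()));
      }
      return cache.get(url);
    }

    function renderPage(data) {
      document.querySelector("main").innerHTML = data.html; // invented data shape
    }

    document.querySelectorAll("nav a").forEach((link) => {
      link.addEventListener("mouseenter", () => pageData(link.href)); // warm the cache
      link.addEventListener("click", async (event) => {
        event.preventDefault();                // never leave the original page context
        renderPage(await pageData(link.href));
        history.pushState({}, "", link.href);  // update the URL bar without a reload
      });
    });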


https://react.dev/learn is so slow on my phone, it takes 1.5s to open the burger menu, and about 1s to jump to a section (Google Pixel 5a). It must be some SPA that loads the whole documentation all at once, I presume. A traditional MPA would probably work much better here.

edit: and like the sibling comment noted, the history back button gets messed up. edit2: I mistakenly wrote Nexus 5a instead of Pixel 5a


Do you mean the Pixel 5a? Just wondering because it would make a big difference if it was a Nexus 5 from 10 years ago, versus a much more recent Pixel 5!


Whichever it is, it should certainly not take 1.5s to open a menu. Especially not on a website that aims to teach people something about web development.


Absolutely, but it being a Pixel 5a makes it much worse! But as you said, even a Nexus 5 should be able to run a doc website.


Yes, Pixel 5a, thank you. Edited.


If I go to react.dev/learn, click on "Escape hatches" in the menu, and scroll all the way to the bottom of the page, the browser Back button no longer works because they've added nine duplicate entries to my history.

If the official React documentation website can't implement SPA page navigation properly, what chance does anyone else have?


Well that bug is clearly idiotic, and makes me feel a fool for thinking react.dev would be a strong example of sane SPA architecture to link to.

The idea is sound, and the basic loading behavior is as I said (not sure what the people who are encountering 1.5 second navigation times are doing), and the existence of an implementation bug doesn't undermine the theoretical soundness of the architecture.... although, as you say, having one on the react docs is embarrassing.


> (not sure what the people who are encountering 1.5 second navigation times are doing)

On CPU-limited devices (and my computer with 4x CPU throttling enabled in devtools), react.dev appears to block the main thread for 500-1000ms while navigating to some of the "Learn React" pages—even if all the data for that page is already cached in memory.

I remember reading all kinds of blog posts about how Concurrent Mode and Time Slicing were gonna magically solve this by breaking up long tasks and prioritizing above-the-fold content so that it would pop into view faster. It would be funny if, in addition to being unable to correctly use the History API, the React team was also unable to use their own framework's performance features.


>The idea is sound, and the basic loading behavior is as I said

Yes, and that is why, despite hating the idea from the get-go (which was before 2009), I gave it plenty of time to mature. But the truth is, any technology is only as good as the human factor. We aren't perfect, and that is why we make mistakes even on basic things like this.

And this example just proves it even more. And I am ignoring the site's performance, which felt really slow for what should be an MPA (and it is not).


Then well-made SPAs seem to be exceptionally rare. Somehow that site lags more than opening a new page on HN. You list all the work the browser is doing and yet somehow the SPA is making it do more. I agree it makes no damn sense and yet that is the experience I have of using them.


"Well designed SPAs look and feel like a MPA" isn't exactly a ringing endorsement when MPAs are less complex to build.


What part of 'insanely snappy' did you miss?

There's NO NEED for a browser to reload and reconstitute the entire page context on every interaction! It's a crazy way to architect applications!


Yes, and every major MPA framework optimizes this away, the same way that SPA approaches support server side rendering so you don't see a literal blank page before the app downloads.


"every major MPA framework optimizes this away"

... wut?

Wouldn't that make them 'compile to SPA' frameworks?


I think GP is talking about solutions like https://turbo.hotwired.dev/, which just paste server-generated HTML into the page instead of passing JSON into a client-side UI framework.
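What's notable is how little app code that approach asks of you (a minimal sketch; with Turbo Drive, activating the import is essentially the whole change):

    // app.js -- Turbo Drive intercepts link clicks and form submissions,
    // fetches the next page in the background, and swaps the <body> in
    // place instead of doing a full reload
    import "@hotwired/turbo";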


.... which is an SPA architecture.

> During rendering, Turbo Drive replaces the current <body> element outright and merges the contents of the <head> element. The JavaScript window and document objects, and the <html> element, persist from one rendering to the next.

So... it makes your app into an SPA.


Does SSR make React an MPA? If "MPA" limits us to only frameworks that have to do a full browser navigation for every interaction, it's a pointless discussion - "MPA" frameworks have had these sorts of optimizations for a decade+ (Hotwire is the newest, but there was Turbolinks before that and PJAX before that). Sure, I'll agree that React is a better approach than using the 2005 version of a framework, but that's not useful.

Architecturally, you're still designing your application as though the user is performing a complete navigation; there's just JavaScript present to optimize away some of the issues with that approach.


> click around the documentation pages - do you think that's an SPA or an MPA?

I think it's a page with broken use of the history api.

I clicked your link, opened the menu (as a comment mentioned it being slow), and then had to hit back 3 times to return here.


> Although Youtube is debatable

It can definitely be an MPA. The only somewhat dynamic part it has is the comments.

And now it's so egregiously bad that it's the single source of bad scores in Google's own metrics: https://twitter.com/dmitriid/status/1742669322487533801 and https://twitter.com/dmitriid/status/1742670032113402049 (yes, it loads 47KB of CSS/2.7 MB on desktop among other things)


There's the picture-in-picture stuff when you navigate away, too. I recently did that with an MPA, and it was not a straightforward experience to get right.


It's kind of an awful experience though? Do people actually want their videos to follow them? If I'm navigating away it's because I'm done, it actually makes me kind of angry that the video chases me.


In my case it was a podcast player, so primarily audio, where playing in the background is a perfectly normal thing to do and you might want to browse other content while playing.


Maybe such functionality is best left to the browser itself. Firefox already has the functionality to "detach" a video. Then you can scroll wherever you want and still see that video.


Ah, I forgot about PIP


I still think you are not too wrong though. I usually use Invidious and there are no interactive widgets, except for the video player, which I think is the default HTML5 player. I rarely need anything else. And PIP can be done in Firefox with ease, without the website needing to implement anything.


> you should strive to make solutions that require the simplest tools possible whenever you can

I’ve gone back to making MPA apps with minimal JS. It helps me actually ship my projects rather than tinkering and having an over complicated setup for mostly CRUD tasks.

In one project that is a bit more data-intensive and interactive I’m using Laravel Breeze / Laravel + Inertia.js (SSR React pages).

I’m also a big fan of Jekyll lately, I made my own theme on Thursday with only 2 tiny scripts for the mobile menu and submission of the contact form.

Using DOM APIs and managing a little bit of state is fine for many, many projects.
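For instance, a mobile-menu toggle of the kind mentioned above needs only a few lines of plain DOM code (a sketch; the ids are made up):

    // assumes the page contains <button id="menu-button"> and <nav id="menu" hidden>
    const button = document.getElementById("menu-button");
    const menu = document.getElementById("menu");

    button.addEventListener("click", () => {
      const open = menu.hasAttribute("hidden");
      menu.toggleAttribute("hidden");                     // all the "state" there is
      button.setAttribute("aria-expanded", String(open)); // keep assistive tech in sync
    });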

OTOH when you don’t control the requirements and the business asks for a ton of stateful widgets progressive enhancement can become a mess of spaghetti in the UI and API unless very carefully managed and well thought out. At that point you might as well go all in on React/Angular/Vue, especially when you have to account for a mix of skill levels and turnover.


A big factor in that “tinkeriness” of SPAs is how nearly every part of making an SPA well-behaved and pleasant to use falls almost entirely on the shoulders of the developer. Due to how little browsers provide on that front, well-behaved polished SPAs are very much not on the happy path or default. Even if you use the big popular libraries, special care must be taken to not build a product that’s a frustrating mess for users.

In comparison a server-side MPA will probably be at least decent to use unless the dev has been entirely careless, because that model better matches what browsers have been built for.

The takeaway is that for SPAs to be consistently good for both devs and users, browsers need to do the bulk of the heavy lifting and provide a “happy path”, largely eliminating the need for overwrought JS libraries that try to paper over browser inadequacies.


Yes, can’t argue with that. Making apps for iOS and usually Android is often way more pleasant.

Have you seen any serious proposals on this front?


I track web engine development only loosely, but no I haven’t seen much movement in that realm.


> The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites

It is not only Figma or Photoshop. Any site with multiple steps of interaction or complex filters over search results etc. benefits from an SPA and declarative code. The experience is smoother, and development of anything but simple forms is much faster.

People disabling JS or working on satellite internet from a remote island are fringe cases and are not relevant for the business.


This attitude is the reason why I now dread filling out any random web form, because a poorly implemented crappy JavaScript ”flow” will take over my browser, break the back button, accessibility and autofill features and randomly fail and start over in step 6 out of 19. It’s the reason why a simple web form requires 100 MB of bandwidth to deal with.

People working from a train or on data roaming are not a fringe case. They have amounts of bandwidth that, 20 years ago, were enough to serve any complex web experience. It’s not acceptable that we now require more data than is contained in all the books in the library of congress to buy a ticket on Ticketmaster.

I can’t understand how someone looks at the ratio of useful action to code size, sees it’s something like 1:1e6, and thinks this is fine.

The modern web’s usability, efficiency and reliability are terrible, and worse each year. Defending the tech stack that led to this with “it’s a good UX” is both wrong and makes me feel like web devs live on another planet from the rest of us.


> It’s the reason why a simple web form requires 100 MB of bandwidth to deal with.

I have a hard time taking these arguments seriously because they get so exaggerated on HN.

I’ve done a lot of work from flights and extremely low bandwidth, high latency connections in foreign hotels. Not once have I encountered anything like a web form taking 100MB.


Two weeks ago, I let my partner use my data roaming to submit two forms on some Adobe collaboration thing, reply to a chat message on Teams and send an email. The counter on my phone said these actions used about 500 MB of bandwidth in the space of about 20 minutes.

I agree it sounds exaggerated, but I don’t think it is. This is kind of my point, it’s past the point you would think likely.


Stuff like filtering search results is very easily accomplished by an MPA with query parameters to a results page. The specific elements that allow you to specify query parameters often require more interactivity, but this is easy to layer on with a progressive-enhancement type approach on top of a fundamentally MPA application.
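A sketch of what that layering can look like (the ids and the /search endpoint are invented; without JS, the plain GET form still works):

    // assumes a server-rendered <form id="filters" method="get" action="/search">
    // with text-type filter inputs, and a <div id="results"> the server also fills
    const form = document.getElementById("filters");
    const results = document.getElementById("results");

    form.addEventListener("submit", async (event) => {
      event.preventDefault(); // enhanced path; without JS the form still submits
      const query = new URLSearchParams(new FormData(form)).toString();
      const response = await fetch(`/search?${query}`, {
        headers: { "X-Requested-With": "fetch" }, // hint so the server can return a partial
      });
      results.innerHTML = await response.text();        // swap just the results region
      history.replaceState({}, "", `/search?${query}`); // keep the URL shareable
    });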


>Stuff like filtering search results is very easily accomplished by an MPA with query parameters to a results page.

But the "MPA to another results page" causing an HTML reload with a flickering blank screen is a jarring UI experience.

The issue isn't what's "easy" to implement. Instead, users prefer a fluid and responsive UI. An example of a website that has superfast filters in SPA style instead of "MPA results page" is McMaster-Carr: https://www.mcmaster.com/

On that website, when the user changes the categories or filters, the results of items instantly change without the jarring disruption and delay of a new page being loaded.

There were several previous HN threads about it. Based on the near-universal praise in the comments of those threads, I don't think converting McMaster's architecture to your suggested "MPA search results" would be considered an improvement:

+ https://news.ycombinator.com/item?id=32976978 : Mcmaster.com is the best e-commerce site I've ever used (bedelstein.com) 1402 points by runxel on Sept 25, 2022 | 494 comments

+ https://news.ycombinator.com/item?id=34306793 : McMaster-Carr: A refreshingly fast, thoughtful, and well-organized website(https://www.mcmaster.com/) 102 points|jer0me|1 year ago|36 comments

+ https://news.ycombinator.com/item?id=24803857 : McMaster-Carr: Beautifully organized and informational industrial product store(https://www.mcmaster.com) 40 points|astrocat|3 years ago|27 comments

+ https://news.ycombinator.com/item?id=34000502 : Best ecommerce UX practices from mcmaster.com(https://medusajs.com/blog/9-best-ecommerce-ux-practices-with...) 322 points|amoopa|1 year ago|167 comments


> But the "MPA to another results page" causing an HTML reload with a flickering blank screen is a jarring UI experience.

Pretty much every modern full stack framework includes approaches to do partial renders and / or DOM morphs of server generated HTML responses, eliminating the full-page refresh effect.

www.mcmaster.com seems to utilize this to some degree, actually - while yes there are JSON responses, there are what appear to be HTML partial responses as well that are presumably injected into the page. In any case, everything on that search engine would be trivially accomplished using a server-rendered HTML approach without needing to utilize an SPA. It's actually a great example of something that would work great with progressive enhancement - the search bar can start as a simple input that leads to full-page search results, and the navigation can do a full-page refresh if the partial-page-refresh JS isn't available for some reason. JavaScript can make it better without being a requirement.

A good rule of thumb is that if an interaction existed at roughly the same fidelity during the Web 2.0 days, it's not something that requires a SPA framework. Typeahead search results and categorized product listings existed and were functional to the level of the site you linked back then.


>partial renders and / or DOM morphs of server generated HTML responses, eliminating the full-page refresh effect. www.mcmaster.com seems to utilize this to some degree, actually - while yes there are JSON responses, there are what appear to be HTML partial responses as well that are presumably injected on the page.

Uhm... yes?!? The behavior you listed is exactly why I gave you that McMaster example. So I guess I'm a little confused. In any case, your comment matches up with the Wikipedia definition of an SPA (https://en.wikipedia.org/wiki/Single-page_application):

>A single-page application (SPA) is a web application or website that interacts with the user by dynamically rewriting the current web page with new data from the web server, instead of the default method of a web browser loading entire new pages. The goal is faster transitions that make the website feel more like a native app.

An alternate way of interpreting your reply to me is if you also categorize McMaster's website behavior as a form of "MPA". In other words, you classify McMaster's loading of new HTML fragments and rewriting DOM as "multiple pages". I've not heard others define MPA in this way.

>, everything on that search engine would be trivially accomplished using a server rendered HTML approach without needing to utilize a SPA. It's actually a great example of something that would work great with progressive enhancement - the search bar can start as a simple input that leads to full page search results, the navigation can do a full page refresh

Yes, we've already agreed about it being technically trivial. The issue is end user's preferred UI experience. Users don't want the "page refresh/reload" even though it's trivial.


I think a lot of the time "SPA" vs "MPA" essentially means "does the client largely render its own HTML" vs "does the server render HTML and the client just displays it". Whether it displays that with a full page refresh or by injecting HTML via JavaScript does not in practice matter. The idea of using AJAX to render HTML fragments to increase interactivity predates the term "SPA" by about a decade.

That's not strictly the same thing as what the acronyms SPA and MPA mean, but in reality, people refer to a Rails application that uses large amounts of Hotwire as an "MPA" (even if it never results in full page refreshes and often doesn't even feel like navigating pages) and things built with tools like React as "SPAs" (even if you're perfectly capable of navigating between pages and getting React rendered by the server until the client takes over routing).

If your definition of "MPA" is "every interaction requires a full page load", it's a pointless discussion, because that's not really the reality of development even with "MPA" frameworks like Rails or Phoenix (I can't really speak to stuff like Laravel, but I'm sure they have an equivalent)

Maybe a good way to think about it is that frameworks like Rails are built around the concept of the server returning new pages on navigation, and they optimize that to provide a better experience, while SPAs are designed around the idea of a single web page visit instantiating an application, at which point the client is in control of navigation, and they optimize that to provide a better user experience (i.e. server-side rendering of pages on first load).


>I think a lot of the time "SPA" vs "MPA" essentially actually means "does the client largely render it's own HTML" or "does the server render HTML and the client just displays it".

It seems like there was already terminology of CSR-vs-SSR (client-side vs server-side rendering) to differentiate that so there was no need for SPA-vs-MPA to overlap with CSR/SSR to try and make the same distinction.

>Whether it displays that with a full _page_ refresh or by injecting HTML via Javascript does not in practice matter.

It seemed like the 'P' in SPA-vs-MPA is literally about the Page(s) being reloaded. It's "single page" or "multiple pages". That's why developers like to clarify that Next.js -- even with SSR HTML hydration of various subpages -- is still an SPA because the page on the client-side browser isn't reloaded. I just did some skimming of various "SPA vs MPA" search results and none seemed to use those acronyms as a way to classify CSR-vs-SSR. (https://www.google.com/search?q=spa+vs+mpa)

I'm also not clear how you classify Mcmaster.com ? Is it an MPA to you?

>If your definition of "MPA" is "every interaction requires a full page load", it's a pointless discussion,

No, I'm not saying every interaction. I was responding to your original suggestion of "MPA with query parameters to a results page" ... and showing how McMaster.com's search filters do not work the way you recommend they should. Each click on navigation and filters triggers a JSON payload and dynamically rebuilds the DOM tree. The browser's performance.timing.loadEventEnd property value does not change.


> The issue isn't what's "easy" to implement. Instead, users prefer a fluid and responsive UI.

Most SPAs can't give you that either.


> But the "MPA to another results page" causing an HTML reload with a flickering blank screen is a jarring UI experience.

This is the worst and dumbest excuse for SPA bullshit. It's not jarring. You'll get over it. It's a fraction of a second where your device is obviously doing a thing.

Web devs love the word "jarring" like it's some world shattering visual effect. SPAs break all the time in dumb-ass ways that are way more jarring and experience breaking than a page load.


>Web devs love the word "jarring" like it's some world shattering visual effect.

I've never been a web dev. I'm just explaining why typical mainstream end users, who don't have the same patience as HN-type techies (who more happily accept MPAs), do not like the discontinuous UI of reloading pages.

Another example of that SSR MPA page reload/refresh would be old Mapquest before 2006. Screenshot: https://www.e-education.psu.edu/geog160/sites/www.e-educatio...

Each click on North/South/East/West buttons and Zoom In/Out blanked out the entire page and loaded a new page to shift the map viewport. This was a suboptimal UI experience for the typical user. I concede it wasn't "jarring" to you but it was to a lot of users -- especially compared to a CDROM maps experience. Example video of a smoother maps UI experience circa ~2000 from Microsoft CDROM desktop software without "blank reloading pages" to move a map around and change zoom levels: https://www.youtube.com/watch?v=4YO_KGdsUm4

The Mapquest "MPA page reloads" from 2006 was a UI that was less smooth than the Microsoft Streets CD software from 2000.

In 2005, when Google launched Google Maps with extensive usage of Javascript live-loading map tiles to provide smooth scrolling without reloading pages, end users liked that UI because it felt more interactive. In response, Mapquest also eventually switched away from the old-style SSR MPA page reloads: https://techcrunch.com/2007/10/12/exclusive-mapquest-plays-c...

The 2005 SPA-style of Google Maps just gets the UI back to what users already experienced in 2000 with desktop software. The SSR MPA page reloads was something that end users endured with Mapquest but it wasn't actually the UI they really wanted.

I'm not advocating that web devs use SPAs (or SPA frameworks). Instead, I'm saying that responding with "SPA websites can be redone as MPA and it's trivial" is saying something that's true but still doesn't actually address the issue that mainstream end users don't like the discontinuity of MPA-type UIs. That's the subthread I was addressing: https://news.ycombinator.com/item?id=38901249

E.g. McMaster.com is not a "web app" like Figma/Photopea but users prefer the SPA-style UI of that parts catalog website.


> People disabling JS or working on satellite internet from a remote island are fringe cases and are not relevant for the business.

Satellite internet (even before Starlink) is actually plenty fast for a modern website, as long as delivery is halfway optimized to avoid a thousand round trips.

It’s people on mobile phones in 3rd-world countries that suffer the most, but they end up with specially optimized websites and even separate mobile apps if they’re a target market.

People who disable JS are virtually non-existent in the real world, outside of bubbles like HN comments. Building technology strategies to cater to this tiny minority is not a good decision.


Oftentimes people don't disable JS; networking hiccups and bad JS disable JS.


> People who disable JS are virtually non-existent in the real world, outside of bubbles like HN comments.

Except for all the microbrowsers[0] and crawlers that don't have JavaScript enabled or don't run all the JavaScript bullshit on a page. Building accessible sites that can be used in the widest possible context is good engineering.

[0] https://24ways.org/2019/microbrowsers-are-everywhere/


Accessibility workers have to hear rather too much of the "these people aren't relevant to the business" arguments. And while every business has its own concerns and priorities, standards based on exclusion don't belong on the web.


> People disabling JS or working on satellite internet from a remote island are fringe cases and are not relevant for the business.

How about people working on a train?


For most scenarios, the experience should be better with a well-designed SPA: while the first load may be slow and a person may have to wait a minute, once loaded, the data transfer per interaction is much smaller. For a use case of just loading a page, reading it, and submitting a few fields, it will be worse. But for complex things like multiple filters, searching for different dates, or seat selection, it will be faster.


One difference is that server interactions on a MPA are usually more predictable. I can wait for a good internet connection to submit a form or click a link. On top of that, I'm using browser navigation to navigate a lot of the time, and while it's not impossible to provide good feedback about interactions in a SPA, many sites don't (or worse, use optimistic UI updates without handling failure states well so it's impossible to tell what's persisted and what's not).


Their experience shouldn’t be much different on an SPA vs an MPA. If they can do an MPA round trip involving a medium-size image, then they should be able to load an SPA.


SPAs often require an uninterrupted internet connection even if it’s not technically necessary.


Many of them use WebSockets, which break when you have interrupted connectivity.


That’s not been my experience, but you may be right. Why do they require an uninterrupted internet connection?


Because they break when a request fails. MPAs have a request resubmission UI out of the box. They also have request history navigation, easy resource bookmarking and other stuff you can reimplement in an SPA but usually don't.


I don’t know. Recently I was viewing an image in the Discord web app and it suddenly disappeared because my device had lost connection, even though the image was already fully loaded in the browser.


I mean to be fair, MPAs are by definition unusable without a consistent internet connection. By design every meaningful interaction needs to communicate with a server.


MPAs only need connection when navigating to a new page. It is not needed when reading a page.


> you should strive for the simplest tool possible, and even further, you should strive to make solutions that require the simplest tools possible whenever you can.

Why do you believe this? I couldn’t disagree more. People should strive for the most effective tool, and most of the time that’s what they already know, unless a new tool’s efficacy outweighs the cost of learning it.


It also depends on your definition of simple. The architectural model of Preact is simple. You change state, your application renders correctly. The architectural model of an MPA with interactivity sprinkled in seems as simple, but quickly becomes more complex over time, and has ultimately not been as simple in my experience.
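For what it's worth, the model is small enough to show in full (a made-up counter, but real Preact APIs):

    // counter.js -- state in, UI out; no manual DOM bookkeeping
    import { h, render } from "preact";
    import { useState } from "preact/hooks";

    function Counter() {
      const [count, setCount] = useState(0);
      // changing state is the whole programming model: call setCount,
      // and Preact re-renders this component correctly
      return h("button", { onClick: () => setCount(count + 1) }, `Count: ${count}`);
    }

    render(h(Counter), document.body);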


Preact / React is simple because it solves for a very small slice of what an application needs to do and willfully ignores the rest. For example, (P)React has no real opinion on how it interacts with a server, which is a fundamental requirement of 99% of web applications (the new server components stuff is a counterargument, I guess, but even then it doesn't consider the complexities of what a backend needs to do and is essentially a "put backend stuff here" slot).

MPA frameworks tend to present themselves as complete and batteries included. If you're using Rails, you can build a complete application without thinking about a single other library than what Rails ships with.

Neither approach is correct, but comparing them is like saying that HTML is so much simpler than C++ so everyone should use HTML.


But, that is a form of simplicity. It’s kind of the UNIX approach. I use Preact and a typed RPC client and a very simple router. The result is a reasonably small, easy-to-reason about program that I find very enjoyable to work on. If Preact shipped with their own communication layer and router, that would reduce the simplicity, and I’m not sure I’d actually like the choices they made for the part of the stack that is unrelated to rendering. Angular is an example of what you describe, and it’s not for me.


Sure, that's fine, I'm just saying they're not really directly comparable. A batteries included framework is an entirely different beast than a view library.


Fair point that my point was overly reductive. But given you understand multiple tools, you should reach for the simplest tool possible to solve the problem. And in some cases you should still reach for the simplest tool possible even if you don't understand it yet. If you've only ever used Kubernetes but someone asks you to host a static HTML file at an address, you should learn how to use a simpler solution for that.


>most of the time that’s what they already know, unless a new tool’s efficacy outweighs the cost of learning it

....which any proponent of the new tool will argue it does


Yes which is why you should generally stick with what you know until you have an actual problem that needs solving…

SPA, MPA, who cares. Ship.


That's not how anything in the world works though


Most professions do work like that, actually.


- Hammer, screwdriver, who cares, fix it.

- Scalpel, forceps, who cares, do surgery.

- Reinforced concrete, 2x4s, who cares, build a bridge.

No, pretty much every profession that uses tools cares about using the correct tools for the job.


Yea, tools and processes they’ve decided on decades ago. You don’t see these people writing blog posts about new tools and wasting time evaluating them yearly like in tech.

If there’s an actual issue like there was with deaths in the medical profession due to not washing hands, then they evaluate.


1. There probably is quite a bit of discussion about tool selection in some fields. Surgical innovations didn't end with the invention of the scalpel. I'm sure there's lots of discussion about the appropriate use cases for robotic vs laparoscopic vs traditional surgery; we just don't see it because we're on a tech forum and not where medical doctors discuss tools. I can say for a fact that I've seen more written about the merits of various screwdriver heads than I would have thought possible.

2. Software development is a little unique as an industry since it's not all that common that the users of the tools are also the people who make the tools. There's naturally going to be a lot of discussion about tools if you're both the maker and the user of them.

3. Us not having a standard for which tool to use is a reason to have these discussions, not a reason to say "pick whatever, it doesn't matter". The reason that those people don't write blog posts and have discussions about the merits of a hammer vs. a screwdriver is precisely because they're so well established - if both were a couple of years old, absolutely people in construction would be discussing whether to use nails or screws for an application.


...except those fields are literally thousands of years old, while the software industry is about 70? Them being more mature doesn't change the fact that processes and tools are crucial.


No one is disagreeing with that.


"the problem" "you should" – this is the language of special interests

developers are salarymaxxing first, second virtue signaling to support their case in their employers' selection process, third work-minimization and pain-minimization. Even the Simplicity Paladins are min/maxxing the same three priorities, perhaps weighing pain-minimization above salarymaxxing, yet still subject to the same invisible macro forces that shape our lives. and I postulate that this is a complete explanation of developer behavior at scale.


I feel like I understand 50% of your comment, is this some DSL from a different ecosystem being used to explain developer behavior, or something like that?


Lol, this is written in very game-like language where you often need to prioritize certain aspects (to max something) above others. This is often because you get a limited number of "ability points" when you level up, so "maxing" strength means you prioritize using those points to gain strength.


I feel like we live in completely different worlds.


You have never seen resume-driven development? Lucky you, because it's the vast majority of development out there.


Without a study of some sort, we’re just exchanging anecdotes. I’ve seen resume-driven development a handful of times in my 20-year career. You may be right, but we won’t know until we come up with some way to measure it.

Most developers I’ve worked with have just been interested in solving problems.


Physicists work on interesting problems. Developers work on profitable problems, mostly manufactured, for huge piles of money, from home, and with yoga over lunch.


Maybe you just haven't looked for better jobs. Because although I've read about what you're describing, I haven't seen it yet in real life.

I'm not planning on experiencing it either in the future. Though I'm sure some people like this kind of environment, and good for them.


I didn’t say “interesting” problems. Just problems. Anyway, sometimes they are interesting. I think Rich Hickey worked on some interesting problems. Clojure and Datomic are pretty neat, and Electric looks like an interesting problem, too :)


Rich Hickey is a founder, not a developer and regardless he is motivated by pain-minimization in the context of money making: "I had had enough!" [of manufactured complexity in commercial development] — his paper


A great mental framework that holds up to my experience as well.


> The problem isn't so much those but how most developers lump themselves in with the incredibly interactive sites because it sounds sexier and cooler to work on something complex than something simple.

This is very similar to the NoSQL arc. Some people at prestigious places posted about some cool problems they had, and a generation of inexperienced developers decided that they needed MongoDB and Cassandra to build CRUD apps with several orders of magnitude fewer users, transactions, or developers. One of the biggest things our field needs to mature on is the idea of focusing on the problems our users have rather than what would look cool when applying for a new job.

The SPA obsession has been frustrating that way for me because I work with public-focused information-heavy sites where the benefits are usually negative and there’s a cost to users on older hardware – e.g. the median American user has JavaScript performance on par with an iPhone 6S so not requiring 4MB of JS to display text and pictures has real value – but that conflicts with hiring since every contractor is thinking about what’ll sound “modern” on their CV.


> " ... is bringing to the table is the idea ..."

Wikipedia states that

    "Keep it simple, stupid!", is a design principle first noted by the U.S. Navy in 1960 [0]
... but some coders, including yours truly, have been brought up with that principle as a keystone of programming from day one (which was decades ago). It is related to the more modern DRY principle.

If this is brought to the table now, it is only seemingly so, caused by the fact that those at the table must have forgotten it, or never learned it. Of course, there are also commercial interests in keeping things as complicated as possible - it could just be that these have had too much influence for too long.

[0] https://en.wikipedia.org/wiki/KISS_principle


I think it's (increasingly) not as binary as either MPA or SPA. Although it has been for quite some time now.

A lot of web developers strive for some amount of templating and client-side interactivity on their websites. And when frameworks like React came up they solved interactivity issues but made it hard to integrate into existing server-side templating systems, which were mostly using different programming languages.

So because integrating the frameworks for client-side interactivity was hard, the frameworks also took on the job of templating the entire site and suddenly SPAs were popular. I think a big draw here was that the entire tooling became JavaScript.

But the drawbacks were apparent (a big one, I guess, was that search engines could not index these sites, and of course performance), so the frameworks got SSR support. The site was still written in the framework, rendered to HTML on the server, and then hydrated back into an SPA on the client.

Now, even more recently, we got stuff like islands, where you still use the handy web framework but can choose which parts of your site should actually be interactive (i.e. hydrated) on the client. And I believe this is the capability that has long been missing. Some sites require no JS on the client (they could even be SSGs), others require a little interactivity, and some make the most sense as full-blown SPAs.
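A rough sketch of the islands idea (hand-rolled with Preact for illustration; real islands frameworks like Astro generate this wiring for you, and the data-island convention here is invented):

    // hydrate-islands.js -- the server ships static HTML and marks interactive
    // regions; the client hydrates only those, leaving the rest JS-free
    import { h, hydrate } from "preact";
    import Counter from "./counter.js"; // assumed interactive component

    // assumes server output like:
    //   <div data-island="counter" data-props='{"start":3}'>...static HTML...</div>
    const registry = { counter: Counter };

    document.querySelectorAll("[data-island]").forEach((el) => {
      const Component = registry[el.dataset.island];
      const props = JSON.parse(el.dataset.props ?? "{}");
      hydrate(h(Component, props), el);
    });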

We're finally entering the era where the developer has that choice even though they use the same underlying framework.


When you vote on an HN comment while writing a reply, the page reloads and you lose your reply. That's the kind of problem you have with MPAs, even if you aren't building the next Figma.


"Simplest" feels like a folly. No project of significance stays in a simple phase. They all grow and expand.

Having a stable, reliable, generally applicable decision/toolset you can apply beats this hunt for optimization to smithereens. Don't optimize case by case. Optimize for your org, for your life; lean into good tools you can use universally and stop special-casing your stuff. There's nearly no reason to complicate things by hounding for "simplicity." Other people won't be happier if you keep doing side quests for simple, and you won't be either.

(Do learn to get good with a front-end router, so you can avoid >50% of the practical downsides of SPAs. And I hope over time I can recommend WebComponents as a good-for-all un-framework.)


The core of what differentiates applications isn't what happens on the front end. Putting all the focus on the client which gets delivered seems like a misappropriation of funds.

Especially on one-man teams it just doesn't make sense. And on teams with multiple people, having relatively static HTML is a really effective abstraction.


> well what about Figma and Photoshop

I, for one, don't want them rendered in my browser. I have an OS that can run apps, and I want my browser to be an app that renders simple HTML pages. If you want an app, make a damn Desktop app that can run on my OS.


I couldn't disagree more. Desktop apps are often so invasive that they almost feel like malware. Every time I install a desktop app I have to ensure that it isn't reading random files from my filesystem, snooping on my clipboard, or making itself persistent so that it restarts automatically every time I reboot my computer.

Adobe apps like Photoshop are some of the worst offenders. Sometimes I'll kill an Adobe process running in the background, only to realize that there's an additional background process ready to restart the first one. It's like playing whack-a-mole trying to stop all of the Creative Cloud junk processes. I would much rather sandbox software like that in the browser where I can close a tab and be done with it (and where I'll be prompted before an app tries to read passwords from my clipboard or access files from my filesystem).


You can sandbox desktop apps; you don't need to run them in a browser that is so complex that nobody can even imagine writing a new one.

For instance Android does that sandboxing by default.


> You can sandbox desktop apps

You can do a lot of things in theory, but in practice browsers are much better sandboxes than desktop operating systems.

> a browser that is so complex that nobody can even imagine writing a new one.

I'm not sure how this is relevant? As a user I don't care how complex a browser is. I care that it sandboxes applications better than my desktop operating system. Unless you mean to say that the complexity implies a greater surface area for security related bugs, in which case surely the underlying os is even more complex (which is what native apps run on). I would imagine writing a new desktop os would be even more complex than writing a browser app that runs on top of it.

> For instance Android does that sandboxing by default.

Ok, so now we're talking about mobile operating systems rather than desktop operating systems, which to me feels like an implicit concession that desktop operating systems are indeed bad at sandboxing applications.

But even if we do shift the goalposts, even mobile operating systems pale in comparison to a web browser when it comes to sandboxing. Android and iOS will notify you if an app reads from the clipboard (which they've only recently started doing), but your browser won't just notify you, it'll ask you to confirm before a website reads from the clipboard. A website can't even read the response of a request to a third-party domain unless that domain enables it via CORS. And new vulnerabilities pop up all the time. There was an article that generated a lot of traction on HN just a few weeks ago about how certain iOS applications can pinpoint your location by scanning for known hot spots your device has access to [1].

[1] https://news.ycombinator.com/item?id=38720656


> but in practice browsers are much better sandboxes than desktop operating systems.

Do you know anything about sandboxing, or are you throwing that there for the sake of the argument?

> I would imagine writing a new desktop os would be even more complex than writing a browser app that runs on top of it.

Oh right, I guess you don't really know about sandboxing then. So it won't be a super constructive debate given that your position is apparently fundamentally based on your intuition about sandboxing.

> As a user I [...]

I don't know who invented that "As a user" thing, but I find it completely stupid. In my view it is just used as a justification for anything one wants when they don't have a better argument.

In this case users are perfectly fine running standalone apps on their smartphones; let's not pretend they wouldn't want the same model on their desktop.

> which to me feels like an implicit concession that desktop operating systems are indeed bad at sandboxing applications.

Not at all, I was just giving a real-world example of sandboxing of apps at scale.

> even mobile operating systems pale in comparison to a web browser when it comes to sandboxing

If your baseline is sending a full web browser with every app you make, on desktop you could run each app in a VM and it would obviously be better.

> A website can't even read the response of a request to a third-party domain unless that domain enables it via CORS.

An app can't make a request unless it has the internet permission, what's your point?

My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS, I don't want to rent an OS provided by BigTech, whether it is ChromeOS, Windows or anything else. The model where users pay for a product but don't own it is good for companies, not for users. "As a user", I want to own the product I pay for. And I want to pay for the product I want to own.


> I don't know who invented that "As a user" thing, but I find it completely stupid. In my view it is just used as a justification for anything one wants when they don't have a better argument.

> "As a user", I want to own the product I pay for.

Ironically, you're the one who lacks a concrete argument, which is why you're attacking the wording of my statement rather than the substance of it. You're homing in on the first three words of my sentence, because you can't debate the argument on its merits. You then use the exact same wording in your final sentence, but encase it in quotes as if that somehow absolves you of your hypocrisy. Given that this website is frequented by software developers, I think there's a useful distinction to be made between thinking about problems in terms of their development, versus thinking about them in terms of their utility to end users.

> Not at all, I was just giving a real-world example of sandboxing of apps at scale.

It would be charitable of me to call it a "real-world example". You simply said "Android does that sandboxing by default", without a single supporting statement or example, despite your so-called extensive knowledge of sandboxing.

> If your baseline is sending a full web browser with every app you make, on desktop you could run each app in a VM and it would obviously be better.

No one is "sending a full web browser with every app you make". Browsers come preinstalled on every popular operating system.

> An app can't make a request unless it has the internet permission, what's your point?

What in the world is "the internet permission"? I've never had an operating system ask me if I'd like to grant an app "the internet permission". Have you operated a computer before?

> Do you know anything about sandboxing, or are you throwing that there for the sake of the argument?

> Oh right, I guess you don't really know about sandboxing then. So it won't be a super constructive debate given that your position is apparently fundamentally based on your intuition about sandboxing.

As someone who's written both web apps and desktop apps, I do in fact know a considerable amount about sandboxing. Do you know anything about sandboxing? You're questioning my knowledge to deflect from your lack of a coherent rebuttal. What exactly have you written during this conversation to demonstrate your comprehensive knowledge of sandboxing? You're concluding that I lack knowledge on sandboxing because I admitted to not having single-handedly written an operating system or web browser? Really? Are you writing an OS in your free time when you're not writing about the mysterious "internet permission"?

> My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS, I don't want to rent an OS provided by BigTech

You've got it backwards. If I want to add support for users running a free open-source operating system like Linux, as a web app developer I don't have to do anything special. Linux can run web browsers, and therefore it can run web apps. Case in point is Photoshop. Neither Photoshop nor the rest of the Adobe Creative Suite runs on Linux, but the Photoshop web app does, because web apps are universal. There's a reason why Apple took years to finally add push notification support to iOS web apps, because web apps threaten the mobile operating system duopoly.


> Ironically, [...] You then use the exact same wording in your final sentence

Thanks for explaining to me what I did ;-).

> You simply said "Android does that sandboxing by default", without a single supporting statement or example

Are you questioning the fact that Android apps are sandboxed? If yes, you may need to do some reading on your own. I am not here to teach you how Android works.

> No one is "sending a full web browser with every app you make".

All the webapps that try to look like Desktop apps have to ship a browser with them. You mentioned VSCode, right?

> What in the world is "the internet permission"? I've never had an operating system ask me if I'd like to grant an app "the internet permission". Have you operated a computer before?

Oh come on... you just don't have the slightest idea how native apps work, do you? It's literally called "android.permission.INTERNET". Have you ever tried something not web?

> Are you writing an OS in your free time when you're not writing about the mysterious "internet permission"?

As a matter of fact, not an OS but embedded distributions. That... wait for it... use sandboxing. Do I need to get back on the "mysterious" internet permission? Wait, here's a link to help you: https://developer.android.com/develop/connectivity/network-o....

> as a web app developer I don't have to do anything special.

Not even Google "Internet permission" before dismissing someone's point. I love that kind of webapp developers.

> because web apps threaten the mobile operating system duopoly.

They threaten every platform by making everything a ChromeOS system (no, not literally ChromeOS, but something based more and more around Chromium, which is owned by Google).


> Thanks for explaining to me what I did ;-).

You're welcome.

> Are you questioning the fact that Android apps are sandboxed? If yes, you may need to do some reading on your own. I am not here to teach you how Android works.

I'm not questioning whether or not Android apps are sandboxed. I'm questioning how well they're sandboxed relative to web apps, which is why I gave you several examples of capabilities that a native app has that a web app does not. You're losing the thread of the conversation.

> All the webapps that try to look like Desktop apps have to ship a browser with them. You mentioned VScode, right?

> Not even Google "Internet permission" before dismissing someone's point. I love that kind of webapp developers.

I never said anything about VSCode. You can't even remember what we've talked about. I love that kind of commenter. We're not talking about desktop apps that use web technologies vs desktop apps that don't. We're talking about desktop (and now apparently mobile) apps vs web apps that run in the browser. Allow me to quote your original comment that I replied to as a memory refresher: "I, for one, don't want them rendered in my browser. I have an OS that can run apps, and I want my browser to be an app that renders simple HTML pages." This is what we are debating. You want to shift this conversation into a flamewar about desktop apps built with Electron, because you know your actual argument has less merit. This whole conversation has consisted of you shifting goal posts, and retreating to a lesser version of your original argument. I'm still waiting for you to compare the security of desktop apps to web apps, which was my entire original point.

> Oh come on... you just don't have the slightest idea how native apps work, do you? It's literally called "android.permission.INTERNET". Have you ever tried something not web?

I'm talking about permissions that a user has to intentionally grant via an explicit prompt, not a list of bullet points that appear to a user if they happen to view an app's detail page [1]. Your own link explains it best: "Note: Both the INTERNET and ACCESS_NETWORK_STATE permissions are normal permissions, which means they're granted at install time and don't need to be requested at runtime."[2]

But you were actually responding to my comment about CORS when you brought up "the internet permission". Unlike the coarse-grained permissions most operating systems offer, CORS allows any website to prevent any other website from accessing its resources. That means I can't use a web app to form a botnet that attacks some innocent server, unless that server explicitly allows it via a CORS header (and also ignores the incoming Origin header). A desktop app can connect to any domain it wants, and can even connect directly to the server's IP and impersonate a legitimate client by forging the Origin and User-Agent headers.

> They threaten every platform by making everything a ChromeOS system (no, not literally ChromeOS, but something based more and more around Chromium, which is owned by Google).

No...they don't. Have you forgotten that Firefox and Safari exist, or should I send you a link to their home pages? But even if we put that aside, during this entire discussion you've been championing Android which is...wait for it... developed by Google. Please tell me you're being intentionally obtuse?

[1] https://developer.android.com/guide/topics/permissions/overv....

[2] https://developer.android.com/develop/connectivity/network-o...


> But even if we put that aside, during this entire discussion you've been championing Android which is...wait for it... developed by Google.

I am not AT ALL saying that we should push for Android everywhere. I am just saying that Android (and iOS, but I don't know the details of how iOS works) are sandboxing apps. I don't think security is an argument for PWA. The argument for PWAs is "I know webtech and it would be cheaper if everything ran in Chromium".

Be assured that if the discussion was about using Android everywhere (web, mobile, desktop), I would be against it as well. I don't want a one-size-fits-all solution, because it usually doesn't fit that well, and it kills diversity.


> but I don't know the details of how iOS works

Oh right, I guess you don't really know about iOS sandboxing then. So it won't be a super constructive debate given that your position is apparently fundamentally based on your intuition about iOS sandboxing. Remember that line [0]?

> I don't think security is an argument for PWA. The argument for PWAs is "I know webtech and it would be cheaper if everything ran in Chromium".

Security is MY argument for distributing software in the browser vs as a desktop or mobile application. If you refuse to engage me about the point I'm making, then you're arguing against a straw man, which ultimately indicates that you just don't have a strong rebuttal, which is what I've been saying since the very beginning [1].

> Be assured that if the discussion was about using Android everywhere (web, mobile, desktop), I would be against it as well. I don't want a one-size-fits-all solution, because it usually doesn't fit that well, and it kills diversity.

So you're an Android developer who's mortally afraid of Google hegemony? That's some next level cognitive dissonance. If you're afraid of Google dominance, I'm sorry to tell you this, but Android is their best tool for accomplishing that goal. The EU fined them 5 billion in 2018 over this, and told them to stop "forcing manufacturers to preinstall Chrome and Google search in order to offer the Google Play Store on handsets. Google will also need to stop preventing phone makers from using forked versions of Android" [2]. You're afraid of the influence of Chrome, and want people to develop directly for Android, but Google is using their Play Store and all of its Android apps as leverage to force manufacturers to preinstall Chrome (and Google search). Android apps give Google the leverage to force Chrome down everyone's throats.

You're afraid of a Google browser monoculture, and don't think Firefox and Safari present enough competition, and your solution is for people to develop apps directly for Android and iOS, where there's even less competition? And by the way, the only competition Android has is a closed-source operating system (iOS) that doesn't even allow sideloading apps or competing app stores. If web apps were more popular we wouldn't have a mobile duopoly (iOS and Android), or a desktop duopoly (Windows and macOS), because the web is an open platform and there are web browsers on every operating system (including desktop Linux, the various BSD variants, Ubuntu Touch, et cetera). This is why I told you a long time ago that you've got it all backwards [3].

[0] https://news.ycombinator.com/item?id=38913989

[1] https://en.wikipedia.org/wiki/Straw_man

[2] https://www.theverge.com/2018/7/18/17580694/google-android-e...

[3] https://news.ycombinator.com/item?id=38917623#:~:text=You%27....


Alright, let's take a step back. First, I am not a mobile developer. I was mentioning Android as an example of sandboxing outside the browser (mobile developers don't have anything to do with that sandboxing). Other examples include whatever iOS does (which I don't know), containers (docker and the likes), VMs, and everything in-between (like what snap or flatpak use). My point there was that running code in a browser is not - and by far - the only way to do sandboxing.

Sandboxes usually have to give permissions, with some granularity. The more permissions you give, the larger the attack surface. There is nothing that makes browsers inherently safer than other sandboxes: a browser is just a process running in user space. If anything, modern browsers are so complex (and getting worse with time) that the attack surface is big, which is why they require a ton of resources in terms of security.

Moreover, Web UIs bring their own class of issues that don't really apply to native apps. You insisted on CORS, which is one mitigation for some of those issues. But CORS is really a browser thing, I don't think it really makes sense to compare it to anything outside the "webview world".

If security is your concern (and you seem to insist that it is), then webapps are really not better than the alternatives. Actually, the Apple Store and the Play Store (to give an example in the mobile world) allow Apple and Google to somehow monitor the apps that users install, which is most certainly more secure than a model where anyone can load any webapp from any website.

I see many reasons to want PWAs (which I may or may not share), but security is not one.


> Alright, let's take a step back. First, I am not a mobile developer.

I think you're whichever kind of developer your current position requires. You've been talking about Android non-stop throughout this conversation, and conversations you've had with others on this website [1]. When you were lambasting me about my perceived knowledge of mobile development you were touting your Android knowledge, and taunting me about whether or not I've done anything outside the web. Now that I've proven Android is actually one of the primary tools Google uses to promote Chrome (and you admitted you don't know much about iOS) you want to distance yourself from mobile development altogether.

> Other examples include whatever iOS does (which I don't know), containers (docker and the likes), VMs, and everything in-between (like what snap or flatpak use).

We're not discussing theoretical means with which you could sandbox an application, we're talking about how apps are actually used in reality. If you need to fire up a virtual machine every time you use your favorite desktop apps, then you're only proving my point that they're not inherently very secure. Not to mention, the average user probably has no idea what Docker or a virtual machine even is. Like I said in my original response, lots of things are possible in theory, but in practice web browsers are much better at sandboxing apps than desktop operating systems (and even better than mobile operating systems). And by the way, you can run a browser inside of a vm too, so if anything the technologies you're advocating for bolster the security of web apps rather than compete with them.

> If anything, modern browsers are so complex (and getting worse with time) that the attack surface is big

Ironically, a lot of that complexity arises from the web's insistence on security. V8 is complex because it has so many safeguards in place to sandbox JavaScript, and that sandboxing is taken very seriously. There's a reward anywhere from 10,000 to 150,000 USD if you can escape the sandbox [2][3]. Browsers are inherently more secure than desktop apps because they limit access to the underlying platform. Someone developing malware as a web app has to first escape the browser sandbox, just to gain the privileges that a desktop app has natively. If it helps, you can think of every desktop app as a webapp which has already escaped the browser.

> Moreover, Web UIs bring their own class of issues that don't really apply to native apps.

No, web developers have just spent so much time thinking about security, that native app developers haven't even realized these security issues are relevant yet. It took years for Apple and Google to come to the brilliant conclusion that they should notify users when an app is reading from the clipboard, something which at the time was considered just a browser "class of issue". Maybe in 2034 they'll figure this out for desktop apps.

> But CORS is really a browser thing, I don't think it really makes sense to compare it to anything outside the "webview world".

It makes sense to compare it to things outside of the browser because it protects users and servers. You seem to want to disqualify any point I make that you can't disprove. If you don't think web technology is comparable to anything outside the browser, then what are we even arguing about? This whole discussion has been about comparing the security of web apps to non-web apps.

> If security is your concern (and you seem to insist that it is), then webapps are really not better than the alternatives. Actually, the Apple Store and the Play Store (to give an example in the mobile world) allow Apple and Google to somehow monitor the apps that users install, which is most certainly more secure than a model where anyone can load any webapp from any website.

Security is not some new thing I'm insisting on, it's been my whole point from the very beginning. You're just finally deciding to engage with me about it, instead of derailing the conversation constantly. Apple and Google have to monitor which apps make it to their app stores, BECAUSE apps are so much more prone to security problems. You once again have it completely backwards. No one has to gatekeep websites because browsers are so much better at sandboxing applications. And allow me to remind you that you admitted you have no idea how iOS sandboxing works, so you can't really be confident about this stance even if it did make sense.

And now you're arguing in favor of the app store duopoly which contradicts your point about software diversity. You can't have it both ways. You're trying to hold on to two contradictory points at the same time: you don't like the supposed lack of browser diversity (which is why you seem to detest Chromium), but you like the supposed security guarantees of the mobile app store duopoly, which is even less diverse.

[1] https://news.ycombinator.com/item?id=38919389

[2] https://github.com/google/security-research/blob/master/v8ct...

[3] https://bughunters.google.com/about/rules/5745167867576320/c...


> You can't have it both ways. You're trying to hold on to two contradictory points at the same time: you don't like the supposed lack of browser diversity (which is why you seem to detest Chromium), but you like the supposed security guarantees of the mobile app store duopoly, which is even less diverse.

Ok I get it.

Let me rephrase it just to make it clear: It is true that I don't like the lack of diversity (that would come from everything being webtech on top of Chromium), and it is also true that I like the security that comes from a managed app store. I do! I can have it both ways! Isn't that marvelous?

If you can't understand how this is possible, I think we can stop here. We won't get anywhere if you can't understand what I write.


You've completely abandoned any attempt to argue the point about the security of web apps vs non-web apps, which was the original point of this discussion, so now let me address all the tangents you like going on to deflect. You're an expert at cherry picking which arguments you'd like to reply to, to avoid tackling the main issue at hand.

> It is true that I don't like the lack of diversity (that would come from everything being webtech on top of Chromium), and it is also true that I like the security that comes from a managed app store.

You've said previously: "My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS". [1]

So you think the best way to increase OS diversity is to get developers to submit their apps to proprietary app stores that only run on their own respective operating systems, instead of using open web standards that work on every operating system? How does that make sense?

> I do! I can have it both ways! Isn't that marvelous?

No! You can't! Not if you value logical consistency.

> If you can't understand how this is possible, I think we can stop here. We won't get anywhere if you can't understand what I write.

I don't think you comprehend what you're writing, or rather, you're not willing to admit that what you're writing is incomprehensible. Saying "my argument makes sense, you just can't understand it" is just you being petulant. You want to "stop here" because you've argued yourself into an illogical corner.

[1] https://news.ycombinator.com/item?id=38913989


> Saying "my argument makes sense, you just can't understand it" is just you being petulant.

I did not say that. I said that my preferences are consistent. Security and diversity are orthogonal concepts. I can say: "I want as much security as possible AND as much diversity as possible". It is not an argument, it is a preference.

You come and say: "Aha, I got you! You cannot want both security and diversity! You have to want one or the other, not both, because I say so! You just lost the debate, you dumb ass".

Fine, I lost the debate, you're the best.


First of all, I've been saying from the very beginning that your stance implies both less security AND less diversity. But I knew you would grasp onto the security part like a lifeline, because you've run out of ways to derail the conversation, which is why I clarified in my previous comment. You ignored my clarification, and once again decided to argue with a straw man. I've never seen so many bad faith straw man arguments in my life. Forget the security aspect of it since you clearly can't debate that, and just focus on the diversity, and you're STILL wrong.

As you like to say when you're clarifying, "let's take a step back here". I'll just repeat my last comment, and hopefully you won't evade it like you always do:

You've said previously: "My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS". [1]

So you think the best way to increase OS diversity is to get developers to submit their apps to proprietary app stores that only run on their own respective operating systems, instead of using open web standards that work on every operating system? How does that make sense?

Do you get it yet? You're claiming you want OS diversity, but you're advocating for the solution that results in LESS OS diversity; that's why you're contradicting yourself, and that's why your position is logically inconsistent. You absolutely know this, which is why you're dodging every attempt to actually debate it. And I know you know this, because you purposely omitted the first sentence of my paragraph when you quoted it, which was [2]: "And now you're arguing in favor of the app store duopoly which contradicts your point about software diversity." That part didn't fit your narrative, which is why you omitted it. You're better at evasion and rhetorical trickery than you are at actually discussing technical topics. If you had said instead: "I admit my position implies less OS diversity, but in this case I'm willing to make that trade off in exchange for better security guarantees", then we could move on to the security question (and you'd lose that debate too).

You can admit that one of those pesky web developers you're so fond of condescending to actually has a good point, it won't hurt.

[1] https://news.ycombinator.com/item?id=38913989#:~:text=what%2...

[2] https://news.ycombinator.com/item?id=38934276#:~:text=did%20...


> So you think the best way to increase OS diversity is to get developers to submit their apps to proprietary app stores that only run on their own respective operating systems.

No, I don't. I think that having different tools, more or less specialized for particular platforms, is better than using webtech everywhere. My reason being that I tend to hate webtech and all it represents to me: I don't like unmanaged language package managers like npm and how they allow devs to have no clue about their dependencies. I don't like Javascript. I don't like having to run a browser to access Discord, or alternatively to have a fake Desktop app that is essentially a hardcoded one-tab browser. I don't like to run complicated webapps in a tab that can freeze my whole browser. I don't like that if my browser crashes, all my webapps stop. I find that pushing for WebAssembly to run everything in the browser is completely overkill given that we already have tons of ways to run stuff on different OSes. I don't like how web people tend to not know anything not web (including native/non-native-but-not-web mobile apps, native/non-native-but-still-not-web Desktop apps, mobile OSes like iOS/Android/Linux-based-but-not-ubuntu, Desktop OSes like Windows/macOS/Linux/-BSD, embedded OSes like OpenWRT/-BSD) but still claim that webtech is better.

I like C when it makes sense, I find merit to C++ in many situations, I think Rust is interesting (except for the language package management, which seems to come straight out of the webtech hell). I like Java/JVM and its evolution in the last years (no, it hasn't been just an interpreter and web applets since the beginning of the century, but too many web people missed the memo), I find that Android has done a lot of interesting stuff with JIT and AOT, I think that GraalVM is really promising. I love Scala and Kotlin, and the new Jetpack Compose way for UIs (coming to Desktop apparently). I wish I could spend more time on Swift and discover SwiftUI, and I had fun learning Flutter and Dart (though it still has the fundamental issues of cross-platform frameworks IMO). I don't know anything about .NET, but it doesn't seem bad. I like making custom Linux with fun tools (buildroot, Yocto, pmbootstrap) or learning how relatively mainstream distributions work. I like running stuff on -BSD (not in a browser, actually on the system). I like how Linux distributions approach their package management.

I am a big fan of open protocols, which mean that I can run my TUI IRC client (written in C) on my OpenBSD, my favorite email client (written in Go) on my Alpine Linux, and a whole bunch of stuff like git/gpg/ssh/podman/pass in CLI. I can even enjoy tools written in niche languages like Hare!

Those things I like, TO ME, represent diversity, and allow me to choose the tools that are more ergonomic for me, and even to contribute to them. Webtech, TO ME, represents those shitty Slack/Discord/Teams/NameYourCloud proprietary apps (and those are the good ones), written by people who want a one-size-fits-all solution so that they can be more productive by knowing ONE tech and making ONE mediocre app that will run badly on all those systems they never cared to study, governed by rules like "no need to optimize for memory, memory is cheap ahahaha!!!1!". All that forcing me to run full-blown apps (and not websites anymore) in a damn browser, in a world where Safari is Apple's way of refusing webtech for as long as they can, Firefox is a joke (which I use, don't get me wrong) and everything else non-Chrome is about customizing Chromium and pretending that they own their codebase.

PWAs are a promise to move that shitty world out of the browser and onto mobile devices (because ElectronJS already succeeded in moving that shitty world out of the browser and onto the Desktop... by duplicating a browser I did not choose, behind my back). All of that is transforming my Desktop OS and my mobile OS into basically a big browser that I hate (Chromium) running bad apps written with webtech that I hate.

Native Android and iOS apps are not perfect of course. But they are not webtech. And at this point I'm holding to anything that is not damn webtech (or worse: "AI" bullshit).

Go on, tell me why I should not feel the way I feel or, even better, prove it to me, with cross-references to whatever you find (I still won't click on your links, though, I really don't give a shit).

> then we could move on to the security question (and you'd lose that debate too).

I am not here to win (is there a prize for the winner?). I would genuinely be very happy if you taught me something (just a small thing) about why browsers are fundamentally better in terms of security than any other kind of sandbox I can imagine. But something constructive, like why it is that whatever is used to sandbox processes in a browser cannot be used to sandbox processes outside the browser. Or why granular access control works in the browser and fundamentally cannot be used outside of it.

But if it is to tell me that browsers are better because smart people spend a lot of time working on V8, or that web people invented access control last year, please don't waste your time.


> I don't like how web people tend to not know anything not web

This is the reason why your responses have been so arrogant. This is why you assumed I lacked knowledge about sandboxing before we'd even had a chance to discuss the topic in any sort of depth. You have this preconceived notion that all web developers are myopic and can't see anything outside of the web, and you've projected this stereotype onto me as if you're omniscient. If you truly do enjoy engaging in good faith arguments, and learning from other commenters, then you wouldn't start with the pompous assumption that the person you're talking to is ignorant.

> I don't like unmanaged language package managers like npm and how they allow devs to have no clue about their dependencies. I don't like Javascript.

Finally, you just came out and said it. You have a deep-seated, visceral hatred of JavaScript and anything even tangentially related to it. This is why you've been trying to bait me into talking about Electron, to the point of literally fabricating statements (at one point you claimed I was talking about VSCode). This is your pet issue, and you're clamoring for a chance to talk about it. I get it, you don't like JS. It's a popular opinion amongst snobbish developers who like to promote this culture of contempt that pervades the software development world [1].

The problem is...we're not talking about the pros and cons of JavaScript as a language, or npm as a package manager. I have feelings about that as well (which I may or may not share), but my primary conjecture has always been that software is safer when run in the browser (especially on desktop operating systems). That's why I originally responded to your comment about Figma and Photoshop, and provided my own anecdote about my experiences using Adobe Photoshop on my desktop computer.

> Those things I like, TO ME, represent diversity, and allow me to choose the tools that are more ergonomic for me, and even to contribute to them.

The preceding paragraphs read like a CV listing every technology you've ever interacted with, and many of them are very interesting, but all of that is completely beside the point. I'm going to quote you again here, you said: "My point is that webapps move everything into the browser, going towards a world where something like ChromeOS is the only valid way to use a computer. I want to choose my OS".

We're not talking about the diversity of tools used to build applications, we're talking about the diversity of operating systems used to run graphical user interface apps. You absolutely refuse to stay on topic. Submitting apps to proprietary app stores that only run on their respective operating systems is not the best way to promote operating system diversity. If I build an app for the browser it'll run on every operating system (since they all ship with a web browser), that's just an objective fact.

> is there a prize for the winner

You should be a comedian. I'm here to talk about technology.

> I would genuinely be very happy if you taught me something (just a small thing) about why browsers are fundamentally better in terms of security than any other kind of sandbox I can imagine.

We're not talking about what you can fundamentally imagine, we're talking about how software is used in reality.

> why it is that whatever is used to sandbox processes in a browser cannot be used to sandbox processes outside the browser. Or why granular access control works in the browser and fundamentally cannot be used outside of it.

I hate to keep repeating myself but, we're not discussing theoretical means with which you could sandbox an application, we're talking about how apps are actually used in practice. You seem to want to discuss how desktop apps could theoretically be just as safe as web apps, but I'm more interested in reality than theory. I've given you several examples of security features which are present in the browser, and have no proper analog built in to desktop operating systems.

Here's a non-exhaustive list of things that make webapps more secure than desktop apps (many of these points have already been mentioned, but you keep ignoring them; two of them are sketched in code below):

- Webapps can't read from the clipboard without user confirmation.

- Webapps can't make themselves truly persistent the way a desktop app can.

- Webapps can't record your keystrokes when their tab isn't active, whereas keyloggers are one of the most pervasive forms of desktop malware. On a Mac, for instance, I normally have to use ReiKey to mitigate this threat.

- Webapps can't forge the origin and user-agent HTTP headers to impersonate legitimate clients.

- Webapps can't read the response of an HTTP request to a third party origin unless the site allows it via a CORS header.

- Webapps can't read a single file from your filesystem unless you explicitly allow it.

- Webapps can't see which SSIDs your computer is connected to in order to pinpoint your location by matching them against known wifi networks.

Could some of these protections be implemented on the desktop in the future? Sure, and if they are I'd be happy to revisit this discussion in a few years. But my arguments are firmly rooted in reality, not speculation about future enhancements. And please don't bring up onerous security measures like virtual machines. First, because that only proves desktop apps are insecure by default; second, because most users are likely unaware such measures even exist; and third, because those measures can be applied to a browser as well, so if anything they only augment the security of webapps.
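To make two of those concrete, a small sketch of what the browser enforces out of the box (example.com stands in for any third-party origin):

    // 1. Clipboard reads go through an async API that the browser gates
    //    behind a user permission prompt.
    async function onPasteClick() {
      try {
        const text = await navigator.clipboard.readText(); // may trigger a prompt
        console.log('clipboard contents:', text);
      } catch (err) {
        console.log('clipboard access denied:', err);
      }
    }

    // 2. Cross-origin responses are unreadable unless the server opts in via
    //    CORS response headers; the failure surfaces as a rejected promise.
    fetch('https://example.com/api/data')
      .then((res) => res.json())
      .then((data) => console.log(data))
      .catch((err) => console.log('blocked by CORS (or network failure):', err));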

[1] https://blog.aurynn.com/2015/12/16-contempt-culture


Well… if you have ever supported a desktop app you know how difficult “version dispersion”, users that never update their OS, users that always update their OS, different hardware, other hostile software, etc. can be. If you know, you know.


Sure, I'm not saying it's easier. It would be completely stupid to go down the webapps road if the desktop apps one was both better and easier.

I kind of find it ironic, though: why not write one desktop app that only supports the latest version of Windows, and tell your users to use that? If you're big enough, surely you can force them to use the OS you want, right?

I am convinced that most people who love webapps kind of hate the idea of being forced to use the latest version of Windows. But somehow they find it okay to force everyone to use Chromium? What's the difference?


What I remember was mostly minimum and standard requirements listed on the product pages.


For stuff like Figma and Photoshop I can't help but suspect that the creators would be better off writing their program in C++ with the GUI toolkit of their choice, and compiling it for the web with Emscripten.


I believe Figma is indeed written in C++ and uses Emscripten. It's pretty much the polar opposite of your standard CRUD app.

Old article: https://www.figma.com/blog/webassembly-cut-figmas-load-time-...


Huh, guess I was right :p

So neither of those is a good argument in favour of these complicated JavaScript stacks, is it?


Just because Figma opted for this route doesn't make it right for all interactive web applications.


Yeah. That's what the little `:p` at the end of that comment meant.


The existing C++ Photoshop codebase was largely ported to WASM with Emscripten https://web.dev/articles/ps-on-the-web


I tend to dislike this approach for the simple reason that it's an extra "compatibility layer" where you give up control. If you're developing for the web you may every now and then want to do things a specific way or use a specific feature and be unable to do so because the transpiler doesn't support it or is programmed not to.


Why? If there's one thing JavaScript and browser tech are good at, it's making GUI dev easier. Just look at how even Qt is basically pivoting completely towards QML, which to my naive eyes looks very similar to how GUI/layouts/styling are done with HTML5/JS. Why would you purposefully use something worse just to avoid browser-related tech? I would agree if this were about raw number crunching, where compiling to WASM makes sense and where an HTML5 GUI can be used as a frontend, but the GUI itself has no reason to be built in C++.


It’s reactive/declarative UI programming, which Android does with Jetpack Compose, and iOS and macOS with SwiftUI. The other way is imperative UI, like the web was doing with jQuery.


I don't think that workflow was even a realistic option when Figma development was initiated, or even when it first launched.


(300MB download warning)

Here's an example of libreoffice running entirely in the browser: https://lab.allotropia.de/wasm/

Once it gets through its painfully long download and bootstrap, it works pretty nicely. This is a big, complicated legacy app, but I'm sure that if reasonable file sizes and graceful loading were an actual goal you could get some pretty good results. Sure, it's not going to be as easy to hire for right now, but I think for complicated programs that general kind of workflow is likely to be better than the big pile of JS scripts.

Google seems to think the same, if Flutter is any indication.


Are you saying that that was built back in 2016/2017?


It's funny that you, and probably a lot of HN folk, consider MPAs simpler than SPAs. It's the opposite in my experience. The name itself is actually telling you that it has more complexity (multi-page vs single-page).

In practice, you can make both as complicated as you want, but SPA seems like a simpler starting point.


The earliest web apps I worked on were multi-page apps, with pages generated by Perl CGIs, later PHP. There was almost nothing going on on the client side except form submissions and a bit of JS-based form validation. I can tell you with 100% certainty this was simpler to build than most anything I see today with React SPAs and REST APIs. Even a simple form submission can be a PITA with modern tools.


> Even a simple form submission can be a PITA with modern tools.

SPAs excel at complex interactions; if you need only very simple forms, they will be a PITA.


I think the argument is generally that most applications' "complex interactions" are artificial contrivances and are unnecessary. I certainly think so.


SPAs definitely have their place. However, when I see them used for content-oriented sites with minimal user interaction (few forms, etc.) I wonder what guided the decision.


It can definitely seem this way if you only consider the front end. But a challenge the vast majority of SPAs run into is that the front end and back end need to share business logic, and this can be a very complex thing to model and maintain, with either duplicated effort (and the potential for drift) or complicated solutions to keep them in sync, particularly if your front-end and back-end technologies aren't identical.

Most MPAs treat the browser and front end as dumb clients, basically: strictly responsible for putting stuff on a screen.
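To illustrate the sharing problem above, a sketch of the usual mitigation when both ends happen to be JavaScript; the module and its rule are hypothetical, and the drift problem returns as soon as the back end is a different language:

    // shared/validate.js -- one module imported by both the browser bundle
    // and the Node server, so the rule has a single source of truth
    export function isValidUsername(name) {
      return typeof name === 'string' && /^[a-z0-9_]{3,20}$/.test(name);
    }

    // On the client it powers instant feedback in the form; on the server it
    // is enforced again on every request, since the client can't be trusted.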


The complications are not coming from the M or S part of the acronym; they come from the words “Page” and “App” being intertwined. Or, in other words, from 18 years of trying to hammer the web browser (conceived for “pages”) into an app platform.


It is a spectrum of interactivity. If you are a C++/AI/Go dev who needs a static blog with a simple form, you'll believe server-rendered MPAs are the way to go. If your site has interactivity and dynamic status/notifications, you'll believe SPAs make sense. Unfortunately, as with everything nowadays, people assume the other side is an idiot and pick up pitchforks.


How is SPA a simpler starting point? It requires more code and more abstractions in the client from the outset. One might argue that that complexity would just exist in the backend in an MPA, but that's not true: there is some additional backend complexity, but not nearly as much as is required to support the multitude of clients that exist for the baseline in an MPA.


Because in most web apps you still need client-side logic anyway, like form validation and such, so familiarizing yourself with a SPA framework is simpler than learning to implement this in addition to the MPA framework you'll probably end up using anyway.


Last month a client asked me to build them a CRUD form using only old-school C# MVC with Razor templates. And by golly, it was much harder for me than just doing it in React+Next.

I had some nostalgic notion that MVC was going to be a smooth ride. That all this JS cruft was slowing me down. Then I needed a search bar, then validation, then I kept running into weird surprises with Razor templates. After a month I ruefully concluded I'd have gotten it done a heck of a lot faster just using my usual stack.

I now regret all those times I complained about how much bloat there is in the JS ecosystem. Yes, it is possible to make CRUD apps with Razor; sure, people do it every day. But for me, with a project with increasingly complex display logic and validation, I definitely should have tried a little harder to talk them into using Next and TS.

As an aside, I was surprised that getting HMR with C# is kind of a pain. I never figured out how to get it while debugging. So every time I wanted to debug some new issue with the template not sending data right to the controller (which felt like every few seconds of work) I'd have to restart the server and wait 10+ seconds for it to restart and then renavigate to where I was, reenter my form fields, and then try again. After I was done and wanted HMR, I'd have to restart the server again. That extra hassle really started to grind on my patience.


The jump from JS to the Microsoft world is always hard. Hell, jumping from anywhere into Microsoft environments has to be hard. No surprise that it was a difficult transition if that wasn't in your background so far. The docs alone will cause a culture shock!

> But for me, with a project with increasingly complex display logic and validation

It always sucks if the requirements are not clear in the beginning and the dev can't anticipate in advance how they will grow.

But what does validation have to do with it? That's completely backend; how would that have been easier with React+Next?

> As an aside, I was surprised that getting HMR with C# is kind of a pain.

Proper systems dynamically interpret their templates on every request in dev mode because of this, like Ruby/Sinatra does.


IMO validation / boundary code ought to be shared between frontend and backend.


You can never trust the client. The only thing you can do is warn about bad input, aka UI decorations.


Sorry, but what’s wrong with Microsoft docs?


Didn't necessarily say there is something wrong with them :) Though in this specific case the ones I skimmed did not look very useful, and in general they are organized quite differently from how FOSS projects usually organize theirs, was my impression so far.


What is HMR? Is it edit/refresh?


Hot Module Replacement: changing the code in a "module" and having the change applied without refreshing the whole page.


Are you surprised that it took you longer with a tool you don't know compared to a tool you do know? IMO, that should be expected.


More surprised at how much I'd forgotten and how it can be pretty fussy to use. I had in my mind a nostalgic feeling of it being so much easier and faster.


Having done a similar project this year I completely agree with you. Maybe that's more a statement against Razor than MVC itself - I suspect it's much cleaner/easier to do a trad MPA+MVC-style app in a language+framework combo that is made for it like Rails.

I think the reality is that Razor has been left rotting for a while. It still works, and there's nothing that bad about it but the DX is not good, especially compared to modern JS frameworks.

It's hard to explain to people who haven't worked with Razor how annoying things like validation are. There is a "happy path" where the different parts of the validation framework all talk to each other, but it's not obvious and the Razor documentation on MSDN is very _not good_. It doesn't help that there's like 5 completely different things all called "Razor/Blazor" and a lot of semantic overlap between them all. Very frustrating to need to sift through so much to find something that feels like it should be easy and well supported.


This exactly summarizes how this last month has felt. Weird, frustrating issues that are weirdly hard to even find documentation on.


> “Web development shouldn’t need a build step”

Everything running in a browser is interpreted. There is no reason for webdev to require a build step; it largely does so because JS standards haven't delivered anything around static typing. Even a "this syntax is valid but ignored" rule would let IDEs provide checks via LSPs while keeping no-build execution.

Build steps and development iteration overhead are something to be avoided at all costs. For them to have been introduced to web dev, with multi-second latency, is a sign of developer experience dropping off a cliff.

Time-to-iterate, tool quality, release speed, etc. are essential to being able to build mental models of code (etc.)
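For what it's worth, something close to "valid but ignored" typing already exists outside the standards: TypeScript's language server can type-check plain .js files via JSDoc comments, with no build step. A sketch; the // @ts-check pragma opts the file in:

    // @ts-check  (asks the editor's TypeScript language server to check this plain .js file)

    /**
     * @param {number} price
     * @param {number} rate
     * @returns {number}
     */
    function withTax(price, rate) {
      return price * (1 + rate);
    }

    withTax('10', 0.2); // flagged as a type error in the IDE, yet the file still runs as-is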


>Everything running in a browser is interpreted.

That's exactly the reason for a build step.

In a build step the code can be optimized to reduce and/or accelerate the code that needs to be interpreted.


JS builds today do not appreciably accelerate code, and minification is of marginal benefit when served assets are appropriately compressed using technologies browsers already support.


No matter how fancy your compression, code with comments stripped will compress to fewer bytes than the same code with comments.


No matter how much you try to save those 10 KB, someone will put a 30 MB image file just to display a tiny flag in the top right corner. Stripping comments is an unnecessary optimization.


Ease of deployment is a huge thing that gets lost. Over-focusing on a single area of an application makes it really obvious that the big picture was lost somewhere.


I appreciate the build step in setups like Vite/Vue because the development server can automatically and accurately hotpatch the application when I make changes. I don’t think you’d want to change the standards to couple DOM and js in the way that makes this possible in Vue but it’s an iteration speed improvement nonetheless.


I wouldn’t want to have either 100 lodash dependencies or ship a ton of functions that aren’t used. Tree shaking is a pretty nice build step.
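A sketch of what that build step buys, assuming the lodash-es package (which ships each function as its own ES module):

    // Bundlers that understand ES modules (Rollup, esbuild, webpack) can
    // tree-shake this import so only debounce and its internal dependencies
    // end up in the shipped bundle.
    import { debounce } from 'lodash-es';

    const onResize = debounce(() => {
      console.log('window resized');
    }, 200);

    window.addEventListener('resize', onResize);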


A build step is fine. I think the point misses the real issue: that build step changing every 6 months in the web world.


Every time you have to go back to an "old" project you haven't worked on in six months there is a webpack/npm dependency hell greeting you.


> “Web development shouldn’t need a build step”

A build step is a huge barrier that makes authoring your own website require significantly more expertise than it otherwise would. It thus makes web development less accessible, and puts anything even a little bit complicated out of reach of anyone who isn't already an experienced web developer or exceptionally dedicated. It also discourages the slow development of some pet website of a not-primarily-a-web-developer-by-trade into something more featureful, since expanding beyond the point where you can still reasonably avoid a build step suddenly requires acquiring a lot more expertise and expending a lot of effort all in one go, instead of just being a smooth expansion into a more featureful project.

The simpler it is to make your own website without having extensive web development experience, the more people will be able to have their own website instead of being directed to the endless array of corporate silos like social media (which has largely replaced personal websites) and corporate middlemen (which have largely replaced in-house commercial websites for smaller actors) that take care of the burden of making your own website for you, with some rather obvious downsides.


I mean, a build step is not required to build a website, but I'd say anyone who wants to have more than one HTML page and one stylesheet will probably want some sort of build step sooner rather than later.

Like, if you have two or more pages, you probably want them to share the same header or footer, and unless you want to A) repeat the same markup on each page or B) inject them with a client-side script, you will need some sort of build step. There are more accessible solutions out there, like Hugo.

How else are you going to achieve that? Sure you could use PHP, but I don't see how that is more accessible or maintainable than having a build step.


> How else are you going to achieve that?

People already thought about that problem when the World Wide Web was invented and they came up with Server Side Includes [0], a scripting standard that predates the Apache HTTP Server.

Looks like this inside the HTML document:

    <!--#include file="header.html" -->
    <!--#include file="footer.html" -->

Not many people use this anymore, but it is easy to share common markup and very accessible for people with just basic HTML knowledge. Major web servers of today still support it.

[0] https://en.wikipedia.org/wiki/Server_Side_Includes


The tech sector is riddled with such divisions of perspective.

”In a high-level language like C...” - chip designer

”In a low-level language like C...” - application programmer


Anecdote: Coming from the application side I had always thought of C as a low-level language, but at one company where I worked with chip designers who only did Verilog, I was gobsmacked when in my conversations with them they said they didn't know higher-level languages like C and could not program in it.


We're not gobsmacked when you don't know Verilog, so I'm not sure why you think you can be gobsmacked some chip designers don't know C...


Because until that point I didn't know much about HDLs like Verilog/VHDL and how they sit at a completely different level from "standard" programming languages like C/Python/etc. My assumption then was that since C was a low-level language and chip designers were working at a low level, they would be able to program in Assembly/C in their sleep and would initiate me into the mysteries of how my C code was actually translated into electrical signals in the processor circuits. It was a big disappointment when I realized we were living in completely different worlds.

I actually made a deal with some of them to teach them C/C++ in return for them teaching me Verilog/SystemC, but unfortunately that never came to pass. I even got myself a couple of Verilog/VHDL books and some FPGAs to teach myself what I call "Actual and True Hardware Programming" but haven't really sat down with it. Hopefully sometime in the future, so I can finally know everything from the bottom-most layer to the top-most.


Ok, I guess that's fair. Having worked at one of the large chip makers, I can tell you there are plenty of people who know C and Verilog, you just weren't talking to any of them. Those who need to do, and those who don't, don't. It's certainly an industry with high degree of specialization.


So riddle me this: do those who know C & Verilog have a better idea of how the whole C -> Assembly -> machine code -> physical processor circuitry pipeline works? I don't mean the logical model, but how exactly the program's bit stream gets transformed into electrical signals in the circuits their HDL code describes.


No, not usually. In my experience anyway, most random engineers in the semiconductor industry who know both C and Verilog are just using those tools to do their job. There is a lot of ECE stuff to unpack in your question, but the subfield of ECE in question is called VLSI. You'd want to talk to someone who works in VLSI, or did VLSI as their focus in ECE undergrad or grad school.


They probably used Tcl for scripting though... it's bizarrely ubiquitous.


“Riddled” has a tinge of negativity to it. I would say it's actually a useful thing: "level" is a count of abstraction layers relative to the abstraction you're familiar with. It's really just a way to communicate one's range of responsibility and knowledge. I've heard people call Python "a low-level way of using a computer" or similar.


Riddles are also explanatory.

The best programmer I ever worked with writes drivers for a hobby.


> When I worked on animations, I was surprised at how many people believed that some animations “run on the GPU” (the browser can offload some animations to a separate process or thread that updates animations using the GPU to composite or even paint each frame but it doesn’t offload them to the GPU wholesale)

Not to nitpick his nitpick, but... I've said this exact thing in the past, and his parenthesized explanation is what I meant. It's too much of a mouthful to try to be super accurate and specific all the time.
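
For the record, the practical upshot of the distinction: animations that touch only transform and opacity can be handed to the compositor (the "runs on the GPU" case people mean), while animating layout properties keeps every frame on the main thread. A quick illustrative sketch:

  /* Compositor-friendly: only transform and opacity change per frame */
  .slide-in {
    animation: slide 300ms ease-out;
  }
  @keyframes slide {
    from { transform: translateX(-100%); opacity: 0; }
    to   { transform: translateX(0); opacity: 1; }
  }

  /* Not compositor-friendly: 'left' forces layout on the main thread */
  @keyframes slide-layout {
    from { left: -100px; }
    to   { left: 0; }
  }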


Technology is not a religion and it doesn't need prophets. Why are people so hell-bent on convincing others to join them in their use of whatever technology?

Pick what solves your problem. In the context of web development, that overwhelmingly means using whatever is most popular given your preferred programming language.


"it’s hard to imagine how [Figma and Photoshop for web] could work without JavaScript" is a bit of a strawman


> As an example, the Eleventy documentation seems to avoid using client-side JavaScript for the most part. As Eleventy supports various templating languages it provides code samples in each of the different languages. Unfortunately, it doesn’t record which language you’ve selected so if your chosen language is not the default one, you are forced to change tabs on every single code sample. A little client-side JavaScript here would make the experience so much more pleasant for users.

This may actually be an ePrivacy limitation (cookie law), not a desire to avoid JS. Persisting the setting across pages requires client-side storage, which in ePrivacy countries requires either that it be obvious to the user that the setting is persisted, per-action consent (e.g. a "preferred language" setting), or site-wide user consent (a cookie pop-up).
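
For scale, the client-side JavaScript in question is tiny; a hypothetical sketch (the .code-tab class, data-lang attribute, and storage key are invented), and exactly the kind of persistence the consent question applies to:

  // On every page, remember which template language the reader picked...
  document.querySelectorAll('.code-tab').forEach((tab) => {
    tab.addEventListener('click', () => {
      localStorage.setItem('preferred-lang', tab.dataset.lang);
    });
  });

  // ...and re-select it on load so the choice carries across pages.
  const lang = localStorage.getItem('preferred-lang');
  if (lang) {
    document.querySelectorAll('.code-tab[data-lang="' + lang + '"]')
      .forEach((tab) => tab.click());
  }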


As long as it's implemented as a session cookie (or sessionStorage, which IIRC behaves like a session cookie) it should not need consent (as per 3.6, “UI customisation cookies”, of https://ec.europa.eu/justice/article-29/documentation/opinio...)
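
In code terms, the session-scoped storage that exemption describes maps onto sessionStorage rather than localStorage; a two-line sketch (key and value invented):

  // sessionStorage is cleared when the browsing session ends, mirroring
  // the short-lived "UI customisation" cookies the exemption covers.
  sessionStorage.setItem('preferred-lang', 'njk');
  const lang = sessionStorage.getItem('preferred-lang'); // null next session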


I agree for something that's clearly customizing the UI of the whole site, but if it looks like you're choosing a language only for the currently displayed snippet then I think that wouldn't qualify?


I think it still qualifies. In the guidance doc (and a related doc focused on tracking cookies) the focus is on the purpose of the cookie - the intention is to avoid tracking the user. A session cookie specifically for remembering a UI choice the user made passes the test of a) being necessary to enable a piece of functionality and b) being set as a result of an explicit user action: clicking the particular language tab.


I think the place where it gets tricky is whether you're meeting the "strictly necessary in order to provide an information society service explicitly requested by the subscriber or user" standard. If the control looks like it only affects the current box, then I don't think storing the choice to apply beyond the current page meets that standard.


agree to disagree ¯\_(ツ)_/¯


My understanding of ePrivacy (mostly via GDPR) is that this kind of feature does not require consent. It's only features that would allow tracking of the user that require consent. Storing a setting in local storage and never sending it to the server is fine.

Things get a bit muddier when sending it to the server, but even then you may not need consent if it is a feature required for the correct working of the website, or one that improves the experience without tracking and profiling.


This is correct. Unless it involves personal data and/or tracking the user somehow, GDPR isn't relevant.

Example: if you're storing lightOrDarkTheme in a cookie/localStorage, there is no need to follow any directives, nor are you required to inform the user that you're storing the preference.


GDPR and ePrivacy are different regulations. Under the latter even purely local data storage still requires consent unless it's "strictly necessary" to implement something the user has requested.

For example, see the discussion the sibling commenter linked around storing UI customization choices for only the duration of the current session: https://ec.europa.eu/justice/article-29/documentation/opinio...


You don't need consent in this case, as clearly stated in 3.6 UI customization cookies

--- start quote ---

3.6 UI customization cookies

These customization functionalities are thus explicitly enabled by the user of an information society service (e.g. by clicking on button or ticking a box) although in the absence of additional information the intention of the user could not be interpreted as a preference to remember that choice for longer than a browser session (or no more than a few additional hours). As such only session (or short term) cookies storing such information are exempted under CRITERION B

--- end quote ---


See discussion above: https://news.ycombinator.com/item?id=38901520

As long as it's clear to the user that they're making a site-wide UI customization choice, and not just choosing the language for this specific example, I agree with you, but I don't think it's clear in the typical case.


IMO this is splitting hairs, and no one will take you to court because you changed the snippet language across the entire site for the session


Recently I was reading the Learn CSS the Pedantic Way book, and its definition of inline boxes did not match the way anonymous block boxes are generated when an inline-level element has a block-level element as its child. So I went looking elsewhere for a more appropriate definition for that case and found this issue on the standards: https://github.com/w3c/csswg-drafts/issues/1477 It was reassuring to learn I was not the only one confused.

My question was: does the inline box generated by the inline-level element contain the box generated by the block-level child, or is there no parent inline box at all, just two sibling inline-level boxes on either side of the block-level box, each wrapped in an anonymous block box? Reading that issue I learned about the concept of fragments, which I did not know browsers had. The issue seems to suggest that the box tree for this case should have the inline box as a parent of the block box. Which led me to another question: in that case, if I apply a border to the parent inline-level element, shouldn't it apply to the overall generated box (it does not)? The answer is that the borders of block boxes and inline-level boxes should not intersect, but that is really difficult to derive from reading the standards alone.

Anyway, it was headache-inducing trying to learn the box model pedantically :) I wish I could learn more about layout in browsers. I'm trying to read the LayoutNG code in Chromium, but I need more aspirin hehehe


I'd like to add another one: I don't need a separate NodeJS (or whatever engine) service to build my service dashboard. Before NodeJS got popular, backend engineers like me simply put web assets into a folder in the web app, so the service would have an admin page or a dashboard for per-node administration. For some reason, that practice has become taboo. My engineers insist on setting up a separate NodeJS service just to build even a simple admin page, but I fail to see why. The reasons I'm given are usually these three:

1) a NodeJS service gives us optimized performance, through techniques like server-side rendering;

2) a separate service is easier to scale;

3) a separate service offers separation of concerns.

However, 1) and 2) are premature optimization to me. All I need is a standardized per-node admin page for my service. The QPS is probably one request per day, from a human. Why would I care about SSR or scalability at all? And 3) is quite hand-wavy. Meanwhile, the overhead of managing a separate service, plus the dependencies brought in by the NodeJS ecosystem, seems high.

So, what's wrong with the old way of having embedded web assets in a service for building simple admin pages?


Reminds me of how, for a long time, C++ compiler engineers didn't themselves write C++ or know its best practices, despite writing the implementation.


> Many didn’t know about new CSS features that had shipped 10 years ago. What’s more, even when we told them about them, they didn’t seem too excited. They were doing just fine with jQuery and WordPress, thank you.

...and if jQuery and WordPress do the job, they are sound technical decisions.

What is not a sound technical decision is forever chasing the latest fashion in technology, also known as the "oh look, a shiny thing" development paradigm.


> There are Twitter/X polls, for example, but they tend to only be answered by the Web developers on the bleeding edge and are easily skewed by who spreads the word about the poll.

Maybe MDN should have a comment section, like the PHP docs. That would be more representative.


The PHP docs comment section is pretty fascinating. I am not sure how these are allowed, but you will find 10-year-old comments with very distilled, specific techniques or super well-explained nuances. Not something I've seen for other languages.


Well-written, informative article.

What does it say about this whole domain when, as the author says, Web Apps and Sites/Blogs (plus mobile apps) are so very different from each other, each using a myriad of technologies, each with its own learning curve? Where is the uniformity and commonality in all this? Why are developers perpetuating this?

That said, this might be a good place to ask for recommendations for study, since I am not a "Web Developer":

1) Comprehensive books/other sources on full-stack Web App and Site development. Bonus points if they use a single language for frontend/backend/everything else.

2) The same as above but using C/C++ languages.


Businesses want uniformity and commonality since, in theory, it should lower development costs (see projects like Flutter and Fuchsia, which aim to make every platform web-based).

The problem is that users/customers have higher expectations for their user experiences than the web can offer on mobile/desktop/etc.

Robinhood, Duolingo, Slack are a few good examples of UX being huge differentiators.


I understand what you are saying but am not clear on why it should be so. Having programmed GUI apps on Microsoft Windows and X Windows/Motif (which can be remote) on Unix systems, I am not sure why we cannot have a similar uniform architecture for "Web Apps". After all, the "browser" is considered a platform in itself. And given that HTTP has now been munged into being practically a transport protocol for anything, previous limitations are no longer an excuse.


While I agree with "web engine developers and web spec developers have little-to-no idea about web development", I disagree with "web browsers are good at handling complex and long-lived DOM trees with dynamic changes now".


They are certainly better than before at handling the memory and bookkeeping of a large DOM tree. Every browser had so many unexplainable little bugs, but nowadays they can be relied on to correctly handle their own internal data without crashing. It’s a huge improvement.


> "web browsers are good at handling complex and long-lived DOM trees with dynamic changes now"

Is there an alternative renderer, or anything else, that handles "complex and long-lived $something-trees with dynamic changes" better than web engines do?

They've been optimized for exactly that over decades at this point, with huge investments in both human-hours and money. It's hard to imagine something else handling it better than browser engines.


On Windows, WPF is better: https://en.wikipedia.org/wiki/Windows_Presentation_Foundatio...

The critical features missing from HTML are data binding and data templates. Last time I checked, many modern frontend frameworks contain overcomplicated, incomplete, and inefficient implementations of these features on top of HTML DOM.


I'm not super familiar with Microsoft's offerings, but is WPF available cross-platform? It's hard to argue that something that exists on only one platform is better, when it only has to do 25% of what a cross-platform solution would do.


Sadly, WPF is Windows-only.

There's an equivalent cross-platform GUI framework called Avalonia: https://www.avaloniaui.net/ I don't have hands-on experience with it, but based on the internets I have the impression the tech is pretty good by now.


> Is there an alternative renderer/something that handles "complex and long-lived $something-trees with dynamic changes" better than web engines does?

Literally everything else.

My favorite recent example: 1000 objects with complex behaviour, lighting and animations take 4 microseconds to render, at 9:36: https://youtu.be/kXd0VDZDSks?si=PjqeFVoSTPSsbdIk

> They've been optimized for just that during decades at this point

You can't optimize beyond the limitations of the ad-hoc, hackish nature of the web. There's only so much optimization you can do when even the simplest of things will cause re-flow and re-render of the entire page.

Well, games redraw the entire screen, but they can draw thousands of objects in a fraction of the time it takes the web browser to figure out how to lay them out.

Edit:

- Figma had to reimplement everything from scratch in WebGL because browsers (that is, the DOM) are just bad

- Google Docs and Google Sheets reimplemented everything in canvas, once again bypassing the "greatest renderer on earth" in order to control their own rendering
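
The canvas model those apps moved to boils down to owning the whole frame yourself: redraw everything on every frame and never touch the DOM. A minimal sketch (assumes a <canvas> element on the page):

  const ctx = document.querySelector('canvas').getContext('2d');
  const objects = Array.from({ length: 1000 }, (_, i) => ({
    x: (i * 37) % 800,
    y: (i * 53) % 600,
  }));

  function frame() {
    // Clear and redraw the whole scene each frame: no DOM, no layout, no reflow.
    ctx.clearRect(0, 0, 800, 600);
    for (const o of objects) {
      o.x = (o.x + 1) % 800;
      o.y = (o.y + 1) % 600;
      ctx.fillRect(o.x, o.y, 4, 4);
    }
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);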


I spend a lot of time with DNS (it just happened, man) and I see the same general thing, where a lot of intention and expertise is imputed to others. True systems thinking around large systems is rare. Bufferbloat could be emblematic: the lack of progress in remediation, and the lawfare and near-literal acts of Congress that stem from misattribution and misunderstanding of the problem, as the thing slouches along and utters its phlegmatic growl.


Nothing wrong with build steps, until they go wrong.

Then you will spend 5 hours replacing or updating deprecated or newly incompatible npm/system dependencies.


The horrifying guidance I've gotten a number of times when stepping into a project is "downgrade your Node version by 5 years and never update anything."


A pragmatic solution. Sometimes you don't have time, and stakeholders don't understand or appreciate the side issues you need to deal with.


Absolutely fantastic writing: clear, kind, authoritative. I don't agree with it all, but I think the last section sums it all up nicely, and it's something I've felt for a while.

With new SSR frameworks like Next.js, I think this whole MPA/SPA dichotomy starts to dissolve a little bit. I’m thrilled that browser standards are evolving to help it along!


It would be interesting to measure the battery savings from disabling JavaScript on mobile devices. While it might be cheaper to plop a website on GitHub Pages or Netlify and the like, somehow I feel that the costs are just handed down to the user in bandwidth and battery use.


If what the blog suggests were the optimal way of doing things, it would already have become the norm, simply through people continuing to optimize. Idealistic views aren't necessarily practical and may not get traction in the real world.


I don't mind a build step, per se, just that the JS ecosystem's build step is particularly more painful than others.


The article title is actually "Weird things engineers believe about Web development".


2024 is the year of Rust and HTMX.


> Maybe 2024 will be the year where client-side Rust/WASM frontend frameworks start to get traction and if that’s the case, we’d better get used to having a build step!

That's nonsense; there isn't any progress here. You're just rebuilding things that could easily be built 20 years ago, but with 10x the complexity. And then, to solve slow load times, your solution is to add even more complexity.


Who said every site should work without JS, even Figma?

Honestly, who is saying that? Who is saying these things?


A ridiculously small but very vocal minority which is active on forums like HN.


> https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... (25 results)

> https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que... (28 results)

So at least some are saying something along those lines. I'm sure if you search Xitter you'll find even more hits.

I think the argument is usually not made in isolation like that though, and usually with exceptions. "Websites without heavy user interaction should work without JS", "Websites should work without JavaScript, unless based on live data" and so on, which makes a lot more sense.


I tend to make all my pages function without JavaScript even though I don't expect my users to turn it off. It makes things easier to cover with automated tests if I can just exercise my backend servers with scripts rather than a headless browser, and if something does go wrong with serving the JavaScript, it's nice to know my site is still functional.

Generally speaking the no-JS experience will be horrible (what should be a modal or a partial page reload becomes a full reload), but with the technologies I use it's not that hard, and it makes my development experience easier while delivering some (rarely needed) benefits to end users, so it's an unqualified win for my particular use case.
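
Concretely, the pattern is a plain form the server always handles, with script layered on top; a generic sketch (the /comments endpoint and element ids are invented):

  <!-- Works with JavaScript disabled: a normal POST and a full page reload. -->
  <form id="comment-form" method="post" action="/comments">
    <textarea name="body" required></textarea>
    <button>Post</button>
  </form>

  <script>
    // Enhancement only: intercept the submit to avoid the full reload.
    // If this script never loads, the form above still works.
    document.getElementById('comment-form')
      .addEventListener('submit', async (e) => {
        e.preventDefault();
        await fetch('/comments', { method: 'POST', body: new FormData(e.target) });
        e.target.reset();
      });
  </script>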


I think the points in this blog are made up. No one with a bit of experience “believes” such things.


FYI, you are shadowbanned; write HN an email to clear that up.


“Every” is a bit extreme, but I've been a believer in Graceful Degradation for years. This is where jQuery excelled.



