The onboarding funnel: Only a concern if you're trying to grow your user base and make sales.
Conversion: Only a concern if you're charging money.
Adwords: Only a concern if, in his words, you're trying to "trounce my competitors".
Support: If you're selling your software, you kind of have to support it. Minor concern for free and open source.
Piracy: Commercial software concern only.
Analytics and per-user behavior: Again, only commercial software seems to feel the need to spy on users and use them as A/B testing guinea pigs.
The only point where I agree with him that web development is better is the shorter development cycle. But I would argue that this is only a "developer convenience" and doesn't really matter to users (in fact, shorter development cycles can be worse for users, as their software shifts like quicksand out from under them). To me, in my open source projects, my "development cycle" ends when I push to git, and that can happen as often as I want.
Today, even the minimal steps of creating a desktop app have lost their appeal, but I like showing how I solved a problem, so my "apps" are Jupyter notebooks.
- CAD / ECAD
- Artist/photos
- Musician software: composing, DAWs, etc.
- Scientific software of all domains, drug design, etc.

Most things I create in my free time are for my and my family's consumption, and they typically benefit immensely from the write-once-run-everywhere nature of the web.
You can launch a small toy app on your intranet and run it from everywhere instantly. And typically these things are also much easier to interconnect.
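As a sketch of how low that bar is (a hypothetical example, dependency-free Node):

    // a toy intranet "app": one file, no build step, no installer
    const http = require("http");
    http.createServer((req, res) => {
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end("<h1>Family grocery list</h1>");
    }).listen(8080);
    // reachable from every device on the LAN at http://<hostname>:8080

Every phone, tablet, and laptop in the house can open it immediately; a desktop app would need a separate install on each one.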
KDE has analytics, they're just disabled by default (and I always turn them on in the hopes of convincing KDE to switch the defaults to the ones I like).
Times have changed quite a bit from nearly 20 years ago.
These concerns may not matter to you, the developer, but they absolutely matter to end-users.
If your prospective user can't find the setup.exe they just downloaded, they won't be able to use your software. If your conversion and onboarding sucks, they'll get confused and try the commercial offering instead. If you don't gather analytics and A/B test, you won't even know this is happening. If you're not the first result on Google, they'll try the commercial app first.
Users want apps that work consistently on all their devices and look the same on both desktop and mobile, keep their data when they spill coffee on the laptop, and let them share content on Slack with people who don't have the app installed. Open source doesn't have good answers to these problems, so let's not shoot ourselves in the foot even further.
If a piece of software doesn’t have users and the developers don’t care about the papercuts they are delivering, I would argue what they have created is more of an art project than a utility.
And his point about randomly moving buttons to see if people like it better?
No fucking thanks. The last thing I need is an app made of quicksand.
For some things a desktop app is required (more system access) or offers some competitive UX advantage (although this reason is shrinking all the time). Short of that, users are going to choose the web 95% of the time.
That's ignoring browser fragmentation, of course, although it seems to shrink every year (so long as you ignore Safari).
The impact on people's time, money, and the environment is proportional.
Remember LiveScript and early web browsers? It was almost cancelled by big tech because Java was supposed to be the cross-platform system. The web and JavaScript just BARELY escaped a big-tech smackdown. They stroked big tech's ego by renaming the language to JavaScript to honor Java, licked some boots, and promised a very mediocre, non-threatening UI experience in the browser, so big tech allowed it to exist. Then the whole world started using the web and JavaScript; it caught fire before big tech could extinguish it. Java itself got labeled a security threat by Apple/Microsoft for threatening the walled gardens, but that's another story.
You may not like browsers, but they are the ONLY thing big tech can't extinguish, thanks to their ubiquity. Achieving ubiquity is not easy, and may not even be possible for new contenders. Pray to GOD every day and thank her for giving us the web browser as a feasible cross-platform GUI.
Web browser UI available on all devices is not a failure, it's a miracle.
To top it all off, HTML/CSS/JavaScript is a pretty good system. The CSS box model is great for cross-platform design: things need to work on a massive TV and on a small-screen phone alike. The open, text-based nature is great for catering to screen readers, which helps the visually impaired.
The latest whizbang GPU-powered UI framework has probably forgotten about the blind. It's probably stuck in the days of absolute positioning and non-declarative layouts, with x,y(,z) coords. That may be great for the next-gen 4-D video game, but it sucks for general-purpose use.
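To make the contrast concrete, here's a minimal sketch of a box-model layout (the class names are made up): no coordinates anywhere, and the same markup reflows from a phone to a TV.

    <style>
      .app  { display: flex; flex-wrap: wrap; gap: 1rem; }
      .nav  { flex: 1 1 12rem; }  /* grow, shrink, prefer 12rem wide */
      .main { flex: 3 1 24rem; }  /* takes ~3x the spare width */
    </style>
    <div class="app">
      <nav class="nav">menu</nav>
      <main class="main">content</main>
    </div>

On a narrow screen the two panes wrap into a single column; an absolutely positioned UI would need hand-tuned x,y for every form factor.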
You've reminded me of the XKCD comic about standards: https://xkcd.com/927/
Do you really want a universal app engine? If you don't have a good reason for ignoring platform guidelines (as many games do), then don't. The best applications on any platform are the ones that embrace the platform's conventions and quirks.
I get why businesses will settle for mediocre, but for personal projects why would you? Pick the platform you use and make the best application you can. If you must have cross-platform support, then decouple your UI and pick the right language and libraries for each platform (SwiftUI on Mac, GTK for Linux, etc...).
It would have been great if browsers remained lightweight html/image/hyperlink displayers, and something separate emerged as an actual cross-platform API, but history is what it is.
Let's also remember that it's infinitely easier to keep a native app operational, since there's no web server to set up or maintain.
That's a job for a web page. It doesn't need to be installed.
1-4. Google, find, read... this is the same for web apps.
2. Click download and wait a few seconds. Not enough time to give up, because native apps are small. Heavy JS web apps might load for longer than that.
3. Click on the executable that the browser pops up in front of you. No closing the browser or looking for your downloads folder. It's right there!
3.5. You probably don't need an installer, and it definitely doesn't need a multi-step wizard. Maybe a big "install" button with a smaller "advanced options".
3.6. Your installer (if you even have it) autostarts the program after finishing.
4. The user uses it and is happy.
5. Some time later, the program prompts the user to pay, potentially taking them directly to the payment form, either in-app or by opening it in a browser.
6. They enter their details and pay.
That's one step more than a web app, but there's also a much bigger chance the user will come back to pay (you can literally send them a popup; you're a native app!).
Nowadays, it seems that mobile apps have the "best metrics" for B2C software. I'd be interested to read a contemporary version of this article.
This reminds me of a past job working for an e-commerce company. This wasn’t a store like Amazon that “everyone” uses weekly; it was a specific pricey fashion brand. They had put out a shitty iOS app, which was just a very bare-bones wrapper around the website. But they raved about how much better the conversion rates were there. Nobody would listen to me about how the customers who bother downloading a specific app for shopping at a particular retailer are obviously just superfans, so of course that self-selected group converts well.
So many people who should have been smart, judging by their job titles and salaries, got the causation completely backwards!
Do you have principles on how to tackle this? I feel stuck between the irrationality of anecdata and the irrationality of lying with numbers. As if the only useful statistic is one I collect and calculate myself. And, even then, I could be lying to myself.
I'd wager there are more people paying for software for their smart phone than any other platform they use.
Your employer most likely has.
I wonder whether Google, in its Don't Be Evil era, ever considered what they should do about software piracy, and what they decided.
I'd guess they would've decided to either discourage piracy, or at least not encourage it.
In the screenshot, the Google search query doesn't say anything about wanting to pirate, yet Google is suggesting piracy, a la entrapment.
(Other history about that user may suggest a software piracy tendency, but still: Google knows what piracy-seeking looks like, and they special-case all sorts of other topics.)
Is the ethics practice to wait to be sued or told by a regulator to stop doing something?
Or maybe they anticipate costs and competition for how they operate, and lobby for the regulation they want, so all they have to do is be compliant with it, and be let off the hook for lawsuits?
In the early days of Google in the public consciousness, this turned into "you can make money without being evil." (From the 2004 S-1.)
Over time, it got shortened to "don't be evil." But this phrase became an obligatory catchphrase for anyone's gripes against Google The Megacorp. Hey, Google, how come there's no dark mode on this page? Whatever happened to "don't be evil"? It didn't serve its purpose anymore, so it was dropped.
Answering your question really depends on your priors. I could see someone honestly believing Google was never in that era, or that it has been in it from the start. I strongly believe that the original (and today admittedly stale) sentiment has never changed.
The public already demonstrated that they adopted, misused, and weaponized the maxim. Its retirement just sharpened the edge of that weapon. Now instead of "What happened to 'don't be evil'?" it's "Of course Google is being evil," and everything is seen through that lens.
Tech industry culture today is pretty much finance bro culture, plus a couple decades of domain-specific conditioning for abuse.
But at the time Google started, even the newly-arrived gold rush people didn't think like that.
And the more experienced people often had been brought up in altruistic Internet culture: they wanted to bring the goodness to everyone, and were aware of some abuse threats by extrapolating from non-Internet society.
And if it were the altruistic Internet people they hired, the slogan/mantra could be seen as a reminder to check your ego/ambition/enthusiasm, as well as a shorthand for communicating when you were doing that, and that would be respected by everyone because it had been blessed from the top as a Prime Directive.
Today, if a tech company says they aspire not to be evil: (1) they almost certainly don't mean it, in the current culture and investment environment, or they wouldn't have gotten money from VCs (who invest in people motivated like themselves); (2) most of their hires won't believe it, except perhaps new grads who probably haven't thought much about it; and (3) nobody will follow through on it (e.g., witness how almost all OpenAI employees literally signed to enable the big-money finance-bro coup of supposedly a public interest non-profit).
For example, my impression at the time was that people thought that Google would be a responsible steward of Usenet archives:
https://en.wikipedia.org/wiki/Henry_Spencer#Preserving_Usene...
FWIW, it absolutely was believable to me at the time that another Internet person would do a company consistent with what I saw as the dominant (pre-gold-rush) Internet culture.
For example of a personality familiar to more people on HN, one might have trusted that Aaron Swartz was being genuine, if he said he wanted to do a company that wouldn't be evil.
(I had actually proposed a similar corporate rule to a prospective co-founder, at a time when Google might've still been hosted at Stanford. But the co-founder was new to the Internet and didn't have the same thinking.)
If anything, it’s my very faint hope that AI would give companies - especially non-software companies - the bandwidth to release two real native apps instead of just 2 builds of a shitty Electron app. Fat chance though, I think, not least because companies love to use their “bRaNdInG” on everything - so the native look and feel a real app gives you “for free” is a downside for the clowns that do the visual design for most companies.
Entry suggestions/completions have been formally deprecated with no replacement since 2022. When I did get them working on the deprecated API, there was an empty completion option that would segfault if clicked. The default behaviour didn’t hide completions on window unfocus, so my completions would hover over any other open window. There was seemingly no way to disambiguate Tab vs Enter events… it just sucked.
So after adding one widget I abandoned the project. It felt like the early releases of SwiftUI, where you could add a list view but would run into weird issues as soon as you tried adding stuff to it.
Similarly, trying to build an app for macOS practically depends on Swift by Sundell, Hacking with Swift, or others to make up for Apple’s lack of documentation in many areas. For years, stuff like NSColor vs Color and similar API boundaries added friction, and the native macOS SwiftUI components just never felt normal while I tried making apps.
As heavy as web libraries and Electron are, at least they mostly work out of the box.
QtWidgets is extremely good though, even if it is effectively in maintenance mode.
Avalonia seems good too, though I haven't used it myself.
For prototyping / one-offs, I've always enjoyed working in Tcl/Itcl and Tk/Itk: object-oriented Tcl with a decent set of widgets. It's not going to set the world on fire, but it's pretty portable (it should mostly work on every platform with minor changes), has a way to package up standalone executables, can ship many files as one with an internal filesystem, etc.
Of course, I spent ~15 years in EDA, so it's much more comfortable for me than it would be for most people, but it can easily be integrated into C/C++ as well, with SWIG, etc.
Anthropic has the resources of a fully armed and operational Claude Mythos (eyeroll), but they still choose to shit out an Electron app on all of their users instead of going native like their competitors have done.
That's not true at all; any number of things could have killed Bitcoin in its infancy. The stakes were just low. Somewhere out there is a lost collection of wallets of mine, collectively holding ~100 BTC ($1000 at the time). If regulators had cracked down hard, that 100 BTC would have become just as worthless, and either way I'd be out $1000.
"Risk" is an epistemic claim about the future taking the worse path. Obviously looking back it looks like risk-free money. That's not how it looked at the time. The "currency of the future" thing was always niche, especially after the crash in 2013, until a much larger cultural shift happened around 2015-ish.
Plenty of people will chime in with early Bitcoin stories and how they always believed it was going to go to the moon, etc. I always find it curious, because my memory of the time period is that it was a means to an end, and that's how the overwhelming majority saw it and treated it. Funnily enough, it was that overwhelming majority that led to it being worth anything at all. If it had just been a bunch of yahoos clamoring about the "currency of the future" thing, it probably would have gone absolutely fucking nowhere. The irony that the yahoos ended up becoming the majority is, I think, underappreciated.
ok, now do this analysis for mobile apps...
To save you a click: It's harder to monetize desktop apps than webapps.
Lol. LMAO, even.
I guess remote work is the best of both worlds.
People who focus this much on "conversion" et al are dinosaurs who deserve extinction.
More importantly, the author is talking about the realities of trying to earn a decent living shipping independent software. That requires paying customers.
It's perfectly reasonable to want to be paid for your work, and it certainly doesn't warrant the vitriol in your comment.
"Over roughly the same period my day job has changed and transitioned me from writing thick clients in Swing to big freaking enterprise web apps."
I mean, the web kind of won. We just don't have a simple and useful way to design for the web AND the desktop at the same time. I also use the web, of course, with a gazillion useful CSS and JavaScript snippets where I have to. I have not entirely given up on the desktop world, but I abandoned ruby-gtk and switched to ... jruby-swing. I know, I know, nobody uses Swing anymore. The point is not so much about using Swing per se, but simply to have a GUI that is functional on Windows, with the same code base (I ultimately use the same code base for everything on the backend anyway).

I guess I would fully transition into the world wide web too, but how can you access files on the filesystem, create directories, etc., without using Node? JavaScript is deliberately restricted, Node is pretty awful, and ruby-wasm has no real documentation.
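For what it's worth, Chromium-based browsers do expose a permission-gated File System Access API that covers part of this. A minimal sketch (the file and directory names are made up), which must be triggered by a user gesture:

    // runs only in a secure context, after a user gesture (e.g. a click)
    async function saveNote() {
      const dir = await window.showDirectoryPicker();                       // user grants access to a directory
      const sub = await dir.getDirectoryHandle("notes", { create: true });  // create a subdirectory
      const file = await sub.getFileHandle("todo.txt", { create: true });
      const writable = await file.createWritable();
      await writable.write("hello from the browser");
      await writable.close();
    }

It's still deliberately restricted, as you say: a prompt-per-directory sandbox, and Chromium-only, rather than free filesystem access.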
There's an interview with him on the subject that is sadly behind a paywall now: https://www.indiehackers.com/post/how-i-grew-my-appointment-...
The world has changed a lot since then. The days when 37signals could build an empire out of simple web-form apps and individuals could build and sell a SaaS that sends reminder texts are long gone. Most of the low-hanging fruit was picked long ago, and most simple services have already seen 100 different startups try to serve them.
As much as Appointment Reminder was my prime example of a successful indie SaaS, the author's second startup has (with all due respect) become one of my prime examples of not validating product-market fit before building your product. They went on to build Starfighter, a company that was supposed to be a candidate vetting platform where people could do complex coding challenges and then get matched up with companies wanting to hire people. It was built partially in the open through their newsletter and in Hacker News posts.
If you thought doing LeetCode problems to get interviews was annoying, imagine having to spend hours or days going through a CTF where you hack multi-core CPUs to do something complex with a simulated stock market. I can't even remember the entire premise, but every time I read something about the company it was getting more and more complex. At the same time I was on other forums where candidates were going the opposite direction: becoming frustrated with the proliferation of coding interviews and refusing to do interview challenges that would take hours of their time.
I remember through the entire process thinking that it seemed like a questionable business plan that wouldn't really appeal to companies or to candidates. Even the Hacker News comments were full of (surprisingly polite) feedback saying that investing a lot of hours into solving programming puzzles to maybe get some recruiter interest wasn't appealing - https://news.ycombinator.com/item?id=10480390
Some amazing foreshadowing in that thread from one of the co-founders (not Patrick McKenzie):
> I literally lack the ability to form coherent sentences about our business that don't somehow involve how to render a graph of AVR basic blocks in a React web app, is how little we're thinking about how the game interacts with recruiting right now.
> We are going to get the CTF right, and then work from there to a sustainable recruiting business. We should have done it the other way around, but we didn't. :)
As you might have guessed, it didn't work out at all. It was weird for me to follow one of my indie startup heroes on a journey into a second business that skipped all of the normal startup advice and then ran into exactly what that advice warns against.
It was enlightening to follow along and I'm glad they tried something different and shared it along the way, but watching it happen was a turning point for me in how I approach advice from any one individual author, blogger, writer, or influencer.