I just switched from QuickBooks to Beancount+Fava for my sole proprietorship, and couldn't be happier. I've added a text-based simple invoice system, a text-based vehicle mileage tracker, and have validators that ensure that every expense with a tax status has a document attached to it.
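In case anyone wants to build something similar: Beancount plugins are plain Python functions over the entry stream, so a validator like that stays short. Here's a minimal sketch; the "tax" and "document" metadata keys are my own conventions, not anything Beancount mandates.

    # Minimal sketch of a Beancount plugin that flags tax-relevant
    # expenses lacking an attached document. The "tax" and "document"
    # metadata keys are assumed conventions; adapt them to your ledger.
    import collections

    from beancount.core import data

    __plugins__ = ["check_tax_documents"]

    TaxDocError = collections.namedtuple("TaxDocError", "source message entry")

    def check_tax_documents(entries, options_map):
        errors = []
        for entry in entries:
            if not isinstance(entry, data.Transaction):
                continue
            meta = entry.meta or {}
            if "tax" in meta and "document" not in meta:
                errors.append(TaxDocError(
                    entry.meta,
                    "Expense with tax status is missing a document",
                    entry))
        return entries, errors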
It's far easier and faster to use than QuickBooks, I don't have to put up with ads, and with git plus RFC 3161 timestamping of commits I can prove I made additions when I said I made them. There are no accidental erasures from lazy text edits, and it's a simple command to see exactly when each entry was made.
All based on plain text at the core, but I've now added Fava extensions so that I can do it all in the browser when I want. If there were a TUI Fava with graphs, great, but the web isn't so bad either. Now, let's see what my accountant thinks of this...
Maybe that was the peak, but there were some very good TUIs for DOS apps in the early 1990s, when Windows hadn't quite taken over yet. You very likely had a VGA-compatible graphics card and monitor, meaning you had a good, high-resolution, crisp text mode with configurable fonts, and you likely had a mouse as well. This is the stuff I grew up with: QBASIC and EDIT.COM, for example. Bisqwit has a cool video about how some apps from that era could even have a proper mouse cursor: https://www.youtube.com/watch?v=7nlNQcKsj74
Text-mode versions of WordPerfect, WordStar, and Lotus 1-2-3 were pretty good too.
I will now get to have Kafkaesque conversations with computers in Markdown.
Certainly part of it is also people of my generation being nostalgic for the TUIs of DOS file managers and editors.
You mean like https://silvery.dev/examples/layout.html ? This is definitely not a UI development paradigm I would have expected to see.
Look at the amount of engineering resources we pour into OS GUI toolkits and then browsers. Those layers of complexity aren’t there because we stood back and asked, “given what we know in 2026, how should we design a GUI compositor?” The majority of the stack is written the way it is by archeological happenstance: each generation has added on top of the prior since the ’60s.
I’d say start from the terminal and fix the rendering limitations that drove the split away from the terminal and then to the browser. If we pinned down an efficient GUI, we could have machines that cover non-graphics workloads (the vast majority of workloads) running on solar power and the equivalent of a 6502.
The amount of energy wasted on modern stacks relative to the tasks being delivered is incalculable.
Like you pointed out, the current stack is heavily unoptimized and has a terrible architecture; it's only the way it is because of happenstance and the tides of the market (companies always reaching for faster over better). An actual "nirvana" in computing, as the other commenter said, would require bulldozing a good chunk of our current stack, keeping only kernels and core utilities, if even those.
I really wish we had a bigger focus on building a good foundation instead of making yet another JS framework and SaaS, but then again, who's paying developers to actually do something of quality nowadays?
You easily have a 4K display's worth of pixels, so why use a tiny subset of them in a very inefficient way? We have proper hardware to make a bunch of these computations actually fast, and yet we should stay stuck drawing relatively expensive text everywhere?
If you only care about the UX of TUIs, that I can stand behind (though mostly as a guideline, it doesn't fit every workflow), but you can do that with a proper GUI just as well.
This is a confusing concession. Of course we love TUIs because of the UX, what other reason is there?
Constraint breeds consistency and consistency breeds coherence.
Take 1,000 random TUI designers and 1,000 random GUI designers and plot the variations between them (use any method you like)—the TUI designers will be more tightly clustered together because the TUI interface constrains what's reasonable.
Yes of course you CAN recreate TUI-like UX in a GUI, that's not the issue. People don't. In a TUI they must. I like that UX and like that if I seek out a TUI for whatever thing I want to do, I'm highly likely to find a UX that I enjoy. Whereas with GUIs it's a crapshoot. That's it.
It constrains what’s possible, not what’s reasonable. For example, one could typically fit more text on a screen by compressing it, but most of the time, that’s not the reasonable thing to do.
I’m saying “most of the time” because English Braille (https://en.wikipedia.org/wiki/English_Braille#System), which uses a compression scheme for frequently used words and character sequences such as ‘and’ and ‘ing’, shows that if there is enough pressure to keep texts short, humans are willing to learn fairly idiosyncratic text compression schemes.
ColorForth (https://en.wikipedia.org/wiki/ColorForth) is another, way less popular example. It uses color to shorten program source code.
One could also argue that Unix, which uses a wildly inconsistent ad-hoc compression scheme, writing “move” as “mv” and “copy” as “cp” or “cpy” (as in “strcpy”), also shows this, but I think that would be a weaker argument.
> No. It took a long time. It was really hard to do because you've got to remember that I was trying to make it usable over a 300 baud modem. That's also the reason you have all these funny commands. It just barely worked to use a screen editor over a modem. It was just barely fast enough. A 1200 baud modem was an upgrade. 1200 baud now is pretty slow. — "Bill Joy's greatest gift to man – the vi editor". The Register. 2003.
Why do you say "constrains what’s possible, not what’s reasonable", as though it's one and not the other? Does possibility conflict with reasonability? I would think it's not an either/or, it's a both/and.
The set of reasonable things is bounded by the set of possible things. So if the constraints of TUI design make certain things impossible, surely they make those same things unreasonable at the same time.
A "proper GUI" is rarely better than a well-designed TUI for communicating textual information, IMO. And the TUI constraints keep the failure-states for badly-designed UI tightly bound, unlike GUI constraints.
When you are "drawing text everywhere", you end up not having to draw all that much text. 3d models have more and more polygons as graphics cards improve, but the 80x24 standard persists for terminals (and UX is better for it). And I'm not even that convinced of "relatively expensive". Grokking UTF-8 and finding grapheme cluster boundaries has a lot of business logic, but it isn't really that hard. And unless you're dealing with Indic or Arabic scripts that defy a reasonable monospace presentation, you can just cache the composed glyphs.
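To make "finding grapheme cluster boundaries" concrete: in Python, the third-party regex module's \X pattern gives a reasonable approximation of the UAX #29 rules. A quick sketch:

    import regex  # pip install regex; the stdlib "re" module has no \X

    def grapheme_clusters(text):
        # \X matches one extended grapheme cluster (UAX #29).
        return regex.findall(r"\X", text)

    # "e" + combining acute is one cluster; the flag is two codepoints
    # that render as a single glyph.
    print(grapheme_clusters("e\u0301 \U0001F1FA\U0001F1F3"))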
(I'm not actually sure what the UX of TUIs is that I love so much. Relative simplicity and focus on core features? Uff, notepad wins this one over vim. Fast startup times? I use gomuks, and that takes a minute for the initial sync. No mouse? Moving around in TUI text editors with hjkl is slow; I either jump where I want to go with search or use the mouse. Lightness over SSH/network is the only thing I can't come up with a counterexample for.)
Reminds me of this decade-old post (and discussion) by Graydon Hoare, "Always bet on text": https://graydon2.dreamwidth.org/193447.html
Dylan Beattie has a thought-provoking presentation for anyone who believes that "plain text" is a simple / solid substrate for computing: "There's no such thing as plain text" https://www.slideshare.net/slideshow/theres-no-such-thing-as... (you'll find many videos from different conferences)
FINALLY.
So my question is: what are we leaving on the table by over-focusing on text? What about graphs and visual elements?
Whenever I hear this, I hear "all text files should be 50% larger for no reason".
UTF-8 is pretty similar to the old code page system.
Anyway, what are you comparing it to, what is your preferred alternative? Do you prefer using code pages so that the bytes in a file have no meaning unless you also supply code page information and you can't mix languages in a text file? Or do you prefer using UTF-16, where all of ASCII is 2 bytes per character but you get a marginal benefit for Han texts?
With UTF and emojis we can't have random access to characters anyway, so why not go the whole way?
But if you have to rely on a byte that may have already gone past? No way to pick up in the middle of a stream and know what went before.
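For what it's worth, UTF-8 itself avoids that problem: it's self-synchronizing. Every continuation byte carries a 10xxxxxx prefix, so you can land anywhere in a stream and skip at most three bytes to find the next character boundary. A rough sketch in Python:

    def resync(buf, offset):
        # Skip continuation bytes (0b10xxxxxx) until we hit a lead byte.
        while offset < len(buf) and (buf[offset] & 0xC0) == 0x80:
            offset += 1
        return offset

    data = "héllo wörld".encode("utf-8")
    print(resync(data, 2))  # 3: offset 2 was mid-"é", 3 starts the next character

A stateful scheme gives you nothing like this to resynchronize on.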
Yes. Note that this is already how Unicode is supposed to work. See e.g. https://en.wikipedia.org/wiki/Byte_order_mark .
A file isn't meaningful unless you know how to interpret it; that will always be true. Assuming that all files must be in a preexisting format defeats the purpose of having file formats.
> Most European languages use variations of the latin alphabet
If you want to interpret "variations of Latin" really, really loosely, that's true.
Cyrillic and Greek characters get two bytes, even the ones whose glyphs are identical to ASCII characters. This bloat is actually worse than the bloat you get by using UTF-8 for Japanese; Cyrillic and Greek would each easily fit into one byte.
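To put numbers on it, here's a quick Python comparison using KOI8-R as the legacy single-byte example:

    word = "привет"
    print(len(word.encode("koi8_r")))  # 6 bytes: one per letter
    print(len(word.encode("utf-8")))   # 12 bytes: two per letter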
Maybe if you're one of those AI behemoths working with exabytes of training data, it would make some sense to compress it down by less than 50% (since we're using lots of Latin terms, acronyms, and punctuation marks, which all fit in one byte in UTF-8).
On the web and in other kinds of daily text processing, one poorly compressed image or one JavaScript-heavy webshite obliterates all "savings" you would have had in that week by encoding text in something more efficient.
It's the same with databases. I've never seen anyone pick anything other than UTF-8 in the last 10 years at least, even though 99% of what we store is in Cyrillic. I sometimes run into old databases, usually Oracle, that were set up in the '90s and never really upgraded. The data is in some weird encoding you haven't heard of in decades, and it's always a pain to integrate with them.
I remember the days of codepages. Seeing broken text was the norm. Technically advanced users would quickly learn to guess the correct text encoding by the shapes of glyphs we would see when opening a file. Do not want.
The byte order mark has no relation to code pages.
I don't think you know what you're talking about and I do not think further engagement with you is fruitful. Bye.
EDIT: okay, since you edited your comment to add the part about Greek and Cyrillic after I responded, I'll respond to that too. Notice how I did not say "all European languages". Norwegian, Swedish, French, Danish, Spanish, German, English, Polish, Italian, and many other European languages have writing systems where typical texts are "mostly ASCII with a few special symbols and diacritics here and there". Yes, Greek and Cyrillic are exceptions. That does not invalidate my point.
For one thing, pure text is often not the only thing in the file. Markup is often present, and most markup syntaxes (such as HTML or XML) use characters from the ASCII range for the markup, so those characters are one byte (but would be two bytes in UTF-16).

Back when the UTF-8 Everywhere manifesto (https://utf8everywhere.org/) was being written, they took the Japanese-language Wikipedia article on Japan and compared the size of its HTML source between UTF-8 and UTF-16. (Scroll down to section 6 to see the results I'm about to cite.) UTF-8 was 767 KB, UTF-16 was 1186 KB, a bit more than 50% larger than UTF-8. The space savings from the HTML markup outweighed the extra bytes from having a less-efficient encoding of Japanese text.

Then they did a copy-and-paste of just the Japanese text into a text file, to give UTF-16 the biggest win. There, the UTF-8 text was 222 KB while the UTF-16 encoding got it down to 176 KB, a 21% win for UTF-16 — but not the 50% win you would have expected from a naive comparison, because Japanese text still uses many characters from the ASCII set (space, punctuation...) and so there are still some single-byte UTF-8 characters in there. And once the files were compressed, both UTF-8 and UTF-16 were nearly the same size (83 KB vs 76 KB), which means there's little efficiency gain anyway if your content is being served over a gzip'ed connection.
So in theory, UTF-8 could be up to 50% larger than UTF-16 for Japanese, Chinese, or Korean text (or any of the other languages that sit in the higher part of the Basic Multilingual Plane). But in practice, even giving the UTF-16 text every possible advantage, they only saw a 20% improvement over UTF-8.
Which is not nearly enough to justify all the extra cost of suddenly not knowing what encoding your text file is in any more, not when we've finally reached the point of being able to open a text file and just know the encoding.
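The comparison is easy to reproduce for whatever text you actually care about; in Python, for instance:

    # The sample string here is made up; paste in real content to see
    # the ratio on your own data.
    sample = "<p>日本語のテキスト with mixed ASCII markup</p>"
    utf8 = sample.encode("utf-8")
    utf16 = sample.encode("utf-16-le")  # BOM-less, to compare raw payloads
    print(len(utf8), len(utf16), len(utf16) / len(utf8))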
P.S. I didn't even mention the Shift JIS encoding, and there's a reason I didn't. I've never had to use it "for real", but I've read about it. No. No thank you. No. Shudder. I'm not knocking the cleverness of it, it was entirely necessary back when all you had was 8 bits to work with. But let me put it this way: it's not a coincidence that Japan invented a word (mojibake) to represent what happens when you see text interpreted in the wrong encoding. There were multiple variations of Shift JIS (and there was also EUC-JP just to throw extra confusion into the works), so Japanese people saw garbled text all the time as it moved from one computer running Windows, to an email server likely running Unix, to another computer running Windows... it was a big mess. It's also not a coincidence that (according to Wikipedia), 99.1% of Japanese websites (defined as "in the .jp domain") are encoded in UTF-8, while Shift JIS is used by only 1% (probably about 0.95% rounded up) of .jp websites.
So in practice, nearly everyone in Japan would rather have slightly less efficient encoding of text, but know for a fact that their text will be read correctly on the other end.
If that really was the argument, then it is, in 2026, obsolete; UTF-8 is everywhere.
He also discusses code pages etc.
I don't think the thesis is wrong. E.g. when I think plain text, I think ASCII, so we're already disagreeing about what 'plain text' is. His point isn't that we don't have a standard; it's that we've had multiple standards over what we think is the most basic of formats, with lots of hidden complications.
And yes, ASCII means mostly limiting things to English, but for many environments that's almost expected. I would even defend this, and I'm not a native English speaker myself.
Plain text is text intended to be interpreted as bytes that map simply to characters. Complexity is irrelevant.
Anyone know of a terminal program that can do proper dotplots?
https://stackoverflow.com/questions/123378/command-line-unix...
gnuplot, feedgnuplot, eplot, asciichart, bashplotlib, ervy, ttyplot, youplot, visidata
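And if you just need something quick with zero dependencies, a rough dot plot is a few lines of Python (a toy sketch, not a substitute for the tools above):

    import math

    def dotplot(points, width=60, height=20, marker="*"):
        xs = [p[0] for p in points]
        ys = [p[1] for p in points]
        xmin, xmax = min(xs), max(xs)
        ymin, ymax = min(ys), max(ys)
        grid = [[" "] * width for _ in range(height)]
        for x, y in points:
            col = round((x - xmin) / ((xmax - xmin) or 1) * (width - 1))
            row = round((y - ymin) / ((ymax - ymin) or 1) * (height - 1))
            grid[height - 1 - row][col] = marker  # flip so y grows upward
        return "\n".join("".join(r) for r in grid)

    print(dotplot([(i, math.sin(i / 5)) for i in range(60)]))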
And there's a lovely ASCII plot in the AWK book: https://dn790008.ca.archive.org/0/items/pdfy-MgN0H1joIoDVoIC...
In the absence of an encoding declaration, the encoding is in some cases detected automatically based on the first four bytes: https://www.w3.org/TR/xml/#sec-guessing-no-ext-info Again, that means that XML is a binary format.
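The sniffing the spec describes looks roughly like this in Python (a sketch of the detection appendix's table, not exhaustive):

    def sniff_xml_encoding(head):
        # First-bytes heuristics from the XML spec's detection appendix.
        if head.startswith(b"\xef\xbb\xbf"):
            return "UTF-8 with BOM"
        if head.startswith(b"\xfe\xff"):
            return "UTF-16 big-endian (BOM)"
        if head.startswith(b"\xff\xfe"):
            return "UTF-16 little-endian (BOM)"
        if head.startswith(b"<?xm"):
            return "ASCII-compatible; parse the encoding declaration"
        if head.startswith(b"\x00<\x00?"):
            return "UTF-16 big-endian, no BOM"
        if head.startswith(b"<\x00?\x00"):
            return "UTF-16 little-endian, no BOM"
        return "unknown; default to UTF-8"

    print(sniff_xml_encoding(b'<?xml version="1.0" encoding="utf-8"?>'))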
- https://asciidraw.github.io/
Anybody know more?
A visual editor for Unicode box-drawing characters, despite the "ascii" in the name.
No server, no installation: browser-side JavaScript only.
The title just talks of plain text though, and plain text usually means UTF-8 encoded text these days. Plain, as in conventional, standardised, portable, and editable with any text editor. I would be surprised if someone talked about plain text as being limited to just ASCII.
Would an emoji count as plain text?
What about right to left text? I have no idea how many editors handle that.
Curious though — do you think the real limit of plain text is readability at scale (like configs turning messy), or is it more about lack of enforced structure compared to proper systems?
Part of the lowest common denominator are the (printable) ASCII characters. If you ever opened a text file mostly consisting of a script you’re not familiar with, it might as well have been binary. Add to that right-to-left languages where you can’t even be sure which element follows which without knowing the scripts.
It’s “good enough” for many purposes, but it’s important to keep in mind the limitations.
It’s like SMS vs MMS or modern chat. With pure text, you can at best add a link to a picture (which could rot or become inaccessible for other reasons), but you cannot directly include graphical content.
data:image/gif;base64,R0lGODdhMAAwAPAAAAAAAP///ywAAAAAMAAwAAAC8IyPqcvt3wCcDkiLc7C0qwyGHhSWpjQu5yqmCYsapyuvUUlvONmOZtfzgFzByTB10QgxOR0TqBQejhRNzOfkVJ+5YiUqrXF5Y5lKh/DeuNcP5yLWGsEbtLiOSpa/TPg7JpJHxyendzWTBfX0cxOnKPjgBzi4diinWGdkF8kjdfnycQZXZeYGejmJlZeGl9i2icVqaNVailT6F5iJ90m6mvuTS4OK05M0vDk0Q4XUtwvKOzrcd3iq9uisF81M1OIcR7lEewwcLp7tuNNkM3uNna3F2JQFo97Vriy/Xl4/f1cf5VWzXyym7PHhhx4dbgYKAAA7
There are limitations, though. Compare a database of .yml files to a database in a DBMS. I wrote a custom forum via Ruby + YAML files. It works, but it cannot compete anywhere with e.g. Rails/ActiveRecord and so forth. Its sole advantage is simplicity. Everywhere else it loses without even a fight.
So many users want special fonts, but here, simple is special to the eyes and mind.
As a developer, I agree. Sometimes simplicity is more special and powerful than complex formats.