WTH is it with some requirements for "advanced" terminal emulators?

cracauer@

Developer
I observe an increasing tendency of TUI/terminal software that needs special "advanced" terminal emulators.

Homebrew on macOS is an example. If I am on FreeBSD in an xterm and ssh into one of my Macs, I can't directly run Homebrew commands to manage software. The problem is that they use terminal sequences that xterm doesn't understand, which end up spewed onto the screen as raw control sequences, making a mess, and a potentially dangerous one. Of course nobody stopped to think about using termcap for this, which would (as the name implies) either send the right sequences, or at least not send ones that aren't understood.
Likewise, Claude Code (the so-called CLI that is really a TUI) also requires special terminals; otherwise the output is garbled. And they say they need them to process Shift-Return. Never mind that Shift-Return works just fine in xterm. https://code.claude.com/docs/en/terminal-config
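For what it's worth, the termcap/terminfo lookup I'm talking about is trivial to do from a script. A minimal sketch using `tput` (assuming the standard ncurses `tput` and the `smcup` "enter alternate screen" capability as an example):

```shell
#!/bin/sh
# Ask terminfo whether the current $TERM supports a capability before
# emitting it.  tput prints nothing and exits non-zero when the
# capability is absent, so a terminal never receives sequences it
# doesn't document.
if seq=$(tput smcup 2>/dev/null) && [ -n "$seq" ]; then
    printf '%s' "$seq"   # safe: this terminal documents the sequence
fi
```

Under `TERM=dumb` the lookup simply fails and nothing is sent; under `TERM=xterm` you get xterm's own sequence. That's the whole point of the database.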

Now, what are the wanted terminals? The two non-Mac-specific ones are ghostty and kitty. Both have FreeBSD ports. So I can open one of them on my FreeBSD screen and then use Claude Code or ssh into a Mac. Sounds good? Not quite. Both use 3D APIs (Vulkan and OpenGL) and have no fallback to plain X11, so you can't run one over ssh X forwarding. Configuration is also laughable compared to xterm. Big loss of functionality here.

Opinions? I wish I could blame this on Linuxisms, but half of it is Mac driven. Weak rant?

Is anybody voluntarily using ghostty or kitty?
 
I should imagine expectation and experience have a lot to do with why apps are written like this now.

Since a lot of new apps are written on and for macOS, I think the lack of accommodation for the norms of other platforms is either secondary to many developers or not a conscious concern at all. From what I've seen, the default Terminal.app rarely sees use on macOS, and the popular third-party terminals are all actively developed, fast-moving projects that are purposefully attempting to extend the capabilities of the terminal. I should not be surprised if most macOS developers thought that a similar situation held on other platforms; popular online discourse especially would lead one to believe that. Combined with the general spirit of macOS to move with the trends of the times and create apps that "fit in", I imagine that even if they were to consider terminals without advanced features, they might see that as regressive and as holding the platform back.

I see this as an expression of the excitement of developers raised in the new style of development. There will of course be exceptions, but on the whole I see this pattern coming from people who learned to develop on macOS or popular Linux systems. Those platforms present a more monolithic and often progressive view of software systems, one that often doesn't value, much less discuss, the reasoning behind slower-moving platforms, backwards compatibility, or cross-platform compatibility. Many of these developers have not used, and might never use, a UNIX system that wasn't filtered through the lens of GNU, and don't know how to design UNIX programs. Some may not even consider it a worthwhile endeavour. As time goes on, future developers will take these new programs as the norm, just as this generation grew up with Homebrew and npm's "fancy" GUIs as their norm.
 
Opinions?
Symptom of bad quality software. Whoever wrote the software (that requires special terminals) didn't think about requirements, usability, integration. They wanted to quickly solve their own problem (installing software on a Mac, using an AI to code) without thinking about all the externalities.

Why do we have bad-quality software? I blame two closely related things. One: the demise of the waterfall process. The correct way to write these two pieces of software (and all others!) is to not write a single line of code until you have thought through the requirements and the overall design. The requirements document for these two pieces of software should have had a section about the operating environment the software expects. In there, someone should have written down: "The text user interface uses exactly the following set of escape sequences", or "... uses the xterm sequences documented in XXX", or "... uses the ANSI sequences from the following standards document", or "... uses the VT200 sequences from Digital in 1987", or some other such statement. And then the engineering manager, quality manager, and software architect should have spent 5 minutes in a meeting arguing about which of the above lines exactly applies.

Alas, we don't do waterfall any more. We do agile: code up some crap, throw it against the wall, see whether it sticks. If it is useful at all (stays up for longer than 10 ms before core dumping), ship it to customers and improve. Agile claims to "learn from your mistakes" and "iterate quickly". Except that usually, once it "works well enough", nothing gets done or fixed, and people move on to other things.

Underlying this is another problem: software has a shelf life. It used to be that software was commercially sold. People who buy it (and pay for support) have an expectation that it continues to function. As an example, the two largest projects I've worked on were both started in the 90s (one shipped its first version around '92, the other around 2000), and are still in use, with paying customers (one is software support for supercomputers, the other is an industrial control system for very expensive semiconductor fabrication machines). Both are still being sold and supported. That's 25-30 years of lifespan for software. If you write software that you need to enhance and support for a long time, you'll design quality and good architecture right into it, because otherwise you'll face the consequences. But this is not how things are done today. We expect software to be free. The people who write software know they won't get paid for it. By that I don't mean the individual software engineers (they make good salaries), but the corporate entities that pay those engineers' salaries. And that also leads to "do the minimum amount of work to ship the first version, and then we'll work on something else." Or to say it in German: "Nach mir die Sintflut" (after me comes the biblical flood).

Sorry, I responded to your rant with another rant. I think having a good stiff drink right now would be good.
 
Software has a shelf life. It used to be that software was commercially sold. People who buy it (and pay for support) have an expectation that it continues to function. As an example, the two largest projects I've worked on were both started in the 90s (one shipped its first version around '92, the other around 2000), and are still in use
I designed a medical computer in 1992. Before that, to upgrade any hardware, they had to throw away all the electronics inside the machine. My design had small, pluggable cards that allowed upgrading of everything by just swapping a card. That included the processor, memory, external I/O, etc. The system design was still in use for all new systems as of 2010.
 
Opinions?
I agree with your rant and the excellent rant by ralphbsz.
Too many people only test on what they have access to.
Develop on a Mac and you wind up with assumptions about the environment and "it works for me".
Things that run fine on one browser but not on others.
Kind of like the early days of the commercialization of the internet, when a lot of residential access was POTS dialup. Websites designed and tested on internal 10M networks worked great, but all the spinny/flashy stuff sucked on dialup.
The modern-day version: develop a website on a desktop browser, and it looks like crap on mobile.
 
I've just about had it with your constant barrage of useless, meaningless replies everywhere.

Why are you doing it?
Excuse me, I will be less "verbose". But I like criticism, and I will learn from it.
I will say nothing unless I have relevant information that we are all interested in. Otherwise I'm just a waste of time. [A place I don't want to be]
 
I mean, I see why this is happening: the authors of the programs don't care about Unix, only a small piece of it.

MS and Apple and, more recently, Red Hat and Canonical only want first-tier access to FOSS devtools.
The reason OS X succeeded is that devs could just use the FOSS console tools, not their own builds under their own console where half the stuff doesn't work correctly, command-line arguments are passed differently, paths are in a different format, etc.

The number of us programmers working on a POSIX-y system that isn't WSL/Darwin or a mainline Linux distro is very small. Not even enough to raise a voice about well-known software missing a good chunk of the terminal emulation standards.

I don't know how you deal with this, cracauer. I see it as a consequence of a large effort to move away from Unix ways. If I had some tinfoil around, I'd say we're looking at an orchestrated embrace-extend-extinguish pattern. Maybe a pipe through a process that mangles away the differences? I can't see them fixing their console code just because xterm doesn't work; they'll laugh at the issue.
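A rough sketch of that "mangling pipe" idea in Python: strip escape sequences from the tool's output before it reaches xterm. The regex below covers only CSI and OSC sequences and is illustrative, not exhaustive; it also won't help with fully interactive TUIs, only with garbled non-interactive output.

```python
import re
import sys

# Strip ANSI CSI and OSC escape sequences from a byte stream so that
# output from "advanced"-terminal programs doesn't corrupt a plain
# xterm.  A blunt instrument: it removes sequences rather than
# translating them via terminfo.
ANSI = re.compile(
    rb"\x1b\[[0-9;?]*[ -/]*[@-~]"           # CSI: ESC [ params final
    rb"|\x1b\][^\x07\x1b]*(?:\x07|\x1b\\)"   # OSC: ESC ] ... BEL or ST
)

def scrub(data: bytes) -> bytes:
    """Remove recognised escape sequences, leave plain text intact."""
    return ANSI.sub(b"", data)

if __name__ == "__main__":
    # Use as a filter: some-fancy-tool | python3 scrub.py
    sys.stdout.buffer.write(scrub(sys.stdin.buffer.read()))
```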

Excuse me, I will be less "verbose". But I like criticism, and I will learn from it.

If I may add, try to read the content of the first post and understand the issue before automatically posting the first thing that comes to mind after reading the post title. For example, in my Realtek sound topic one could have assumed the driver was not loaded after reading the topic title, but after reading the first post there should have been absolutely no question that the driver was indeed working as it should, yet all your posts were about loading the driver.
 