Old joke from the '80s: "Q: Why is Emacs an OS? A: It manages all resources."
Emacs was big in those days and took a lot of RAM, so many small workstations started swapping heavily...
> To facilitate "pair programming" style collaboration, you were not allowed to customize it to your preferences, so that someone else could jump in to do something - with no surprises.

This is close to the issue with IDEs. They assume, e.g., a specific formatting convention, but most developers work on such a wide variety of codebases, each with its own standards, that modifying the autoformatter is a full-time job. What's really frustrating is that so many IDEs have no clean way to simply disable all autoformatting, which is sometimes the best solution (Visual Studio can't fully do it, and NetBeans didn't even attempt it in the last revision I tried; the same goes for Oracle Developer Studio).
> When I started programming, there were no IDEs, no tmux, no screen. If you wanted "source" debugging, you grabbed two VT220 terminals and ran a debugger in one with the source code open in the other. Single-step debugger in the left terminal, then press the down-arrow in the right terminal. I do not miss those days.

For me, this is still the way, quite honestly. It works well over SSH, where I see a number of colleagues struggle with the pretty poor VS Code SSH integration. Debuggers like lldb and gdb are first-rate, and the CLI tools have so many more features than most attempts at wrapping them in a GUI.
> It's overrated anyway. Having a cockpit-style development environment is pretty useless and doesn't improve anything. Why do 100 program functions need to be immediately available in one window? I think you can uninstall the WM too while you're at it...

So a lot of people call it "modern", but its popularity rose due to a technical limitation of DOS (i.e. MS-DOS): multitasking was so poor that an all-in-one monolithic program became the norm for student/hobby developers using that platform back in the day. Then, as they entered the industry, they brought this idea with them.
> So a lot of people call it "modern", but its popularity rose due to a technical limitation of DOS (i.e. MS-DOS): multitasking was so poor that an all-in-one monolithic program became the norm for student/hobby developers using that platform back in the day. Then, as they entered the industry, they brought this idea with them.

I think it was actually a scam to make a protected-mode multitasking OS look superior. Everything in Windows 95 graphics mode was suddenly proprietary, while DOS stayed limited to SVGA.
Individual tools are so much more resilient, especially in embedded, where you really aren't going to change your entire IDE just to use a different chip (e.g. MPLAB X for PIC, then µVision for (some) 8051, etc.). This is why a gdb stub exists for some of the more featureful MCUs.
> Interesting thought, but there were 3D APIs for DOS, like RenderWare and Glide.

I remember those DOS/4GW modules, like the one for Quake 1, that the last generation of "16-bit" games used for high resolutions, but I have never seen a user application that makes that available to the owner of the computer. It was all commercially boarded up. The game houses had (and still have) special permissions and information to operate graphics chips at a low level. If it only works via an API, the manufacturer controls what a user is able to do.
> Again an interesting angle, but I think it was just the evolutionary path of merging programming tools with text-processing IDEs.

Since we now seem to be evolving further, with IDEs becoming simpler again and embedding a raw command line, it does continue to demonstrate that the IDE explosion of the DOS days was an anomaly born of the platform's limitations. The aforementioned GW-BASIC was born on DOS precisely because of that limitation, whereas DEC's BASIC-PLUS could remain a standalone text editor because it had a proper multitasking OS behind it.
> There is stuff like IBM's Professional Editor; basically from the PC's beginning there was a "programmer's text editor", something that could open multiple documents, run macros, and drop back to the DOS "shell" to do extra work.

Dropping to the shell was quite different from what it is today, and outside of multitasking systems like DESQview it meant blocking execution of the original program. On DOS you don't actually drop back to the shell; you spawn a new shell (a la system()). This design had a number of issues (growing RAM use, shared environment, etc.). It certainly meant that you couldn't run the compiler and continue to work in the text editor while it happened in the background. Merging it all into one monolithic program is what allowed this, because custom scheduling could be implemented.
> Stuff like the first language IDEs, like Turbo C, was years ahead: just merging the compiler tools with a slightly specialized version of a programmer's text editor - one that has a few functions to build and enter the debugger, instead of you defining your own macros.

Turbo C was the consumer product, but the IDE made it into Borland C++ 3.x because it was a successful approach, and again, it really came out of the DOS days, picking up the slack of poor multitasking. With this IDE you could do a few tasks in parallel, compared to pausing execution and spawning a new shell. Debugging was a clear win there: Watcom's WDB was superior in facilities, but it required spawning a whole new program, with excessive RAM requirements, so you had to exit the text editor (WVI) first and then run the debugger.
> The IDE thing is definitely Microsoft's kitchen. Well, it is Borland's recipe, but MS poached everyone they could from Borland's staff in the early 90s.

Yeah, I recall the battle between Borland's early OWL (predecessor to VCL) and Microsoft's offerings. But ultimately Microsoft only became competitive once Windows gained traction: they owned the platform, so they could pull any trick they needed to win. It gave rise to the naive idea that "Microsoft's compiler is obviously the best choice for Microsoft's OS". The fact that MFC "won" over VCL is a laughably painful example of this.
> & then Kotlin forces you to use the IntelliJ editor. Why hundreds of libraries to link & no command line available.

That is factually incorrect.
> Heh, that's not the way it works. Quake 1 is not a 16-bit game; it is 32-bit.

I can understand your bias/confusion in this matter, but in the strictest sense DOS most certainly is an OS. Just because modern OSes better protect the hardware doesn't make historical OSes any less of an OS. I do a lot of 32-bit microcontroller programming and use RTOSes that are meant to run in real mode with access to all system resources. They are still OSes (real-time operating systems). By definition, an OS need only provide an abstraction between the user program and hardware resources. There is no requirement that the user program MUST honor the OS API. The protected-mode/virtual addressing of modern OSes is an enhancement, not a requirement.
DOS isn't an OS in 2026 terms. It has residency - it sets up code in particular pieces of memory and wires it into software interrupt services. A program called from DOS can access the entire PC, including the 32-bit facilities of a 386+ CPU; it just needs to be careful not to overwrite the DOS residency. The program is free to kick into 32-bit protected mode using DPMI and then communicate with ISA hardware, the BIOS, and DOS interrupt services via the v86 facility of the 386 CPU.
The software libraries used to set up this thing are called extenders. It is all thoroughly documented, and there were free extenders even back in the day.
This has nothing to do with the 3D API stuff. The extenders have to be used only as a consequence: there are no 3D cards for 16-bit ISA, they're on 32-bit VLB/PCI, and 16-bit code cannot communicate with the PCI address space.
There is no "working on the API level". These peripherals didn't run entire firmware stacks for black-box operation; their work was done by setting up registers and moving memory in and out.
> The Borland/Microsoft tools with an IDE do not have any sort of scheduling - they call the compiler and you wait; instead of looking at CLI output, progress is shown in a fancy window...

To an extent. They can do iterative work (e.g. scan a little bit of the file, then poll the GUI), but the debugger (cooperative and interrupt-driven rather than pre-emptively scheduled) very much allowed you to pause and interact with the program during its run rather than wait for it to complete in its entirety - probably via the magic of INT 3. But mostly, the tool didn't need to be closed to set breakpoints, tweak code, and repeat. This is important because the only other approach would be to save the entire state of the program and restore it when you need to launch it again.
> If you ask me, the workflow of the tools is optimal. You have the option of "interrupting" the TUI/GUI program and dropping to the DOS shell if you have manual steps. That is, and was, enough, because in the early to mid 80s it was uncommon to have more than 640 kB of RAM in a PC.

To clarify, you couldn't really drop to the shell; you could simply spawn a new shell on top of it (we are probably talking about the same thing). But this was terrible for the limited RAM of the machines at the time, since you were then running a whole second shell on top of the original program.
> Anyhow, on the topic of IDE evolution, I think using totally separate tools via a Unix-style CLI is a bit of text-mode extremism, just like IDE RADs are the GUI extreme.

This debate sailed back in the 90s, so it probably isn't worth the discussion, but the only thing I will say is that the cognitive context switching between them tends to be tricky. This is why around 50% of people tend to go "full CLI" and the other 50% "full GUI". Yes, a kind of extremism, but also a more natural workflow, whichever has been chosen.
> As the size of code grows, the GUI stuff is handy for visualizing object hierarchies and debugging sessions. I remember a great visual gdb frontend, DDD, that I used almost daily back in the 00s and early 10s. It was a gdb console + visualizer that would automatically display structs and pointer relations from the symbols in scope.

Yeah, DDD was good. It kind of rotted, sadly. I tried to find an easy Win32 alternative for my students many years back because they were exceptionally green when it came to CLI work. There are relatively few GDB UI frontends, minus a bizarre web-based approach - and I honestly couldn't think of a worse idea.