C++ Looking for a functioning IDE for C++

It's overrated anyway. Having a cockpit-style development environment is pretty useless and doesn't improve anything. Why do 100 program functions need to be immediately available in one window? You might as well uninstall the WM too while you're at it...
 
To facilitate "pair programming" style collaboration, you were not allowed to customize it to your preferences, so that someone else could jump in and do something - with no surprises.
This is close to the issue with IDEs. They assume, for example, a specific formatting convention. Most developers work on such a wide variety of codebases, each with its own standards, that modifying the autoformat settings is a full-time job. What's really frustrating is that so many IDEs have no clean way to simply disable all autoformatting, which is sometimes the best solution (Visual Studio can't fully do it, and NetBeans didn't even attempt it in the last revision I tried, including Oracle Developer Studio).
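For what it's worth, some standalone formatters do get this right. clang-format, for instance, can be switched off for a whole tree by dropping a minimal config at its root - a sketch, and of course it only helps if the editor actually drives clang-format and honors the file:

```yaml
# .clang-format at the root of a codebase whose style you must not touch:
# tells clang-format (and any editor invoking it) to leave code as written.
DisableFormat: true
SortIncludes: false
```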

When I started programming, there were no IDEs, no tmux, no screen. If you wanted "source" debugging, you grabbed two VT220 terminals, and ran a debugger in one with the source code open in the other. Single-step the debugger in the left terminal, then press the down-arrow in the right terminal. I do not miss those days.
For me, this is honestly still the way. It works well over SSH, where I see a number of colleagues struggle with VSCode's pretty poor SSH integration. Debuggers like lldb and gdb are first-rate. The CLI tools have so many more features than most attempts at wrapping them in a GUI.

Though I tend to use job control (Ctrl-Z, fg, etc.) rather than multiple terminals. This does have the disadvantage that Windows SSH, while rarely used, simply has no job control (Microsoft thinks it has job control, but that is really just a background process that can never be reattached, so it's f-ing useless).
 
Sniff++ was an excellent IDE, but it's gone. I don't know, but chances are the company was the target of an M&A and the codebase was integrated into another IDE, or the product was just renamed. It would be interesting to know whether a successor is still available somewhere.
 
When it comes to using a debugger:

I like being able to set breakpoints directly in the edit buffer as well as the next person. But in practice my debugging needs quickly evolve to the point where I need extensive .gdbinit statements, and that gets me back to command-line use whether I have an IDE or not.
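For illustration, this is the kind of .gdbinit that accumulates and outgrows any IDE checkbox (the `dump_ring` command and the `ring` structure are invented examples, not from the thread):

```
# ~/.gdbinit: settings no breakpoint gutter will give you
set history save on
set print pretty on
set pagination off

# a project-specific convenience command (hypothetical example)
define dump_ring
  set $i = 0
  while $i < ring.count
    print ring.items[$i]
    set $i = $i + 1
  end
end
```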

Also keep in mind that even if you can't invoke gdb from your editor, it works the other way round: when you are on a frame in a backtrace in gdb, you can just say "edit" and an editor for that source location will pop up. Very handy. It even works for kernel debugging.
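A sketch of that session - `edit` is a real gdb command that opens `$EDITOR` at the current source line; the frame contents and file names here are invented:

```
(gdb) bt
#0  parse_header (buf=0x0) at parser.c:42
#1  main () at main.c:10
(gdb) frame 0
(gdb) edit
```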

The last IDE I used was Borland Turbo C 2.0.
 
A bunch of printf statements is all you need for debugging.

 
It's overrated anyway. Having a cockpit-style development environment is pretty useless and doesn't improve anything. Why do 100 program functions need to be immediately available in one window? You might as well uninstall the WM too while you're at it...
So a lot of people call it "modern", but its popularity rose due to a technical limitation of DOS (i.e. MS-DOS), where multitasking was so poor that an all-in-one monolithic program became the norm for student/hobby developers on that platform back in the day. Then, as they entered the industry, they brought this idea with them.

Individual tools are so much more resilient, especially in embedded, where you really aren't going to change your entire IDE just to use a different chip (e.g. MPLAB X for PIC, then µVision for (some) 8051, etc.). This is why a gdb stub exists for some of the more featureful MCUs.
 
So a lot of people call it "modern", but its popularity rose due to a technical limitation of DOS (i.e. MS-DOS), where multitasking was so poor that an all-in-one monolithic program became the norm for student/hobby developers on that platform back in the day. Then, as they entered the industry, they brought this idea with them.

Individual tools are so much more resilient, especially in embedded, where you really aren't going to change your entire IDE just to use a different chip (e.g. MPLAB X for PIC, then µVision for (some) 8051, etc.). This is why a gdb stub exists for some of the more featureful MCUs.
I think it was actually a scam to make a protected-mode multitasking OS look superior. Everything in Windows 95's graphics mode was suddenly proprietary, and DOS stayed limited to SVGA.
 
I think it was actually a scam to make a protected-mode multitasking OS look superior. Everything in Windows 95's graphics mode was suddenly proprietary, and DOS stayed limited to SVGA.

Interesting thought, but there were 3D APIs for DOS, like RenderWare and Glide.

So a lot of people call it "modern", but its popularity rose due to a technical limitation of DOS (i.e. MS-DOS), where multitasking was so poor that an all-in-one monolithic program became the norm for student/hobby developers on that platform back in the day. Then, as they entered the industry, they brought this idea with them.

Again, an interesting angle, but I think it was just the evolutionary path of merging programming tools with text processors into IDEs.
The most primitive IDE I can think of is GW-BASIC, which came bundled with DOS and in early PC option ROMs. It was developed from MBASIC, which was influenced by DEC's BASIC-PLUS. MBASIC brought a screen editor with it, and GW-BASIC added function keys.

There is stuff like IBM's Professional Editor; basically from the PC's beginning there was a "programmer's text editor": something that could open multiple documents, run macros, and drop back to the DOS "shell" to do extra work.

The first language IDEs, like Turbo C, were years ahead, and were just a merging of the compiler tools with a slightly specialized version of a programmer's text editor - one that has a few built-in functions to build and enter the debugger, instead of you defining your own macros.

The IDE thing is definitely Microsoft's kitchen. Well, it is Borland's recipe, but MS poached everyone they could from Borland's staff in the early 90s. The legend is that their agents sat in front of Borland's building and gave developers "blank cheque" type contracts. They did what they did with everything else - automate mostly everything behind a GUI paradigm. For the PC, Microsoft is the executor, but these are Apple's ways; Apple was the one to force the GUI, WYSIWYG, etc.
 
Interesting thought, but there were 3D APIs for DOS, like RenderWare and Glide.
I remember those DOS4GW modules, like for Quake 1, that the last generation of 16-bit games used for high resolutions, but I have never seen a user application that makes this available to the owner of the computer. It was all commercially boarded up. The game houses had (and still have) special permissions and information to operate graphics chips at a low level. If it only works via an API, the manufacturer controls what a user is able to do.

It would be nice: graphics at 1920x1080 with a total OS installation of 300 KB. You need three files to get a prompt and start a program.
 
Heh, that's not the way it works. Quake 1 is not a 16-bit game; it is 32-bit.

DOS isn't an OS in 2026 terms. It has residency - it sets up code in particular pieces of memory and wires it into software interrupt services. A program called from DOS can access the entire PC, including the 32-bit facilities of a 386+ CPU. It needs to be careful not to overwrite the DOS residency. However, the program is free to kick into 32-bit protected mode using DPMI and then communicate with ISA hardware, the BIOS, and DOS interrupt services via the v86 facility of the 386 CPU.

The software libraries used to set up this thing are called extenders. It is all thoroughly documented, and there were free extenders even back in the day.

This has nothing to do with the 3D API stuff. The extenders only have to be used as a consequence: there are no 3D cards for 16-bit ISA; they're on 32-bit VLB/PCI, and 16-bit code cannot communicate with the PCI address space.

There is no "working on the API level". These peripherals didn't run entire firmware for black-box operation; their work was done by setting up registers and moving memory in and out.
 
Again, an interesting angle, but I think it was just the evolutionary path of merging programming tools with text processors into IDEs.
Since we seem to be evolving further now - IDEs are becoming simpler again and embedding a raw command line into them - it does continue to demonstrate that the IDE explosion from the DOS days was an anomaly born of the limitations of the platform. The mentioned GW-BASIC was born on DOS precisely because of that limitation, whereas DEC's BASIC-PLUS could remain a standalone text editor because it had a proper multitasking OS behind it.

There is stuff like IBM's Professional Editor; basically from the PC's beginning there was a "programmer's text editor": something that could open multiple documents, run macros, and drop back to the DOS "shell" to do extra work.
Dropping to the shell was quite different from what it is today; outside of multitasking systems like DESQview, it meant blocking execution of the original program. So on DOS you don't actually drop back to the shell; you spawn a new shell (i.e. system()). This design had a number of issues (growing RAM usage, sharing the environment, etc.). It certainly meant that you couldn't run the compiler and continue to work in the text editor while it ran in the background. Merging it all into one monolithic program is what allowed this, because custom scheduling could be implemented.

Stuff like first language IDEs like Turbo C, were years ahead, and just merging the compiler tools with a slightly specialized version of a programmers text editor - one that has a few functions to build, enter debuggger, instead of defining your own macros.
Turbo C was the consumer product, but the IDE made it into Borland C++ 3.x because it was a successful approach, and again, it really came from the DOS days to pick up the slack of poor multitasking. With this IDE you could do a few tasks in parallel, compared to pausing execution and spawning a new shell. Debugging was a clear win there: while Watcom's WDB was superior in facilities, it required spawning a whole new program, requiring excessive RAM, so you had to exit the text editor (WVI) first and then run the debugger.

(I find it weirdly charming and horrifying that Turbo C is still taught in schools in India because their curriculum dictates it)

The IDE thing is definitely Microsoft's kitchen. Well, it is Borland's recipe, but MS poached everyone they could from Borland's staff in the early 90s.
Yeah, I recall the battle between Borland's early OWL (the predecessor to VCL) and Microsoft's offerings. But ultimately Microsoft only became competitive once Windows gained traction: they owned the platform, so they could pull any trick they needed to win. It gave rise to the naive idea that "Microsoft's compiler is obviously the best choice for Microsoft's OS". The fact that MFC "won" over VCL is a laughably painful example of this.
 
The IDE explosion from the DOS days was an anomaly born of the limitations of the platform. The mentioned GW-BASIC was born on DOS precisely because of that limitation, whereas DEC's BASIC-PLUS could remain a standalone text editor because it had a proper multitasking OS behind it.

We're talking about a diskless 1981/82 IBM PC. The first PC came with a tape drive and ROM BASIC. Continued below.

Dropping to the shell was quite different from what it is today; outside of multitasking systems like DESQview, it meant blocking execution of the original program. So on DOS you don't actually drop back to the shell; you spawn a new shell (i.e. system()). This design had a number of issues (growing RAM usage, sharing the environment, etc.). It certainly meant that you couldn't run the compiler and continue to work in the text editor while it ran in the background. Merging it all into one monolithic program is what allowed this, because custom scheduling could be implemented.

The DOS shell is exactly what the name says: a DOS shell. Multitasking is not implied. The Borland/Microsoft tools with an IDE do not have any sort of scheduling - they call the compiler and you wait; instead of looking at CLI output, progress is shown in a fancy window...

If you ask me, the workflow of these tools is optimal. You have the option of 'interrupting' the TUI/GUI program and dropping to the DOS shell if you have manual steps. That is, and was, enough, because in the early to mid 80s it was uncommon to have more than 640 kB of RAM in a PC.

There are multiuser, concurrent DOS solutions. People at large did not buy them just to have a multitasked development workflow.

The code was simple, it wasn't big, it didn't require much navigation, and people used to have most of it on paper too, in many other forms.

Anyhow, on the topic of IDE evolution, I think using totally separate tools via a Unix-style CLI is a bit of text-mode extremism, just as RAD IDEs are the GUI extreme on the other side of the spectrum. If we now have something like LSP that formalizes code lookup, maybe we should have had all these things in modular shape long ago, so everyone could also construct their own graphical IDE if they wanted, not just work with tmux/vi/cscope etc.

As the size of code grows, GUI stuff is handy for visualizing object hierarchies and debugging sessions. I remember a great visual gdb frontend, DDD, that I used almost daily back in the 00s and early 10s. It was a gdb console plus a visualizer that would automatically display structs and pointer relations from the symbols in scope.
 
And then Kotlin forces you to use the IntelliJ editor. Why hundreds of libraries to link and no command line available? Talk about vendor lock-in.
I liked Borland's text-based GUI editor and Delphi.
 
Heh, that's not the way it works. Quake 1 is not a 16-bit game; it is 32-bit.

DOS isn't an OS in 2026 terms. It has residency - it sets up code in particular pieces of memory and wires it into software interrupt services. A program called from DOS can access the entire PC, including the 32-bit facilities of a 386+ CPU. It needs to be careful not to overwrite the DOS residency. However, the program is free to kick into 32-bit protected mode using DPMI and then communicate with ISA hardware, the BIOS, and DOS interrupt services via the v86 facility of the 386 CPU.

The software libraries used to set up this thing are called extenders. It is all thoroughly documented, and there were free extenders even back in the day.

This has nothing to do with the 3D API stuff. The extenders only have to be used as a consequence: there are no 3D cards for 16-bit ISA; they're on 32-bit VLB/PCI, and 16-bit code cannot communicate with the PCI address space.

There is no "working on the API level". These peripherals didn't run entire firmware for black-box operation; their work was done by setting up registers and moving memory in and out.
I can understand your bias/confusion in this matter, but in the strictest sense DOS most certainly is an OS. Just because modern OSes better protect the hardware doesn't make historical OSes any less of an OS. I do a lot of 32-bit microcontroller programming and use RTOSes (real-time operating systems) that are meant to run in real mode with access to all system resources. They are still OSes. By definition, an OS need only provide an abstraction between the user program and the hardware resources. There is no requirement that the user program MUST honor the OS API. The protected mode / virtual addressing of modern OSes is an enhancement, not a requirement.
 
The Borland/Microsoft tools with an IDE do not have any sort of scheduling - they call the compiler and you wait; instead of looking at CLI output, progress is shown in a fancy window...
To an extent. They can do iterative work (i.e. scan a little bit of the file, then poll the GUI), but the debugger (cooperative and interrupt-driven rather than pre-emptively scheduled) very much allowed you to pause and interact with the program during its run rather than wait for it to complete in its entirety - probably via the magic of INT 3. But mostly, the tool didn't need to be closed to hit breakpoints, tweak code, and repeat. This is important because the only other approach would be to save the entire state of the program and restore it when you need to launch it again.
If you ask me, the workflow of these tools is optimal. You have the option of 'interrupting' the TUI/GUI program and dropping to the DOS shell if you have manual steps. That is, and was, enough, because in the early to mid 80s it was uncommon to have more than 640 kB of RAM in a PC.
To clarify, you couldn't really drop to the shell; you could only spawn a new shell on top of it (we are probably talking about the same thing). But this was terrible for the limited RAM of the machines at the time. You were then looking at running:
  • Shell
  • Editor
  • Shell
  • Debugger
  • Program
So merging them all together into a monolithic IDE was heavier than any individual program, but combined it was the lesser evil.
Anyhow, on the topic of IDE evolution, I think using totally separate tools via a Unix-style CLI is a bit of text-mode extremism, just as RAD IDEs are the GUI extreme on the other side of the spectrum.
That ship sailed back in the '90s, so it probably isn't worth the discussion, but the one thing I will say is that cognitive context switching between them tends to be tricky. This is why around 50% of people tend to go "full CLI" and the other 50% tend to go "full GUI". Yes, a kind of extremism, but also a more natural workflow, whichever is chosen.

As the size of code grows, GUI stuff is handy for visualizing object hierarchies and debugging sessions. I remember a great visual gdb frontend, DDD, that I used almost daily back in the 00s and early 10s. It was a gdb console plus a visualizer that would automatically display structs and pointer relations from the symbols in scope.
Yeah, DDD was good. Sadly, it kind of rotted. I tried to find an easy Win32 alternative for my students many years back, because they were exceptionally green when it came to CLI work. There are relatively few GDB GUI frontends, aside from a bizarre web-based approach - I honestly couldn't think of a worse idea.
 