The simplest take on this topic has already been mentioned, but it bears repeating:
Hardware is cheap, Software is expensive.
Software is expensive because software development is complex (and yes, hardware development is complex as well, but in a much more limited domain; the problems you solve in software are far more varied).
So, if you can greatly cut down software development effort by using, e.g., simpler (higher-level) languages, libraries, frameworks, and so on, and the price is that you need more RAM (to cover all that "hidden" complexity), that's a good deal.
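To make that a bit more concrete, here's a rough C sketch (my own illustration, not any particular runtime's actual layout) of where some of that "hidden" RAM goes: a dynamically typed, garbage-collected value typically carries a type tag, bookkeeping fields, and an extra pointer, where plain C stores a bare machine word.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical boxed value, roughly how many dynamic runtimes represent data. */
typedef struct {
    uint8_t  type_tag;   /* what kind of value this is        */
    uint32_t refcount;   /* bookkeeping for memory management */
    union {
        int64_t as_int;
        double  as_double;
        void   *as_pointer;
    } payload;
} BoxedValue;

int main(void) {
    /* A plain C int is 4 bytes.  The same integer as a boxed value costs
     * the box itself plus an 8-byte pointer in whatever container holds it. */
    printf("raw int:     %zu bytes\n", sizeof(int));
    printf("boxed value: %zu bytes (+ %zu for the pointer to it)\n",
           sizeof(BoxedValue), sizeof(void *));
    return 0;
}
```

That overhead is the memory you trade for never having to think about types, allocation, or lifetimes yourself.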
Then, games are special, and development for a gaming console is even more special. Games (certainly 20 years ago) were relatively simple in structure: there's a limited number of things that can happen (or that the player can do) during gameplay. Furthermore, if you write a game for a console, you find a special-purpose OS (if any at all) and exactly defined hardware. There's no need for the abstraction layers your typical portable PC software requires. You never have to think about other processes; the only ones running are those your game needs. Directly accessing audio and video hardware isn't a problem at all, because again, you're "alone" on that machine and you know exactly which hardware is there. Comparing the memory needed for this to programs running on a general-purpose multiuser/multitasking OS on a general-purpose machine is fundamentally flawed.
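As a sketch of what being "alone" on the machine buys you: on a console with a documented, fixed memory map, drawing a pixel can be a single store into a memory-mapped framebuffer. The address, width, and pixel format below are made up for illustration (they're not any real console's), but the shape of the code is what you get when there's no OS, no driver, and no other process to coordinate with.

```c
#include <stdint.h>

/* Hypothetical, made-up memory map; a real console documents its own. */
#define FRAMEBUFFER_BASE 0x04000000u   /* fixed address of video memory */
#define SCREEN_WIDTH     320           /* pixels per row                */

/* Write one 16-bit pixel straight into video memory: no system call,
 * no driver, no compositor, nobody else sharing the hardware.         */
static inline void put_pixel(int x, int y, uint16_t color)
{
    volatile uint16_t *fb = (volatile uint16_t *)(uintptr_t)FRAMEBUFFER_BASE;
    fb[y * SCREEN_WIDTH + x] = color;
}
```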
Finally, I think the "extraordinary small task" you talk about is also mistaken. Please compare such software to software on "desktop" machines 20 years ago. Back then, it wasn't uncommon for a crashing program to take down your whole system (yep, that was the era of e.g. Win 9x on your typical x86). UIs were "simple", but severely lacking in functionality (and UX). Few things, if any, were configurable by the user. Sound was an exclusive resource. Well, the list goes on, depending on which software you actually look at...
So now, what's "software bloat"? Maybe the "excessive" use of libraries and frameworks. This does happen; I'm looking, for example, at Node and Electron. I don't like them for other reasons (portability isn't as nice as advertised, and packaging software with them is a PITA), but I personally wouldn't mind the "wasted" memory if it gets me a well-working application quickly, because the devs didn't have to "reinvent the wheel" over and over.
And finally, I just stumbled upon this:
https://v8.dev/blog/pointer-compression – it's IMHO an awesome example of how, even today(!), a lot of effort is spent on optimizations at a lower level. Doing it there makes a lot of sense, because a lot of software immediately benefits from it.
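To illustrate the idea behind that article (this is a toy sketch of the general technique, not V8's actual implementation): if every heap object is guaranteed to live inside one contiguous region, a reference can be stored as a 32-bit offset from the region's base instead of a full 64-bit pointer, roughly halving the memory spent on every pointer field.

```c
#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

static char *heap_base;            /* base of the (hypothetical) heap region */

typedef uint32_t compressed_ptr;   /* 32-bit offset into that region */

static compressed_ptr compress(void *p) {
    return (compressed_ptr)((char *)p - heap_base);
}

static void *decompress(compressed_ptr c) {
    return heap_base + c;
}

int main(void) {
    heap_base = malloc(1 << 20);   /* stand-in for a reserved heap region */
    char *obj = heap_base + 128;   /* pretend an object lives at this offset */

    compressed_ptr ref = compress(obj);
    printf("full pointer: %zu bytes, compressed: %zu bytes\n",
           sizeof(void *), sizeof(ref));
    printf("round trip ok: %d\n", decompress(ref) == (void *)obj);

    free(heap_base);
    return 0;
}
```

The interesting part is exactly what the paragraph above says: one optimization at that level pays off across every application running on the engine, without those applications changing a single line.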