To begin with: As far as I know, all CPU vendors have had problems in this area, including AMD, ARM, and IBM POWER. Intel's problems are the worst, but I can't tell whether that's just an effect of more attention in the press and more security researchers going after a bigger target, or whether Intel really has a lower quality standard in this area. Intel also has the most open ecosystem, with the largest variety of hardware and software (operating systems, ...) running on its chips, and therefore the biggest problem when it comes to patching the vulnerabilities.
Sure, AMD can try to turn this into an opportunity: their chips have gotten less bad press. But I don't know whether their chips are actually any less vulnerable; they might just exist more in the shadows.
Next thing: These problems are actually extremely old. One interesting thing is that the original papers about the Spectre/Meltdown bugs quoted an article written in the late 60s or early 70s! It's just that until recently (the last 10-15 years) there wasn't enough of a computer ecosystem to make computer crime profitable. In theory, it might have been possible to have a similar bug on the IBM mainframe I'll mention in a moment, but in 1964 several things were different: (a) computers were only used by a very small set of people; (b) those people were inherently trustworthy, otherwise they wouldn't have been allowed access to such a precious and expensive resource; (c) there was very little software around, and all software that existed was inspected in source form (IBM used to distribute the OS source to its customers, on microfilm, since customers were expected to find and debug problems at the source level). As a matter of fact, I think the IBM mainframes had no access control and no form of permission checking (other than the user/kernel separation) until the RACF product shipped in the late 1970s.
Finally: Speculative execution is simply necessary. You can't just wish it away because it happens to be inconvenient. Today, CPUs are not getting faster: Clock speeds improved from 33 MHz in 1994 to 1 GHz in 2000 to 3 GHz in 2005, but they have since stalled; the fastest CPUs I know of run at about 4.7 or 5 GHz today (and those are not generally available to the public). The CPUs that consumers can buy, and that are used in data centers and in the cloud, all run at about 3 GHz. To get more speed (Moore's law isn't dead yet, so you do get more gates per acre, just not more clock cycles per second), we need to do more at the same time. But putting more cores on each CPU is running into limits too, mostly because much software is hard to parallelize (either intrinsically, or because we haven't done the work yet). So to get more speed, we need to execute multiple instructions at the same time.

This started in the mid 1960s, with the IBM 360/91 (which did out-of-order, parallel, and pipelined execution, some of which it stole from the ill-fated IBM Stretch/Harvest machine). By the way, I've met people who worked on Stretch and the /91 (that was about 15 years ago, and they were admittedly very old). Since then, all "serious" computers (not toy microprocessors) have had to work this way to get to the desired speed.

The real issue is that chip designers never thought the information leakage from speculative execution through cache effects was significant enough to be a problem worth thinking about. And for better or worse, today it is considered one. It will take a few processor generations to seal those leaks.
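To make the cache-effect leak a little more concrete, here is a minimal sketch (my own illustration, not code from any of the papers) of the timing probe that Spectre-style attacks use as their covert channel: a load that hits in the cache is measurably faster than one that misses, so "was this line recently touched?" becomes observable from userspace. It assumes an x86-64 CPU and a GCC/Clang-style compiler that provides <x86intrin.h>; it is not an exploit, just the measurement side of the channel.

    /* Sketch of a flush+reload style cache-timing probe.
     * Assumes x86-64 with clflush/rdtscp; build e.g. with: gcc -O2 probe.c
     * Not an exploit: it only shows that cache hits and misses are
     * distinguishable by timing, which is the leak described above. */
    #include <stdio.h>
    #include <stdint.h>
    #include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

    static uint8_t probe[4096];

    /* Time one load of *addr, in cycles. */
    static uint64_t time_load(volatile uint8_t *addr)
    {
        unsigned int aux;
        _mm_mfence();
        uint64_t start = __rdtscp(&aux);
        (void)*addr;                      /* the load being timed */
        uint64_t end = __rdtscp(&aux);
        _mm_mfence();
        return end - start;
    }

    int main(void)
    {
        volatile uint8_t *line = &probe[0];

        /* Case 1: flush the line out of the cache -> slow load. */
        _mm_clflush((const void *)line);
        _mm_mfence();
        uint64_t miss = time_load(line);

        /* Case 2: the line was just touched -> fast load. */
        (void)*line;
        _mm_mfence();
        uint64_t hit = time_load(line);

        printf("cache miss: %llu cycles, cache hit: %llu cycles\n",
               (unsigned long long)miss, (unsigned long long)hit);
        return 0;
    }

On typical hardware the flushed load takes a few hundred cycles and the cached one a few dozen; that gap is the whole side channel. A speculative-execution attack simply arranges for a secret-dependent address to be loaded speculatively, then uses a probe like this to see which cache line got touched.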