> You can get the same sharing of hardware resources with hardwired control circuitry as with microcode.
Only with some other kind of state machine, though. I was maybe being a little loose with my definition for "microcode" vs. other state paradigms.
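To make that concrete, here's a toy sketch (all control signals and encodings are invented for illustration, not modeled on any real CPU): the same made-up datapath control word can be produced either by a small hardwired state machine or by stepping a micro-PC through a ROM. The datapath being controlled doesn't care which one is driving it, which is the "same sharing of hardware resources" point.

```c
/* Toy illustration: the same control lines driven by hardwired logic or a
 * microcode ROM + sequencer. All bits and states are made up. */
#include <stdio.h>
#include <stdint.h>

/* Hypothetical control signals for a fetch/execute/writeback sequence. */
enum {
    CTRL_MEM_READ  = 1 << 0,
    CTRL_IR_LOAD   = 1 << 1,
    CTRL_ALU_ADD   = 1 << 2,
    CTRL_REG_WRITE = 1 << 3,
};

/* Option 1: hardwired control -- outputs come from fixed logic per state. */
static uint8_t hardwired_step(int state)
{
    switch (state) {
    case 0:  return CTRL_MEM_READ | CTRL_IR_LOAD;  /* fetch   */
    case 1:  return CTRL_ALU_ADD;                  /* execute */
    case 2:  return CTRL_REG_WRITE;                /* writeback */
    default: return 0;
    }
}

/* Option 2: microcoded control -- the same control words live in a ROM and a
 * micro-PC simply steps through them. */
static const uint8_t ucode_rom[] = {
    CTRL_MEM_READ | CTRL_IR_LOAD,
    CTRL_ALU_ADD,
    CTRL_REG_WRITE,
};

int main(void)
{
    for (int t = 0; t < 3; t++) {
        printf("cycle %d: hardwired=%#04x  microcoded=%#04x\n",
               t, (unsigned)hardwired_step(t), (unsigned)ucode_rom[t]);
    }
    return 0;
}
```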
Maybe the converse point is clearer: "RISC", as a design philosophy, only makes sense once there's enough hardware on the chip to execute and retire each piece of an instruction in a single cycle (not all of them in the same cycle, of course, and there were always instructions that broke the rules and required a stall). Lots and lots of successful CPUs (including the 8086!) were shipped without this property.
My point was just that having 50k+ transistor budgets was a comparatively late innovation, and that given the constraints of the time, microcode made a ton of sense. It's not a mistake even if it seems like one in hindsight.
There's another implicit assumption here, though, which is that code size optimization is important. That's obviously true given the era we're discussing, but it also makes clear why the implicit third possibility between "microcoded CISC" and "fully pipelined RISC" wasn't an option. Microinstructions themselves tend, especially in earlier designs, to be "simple" in the RISC sense (even if not pipelined, they're usually a fixed number of cycles/phases each). Why, then, do developers write instructions that are then decoded into microinstructions, instead of writing microinstructions directly in an early equivalent of VLIW? Initially, as a code size optimization so essential that it goes beyond mere optimization; but then, once the cost of the microcode decoder and state machine is already paid, because the chosen ISA, now isolated from the micro-ISA, is a better and more pleasant ISA to develop for.
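A rough back-of-envelope on that code-size argument, with every number and encoding invented for illustration: if each one-byte macro opcode expands to a handful of wide control words, storing the program as macro ops is several times denser than storing the micro-ops directly, which is what "write the micro-ISA yourself", early-VLIW-style, would amount to.

```c
/* Toy sketch of the code-size argument; the opcodes, micro-op width, and
 * expansion factor are all placeholders, not any real ISA. */
#include <stdio.h>
#include <stdint.h>

#define UOPS_PER_MACRO 4            /* pretend the average expansion is 4   */
typedef uint32_t uop_t;             /* pretend a micro-op is a 32-bit word  */

enum { OP_LOAD, OP_ADD, OP_STORE, OP_COUNT };   /* toy one-byte macro ops */

/* Toy microcode ROM: macro opcode -> sequence of wide control words. */
static const uop_t ucode_rom[OP_COUNT][UOPS_PER_MACRO] = {
    [OP_LOAD]  = { 0x111, 0x112, 0x113, 0x114 },
    [OP_ADD]   = { 0x221, 0x222, 0x223, 0x224 },
    [OP_STORE] = { 0x331, 0x332, 0x333, 0x334 },
};

int main(void)
{
    /* A tiny "program" stored as one-byte macro opcodes. */
    const uint8_t program[] = { OP_LOAD, OP_ADD, OP_STORE };

    size_t macro_bytes = sizeof(program);
    size_t micro_bytes = 0;

    for (size_t i = 0; i < sizeof(program); i++) {
        const uop_t *seq = ucode_rom[program[i]];
        micro_bytes += UOPS_PER_MACRO * sizeof(uop_t);
        printf("macro op %u -> %d micro-ops starting with %#05x\n",
               (unsigned)program[i], UOPS_PER_MACRO, (unsigned)seq[0]);
    }

    printf("program memory as macro ops:     %zu bytes\n", macro_bytes);
    printf("program memory as raw micro-ops: %zu bytes\n", micro_bytes);
    return 0;
}
```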
Not just better and more pleasant, but also because having an abstraction layer between the code and the microarch is nice: it lets Intel modify their CPU's internals as they see fit without worrying about backwards compatibility; it allows Intel to make CPUs of many different speeds/complexities and, regardless of their insides, they all get to be compatible; and, as Intel builds more sophisticated microarchitectures, it allows old code to see speed improvements via faster microcode that has more resources to operate with.