Some versions of the venerable MOS Technology 6502 have only 3,218 transistors. The Intel 8080 has somewhere between 4,500 and 6,000. 5k transistors is squarely in the middle of "plenty for a classic 8-bit micro": enough to run a basic *nix or an embedded RTOS.
There's something I'm missing: they say the prototype hits 2.5 GHz. How is that possible if it only has the equivalent of 5k transistors? Or is clock speed independent of transistor count?
Adding more transistors doesn't make your clock go faster, and it doesn't increase the speed of an individual transistor. The reason computers got both more transistors and faster in the past was that the transistors were continually shrinking; for a traditional MOSFET, Dennard scaling means that the smaller the transistor (and therefore the smaller its capacitance and voltage), the faster it switches. This device doesn't use MOSFET technology, so its scaling rules are different.
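To put rough numbers on that (entirely made-up figures, just to illustrate the delay ~ C*V/I relationship, not real process data):

    # Gate delay is roughly C*V/I. Under classic Dennard scaling, a linear
    # shrink by factor k reduces C, V and I together, so delay shrinks by k too.
    def gate_delay(c_farads, v_volts, i_amps):
        return c_farads * v_volts / i_amps

    d_old = gate_delay(1e-15, 1.2, 1e-4)   # hypothetical "old" node: ~12 ps
    k = 0.7                                # hypothetical linear shrink per generation
    d_new = gate_delay(1e-15 * k, 1.2 * k, 1e-4 * k)
    print(d_new / d_old)                   # ~0.7: the gate got faster by the same factor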
It depends on what you have your transistors do. You could even have a single transistor that you switch on and off very quickly; you'd need one with sub-ns switching time to reach >1 GHz. It's not a very interesting "computation," though.
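For scale (the per-gate switching time below is an assumed number, just to show the arithmetic):

    # A 2.5 GHz clock gives you a 400 ps cycle; what matters is how many gate
    # delays you can chain between registers in that window, not the gate count.
    clock_hz = 2.5e9
    period_ps = 1e12 / clock_hz                   # 400 ps per cycle
    gate_delay_ps = 50                            # assumed switching time per gate
    print(period_ps, period_ps // gate_delay_ps)  # 400.0 ps, ~8 gates deep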
I'm very confused but I mean this in the utmost sincerity:
What made you think transistor count and clock speed were linked? What was your line of thought? How did you think overclocking worked, by dynamically removing transistors from the chip?
GP is not entirely wrong. To achieve any reasonable clock speed, pipelining is necessary, i.e. breaking up the critical combinational path with registers, which adds transistors. Cache is also needed, which also eats into the transistor budget.
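A rough sketch of that trade-off (made-up delays; reg_overhead_ps stands in for register setup plus clock-to-Q time):

    # Splitting one long combinational path into N stages raises the max clock,
    # at the cost of N-1 extra banks of registers worth of transistors.
    def f_max_ghz(total_logic_ps, stages, reg_overhead_ps=20):
        stage_delay_ps = total_logic_ps / stages + reg_overhead_ps
        return 1e3 / stage_delay_ps    # 1000 ps per ns -> GHz

    for stages in (1, 2, 4, 8):
        print(stages, round(f_max_ghz(1000, stages), 2))
    # 1 stage -> ~0.98 GHz ... 8 stages -> ~6.9 GHz, but every stage boundary costs registers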
That's basically my question. I'm not clear on what all the transistors on a chip are doing and I'm trying to understand what the significance of a chip with only 5k transistors but able to do 2.5GHz would be. I imagine there must be some limitation, I'm just not sure what.
Most of the transistors in a modern microprocessor go to clever bargains that let the CPU execute a single thread faster: caches, which keep instructions and data closer than main memory; translation lookaside buffers (TLBs), which hold already-resolved virtual-to-physical address translations; pipelining, which reduces the amount of work and complexity per stage so each stage can be clocked faster; SMT, to make better use of multiple decode ports and execution units; complex additional instructions like AVX, for doing more work in fewer instructions; and microcode, which decouples the underlying microarchitecture from the instruction set, allowing significantly more design freedom and letting legacy instructions be implemented without dedicated hardware.
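As a toy illustration of where some of that transistor budget goes, here's a direct-mapped cache sketched in Python (not any real CPU's organization; the line size and set count are made up):

    # Toy direct-mapped cache: tag storage, comparators and a data array are
    # exactly the kind of structures that soak up transistors on a real chip.
    LINE = 64          # bytes per cache line (assumed)
    SETS = 512         # number of sets -> 32 KiB of data storage (assumed)

    tags = [None] * SETS
    hits = misses = 0

    def access(addr):
        global hits, misses
        index = (addr // LINE) % SETS
        tag = addr // (LINE * SETS)
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag      # fill the line from "memory"

    for a in range(0, 1 << 20, 8):  # sequential 8-byte reads over 1 MiB
        access(a)
    print(hits, misses)             # ~7/8 hit rate: one miss per 64-byte line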
A design as simple as an 8-bit micro implements the instruction set directly in hardware, with minimal pipelining and no caches, just a few registers holding the values currently being worked on. It may implement a few dozen to a little over a hundred instructions vs. thousands in a modern x86. It won't have any fancy integrated peripherals like a graphics controller or NPU, just an interface to memory and a few IO pins. Even a 2.5 GHz 8-bit micro won't be fast compared to a similarly clocked modern x86: the micro may dispatch one instruction per clock, or one per two or four clocks, whereas the x86 might decode 6 or 8 instructions per clock per core and keep hundreds of instructions in flight at any given time. But an 8-bit micro is just past the threshold of complexity at which something is recognizably a CPU, capable of arbitrary computation, onto which you can bolt anything else you might need.
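Back-of-envelope, with assumed (not measured) IPC figures:

    # Clock rate alone doesn't tell you how fast a core is.
    def per_second(clock_hz, ipc):
        return clock_hz * ipc

    micro = per_second(2.5e9, 0.5)    # 8-bit micro: assume 1 instruction per 2 clocks
    x86   = per_second(2.5e9, 4.0)    # modern x86 core: assume ~4 sustained IPC
    print(micro / 1e9, x86 / 1e9)     # 1.25 vs 10 billion instructions/second
    # and each x86 instruction does far more work (64-bit ops, SIMD, etc.)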
Are barrel shifters automatically zero sum? How often do GPUs do renormalization? It's the switching and the location information.
Did we have this discussion 50 years ago? It is a set of brilliant ideas. I read about JJs (Josephson junctions) when I was a kid, and later re-read about why they never panned out significantly.