Abstract
Olivier Temam, Director of Research at Inria, showed how microprocessor performance has evolved considerably since the introduction of the Intel 4004 in 1971, even though the basic principle, inherited from the von Neumann architecture of the 1940s, has remained unchanged: the processor is connected to an external memory and executes a program stored in that memory on data also stored there, decoding its instructions and performing calculations with arithmetic and logic units and small temporary local memories.
Performance improvements have come from two sources. The first is the shrinking size of wires and transistors, which led to an exponential increase in the number of transistors per chip (Moore's law) and to higher clock speeds. The second is the improvement of the processor's logical architecture, based on fine-grained parallelism and on several space/time trade-offs such as pipelining and speculation. Finally, as memories have become slower and slower relative to processors, several levels of cache memory managed by sophisticated algorithms are used to reduce the average data access time.
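The effect of this cache hierarchy can be made visible with a small experiment (an illustrative sketch, not part of the talk): the C program below repeatedly walks a buffer of growing size and reports the average time per access; once the working set no longer fits in a given cache level, that average jumps. The buffer sizes, the 64-byte stride, and the iteration count are arbitrary choices made for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(void) {
        /* Largest working set: 64 MiB, bigger than typical on-chip caches. */
        const size_t max_bytes = 64 * 1024 * 1024;
        volatile char *buf = calloc(max_bytes, 1);
        if (!buf) return 1;

        /* Try power-of-two working sets from 4 KiB up to 64 MiB. */
        for (size_t size = 4 * 1024; size <= max_bytes; size *= 4) {
            const size_t iters = 10 * 1000 * 1000;
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            size_t idx = 0;
            for (size_t i = 0; i < iters; i++) {
                buf[idx] += 1;                 /* one memory access per iteration */
                idx = (idx + 64) & (size - 1); /* jump one cache line, wrap inside the working set */
            }
            clock_gettime(CLOCK_MONOTONIC, &t1);
            double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
            printf("working set %8zu KiB: %.2f ns/access\n", size / 1024, ns / iters);
        }
        free((void *)buf);
        return 0;
    }

On a typical machine the reported time per access stays low while the working set fits in the L1 or L2 cache and rises sharply once accesses have to go to the last-level cache and then to main memory.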
O. Temam explained that the current challenge is energy consumption. Since clocks can no longer be accelerated without burning up the circuits, the only remaining way to improve performance is to put several processors on each chip (currently 8, soon hundreds or thousands), which also improves tolerance to manufacturing defects, since cores found to be non-functional during testing can simply be disabled. But a new problem has appeared, that of "dark silicon": any core that is working heats up and must quickly be shut down, with its program migrated elsewhere, so that it can cool down; at any given moment a good proportion of the cores are simply resting. Hardware architecture is therefore a constantly evolving subject, all the more so as new computing paradigms are emerging, such as quantum processors and biomimetic circuits. They are full of promise, but we must not mistake our desires for reality, as we all too often see in presumptuous pseudo-scientific assertions.
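As an illustration of the multicore approach (again a sketch, not material from the talk), the POSIX-threads program below splits a simple summation across eight worker threads, one per core of the 8-core chip mentioned above; the thread count and problem size are assumptions made for the example.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 8            /* assumption: one thread per core on an 8-core chip */
    #define N (1 << 22)           /* 4 million elements, split evenly among the threads */

    static double data[N];

    struct slice { size_t begin, end; double partial; };

    /* Each thread sums its own slice of the array independently. */
    static void *sum_slice(void *arg) {
        struct slice *s = arg;
        double acc = 0.0;
        for (size_t i = s->begin; i < s->end; i++)
            acc += data[i];
        s->partial = acc;
        return NULL;
    }

    int main(void) {
        for (size_t i = 0; i < N; i++) data[i] = 1.0;

        pthread_t tid[NTHREADS];
        struct slice sl[NTHREADS];
        for (int t = 0; t < NTHREADS; t++) {
            sl[t].begin = (size_t)t * N / NTHREADS;
            sl[t].end   = (size_t)(t + 1) * N / NTHREADS;
            pthread_create(&tid[t], NULL, sum_slice, &sl[t]);
        }
        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += sl[t].partial;
        }
        printf("sum = %.0f\n", total);   /* expect 4194304 */
        return 0;
    }

The point of the sketch is only that, once clock speeds stall, additional performance has to come from dividing the work among cores in this way, which is exactly the shift described above.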