Abstract
For three decades, the self-fulfilling observation of Gordon Moore (Intel's co-founder) about the ever-increasing performance of electronic circuits gave processor designers golden years. They were able to build ever more sophisticated mechanisms to exploit an ever-growing number of ever-faster transistors. This movement was accompanied by the design of portable and safe programming languages, with rich programming environments and compilers that translate programs into executable machine code by applying finely tuned optimizations, usually without the programmers' knowledge.
Over the past decade, however, the race for processor speed has come to a halt: limits on energy consumption and heat dissipation have been reached, yielding diminishing returns in performance. Designers had no choice but to enter a new race, the race for parallelism, with computers and processors featuring more and more computing cores. This choice, imposed on system designers and programmers, will make programming and verification much more difficult, and will therefore yield a much smaller increase in productivity than before. The future dominance of multi-core computing based on cache coherence and statistical optimization mechanisms also threatens the safety of mission-critical systems, because it makes it virtually impossible to analyze the worst-case execution time of a program, an analysis that is essential to any proof of safety for real-time applications.