Algorithms open up the possibility of a new way of looking at science and technology that extends far beyond their practical applications. This lecture explains the basics.
Moore's Law predicts that computing power will double every two years. This progression has been the main driver of technological innovation worldwide for over forty years. Like all exponential growth, of course, Moore's Law cannot last forever, and there is every reason to believe that it is already a dead letter. At any rate, this is the working hypothesis behind Microsoft's development of the next generation of operating systems. The future of computing, one might be tempted to conclude, belongs no longer to raw computing power but to algorithms: the discipline that has brought us the sequencing of the human genome, search engines, electronic auctions, the encryption of online purchases, and speech recognition. Beyond its software applications, however, the algorithm is above all a "subversive" conceptual tool that opens up the possibility of a fresh look at science and technology.
This lecture explains the key elements of this ongoing revolution.
Algorithms make us think differently. Did you know, for example, that there is an algorithm capable of almost instantly convincing you of the validity of a proof as formidable as that of the dreaded Poincaré Conjecture, by picking out no more than ten words of it at random? The intuition that any mathematical demonstration is at the mercy of the slightest error is fundamentally misleading. Did you know that you can convince an interlocutor of a truth without revealing the slightest information about its justification? These "zero-knowledge" proofs have radically transformed fundamental epistemological notions such as proof verification. In cryptography, their use enables transactions between parties that do not trust each other. For example, how can we authenticate a document, or prove possession of a password without revealing it?
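The idea that a proof can be checked by random spot-checks rests on deep results (the PCP theorem). A much simpler cousin gives the flavor: Freivalds' algorithm verifies a claimed matrix product by a handful of random probes, in quadratic rather than cubic time. The sketch below is a minimal illustration of that probabilistic-verification idea, not of the PCP theorem itself; the function name is ours.

```python
import random

def freivalds_check(A, B, C, trials=10):
    """Probabilistically verify that A x B == C by random spot-checks.

    Each trial multiplies by a random 0/1 vector r and compares A(Br)
    with Cr -- O(n^2) work per trial instead of the O(n^3) cost of
    recomputing the product.  A wrong C escapes detection in a single
    trial with probability at most 1/2, so after `trials` rounds the
    error probability is at most 2**(-trials).
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False   # a mismatch is conclusive: C is wrong
    return True            # correct with probability >= 1 - 2**(-trials)
```

The verifier never recomputes the product; it gains near-certainty from a few random probes, just as a probabilistically checkable proof gains near-certainty from a few random words.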
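To make the password example concrete, here is a minimal sketch of the Schnorr identification protocol, a classic zero-knowledge-style scheme: the prover demonstrates possession of a secret x (the discrete logarithm of a public value y) without ever transmitting it. The toy parameters and function names are ours; a real deployment would use a group of cryptographic size.

```python
import random

# Toy public parameters: p is prime, q = (p - 1) // 2 is prime, and
# g = 4 generates the subgroup of order q.  Purely illustrative sizes.
p = 1019
q = 509
g = 4

def keygen():
    x = random.randrange(1, q)       # secret key (the "password")
    y = pow(g, x, p)                 # public key
    return x, y

def prove(x):
    """Prover's side: commit first, then answer the verifier's challenge."""
    k = random.randrange(1, q)
    t = pow(g, k, p)                 # commitment; reveals nothing about x
    def respond(c):
        return (k + c * x) % q       # response blinds x with the fresh k
    return t, respond

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds exactly when
    # s = k + c*x, i.e. when the prover really knows x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

# One round of the protocol
x, y = keygen()
t, respond = prove(x)
c = random.randrange(q)              # verifier's random challenge
s = respond(c)
assert verify(y, t, c, s)
```

The transcript (t, c, s) convinces the verifier, yet leaks nothing about x: for any challenge, a forger could fabricate an equally plausible transcript without knowing the secret, which is precisely the "zero-knowledge" property.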