From a small implementation detail to an unlimited manifesto (Daniel Porumbel © 2023)

Whether we like it or not, the practical speed of any algorithm that is actually executed somewhere depends on the quality of its implementation. After ten years of talks at various Computer Science (CS) conferences, I have just realized that I always finished my speeches with a phrase like: "And finally, I will spare you the implementation details because I prefer to focus on the essentials and the high-level concepts." Despite some merits, this phrase is far too simplistic to be truly realistic. It is likely to distract listeners with an average attention span from the truth. Like a backdoor in a software program, it smuggles in an unspoken conclusion: that the implementation has no relevant impact on the total computational cost. But in reality, this cost always depends on a constant (or even larger) factor K closely related to the quality of the implementation. Such factors K are invisible when calculating theoretical complexities such as O(n^2), O(n^2 log n), etc. But imagine a salesman trying to say that the cost to pay is $1000 multiplied by some constant factor K=3 or K=6 which is not relevant in absolute terms. With all due respect to all rich people for whom that may be fine, I would not let myself be convinced so easily. I am even very grateful to my fate that it spared me the emptiness of living a life in such great luxury.

I once listened to a leading expert giving a keynote lecture: he said that a certain theory would be essential for all the algorithms that are now pervasive in our lives. I stood dumbfounded, watching how everybody was happy to think the existence of these algorithms was made possible mainly by that wonderful theory. Very few understood that this vision rested on too many of the simplifying assumptions so dear to theorists. First, by following this vision, one can easily forget that all these algorithms had to be implemented by someone. And we should never forget: "the devil is in the details". This is why I would advise anyone to be more prudent before judging that one theory alone can provide all the answers. The essential quite often remains so difficult to grasp that even the greatest minds may need painstaking efforts to penetrate it.

Let us first consider only the running time: many theories can propose a large variety of tools to determine the time complexity, but practice is so complex that it may very easily escape most simplifications and axiomatizations of any theory. The number of factors that can influence practical computation time is too large even to be listed here. Real life has more surprises than theory. I uploaded to https://youtu.be/9NE0rdLM3Zg two C++ programs that are both obviously exponential in theory. They perform the allocation and initialization of an array with 2^n elements. But if you check the video, you will see that one implementation takes virtually no time while the other is exponential (it takes 3.2 seconds for n=30, 1.6 seconds for n=29, or 0.8 seconds for n=28). This was not just some odd behavior due to an atypical configuration of my machine: it was confirmed by around 50 colleagues on their machines. This is the result of a poll launched in November 2022 on the list of the French Operations Research community (gdr-ro), combined with a poll launched in February 2022 on the list of the Cedric CS laboratory. Only one person in around 50 reported results that contradicted this behavior.
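Before returning to that experiment, let me make the earlier point about the constant factor K concrete. The sketch below is my own illustration, not taken from the talks or the polls above: both loops perform exactly n*n additions, so both are O(n^2), yet on most machines the column-major loop runs several times slower because it defeats the CPU cache. The matrix size n = 8192 is an arbitrary choice.

```cpp
// Two O(n^2) traversals of the same n x n matrix: identical operation
// counts, different constant factor K due to memory-access order.
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = 8192;                 // 256 MiB of ints
    std::vector<int> m(n * n, 1);
    long long sum = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)         // row-major: contiguous,
        for (std::size_t j = 0; j < n; ++j)     // cache-friendly accesses
            sum += m[i * n + j];
    auto t1 = std::chrono::steady_clock::now();
    for (std::size_t j = 0; j < n; ++j)         // column-major: stride-n,
        for (std::size_t i = 0; i < n; ++i)     // cache-hostile accesses
            sum += m[i * n + j];
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::milliseconds;
    std::printf("sum=%lld  row-major: %lld ms  column-major: %lld ms\n",
                sum,
                (long long)std::chrono::duration_cast<ms>(t1 - t0).count(),
                (long long)std::chrono::duration_cast<ms>(t2 - t1).count());
    return 0;
}
```

Compiled with g++ -O2, the second loop is typically several times slower on commodity hardware -- exactly the kind of factor K that the O(n^2) notation erases.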
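As for the video experiment itself, the exact programs are shown in the video; the sketch below is only my reconstruction of the kind of pair involved, assuming the fast version relies on calloc() and the slow one on an explicit initialization loop. On many Linux configurations, calloc() for a large block returns lazily mapped zero pages in near-constant time, while the explicit loop must physically touch every page, hence the visible exponential growth in n.

```cpp
// Allocate and "initialize" an array of 2^n ints (about 4 GiB for n=30;
// choose a smaller n on machines with less free memory).
// Build the lazy version with:   g++ -O2 -DLAZY alloc.cpp
// and the eager version with:    g++ -O2 alloc.cpp
// then compare the two with:     time ./a.out 30
#include <cstdio>
#include <cstdlib>

int main(int argc, char **argv) {
    const int n = (argc > 1) ? std::atoi(argv[1]) : 30;
    const std::size_t size = static_cast<std::size_t>(1) << n;   // 2^n

#ifdef LAZY
    // Near-instant on many systems: the kernel merely maps copy-on-write
    // zero pages; no physical memory is written yet.
    int *a = static_cast<int *>(std::calloc(size, sizeof(int)));
#else
    // Visibly exponential in n: the loop writes every element, forcing the
    // OS to back all 2^n * sizeof(int) bytes with physical pages.
    int *a = static_cast<int *>(std::malloc(size * sizeof(int)));
    if (a != nullptr)
        for (std::size_t i = 0; i < size; ++i)
            a[i] = 0;
#endif

    if (a == nullptr) {
        std::puts("allocation failed");
        return 1;
    }
    std::printf("a[%zu] = %d\n", size - 1, a[size - 1]);  // use the array
    std::free(a);
    return 0;
}
```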
One thing is clear: if there is one machine with an atypical configuration, it is not one of the 50 machines that confirmed the behavior, but the one that contradicted it. I am quite sure that if you take a program with a theoretically proven linear time complexity and execute it on 50 different machines, one of them will need more than linear time to run it. It is in the nature of things: certainties only exist in theory.

Certain theoretical CS researchers (who aim to publish in conferences such as FOCS, SODA, etc.) often design algorithms while paying zero attention to the implementation. Some are even proud that they only work with algorithms designed never to be implemented! But a theoretical development in this direction alone will never come close to explaining the unexpected practical behavior of the two programs from the above video. I must confess I also wrote papers with no implementation (e.g., see my only paper on submodular function minimization). Still, we all have to accept that these types of algorithms are likely to remain only in our imagination; they may contain the most brilliant ideas, but as long as they live only on paper, they cannot become "pervasive in our lives".

I have a high regard for many physical, mathematical or philosophical theories that explain natural phenomena existing in real life. It is also very honourable that many mathematicians study beautiful theoretical ideas that have captivated the minds of so many people over the centuries. If science served only concrete purposes, if the state invested only in sciences with an immediate impact on the economy, then that society would become empty, progressively drying up the vital sap required for long-term development. Still, what is the point of studying an algorithm that exists only on paper? If that is done only to escape the harsh realities of down-to-earth programming and real life, there is still a lot of room for improvement.

The programming sciences have had an unflattering image since their inception. Here is how a great lady described the work of programming engineers in the 1960s [1, p. 46]:

>To her, math was an elevated intellectual pursuit, while engineering was the
>grubby work of machinists -- a fairly prevalent attitude among those studying
>math theory. "It's hard to describe how much contempt we had for the engineers in
>the computer center," [Jean] Sammet recalled. "And I thought of a computer as some
>obscene piece of hardware that I wanted nothing to do with."

When I first read these words, I wondered whether anything had changed in the last 60 years. But then a more important question naturally arose: what motivated these people to code like crazy in an environment that offered no recognition? It goes without saying that they didn't do it to (try to) impress a woman like the one who wrote the above words. So why else would they do it? At the time, the huge economic potential of computers was not so visible and there was little (short-term) financial gain in sight. To (attempt to) provide an answer, I have to venture along the path of a non-exact science, philosophy, but always with the goal of explaining down-to-earth phenomena. Since computer science is not completely disconnected from other sciences, let me think a bit outside the box and discuss an idea coming from physics.
I idealize physics a little, because it is a science where one can build very high, very serious theories, but always with the aim of connecting them to real life, of confronting them with reality; when we talk about theories in such a context, we really do *not* mean playing with soap bubbles that burst as soon as they touch the ground of reality. To represent this intuitively, imagine theory as a lightning bolt that emerges from the clouds but makes a high-intensity contact with the ground (of reality): such high-level ideas materialize, they become reality! By bringing any new machine into action, the spark of (elevated) human thinking establishes a living contact with the people, giving a great sense of worth to the invention. This is how we give life to matter, to some extent forwarding the spark of life that the Almighty Creator gave to mankind. The process started long ago, when the very first prehistoric man invented the first agricultural tools, raising himself above the animal level. I can now quote Tesla, a great physicist and engineer who should have won the Nobel Prize for the invention of radio (to say the least).

>The progressive development of man is vitally dependent on invention [note: and
>particularly dependent on software today]. It is the most important product of
>his creative brain. Its ultimate purpose is the complete mastery of mind over
>the material world, the harnessing of the forces of nature to human needs. This
>is the difficult task of the inventor who is often misunderstood and
>unrewarded. But he finds ample compensation in the pleasing exercises of his
>powers and in the knowledge of being one of that exceptionally privileged class
>without whom the race would have long ago perished in the bitter struggle
>against [the] pitiless elements [of nature].

These words also cover the software aspects; the word "nature" here refers to the eternal laws of the universe that dictate the rules of our existence. If you are reading these lines on a computer screen, it is because humankind was able to master (electrical) energy and channel the (physical) laws of nature to display this text. This task was performed over multiple generations: one generation mastered electricity, another invented transistors, yet another designed the hardware; this progressive development enabled our generation to focus on the software aspects. This is how mankind succeeded in what Tesla called "harnessing the forces of nature to human needs". It is the inventors who freed us from the grip of gravity. There are multiple types of inventors who fought to free us from this grip of the pitiless forces of nature, each in their own way.

A) The engineers mentioned above, the ones who designed the first software programs in the 1960s, contributed to putting the (physical) force of the (electronic) hardware at our service. This spark led to the gigantic computer science revolution we know today. Tesla was right about a second aspect: the inventor's activity is often misunderstood and unrewarded. This is indeed confirmed by the attitude of the lady mentioned above: mankind does not always respect its inventors at their true value. With the exception of the very few who achieve fame or fortune (not without performing networking and negotiation work), many scientists are not really appreciated as they deserve. If we live better today than 100 years ago, it is thanks to technological progress more than to social advances (unlike what you may understand from reading the press).
It would therefore be logical to honour our inventors at least as much as our artists, men of letters, sociologists, journalists, actors, etc. Yet some researchers must even consider themselves lucky if they manage to earn a decent living today. To gain more insight into this, remember that the main driving force behind long-term economic development is technological progress, rather than the progress of the various socio-economic or political theories (left-wing, right-wing, liberalism, etc.) discussed every day in the media -- provided these theories are not disastrous. For instance, a technological lag of one generation will surely make any country lose any commercial, economic or military war. And if the technological level of a society remains constant, there is no reason to see any evolution of the truth regarding its optimal socio-economic organization. Without technological progress, the best way of organizing our society can remain the same.

B) The designer of C++ once explained that one motivation for creating the new language was to bring relief to the pain generated by working on certain tasks: "I swore never again to attack a problem with tools as unsuitable as those I had suffered while designing and implementing the simulator [or a random task]." The new language has brought relief to millions of people. Despite this enormous achievement, Stroustrup is infinitely less appreciated than Turing. Even the (popular) press often leads us to believe that the Turing machine is an essential cornerstone supporting all computer science. Many have been led astray on this point. But the whole explosion of computing that we see today is mainly due to people like Stroustrup and not to people like Turing. I prefer the understanding of Christopher Strachey, the founder of the Programming Research Group at Oxford: "most of the abstract mathematical and theoretical work is sterile because it has no point of contact with real computing" [2]. Indeed, if the Turing machine had not existed, the level of Computer Science development would have been exactly the same today. But if the designers of C and C++ had not existed, we would have been much less advanced. Or somebody else would have had to invent some equivalent languages. If we were to ban the Turing machine from all computer science courses, that would have no impact on the lives of the millions of software practitioners walking on this Earth. These practitioners never face undecidable or semi-decidable problems; for them, nothing is uncomputable. But imagine the disaster in the world of computing if we were to ban from all school curricula the concept of object-oriented programming (invented by researchers in operations research in Oslo [1, p. 108], who designed Simula because they needed a better-organized language).

Long-term solid progress usually comes in a gradual and progressive manner, as hinted by the first line of Tesla's quote. Computer science has reached the explosion we know today through a process requiring many small steps. The information revolution did not happen because of a single "great idea" or because of a few individual inventions. It is very misleading to acknowledge only the contribution of a few individuals (like Turing, Von Neumann, etc.), without thinking of the great mass and of the unknown soldier. At the very best, an individual invention might have been a spark that motivated the masses.
Ignoring this, many have been led astray and credit Knuth for the entire TeX/LaTeX typesetting system with its countless packages available today. It is like crediting the Wright brothers for all the airplanes ever built by man.

C) If we are not all condemned to use Windows today, it is partly thanks to the open-source movement, which fits very well into Tesla's quote above. Although the open-source model is a low-paying one, it is often more useful than the highly profitable companies that dominate the web. I came across a quote from an American economist, Adolf Berle, translated from French as follows: "The quest for profit or personal prestige never gives rise to great efforts to expand the capacity of men and women, to study their culture, to cultivate human resources inside a community, or to develop any non-commercial values. Thus, if an economic system were to depend exclusively on profit, it would [be too mechanical and] tend to stagnate." (Science et Vie, "La Lune", 1969, reprinted in 2019, p. 125). The open-source world has offered us more than just software; it has succeeded in developing motivations beyond profit to serve millions of people in a better way. Many theories that humankind invented never had a similar success with such a large-scale impact.

Highly theoretical research brings other benefits to mankind. First, it may encourage the development of certain intellectual qualities in man; purely theoretical questions often inspire those who like to exercise their brain. I find this perfectly honourable and I think I have also done it to some extent. But there is a great underestimated risk: this attitude could encourage us to do science as if we were simply practicing an intellectual bodybuilding sport. Such a sport would have the merit of cultivating in us a spirit of sacrifice and a certain abnegation that may strengthen our collective motivation. But such intellectual sportsmen did not wage a direct struggle with the pitiless forces of nature in order to put them at our service. It is not they who freed us from the grip of gravity. Modern society is not so dependent on them; it is more dependent on the software produced by a down-to-earth world.

Let us return to the woman cited above, the one so disgusted by the work of software engineering: "It's hard to describe how much contempt we had for the engineers in the computer center". Can a computer programmer find anything positive in such demeaning words? Paradoxically, the answer may be yes. Perhaps rather unexpectedly, this woman ended up spending her career in the computer room, right next to these detestable programmers. Her name is Jean Sammet and she is one of the designers of the COBOL language (she also taught computer science to Ritchie, one of the designers of C). I think she eventually became very much like these belittled engineers. However, she is not known for any work she might have done on what she initially thought to be "an elevated intellectual pursuit". Maybe she realized this might lead nowhere -- especially if "an elevated intellectual pursuit" merely means following some theory sheltered from the harsh difficulties of reality.

One consequence of the arguments stated so far is that the scientific world finds itself trapped in a very negative phenomenon that I will now try to describe.
Too many of the scientific elites have a very approximate understanding of software engineering; they often think that programming "is a kind of black art; it's one of the little things that has to be done to get results". As long as this philosophy remains prevalent, the community will stay fractured as follows:

a) On the one hand, there are many leading experts in different theories who have never seen an algorithm in execution and who lack insight when reasoning about software; for example, their speeches often focus on the "big picture" and the "great ideas", without realizing that the dynamics and efficiency of an algorithm can easily change if a "small" parameter is changed or even if a redundant constraint is introduced into the model. All this is because, as hinted above, they limit themselves to an understanding in broad lines. It is not possible to penetrate to the core of such questions by studying them from lofty heights. This leads them to all kinds of errors of approximation in their assessments and evaluations of software. Let us not be under any illusion: anyone who has never struggled with a machine to control the dynamics of an algorithm will never know the secrets of algorithmics beyond certain (simplifying) theories.

b) On the other hand, many programmers who struggle every day with software and algorithms do not have the most appropriate thinking framework and reasoning tools to easily avoid the countless pitfalls of the system. Such down-to-earth people may fail to understand the broad design principles that support their work, lacking an insightful vision of the field as a whole.

As long as this divide remains, I believe that the force of our scientific computing community will also remain broken. One consequence of this divide is that there are many exceptional theoretical algorithms out there, but we often forget to teach students how to produce a high-quality implementation of them. Thus, a large amount of intellectual energy may be lost in the translation of theory into practice. I continue with a slide (of A. V. Goldberg at ISMP 2018):

> What students learn?
> * Theory of Algorithms
>   - Ignore constant factors for machine independence
>   - Analysis mostly worst-case
> * Programming Languages
>   - Aim for compiler to take care of all low-level details
>   - High-level specification
> * Software Engineering
>   - Correctness
>   - Development speed
>   - Portability, maintainability and testability
> [In red:] Tempting to ignore constant factors and low-level details

This slide shows how easy it is to fail to teach students how to properly code the great algorithms produced by theory. The gap is quite obvious, and there is not enough communication between the community of theoretical researchers and that of programmers.

I would advise anybody to stay strong against a well-known and yet misleading phrase: "Computer science is no more about computers than astronomy is about telescopes [biology is about microscopes, or chemistry is about test tubes, etc.]". Here is the underlying pitfall: telescopes are a tool for studying something else (celestial phenomena), while computers are not a tool for studying something else! Computers are a tool to study computing. They are both the means and the end. Only somebody who uses computers as a tool to study something else can say that the tool is not the most essential. Hence, such a phrase only brings confusion by obfuscating the links between the "fundamentals of Computer Science" and real computing.
The difficulty of breaking a massive spell

Whenever such ideas become prevalent, whenever we build foundations on confusion, it is society as a whole that sooner or later ends up paying the bill, i.e., we risk becoming dominated by large numbers of "leading experts in fundamental computer science" who barely understand the functioning of a computer any better than a 12-year-old boy, having no idea how an algorithm is executed by a CPU, etc. This is the projection onto science of a more general phenomenon that appears very often in all branches of life, and especially in politics. Decisions concerning many important matters are often taken by people who know the subject only superficially. Our very emancipated world is dominated by bombastic but superficial people who think only about buzzwords and appearances. Many tigers of speech are in reality hollow inside. When superficial talking becomes pervasive in a society, the (academic) world may drift into chaos.

It is hard to fight such a trend. The main problem is that the legitimate respect we owe to our elites can easily slip into a form of blind worship. For instance, too many people easily get star-struck when speaking to a Nobel prize winner, trusting them without examining their arguments. In this world, there is no truly independent spirit. Just as our planet revolves around the sun and the moon revolves around the earth, all individuals revolve around a stronger force of attraction. Thus, many leading experts with a strong attractive force can have a paralyzing effect on the brains of many. Normally, leading experts should give arguments for their claims in the same way as ordinary people do. Only the force of their own arguments should count and nothing else. But many leading experts who manage to rise to a pseudo-glory no longer feel obliged to give many arguments for their ideas. They can anyway use what is called an "argument from authority" (argumentum ab auctoritate). There is a well-known example of a leading biologist (Painter) who stated that the human being has 24 pairs of chromosomes. From the 1920s to 1956, no researcher dared to oppose this "truth" that became widespread, almost in a "democratic" way. But today everyone agrees that man has 23 pairs of chromosomes.

The root of such a deviation lies in the fact that we are (all) often too weak when faced with our superiors. This is a natural law: we all have an interest in being well regarded and accepted by our elites. We are all touched by such human weakness, and it can sometimes be understandable and acceptable. But certain people walk this road beyond what is necessary or useful, out of mental routine. Even in very constraining circumstances, it is always useful to take care not to go to the extreme and lose on another front: "a man in such a position is seized with anxiety when any idea occurs to him, whether such will fit in with the aims and intentions of his superiors. This paralyses his thinking to such an extent that such ideas themselves no longer dare occur." [3]

To simplify the above analysis and return to Computer Science, we can take the example of the simple GOTO instruction. It is said that "GOTO is harmful"; the idea was first stated by Dijkstra, and I imagine that many theory professors may be delighted to discuss it. They can seize an excellent opportunity to enlighten everybody with such lofty ideas on fundamental design principles. But the down-to-earth truth is rather different; here is a quote from the "Linux kernel coding style".
>Admittedly, all those goto's do look a bit ugly. However, they are usually limited to
>error paths, and are used to reduce the amount of code required to perform cleanup
>operations. Replacing these with Politically Correct if-then-else blocks would require
>duplication of cleanup code. So switching to if-then-else blocks might be good Computer
>Science theory, but using goto's is good Engineering. Since the Linux kernel is one
>designed to be used, rather than to demonstrate theory, sound engineering principles take
>priority.

The status of inventor should only be attributed to those who work on things "designed to be used rather than to demonstrate theory". It is very easy to confuse a researcher with an inventor. For me, the former is more committed to theory, while the latter tries to construct things to be used by humankind, often by exploiting the most useful part of the theory produced by researchers. For an inventor, "sound engineering principles take priority". This is because he knows very well how violent the forces of nature are, and that no effort can be spared when fighting to put them at our service. The very hard task of a great inventor is to become, as I said above, like a lightning bolt that emerges in the clouds (of lofty ideas) to make contact with the ground (of reality). Very few achieve such a thing. But this is why I prefer to celebrate the quest of inventors (engineers) rather than that of researchers (of theory) -- especially if we encourage researchers to simply play a high-level intellectual sport full of lofty ideas that risk vanishing into formulae and phrases.
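To keep the kernel's down-to-earth point concrete, here is a minimal sketch of the cleanup pattern the quote defends. It is my own illustration, not code from the kernel (which is written in C); to keep a single language in this text, the sketch is C-style C++, and the function process_file, the file and the buffer size are hypothetical.

```cpp
// Error handling with a single goto cleanup ladder: each label undoes
// exactly what was successfully acquired before the jump.
#include <cstdio>
#include <cstdlib>

int process_file(const char *path) {
    int ret = -1;                        // pessimistic default
    char *buf = nullptr;
    FILE *f = std::fopen(path, "rb");
    if (!f)
        goto out;                        // nothing acquired yet

    buf = static_cast<char *>(std::malloc(4096));
    if (!buf)
        goto out_close;                  // must close f; no buf to free

    if (std::fread(buf, 1, 4096, f) == 0)
        goto out_free;                   // sketch simplification: treat an
                                         // empty read as an error

    ret = 0;                             // success: fall through the ladder,
                                         // releasing everything in reverse
out_free:
    std::free(buf);
out_close:
    std::fclose(f);
out:
    return ret;
}

int main(int argc, char **argv) {
    return (argc > 1 && process_file(argv[1]) == 0) ? 0 : 1;
}
```

The if-then-else alternative would duplicate fclose() across branches (and the duplication grows with every additional resource), whereas the goto ladder keeps exactly one copy of each cleanup action -- precisely the engineering argument of the quote.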
I give a final idea to help one cope with the invisible pressure. Do not hope to obtain freedom as a gift from the political system for being a very good citizen. This is too idealistic in a world in which the main driving force of many is the religion of personal interest. Anyhow, there is more adventure in the rebellious act of building one's own ideas than in merely importing foreign ideas from a system. Taking the easy path never leads to honour. And it is only the development of independent ideas that can best help a man to properly (ful)fill the framework of gifts and abilities that nature has given him. Finally, a lack of independent ideas can often lead to a resignation in front of various (pseudo-)elites; one may easily end up in a state of confusion and ideological insecurity that was described long ago as follows [3]:

>Some have laid in a store of foreign ideas that are most
>imperfectly and always superficially understood; and in
>their heads, of course, such ideas are always in danger of
>evaporating into mere phrases and words. They shift these
>about and perhaps try to fit them to one another like
>dominoes; thus they compare what one has said with what
>another has said, and again a third with a fourth, and from
>all this they try to appear clever and smart. In such men we
>should look in vain for a firm and fundamental view of
>things and the world, one based on intuitive perception and
>therefore thoroughly consistent and coherent. For this
>reason, they have no decided opinion or fixed and definite
>judgement about anything.

I can conclude with an idea of V. Pareto, who was both a philosopher and an engineer (known in our field for the notion of Pareto frontier).

>Changes of regime, revolutions, and so on occur not when rulers are overthrown
>from below, but when one elite replaces another. The role of ordinary people in
>such transformation is not that of initiators or principal actors, but as
>followers and supporters of one elite or another.

References

[1] Steve Lohr, Go To: The Story of the Math Majors, Bridge Players, ... Note at page 108: "Simula grew out of a discipline known as operations research."

[2] Full citation extracted from www.cs.ox.ac.uk/activities/concurrency/courses/stracheys.html -- Strachey's Philosophy: "The strongly held common philosophy of the Department (previously Oxford University Computing Laboratory) is admirably expressed in the following quotation from Christopher Strachey, the founder of the Programming Research Group: 'It has long been my personal view that the separation of practical and theoretical work is artificial and injurious. Much of the practical work done in computing, both in software and in hardware design, is unsound and clumsy because the people who do it have not any clear understanding of the fundamental design principles of their work. Most of the abstract mathematical and theoretical work is sterile because it has no point of contact with real computing. One of the central aims of the Programming Research Group as a teaching and research group has been to set up an atmosphere in which this separation cannot happen.'"

[3] Arthur Schopenhauer, On Philosophy at the Universities, c. 1850.