What is a supercomputer, and can it be used to create a full-scale model of the human brain? Why is Russia behind other countries in the supercomputer race, and which industries are in dire need of computerization? We discussed these and many other topics with a renowned scientist, RAS Academician Igor KALYAEV.

INFORMATION. Igor Anatolievich Kalyaev ─ RAS Academician, Honored Scientist of the Russian Federation, laureate of the State Award of the Russian Federation and of awards of the RF Government and RAS, Doctor of Technical Sciences, Science Area Leader at the Southern Federal University, Chairman of the Council for the Priority of Science and Technology Development of Russia “Transition to digital intelligent manufacturing technologies, robotic systems, new materials and methods of design, and creation of big data processing, machine learning, and artificial intelligence systems.”

─ Is it true that science still lacks an accurate definition of the term supercomputer?

─ Yes. Unfortunately, there is no clear definition. There are, for instance, some funny definitions, such as: a supercomputer is a computer that weighs over a ton, or a computation system that costs over a million dollars. But seriously, a supercomputer is, first and foremost, a machine that appears on the so-called TOP500 list of the world's most powerful supercomputers. This ranking is updated twice a year. It is a pity that there are just two Russian supercomputers on the list at the present time.

Today, the world’s TOP500 list features two Russian supercomputers (Sberbank’s Christofari and Lomonosov-2 of Moscow State University). Russia ranks 18th in the world in terms of supercomputer capacity. “We are practically on the periphery of the supercomputer world,” emphasized Igor Kalyaev.

─ What kind of characteristics does a machine need for it to be called a supercomputer? 

─ In technical terms, there is no fundamental difference between supercomputers and ordinary computers. To a degree, all computers use the information processing principles defined by the mathematician John von Neumann as early as the mid-20th century, which require a processing unit (processor) and a memory that stores both data and a data processing program. It is just that modern supercomputers contain a huge number (tens and even hundreds of thousands) of such processors, which can process information in parallel (simultaneously): this is what makes them so fast. Besides, the so-called Moore's law still applies, whereby the number of transistors on a chip doubles approximately every two years, which is, to a degree, proportional to the increase in processor power. All this, among other things, drives the rapid growth of supercomputer performance.
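
To make the compounding concrete, here is a minimal Python sketch of the doubling rule; the starting year and transistor count are hypothetical, chosen only to illustrate the growth rate.

```python
# A minimal sketch of the doubling rule behind Moore's law.
# The starting figures are illustrative assumptions, not actual chip data.

def transistors(start_count: float, start_year: int, year: int,
                doubling_period: float = 2.0) -> float:
    """Project a transistor count forward, doubling every two years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

# Hypothetical chip with 1 billion transistors in 2010:
for year in (2010, 2014, 2020):
    print(year, f"{transistors(1e9, 2010, year):.1e}")
# 2010 1.0e+09
# 2014 4.0e+09
# 2020 3.2e+10
```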

FLOPS is a measure of computer performance showing the number of floating-point operations per second that a computer can perform. A petaFLOPS is 10¹⁵ FLOPS; an exaFLOPS is 10¹⁸ FLOPS.

Example: The most powerful supercomputer in 2018 was the Chinese Sunway, which could perform 125 petaFLOPS (i.e., 125×10¹⁵ floating-point operations per second); in 2019, it was the American Summit with 200 petaFLOPS (i.e., 200,000 trillion operations per second); and just a year later, in 2020, it was the Japanese Fugaku, with a performance of 537 petaFLOPS. Thus, supercomputer performance practically doubles every year. For comparison: the performance of the most powerful Russian supercomputer, Christofari, is only 9 petaFLOPS.
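
For readers who want to check the arithmetic, here is a tiny Python sketch using only the figures cited above:

```python
# Unit arithmetic behind the figures above (values as cited in the text).

PETA = 10 ** 15  # 1 petaFLOPS = 10^15 floating-point operations per second

summit = 200 * PETA       # 2019 leader
fugaku = 537 * PETA       # 2020 leader
christofari = 9 * PETA    # fastest Russian machine cited

# 200 petaFLOPS expressed in trillions (10^12) of operations per second:
print(f"{summit / 10**12:,.0f} trillion ops/s")   # -> 200,000 trillion ops/s

# The gap between Fugaku and Christofari:
print(f"{fugaku / christofari:.0f}x")             # -> 60x
```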

─ Quite a gap... 

─ Yes, and, sadly, it is still growing. For comparison, in the 2000s we were not so far behind other countries, having up to 12 supercomputers on the TOP500 list at a time. Moreover, in 2009, the Lomonosov supercomputer installed at Moscow State University ranked 12th on the list. But since 2018, there have been no more than three Russian computers on the TOP500. The supercomputer world does not stand still; technologies are developing at breathtaking speed, so the Russian supercomputers that used to be on the TOP500 list have long since become obsolete and dropped off it, while we are not creating any new ones.

There is a supercomputer race going on in the world today. There is, of course, an element of image in it, a question of national prestige: every self-respecting country is trying to make it onto the TOP500. The latest list features 212 Chinese systems, 113 American ones, and 34 from Japan, but just two from Russia (in the 40th and 156th places, respectively). We are behind such countries as Brazil and Saudi Arabia; even Morocco recently launched a 5 petaFLOPS computer. Moreover, in terms of supercomputer performance per researcher, our numbers are 30 to 40 times lower than those of the world's leading countries.

But, of course, it is not worth participating in the supercomputer race for the sake of prestige alone. Furthermore, the following circumstance should be taken into account. The performance of supercomputers on the TOP500 list is measured and compared using the so-called Linpack test. Many supercomputers on the list do well only on this test, while their actual performance drops on many other problems of practical importance. So, essentially, the competition is about building supercomputers that perform well on that test and earn a high TOP500 rank, rather than machines able to complete practically important tasks.

It is important to point out that creating a supercomputer that is equally efficient across different tasks is hardly possible. Every supercomputer has a specific architecture, which determines the class of tasks on which it shows its maximum performance. Therefore, solving different problems efficiently requires not one powerful supercomputer but a heterogeneous supercomputer infrastructure comprising supercomputers of various classes and architectures.

─ You mean a common computation system? 

─ Exactly. This should be a common computation system of the country (just as there is a Common Energy System of the country) or, in other words, a national supercomputer infrastructure, i.e., a system of supercomputer centers of various levels and specializations, combined into a common computation resource by high-speed communication channels. Such an infrastructure should enable a Russian user, wherever they are within the country, to perform computation-intensive calculations using any of the heterogeneous computation resources that are part of it. That would require a special intelligent control facility within the infrastructure, which could determine which supercomputer in the system would complete a given task most efficiently and, accordingly, distribute user tasks among the available computation resources based on their specialization, workload, etc. This idea was incorporated in the Concept for the Creation and Development of the National Supercomputer Infrastructure, which was developed by leading scientists and specialists of our country and then reviewed and approved by the Council for the Priority of Science and Technology Development of Russia, of which I am Chairman, back in May 2019. The document has been passed from one government agency to another ever since, with zero result, while the world advances in great strides and we go around in circles. When we started developing the concept in early 2019, the world's most powerful computer was the Chinese Sunway, with a performance of about 100 petaFLOPS; now it is the Japanese Fugaku, whose performance is as high as 500 petaFLOPS.

─ What prevents the practical implementation of this concept?

─ Primarily, a lack of understanding of the role of supercomputer technologies in the modern world among our high-ranking bureaucrats. Implementing such a supercomputer infrastructure requires a special government program, which would certainly take a lot of material and financial investment. But the first question a bureaucrat asks is: “How profitable is this supercomputer infrastructure going to be?” In and of itself, it is not going to generate any direct profit, of course, but it would enable Russian scientists to produce new results in the sci-tech sector, including world-class results, while Russian manufacturers would gain a competitive advantage through reduced design cycle times for their products, improvement of their qualitative characteristics, etc. Unfortunately, not many people realize that by investing in a national supercomputer infrastructure, the country would create a basis for increasing the knowledge intensity and competitiveness of Russian products, which would eventually bring great technological and financial benefits to the country.

Igor Kalyaev is a well-known specialist in multiprocessor computation and control systems. The scientific and teaching school led by the scientist is recognized as one of the best in Russia. The results of I. A. Kalyaev’s activities in science are reflected in more than 10 monographs and 380 scientific publications and inventions. He supervised and was directly involved in over 100 research and development projects, the results of which were implemented and are being used by various companies in Russia.

─ What should be done to catch up on supercomputer development? 

─ Most regrettably, we have to admit that we cannot create a fully Russian-made supercomputer. The primary reason is that Russia does not have process lines to produce a modern microelectronic element base comparable to foreign counterparts. At present, the most advanced production lines for electronic components in Russia follow 90-65 nm technological standards, while other countries already have production lines with 10-7 nm standards (Fugaku was built on the basis of such microchips). Unfortunately, after the collapse of the USSR, a short-sighted decision was made to buy everything in the West with petrodollars, including microchips. That killed the Russian microelectronic industry, which is the foundation of all computer technologies. Therefore, we now have to buy the element base for supercomputers abroad (or have it produced there), which, with the sanctions and the sharp rise of the dollar against the ruble, is becoming increasingly difficult. On the other hand, we still have strong science and engineering schools able to develop and create supercomputers with characteristics on a par with foreign counterparts. Examples include the Lomonosov supercomputers created by the Russian company T-Platforms, which have been operating successfully at Lomonosov Moscow State University.

Sberbank’s Christofari was designed in cooperation with the American company Nvidia. The machine occupies a huge room and consists of dozens of accelerator modules. Photo source: https://hi-tech.mail.ru

Supercomputers are needed everywhere

─ Which branches of science use supercomputers most often? 

─ Today, supercomputers are used everywhere. Firstly, modern science itself has become computation-based in many ways, i.e., discoveries are no longer made at the tip of a pen, as used to be the case, but on the screen of a supercomputer ─ after processing gigantic amounts of data, searching for patterns in the data, etc. People, unlike supercomputers, are simply physically unable to process such amounts of information.

Secondly, there is industry. About half of the TOP500 supercomputers are used in the industrial sector (though not in Russia), which has a major practical effect. Still, Russia has some good examples of supercomputers being used in industry. For instance, at the Vuzpromexpo exhibition held in Moscow in December 2020, Peter the Great St. Petersburg Polytechnic University presented the Kama-1, a car designed entirely with digital twin technology and supercomputer engineering. A supercomputer allowed them to design that car in just about a year and a half. Today, supercomputers are being actively used by the Central Aerohydrodynamic Institute in designing a new Russian passenger plane, the MS-21. Rosatom is using supercomputers to design new materials, including radiation-resistant ones. The examples are numerous. According to specialists, over 700 design and engineering tasks facing Russian industry require supercomputer calculations to various degrees.

There is another example of effective application of supercomputers, albeit in a less serious field ─ filmmaking. The British-American film The Jungle Book (2016) was created with computer graphics produced on a supercomputer. Note that the movie cost around US$175 million and made nearly US$1 billion! That is, this one movie effectively paid for the creation and maintenance of a rather costly high-performance supercomputer for ten years ahead.

In 2017, The Jungle Book, whose visual effects were created on a supercomputer, won the Oscar for Best Visual Effects. The movie's worldwide box office totaled $966,550,600. Photo: https://www.kinopoisk.ru

Another important example: as you know, in the midst of the pandemic, there is a race going on in the pharmaceutical industry across the world. It will be won by whoever can perform so-called molecular docking, the basis of designing new medications, the fastest. This process requires enumerating an enormous number of candidate variants, which can only be done with a supercomputer. And, of course, the global race will be won by the contestant with the fastest molecular docking tool, that is, a supercomputer. So supercomputers can be used in many very different areas, including ones that bring large profits.

─ We’ve been discussing the rapid development of supercomputer technologies. But is the mathematical apparatus changing as well?

─ This is a really relevant question, since supercomputers are multiprocessor computation machines, and the more processors are involved in a calculation, the harder it is to coordinate them. When a supercomputer works on applied tasks, the following effect often appears: performance increases until a certain number of processors are involved in the task, but then it starts declining, because organizing the concerted operation of so many processors takes more time than the useful computation itself. That is why the critical issue with modern supercomputers, in which hundreds of thousands of processors work simultaneously, is precisely how to organize the smooth and effective operation of so many processors on a common task. Special mathematical approaches have to be created to solve this problem. It should be noted that our country holds very strong positions in this regard. For example, under the direction of Academician Boris Chetverushkin, original methods are being developed that allow computation processes to be parallelized, enabling the simultaneous use of a large number of processor nodes. Fortunately, as regards the mathematical apparatus, we are keeping pace with other countries, which is more than can be said about supercomputer software, which is mostly foreign-made at the moment.
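
The effect described here, speedup rising with processor count up to a peak and then falling, can be illustrated with a toy model: Amdahl's law extended with a coordination-overhead term that grows with the number of processors. The constants below are illustrative assumptions, not measurements, and the sketch does not represent the actual methods developed by Academician Chetverushkin's team.

```python
# Toy model of parallel speedup: a serial fraction s, a perfectly
# parallel fraction (1 - s), plus a coordination overhead that grows
# with the number of processors N. All constants are illustrative.

def speedup(n_procs: int, serial_frac: float = 0.001,
            overhead_per_proc: float = 1e-6) -> float:
    parallel_time = serial_frac + (1 - serial_frac) / n_procs
    coordination = overhead_per_proc * n_procs  # grows with N
    return 1.0 / (parallel_time + coordination)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"{n:>7} processors -> speedup {speedup(n):8.1f}")
# Speedup rises at first, peaks (here around 1,000 processors), then
# declines as coordination costs outweigh the gain from adding more.
```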

─ How expensive is it to maintain a supercomputer?

─ It is true, maintaining a standalone supercomputer, let alone a more complex supercomputer infrastructure, is very expensive. In terms of electricity consumption alone, the most powerful supercomputers use dozens of megawatts. Great efforts are now being made to achieve “green” computation, i.e., to reduce the energy cost per computation operation. Today, the most advanced supercomputers in this respect deliver around 20 gigaFLOPS per watt consumed.

Of course, technologies are advancing fast, and I think that exaFLOPS supercomputers will appear in the near future, consuming just a few dozen megawatts. But even that is a lot.
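
A rough sanity check of those power figures, assuming the roughly 20 gigaFLOPS-per-watt efficiency mentioned above:

```python
# Rough power estimate for an exaFLOPS (10^18 ops/s) machine at the
# energy efficiency cited above (~20 gigaFLOPS per watt).

EXA = 10 ** 18
GIGA = 10 ** 9

flops_per_watt = 20 * GIGA
power_watts = EXA / flops_per_watt
print(f"{power_watts / 10**6:.0f} MW")  # -> 50 MW, i.e., a few dozen megawatts
```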

Sport and science

Apart from science, Igor Kalyaev is also a mountain climber and alpine skier (he has pursued this hobby for over 50 years). Over the last ten years, the scientist has climbed Kilimanjaro, Ararat, Toubkal (the highest peak of the Atlas Mountains), Munku-Sardyk (the highest point of the Sayan Mountains), and others.

“Undoubtedly, being in shape is a key to intense activity in the sci-tech sector. I can't do without it; it's like a drug. I try to keep myself in shape and go to various mountain ski resorts several times a year,” the scientist says.

Igor Kalyaev first started skiing in 1972. In 1980, he completed the Alpine Ski Instructor School (the first such training camp in the USSR). For a number of years, he worked as a ski instructor in the North Caucasus during his winter vacations.

Igor Kalyaev on top of Mount Kilimanjaro, the tallest stratovolcano in Africa.

Kalyaev with V. E. Fortov, ex-President of the Russian Academy of Sciences, with Mount Everest in the background.

Miniaturization of devices

─ Historically, we can see a trend toward the miniaturization of all devices. Our ordinary personal computers and telephones used to be much bigger than they are now. Can we expect the dimensions of supercomputers to become much smaller someday?

─ I mentioned Moore's law earlier, whereby the number of transistors on a chip doubles every two years. This is what makes increasingly compact computers possible. But there is another problem: the density of elements on a processor die cannot increase forever, because of overheating. This is where the laws of physics come into play: the more computations you perform in a given volume, the more heat you have to remove from that volume. Personal computers use ordinary fans to remove heat. But in supercomputers, where tens of thousands of processors have to work at the same time, air cooling is no longer enough. Until recently, the solution was water cooling, with heat removed by water running through a special circuit covering the microchips. But now even water cooling is becoming insufficient for supercomputers. We are switching to so-called immersion cooling, in which a supercomputer board is simply immersed in a special liquid similar to transformer oil. This increases heat removal almost by an order of magnitude, which in turn makes it possible to increase board density and thus reduce the dimensions of the machine while keeping its performance unchanged. It is a most complex technology, but without it, creating promising supercomputers in the near future would be nearly impossible.

The first programmable computer in continental Europe (MESM), created by Sergey Lebedev. Source: https://hi-tech.mail.ru/news/den-informatiki/

How to live in the post-silicon era 

─ What is the difference between a supercomputer and a quantum computer? 

─ To create classical supercomputers, we use a silicon element base that is now nearing its technological and physical limits. According to specialists, the minimum technological standard for a silicon element base is 3 nanometers, and microchips built to the 7-10 nanometer standard already exist. The same applies to the switching frequency of silicon logic elements. The maximum switching frequency for a silicon logic element is about 11 GHz. Modern mass-produced microchips already run at frequencies of 5 to 6 GHz, and there is an experimental element base operating at around 8-9 GHz. That is why we have to start thinking now about how to live in the post-silicon era.

A qubit (quantum bit) is a quantum digit, the basic unit of information stored in a quantum computer.

There are several approaches being actively developed across the world: firstly, quantum computers, and secondly, photonic (or optical) computers. I am still somewhat skeptical about quantum computers. I am certain that in the coming years we are unlikely to create quantum computers capable of performing real, practically important tasks better than classical computers. A quantum computer is based on so-called qubits, which may be in a state of superposition, i.e., in any state from zero to one, while in an ordinary element base every element can only be in one of two states ─ zero or one. The quantum states of qubits are unstable and therefore very susceptible to various noises and external impacts, which leads to low accuracy in the computations made by a quantum computer.

According to specialists, to solve problems of practical importance, a quantum computer has to have at least 500 to 1,000 such logical qubits. Moreover, to ensure acceptable reliability of computations, another ten or even a hundred additional qubits would have to be added to every such logical qubit to provide quantum error correction. (For comparison: today, a quantum computer has 72 qubits at most.) But even with a huge number of correcting qubits, there is no guarantee of receiving an accurate solution to a problem. It is clear that for problems in engineering simulation and the design of important products, quantum computers are unlikely to be useful in the near future, because what we need in such cases is a guaranteed accurate solution. However, for simulating processes that do not require an exact solution, e.g., in research on various physical or chemical processes, a quantum computer could produce a certain effect. At present, the creation of so-called quantum simulators is an actively developing area across the world.
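
The scale of that overhead is easy to make concrete. Here is a back-of-the-envelope sketch using the figures just quoted:

```python
# Back-of-the-envelope qubit count for error-corrected quantum computing,
# using the figures quoted above.

logical_qubits = 1_000      # needed for practically important problems
overhead = (10, 100)        # extra physical qubits per logical qubit

low = logical_qubits * (1 + overhead[0])
high = logical_qubits * (1 + overhead[1])
print(f"{low:,} to {high:,} physical qubits required")  # -> 11,000 to 101,000
# Compare with the ~72 physical qubits of today's largest machines.
```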

Quite a lot of ongoing research all over the world concerns the creation of quantum computers. Russia also has strong teams of scientists working in this area, e.g., the teams led by Professor S. P. Kulik at Moscow State University and Professor A. V. Andriyash at the Dukhov Automatics Research Institute (VNIIA).

As I mentioned before, photonic computers are another promising avenue for creating supercomputers in the post-silicon era. The main idea behind this approach is to replace binary silicon logic elements, which are the core of modern computer equipment, with functionally identical logic elements built on different physical principles, namely those of photonics and optoelectronics. Research shows that the switching frequency of such a photonic element base would be 2 to 3 times higher than that of a silicon base at the same energy cost. And the switching frequency of logic elements determines the performance of the processor and thus the performance of a supercomputer built from such processors. Another important advantage of this approach is that it allows software designed for classical silicon computers to be used on photonic computers without dramatic changes, which is not possible with quantum computers. That is why I think photonic supercomputers are the more realistic prospect. By the way, there is significant groundwork on photonic computers in this country, produced by the team of Professor S. A. Stepanenko in Sarov.

Google quantum computer. Photo: https://www.cnet.com

Is it possible to create a “hardware” equivalent of a human brain?  

─ Our brain is often compared to a computer, though this may be a rough analogy. Nevertheless, the brain solves most complex problems while consuming little energy, unlike a supercomputer, which devours megawatts. Does a supercomputer copy the operating principles of the brain in some respects, or is it an entirely different thing?

─ Absolutely not. Any computer is an ordinary machine that does not think but follows the rigid program a programmer has put into it. Therefore, there is, of course, no equivalence between a computer and the human brain, for the brain can itself build an algorithm (a program) to solve a problem.

In theory, a computer could simulate the operation of the human brain. Indeed, we know how an individual brain neuron works and how neurons are connected to each other, so we could, in principle, create a “hardware” equivalent of the human brain. I would say that simulating 100% of brain activity in real time would take a supercomputer with a performance of at least 10²¹ FLOPS. That is, to simulate the human brain, we would have to increase the performance of supercomputers by three to four orders of magnitude.

But even if we succeeded in creating the required 10²¹ FLOPS supercomputer, its dimensions, given modern technologies, would be equivalent to a building measuring 300 square meters at the base and 50 meters tall, with an energy consumption of around 15 GW, which is comparable to three hydroelectric plants the size of the Sayano-Shushenskaya Dam. As you rightly said, our brain, on the other hand, takes up as little as 0.0015 m³ and consumes 15 to 20 watts, which corresponds to one light bulb. Feel the difference, as they say in Odessa (laughing). That is, trying to create an equivalent of the human brain using modern supercomputers would lead us nowhere.
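
The orders-of-magnitude comparisons are straightforward to verify. A short sketch using the 10²¹ FLOPS estimate and Fugaku's 537 petaFLOPS:

```python
# The gap between today's fastest machine and the estimated requirement
# for real-time whole-brain simulation (figures from the interview).

import math

brain_sim = 1e21        # estimated requirement, FLOPS
fugaku = 537e15         # fastest machine cited, FLOPS (537 petaFLOPS)

gap = brain_sim / fugaku
print(f"{gap:,.0f}x, i.e., ~{math.log10(gap):.1f} orders of magnitude")
# -> 1,862x, i.e., ~3.3 orders of magnitude

# Power comparison: ~15 GW for the machine vs. ~20 W for the brain.
print(f"{15e9 / 20:,.0f}x more power")  # -> 750,000,000x more power
```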

All the achievements of artificial intelligence that are being discussed everywhere were produced not because computers are getting smarter, but simply because they are getting faster, which allows them to search through more options more moves ahead. Therefore, in my view, everything that is now called artificial intelligence has nothing to do with artificial intelligence. These are just ordinary computer programs created by people, and a program is merely the implementation of an algorithm, i.e., a rigid sequence of actions.

Therefore, in my view, the right thing to do would be to call these things intelligent, or rather pseudo-intelligent, computer technologies rather than artificial intelligence. I believe that true artificial intelligence would be a system capable of building an algorithm to solve the problem at hand. When the algorithm has already been created and programmed by somebody else, as happens in modern computers, what does that have to do with artificial intelligence?

Will we ever be able to create a fully-fledged artificial intelligence comparable to the human brain? I don't think it's likely. But time will tell.

This interview was conducted with the support from the Ministry of Science and Higher Education of the Russian Federation and the Russian Academy of Sciences.

Interviewer: Yanina Khuzhina. Photos of Kilimanjaro and Everest were provided by Igor Kalyaev.