Digitalization and information technology have become part of our lives. However, along with their obvious benefits, their drawbacks are becoming more and more apparent. Social media and search engines have learned to adapt to our interests and to influence our decision-making. Large companies’ intention to please users has a serious side effect: the emergence of “information bubbles” that lead to social polarization. Professor of the Russian Academy of Sciences (RAS) Konstantin Vyacheslavovich Vorontsov explains how to avoid falling into such filter bubbles.

Konstantin Vorontsov is a Doctor of Physical and Mathematical Sciences, Professor of RAS, and Professor at the Department of Mathematical Methods of Forecasting, Faculty of Computational Mathematics and Cybernetics (CMC), MSU.

— At one of the meetings of the RAS Presidium, you mentioned that information technology had, in some sense, escaped our control. How does this manifest itself?

— I am glad that you have started with this question specifically. This statement can be misinterpreted if it is taken out of context, so it requires an explanation. The meeting was devoted to artificial intelligence in the context of information security, and my report was about the use of artificial intelligence technology to ensure national information security.

Now we all use the Internet. Society perceives it as a blessing, and we will never abandon it. However, every technology has two sides. On the one hand, the Internet has given us unprecedented freedom to exchange information and express our ideas and opinions. On the other hand, every individual has gained the opportunity to spread information, which used to be the prerogative of media professionals only. Yet people do not assume responsibility for the accuracy of this information, at least for now. Any user can spread rumors, gossip, pseudoscientific and conspiracy theories, unintentional misconceptions, and deliberate, paid lies. Today, anyone with certain technical skills can be heard by millions of people in different parts of the world. Some 30 years ago, we could not even dream of this. Naturally, people have lied to each other throughout human history. Now, however, this phenomenon has acquired new forms and scales and has received the name “post-truth.”

We tend to believe the lies that are pleasing or comforting to us, the ones that fit our worldview and create psychological comfort. It is our nature, and it manifests itself clearly unless we consciously control it. We willingly share speculations and myths. They used to spread slowly, through old ladies’ gossip; now they spread almost instantly through smartphones.

The truth is always one, but there are multiple versions of a lie, and it is hard to find the former among the latter: a drop in the ocean. When we search for information on the Internet, we come across all these versions of a lie, their rebuttals, and rebuttals of the rebuttals. All of them can be convincing. Even if a lie has long been refuted and the truth has triumphed, all versions of the lie continue to exist and multiply in the information environment. Every day, people face the dilemma of what to believe. The phenomenon of the “irrefutable lie” is not new to humanity, but it has taken on a new dimension. Everything can be distorted and questioned, and any distortion can be beneficial to someone.

— How can this problem be solved?

— It is important to promote information hygiene and develop critical thinking in this information environment. When we analyze information, we should always ask ourselves how we know what we know. What is the source of this knowledge? Is this source interested in distorting the information? Is there any ideology behind it?


Now we are all forced to act as detectives in every situation, from making a purchase to trusting the authorities. However, people, of course, do not have the time, energy, or skills for this. After all, we encountered this informational reality quite recently. Not all of us yet understand how critically one should treat everything published in social media and the mass media. By the way, the Oxford Dictionary declared “post-truth” the word of the year in 2016, after the United States presidential election and the United Kingdom’s European Union membership referendum, Brexit. These events clearly showed that it is possible to persuade your opponents without any arguments or facts, through emotion alone.

The “irrefutable lie” gives rise to another interesting phenomenon, that of “information bubbles.” The term describes a situation where large social strata are locked into their prejudices or conspiracy theories. For example, a survey conducted by the Russian Public Opinion Research Center (VCIOM) reveals that about half of the Russian population believes in the moon landing conspiracy theory. This fake theory has been refuted hundreds of times, and yet it continues to occupy people’s minds, from ordinary citizens to government officials. The psychological underpinning of this widely held cognitive bias is clear: it is pleasing and comforting to believe that our geopolitical adversary is weak and deceitful. It is a dangerous prejudice. By the way, a lie today is always mixed with truth; the only questions are the proportion and how to tell one from the other.

The technology behind social media, search engines and recommendation systems, advertising networks, and media campaign planning contributes greatly to the phenomena of the “irrefutable lie” and “information bubbles,” which initially had a purely social nature. That is what I meant when I said that technology had escaped our control.

— Can we say fake news and deepfake applications are consequences of the post-truth era?

— I would rather say this technology had the bad luck to emerge in the post-truth epoch, in a political climate where it poses a threat to our society and gets out of control, so to say. It can drown that drop of truth so that it is completely lost in the ocean of various versions of lies. It is possible to create a compromising video about any person, and it will be quite difficult for that person to prove they did not do anything criminal. One can create pseudo-news using neural networks. They have been trained on billions of newsbreaks and can now generate something similar to what they already know. Ordinary people will believe that such a newsbreak is real. In fact, however, it will be a set of words strung together by a machine that knows nothing about the outside world. A neural network is capable of generating millions of fake news items per minute, and all these different lies will make it difficult to find the truth. This can be compared to a situation in the military field: when a missile is launched, multiple decoys are released to deceive the air defense system. In a similar fashion, we may see 200 false versions of the same event every day. We read them all and at times fail to distinguish truth from lies.
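To make this mechanism concrete, here is a minimal sketch, assuming the Hugging Face transformers library and the small public gpt2 model (an illustrative choice; the interview names no specific model). It shows how easily a pretrained language model produces several fluent but ungrounded “versions” of the same story from a single prompt:

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# small public `gpt2` model (illustrative choice only; not from the interview).
# It demonstrates how a language model strings plausible words together with
# no knowledge of whether the resulting "news" is true.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled continuations reproducible
generator = pipeline("text-generation", model="gpt2")

prompt = "Breaking news: scientists announced today that"
fakes = generator(
    prompt,
    max_new_tokens=40,       # length of each generated continuation
    num_return_sequences=3,  # several divergent "versions" of the same story
    do_sample=True,          # sampling, so each version differs
)
for i, fake in enumerate(fakes, 1):
    print(f"--- version {i} ---")
    print(fake["generated_text"])
```

Each run yields grammatical, superficially news-like text, which is exactly the decoy effect described above: many cheap variants surrounding one true account.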

We must understand that these risks are generated not by the technology itself but by the way people use it. People create threats to each other, and technology should not be blamed for this. Modern society is sitting on a powder keg, probably on several kegs, and keeps adding more gunpowder.

Every technology should come with a safe operating procedures guide. However, we have not yet identified all the risks that modern technology implies. The task of scientists is thus to develop remedies and clearly explain the risks to people.

Konstantin Vyacheslavovich Vorontsov. Photo by Andrey Luft / Scientific Russia

There is always a balance between good and evil in the world, and this is true of every type of technology we create: each brings both risks and opportunities. As technology grows more powerful, striking this balance will become more and more difficult. This is a challenge that humanity throws at itself.

— So we are the problem.

— Yes.

— Information technology is just a tool.

— Of course! So is an axe. One can use it to kill an old lady, as in the famous novel, or to build a house.

— How then can we talk about information security? Can we say the modern Web is no longer secure?

— Of course, everything is not that bad. There is a shield for every sword. There was a time when everyone was afraid of spam and computer viruses. Today there is anti-spam software against spam, antivirus software against viruses, and anti-plagiarism software against plagiarism. This means we will inevitably have anti-fake, anti-post-truth, and anti-propaganda software someday.

Software providers have to decide where they stand: on the light side or the dark side. It is a fundamental, uncompromising decision. I will never work on technology that generates fakes, but I would be interested in creating technology that recognizes them.

I believe that browser extensions will soon appear to help users identify the elements of a text that contain fakes or manipulative speech techniques, and to point them to alternative opinions and links to supporting evidence or refutations. People need guidance in the post-truth reality, so the emergence of such technology is inevitable.
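The flagging logic behind such an extension is easy to sketch. The snippet below is an illustration under stated assumptions: it uses the Hugging Face transformers library and the public facebook/bart-large-mnli zero-shot model (my choice for the example; the interview names no tools), scoring each sentence against a “manipulative” label and flagging the ones above a threshold, much as an extension might highlight them:

```python
# A minimal sketch of the flagging logic such an extension might use.
# Assumptions (not from the interview): the `transformers` library and the
# public zero-shot model facebook/bart-large-mnli; a real system would use
# models trained specifically on manipulation/propaganda corpora.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")
LABELS = ["manipulative or emotionally loaded", "neutral statement"]

def flag_sentences(text: str, threshold: float = 0.7) -> list[tuple[str, float]]:
    """Return sentences whose 'manipulative' score exceeds the threshold."""
    flagged = []
    for sentence in text.split(". "):  # naive sentence splitting, fine for a sketch
        result = classifier(sentence, candidate_labels=LABELS)
        score = dict(zip(result["labels"], result["scores"]))[LABELS[0]]
        if score >= threshold:
            flagged.append((sentence, round(score, 2)))
    return flagged

sample = ("Only a fool would trust the official report. "
          "The committee published its findings on Tuesday.")
for sentence, score in flag_sentences(sample):
    print(f"[{score}] {sentence}")
```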

— Can we say that the higher the level of digitalization is, the more risks appear on the Web?

— I would put it more generally. The more new types of technology humanity creates, and the more powerful they are, the more risks emerge. This relates not only to the Internet but to all technologies, from biological and medical projects to military and space ones. Each technology poses risks, yet neither progress nor development is possible without technology.

— What other problems exist in the field of modern information security?

— I cannot speak for the entire industry, for I specialize in artificial intelligence, not in information security. Some examples of the problems that these two fields share include information monitoring, automatic detection of fake news, manipulations, social polarization, and other signs of information warfare. By the way, there is an interesting observation in this area: in 2016, fake news detection became the hottest topic in the scientific literature, and the number of publications increased fiftyfold in a couple of years. Data analytics and AI specialists were actively engaged in these studies. At the same time, the number of studies of information warfare in the humanities did not change in response to this hype. That field is traditionally characterized by a very low number of quantitative studies that use big data or AI technology. There is a clear disunity between two huge scientific communities, and combining their efforts in interdisciplinary work is an important task for the near future.
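To give a sense of what the quantitative side of this research looks like, here is a deliberately simple baseline, assuming scikit-learn and a tiny invented toy dataset (real studies train on large labeled corpora): a bag-of-words classifier separating “fake” from “real” headlines.

```python
# A deliberately simple fake-news-detection baseline, of the kind used as a
# starting point in the quantitative literature. Assumptions (not from the
# interview): scikit-learn; the tiny labeled dataset below is invented for
# illustration, whereas real studies use large annotated corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Miracle cure doctors don't want you to know about",
    "Secret documents prove the election was stolen",
    "Central bank raises key interest rate by 0.25 points",
    "Parliament passes annual budget after third reading",
]
labels = ["fake", "fake", "real", "real"]  # invented toy labels

# TF-IDF features + logistic regression: a classic text-classification baseline
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(headlines, labels)

print(model.predict(["Shocking truth they are hiding from you"]))  # likely 'fake'
```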

Today the problem of information warfare is not only extremely important but also has strong technological implications. It is not confined to geopolitical or military conflicts. Companies operating in a highly competitive market are exposed to media attacks almost every day. Many ordinary people have not encountered these risks before, but we are all exposed to them now. We all need new anti-fake, anti-propaganda, and anti-post-truth technological solutions.

— What can an ordinary person unaffiliated with a government agency or a large company do to ensure their information security?

— This is a very important question. Public awareness in this field is extremely low. We are often told that it is important to use critical thinking, to think for ourselves. Alas, those are empty words. We underestimate the scope of the threat. We are trapped in our own minds. Do you know that nine in ten people consider themselves smarter than average? Similarly, each of us thinks, “I certainly use critical thinking and think for myself,” while staying in a comfortable information bubble.

We need generally accessible educational courses that explain the principles of information hygiene, cognitive biases, propaganda, and manipulation, and that provide insights into the history of scientific knowledge and how delusions spread in society. We need skills to survive in a toxic information environment, and these skills can be acquired through case-study analysis. They should become fundamental skills, taught at school.

Hundreds of cognitive biases have already been systematized: biases due to which we might misinterpret even information that is true but fragmented. The history of science offers plenty of examples illustrating them. Our schools and universities lack lessons and courses of this kind, ones that would teach us to think and broaden our outlook.

Rationality and an open mind are a natural outcome of good education, on condition that one is not lazy but motivated to seek the truth. Today, we increasingly hear that there are very few people of that kind. But why? There were enough of them during the Soviet era. This is another illustration of how propaganda works, both negative and positive. Our recent history offers many examples. In the 1990s, our attitude toward many things that had seemed unshakable was turned upside down in just a few years. Science was declared unnecessary, culture secondary, feats false, and achievements of no special value. Demotivators appeared saying, “Why are you so poor if you are so smart?” and the like. The value system of the population of a huge country began to be changed. A national inferiority complex was imposed on us, the goal of which was to depress the population and halt its creative development.

China’s experience has been more fortunate. They realized in due time that they must not instill a depressive mindset in their population, although their history has as many mistakes and tragic pages as ours. Their propaganda, however, was positive: when revisiting the legacy of the Great Helmsman, they assessed it using a 70/30 ratio of correct to incorrect decisions. Of course, the ratio is made up. However, it is much easier to look to the future, develop, and improve believing that you only need to correct some minor mistakes of the past rather than rebuild everything from scratch.

Konstantin Vyacheslavovich Vorontsov. Photo by Andrey Luft / Scientific Russia

— Let us return to the first question about the loss of control over technology. How can we regain it?

— If we approach it systemically, countermeasures should work at three levels.

At the state level, we need a large-scale system of information monitoring. Roskomnadzor constantly works to detect and block websites that distribute prohibited information related to terrorism, extremism, drugs, and dozens of other kinds of illegal activity. There is also a flow of information that is not covered by legislation because its threat is less obvious or harder to estimate, such as fakes and ideological or information warfare. We cannot ban everything. There is a need for a proper balance between security and freedom of speech, and finding it is impossible without a public consensus, an explicit or implicit agreement between the authorities and civil society.

The second level is public organizations and professional communities that deal with fact-checking. Usually, these are journalists and volunteers. They need technical support in the form of special information retrieval systems. Over a hundred platforms of this kind have appeared around the world in recent years.

The third level is the individual protection of Internet users. There is a need for built-in anti-fake or anti-post-truth systems that warn users about a potential threat: for example, that a text is manipulative, contains pseudoscientific ideas, uses news blackout or other propaganda techniques, or carries elements of a certain ideology. It is impossible and even inappropriate to prohibit people from reading such texts. However, we can give them hints and provide links.

— Is such technology being developed in Russia?

— Yes, it is. Maybe not at all levels, and maybe I do not know all the projects; I wish there were more development of that kind. We are now establishing a laboratory at the MSU Institute of Artificial Intelligence that will research and develop technology for detecting fakes, speech manipulation, public opinion polarization, and the psycho-emotional impact of texts.

We are interested in both negative and positive impacts. I will note that words such as “propaganda” and “ideology” were initially neutral and only became pejorative with time. Propaganda means the dissemination of ideas; ideology means a self-consistent picture of the world that encourages practical activity. However, one can disseminate both constructive and destructive ideas. Those who create technology must put a positive value system into it. Do they merely want to make money, or do they want to change the world for the better, that is, make it safer, smarter, and more convenient? Unfortunately, these goals are not always compatible.

Any scientist wonders, “Why am I doing this particular research?” and “Where will my developments lead in the long run?” Artificial intelligence is projected to surpass human intelligence within two or three decades. However, for some reason, the picture of the future that Raymond Kurzweil and other futurists envision is not rosy but often apocalyptic. Why do technological achievements turn into threats in their forecasts? I think the reason is that they extrapolate into the future the modern technocratic mindset of “I did it because I can,” permeated with the ideology of individualism. The new mindset of our civilization should be based on another principle: “I did it for the sake of preserving and developing human civilization.”

Human civilization has created so many potential threats that it will soon become impossible to fight them; we will have to negotiate instead. We are at an important crossroads: we will either self-destruct as a civilization or change our politics and economy, social relations, our psyche, and even biology for the better. At the very least, we should change our goal-setting.

Therefore, I consider it my duty and priority to develop technology that reduces risks and threats instead of creating new ones.

Recently, I started a blog called Civilizational Ideology on Yandex Zen, where I discuss the future of artificial intelligence and the phenomena of post-truth and propaganda from the viewpoint of a civilizational value system.

— What are the potential consequences of inaction in this area?

— Any inaction and incompetence impede development and are always inferior to work and knowledge. It’s so obvious! If someone claims the opposite, it is propaganda, negative propaganda.

— Are there studies that analyze how this current reality affects people’s opinions and decision-making?

— There are quite a few of them. Probably one of the first is the ancient Chinese treatise The Art of War by Sun Tzu, which is about 2,500 years old. One of its famous passages says, “Let them whisper in the streets of the enemy capital that the prince is robbing the people, his advisers have betrayed him, officials have drunk themselves to death, and the soldiers are hungry and barefoot. Let the inhabitants mutilate the name of their prince and pronounce it incorrectly... Let them, with a well-fed life, think that they are starving. Let the wealthy envy those who graze livestock in Wei. Kindle an inner fire not with fire, but with a word, and the stupid will begin to complain and curse their homeland. And then we will go through the open gate.”

Search engines have made a huge difference by making knowledge more accessible. Today, we do not have to go to the library, spend hours searching for information, or look for an expert to solve our problem. We can simply type a query without leaving our desk and get an answer. But is this answer true? Is it knowledge that we have found, or rather someone’s delusions, prejudices, or intentional deceit? Or maybe a toxic message from Wei?

This interview was conducted with the support of the Russian Ministry of Science and Higher Education and the Russian Academy of Sciences.