Confirmed: We Are Very Bad At Detecting Deepfakes


Do you know the term deepfake? Could you define it and explain how deepfakes affect us? Do you think you would be able to tell this type of video apart from a real one? Do you consider deepfakes a danger to society? Could the situation really get worse?

All of these questions will be answered throughout this article. We start by explaining what deepfakes are, then present the scientific evidence on our ability to identify them and address the impact they may have on society. Finally, we discuss future prospects and the options available to us in the present.

What are deepfakes?

The term deepfake is a portmanteau that brings together two concepts: fake, referring to falsification, and deep, referring to deep learning. The aim is to capture the idea of false content created through the continuous, deep learning of artificial intelligence (AI).

This artificial intelligence technique allows (fake) videos to be edited so that they look real. As a result, people who watch these videos (although they could also be audio files or still images) believe they are seeing a certain person talking about a certain topic or behaving in ways that never actually occurred.

We are bad at detecting deepfakes

A recent study carried out in the United Kingdom and published in Royal Society Open Science examined people’s ability to identify deepfakes. The results are alarming: they indicate that most people have trouble detecting them.


They found that even when people were warned that some of the videos they would see might be fake, they still had difficulty identifying them. These results suggest that people tend to overestimate their detection ability; that is, they believe they are more capable of identifying deepfakes than they really are.

Why is it so difficult to realize that the content we are seeing is not real? Several factors may explain this, but it cannot be denied that the underlying algorithms have advanced enormously. The content is now so realistic that it can be genuinely difficult for the human eye to spot the forgery.

The impact of deepfakes

Given how quickly AI advances, we need to become aware of the serious impact this type of content can have on our daily lives, especially because we have a hard time identifying it. Human beings tend to believe what they see and/or hear, particularly when it agrees with their system of values, beliefs and expectations.

This difficulty in distrusting what we see can have consequences in our daily lives. One of the most prominent areas in this sense is misinformation, along with propaganda. If we combine the evolution of these technologies, which generate false content that is difficult to identify, with the widespread use of social networks, we get an environment that is very easy to manipulate.


Propaganda is the use of information, often manipulated, to influence people’s opinions or decisions. Although this technique is not new at all, advances in AI make it much easier to falsify information and, of course, to influence people.

To date, cases have already been observed in which deepfakes were used to attack people, both private individuals and public figures. As if this were not enough, these creations have also been used to attack governments and countries. It is essential to grasp the seriousness of the matter, since such situations could trigger serious conflicts.

The worst is yet to come

One consequence of deepfakes is the difficulty some people now have in trusting the credibility of anything they see. Since it is so hard for us to tell real content from fake, a general feeling of mistrust and confusion easily takes hold.

This situation is serious because artificial intelligence keeps learning and its tools keep growing more sophisticated, while at the same time the barriers to accessing this type of technology are shrinking. It seems only a matter of time before anyone can have this kind of technology at their fingertips.

If that happens, the number of deepfake videos shared with the aim of harming other people or entities could grow exponentially. The tools to detect and combat fake videos do not seem to be developing at the same speed as the technologies that create them, and that is a problem.


What can we do?

There are several fronts on which we need to act. On the one hand, we need resources capable of detecting this type of content. The platforms on which deepfakes are published also face a difficult task in identifying and sanctioning false content.

On the other hand, it is crucial to invest in public education in order to foster critical thinking. People should be able to question the credibility of the content they watch, consume and share. Along these lines, it helps, for example, to check the source of the content and to keep in mind that, although it may look real, there is a chance that it is not.

Finally, governments likewise need to recognize the seriousness of the situation and apply measures to regulate the use and publication of this type of false content. Boundaries need to be set, as these technologies have already been used with harmful and malicious intent.