Navigating The Digital Landscape: Combating Misinformation And Disinformation


May 20, 2023 | Syed Waqar Amjad
“A lie can travel halfway around the world while the truth is putting on its shoes” is a well-known quote attributed to Mark Twain, yet there is always room to doubt that Twain ever said it, which ironically proves the point: at times we are misinformed, and at other times disinformed. These days we hear a great deal about the two terms “misinformation” and “disinformation”, and what we need to know is the difference between them. Both describe forms of inaccurate information, but with an important distinction: misinformation is false information shared without the intent to deceive, whereas disinformation is wrong information deliberately created and shared to fulfill a certain objective, whether that is publicity or channel ratings.

That misinformation and disinformation have the potential to evoke social unrest is not merely an assertion; it is a fact we have witnessed several times in the recent past. For instance, the communal violence in Sri Lanka in early 2018 began in the district of Ampara in the Eastern Province, where anti-Muslim hate was whipped up through posts on social media platforms such as Facebook, framing Sinhalese Buddhists as being “under threat” from Islam and Muslims and “consequently in need of urgent and if necessary violent pushback.” Facebook’s failure to remove hate speech and disinformation during the 2018 riots helped spur the deadly violence that erupted in Sri Lanka. Facebook has since apologized for its role in the anti-Muslim riots; according to an investigation, the posts in question contained incendiary content that contributed to the violence.

Beyond this, there is a newer threat: deep fakes, a product of artificial intelligence. Deep fakes are AI-generated, hyper-realistic but fabricated videos purporting to show particular individuals saying or doing whatever the creator of the video intends to display. The development of such advanced AI technology poses a new threat to the authenticity of information in the digital world. With the rise of AI-powered tools, the risk of disinformation has increased significantly, raising concerns about the technology’s potential misuse to spread false and misleading information, fuel political propaganda, and manipulate public opinion.

This technology blurs the line between what is real and what is fake, making it difficult for people to distinguish authentic from fabricated information. Experts warn that it could drive a further rise in the spread of disinformation, with serious consequences for society.

Deep fakes have already attained notoriety in the United States. For instance, a deep fake video featuring the likeness of Facebook CEO Mark Zuckerberg declaring that “whoever controls the data, controls the future” surfaced on the Internet on the eve of US Congressional hearings on artificial intelligence that he was scheduled to attend. Zuckerberg never made any such remarks, yet the fabricated statement circulated widely and served to defame him.

Internet content is designed to catch your attention with provocative headlines that seek to foment outrage. Creating disinformation is easy, and anyone can invent any fact they want. Misinformation merchants have only one objective: to drive user engagement. The more people are enraged, engaged, and divided, the more content they share, which is ultimately good for the platforms’ business models.

More often than not, our desire for quick answers overpowers our desire to be certain of the validity of content, so we rarely evaluate the information we are exposed to critically before reacting to it. Instead of being skeptical about content that makes us angry, especially in a politically charged or polarized environment, we end up facilitating the spread of propaganda. In such circumstances, searching for criticism of suspicious information and tracing the original source of a report can help slow down a lie, giving the truth more time to put on its shoes; but for want of time, skill or will, this examination rarely happens.

In the contemporary world, we are all vulnerable to falling for misinformation or disinformation in different ways. When people in our social circle who share our ideology or narrative pass along a particular piece of content, we are much more likely to believe it and pass it on in kind. This reflects homophily, the tendency people have to seek out or be attracted to those who are similar to themselves. Homophily itself is not a bad thing; it is the basis on which human beings build trust and complex societies. When it comes to misinformation, however, homophily can impel us to place near-blind confidence in the source.

The spread of misinformation and disinformation has serious consequences for society, including a decline in public trust and an increase in social and political polarization. It is therefore imperative to take measures to combat this growing problem.

The question, as always, is what can be done. Shall we leave ourselves at the mercy of artificial intelligence, big data giants, or state and anti-state propaganda, all of which seek to manipulate our emotions and blur the boundaries of our epistemic worlds? Or should we take precautionary measures to protect ourselves from the dual menace of misinformation and disinformation by reconfirming the information we are exposed to through different reputable sources?

Most of the time, people share information after reading only the headline rather than the complete article. The same is true of video content. Social media is swamped with sensational videos, and one must be aware that sensational content is usually designed for propaganda purposes, or portrays falsehood in a manner that makes it seem like truth. By promoting media literacy, fostering critical thinking skills, and encouraging responsible use of social media platforms through our educational system, we can create a more informed and responsible digital landscape that serves the greater good of society. Greater digital and media literacy may also help curb some of the ills that will inevitably result from the rapid rise of artificial intelligence.