
Misinformation Has Created a New World Disorder
Our willingness to share content without thinking is exploited to spread disinformation
By Claire Wardle on September 1, 2019

https://www.scientificamerican.com/article/misinformation-has-created-a-new-world-disorder/

Credit: Wesley Allsbrook

As someone who studies the impact of misinformation on society, I often wish the young entrepreneurs of Silicon Valley who enabled communication at speed had been forced to run a 9/11 scenario with their technologies before they deployed them commercially.

One of the most iconic images from that day shows a large
clustering of New Yorkers staring upward. The power of the
photograph is that we know the horror they're witnessing. It is easy
to imagine that, today, almost everyone in that scene would be
holding a smartphone. Some would be filming their observations
and posting them to Twitter and Facebook. Powered by social
media, rumors and misinformation would be rampant. Hate-filled
posts aimed at the Muslim community would proliferate, the
speculation and outrage boosted by algorithms responding to
unprecedented levels of shares, comments and likes. Foreign
agents of disinformation would amplify the division, driving wedges
between communities and sowing chaos. Meanwhile those stranded
on the tops of the towers would be livestreaming their final
moments.

Stress testing technology in the context of the worst moments in history might have illuminated what social scientists and
propagandists have long known: that humans are wired to respond
to emotional triggers and share misinformation if it reinforces
existing beliefs and prejudices. Instead designers of the social
platforms fervently believed that connection would drive tolerance
and counteract hate. They failed to see how technology would not
change who we are fundamentally—it could only map onto existing
human characteristics.

Online misinformation has been around since the mid-1990s. But in 2016 several events made it broadly clear that darker forces had
emerged: automation, microtargeting and coordination were
fueling information campaigns designed to manipulate public
opinion at scale. Journalists in the Philippines started raising flags
as Rodrigo Duterte rose to power, buoyed by intensive Facebook
activity. This was followed by unexpected results in the Brexit
referendum in June and then the U.S. presidential election in
November—all of which sparked researchers to systematically
investigate the ways in which information was being used as a
weapon.

During the past three years the discussion around the causes of our
polluted information ecosystem has focused almost entirely on
actions taken (or not taken) by the technology companies. But this
fixation is too simplistic. A complex web of societal shifts is making
people more susceptible to misinformation and conspiracy. Trust in
institutions is falling because of political and economic upheaval,
most notably through ever-widening income inequality. The effects
of climate change are becoming more pronounced. Global
migration trends spark concern that communities will change
irrevocably. The rise of automation makes people fear for their jobs
and their privacy.
Bad actors who want to deepen existing tensions understand these
societal trends, designing content that they hope will so anger or
excite targeted users that the audience will become the messenger.
The goal is that users will use their own social capital to reinforce
and give credibility to that original message.

Most of this content is designed not to persuade people in any particular direction but to cause confusion, to overwhelm and to
undermine trust in democratic institutions from the electoral
system to journalism. And although much is being made about
preparing the U.S. electorate for the 2020 election, misleading and
conspiratorial content did not begin with the 2016 presidential
race, and it will not end after this one. As tools designed to
manipulate and amplify content become cheaper and more
accessible, it will be even easier to weaponize users as unwitting
agents of disinformation.

Credit: Jen Christiansen; Source: Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking, by Claire Wardle and Hossein Derakhshan. Council of Europe, October 2017

WEAPONIZING CONTEXT

Generally, the language used to discuss the misinformation problem is too simplistic. Effective research and interventions
require clear definitions, yet many people use the problematic
phrase “fake news.” Used by politicians around the world to attack
a free press, the term is dangerous. Recent research shows that
audiences increasingly connect it with the mainstream media. It is
often used as a catchall to describe things that are not the same,
including lies, rumors, hoaxes, misinformation, conspiracies and
propaganda, but it also papers over nuance and complexity. Much
of this content does not even masquerade as news—it appears as
memes, videos and social posts on Facebook and Instagram.

In February 2017 I created seven types of “information disorder” in an attempt to emphasize the spectrum of content being used to
pollute the information ecosystem. They included, among others,
satire, which is not intended to cause harm but still has the
potential to fool; fabricated content, which is 100 percent false and
designed to deceive and do harm; and false context, which is when
genuine content is shared with false contextual information. Later
that year technology journalist Hossein Derakhshan and I published
a report that mapped out the differentiations among disinformation,
misinformation and malinformation.

Purveyors of disinformation—content that is intentionally false and designed to cause harm—are motivated by three distinct goals: to
make money; to have political influence, either foreign or domestic;
and to cause trouble for the sake of it.

Those who spread misinformation—false content shared by a person who does not realize it is false or misleading—are driven by
sociopsychological factors. People are performing their identities
on social platforms to feel connected to others, whether the
“others” are a political party, parents who do not vaccinate their
children, activists who are concerned about climate change, or
those who belong to a certain religion, race or ethnic group.
Crucially, disinformation can turn into misinformation when people
share disinformation without realizing it is false.

We added the term “malinformation” to describe genuine information that is shared with an intent to cause harm. An
example of this is when Russian agents hacked into e-mails from
the Democratic National Committee and the Hillary Clinton
campaign and leaked certain details to the public to damage
reputations.
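
To make these distinctions concrete, here is a minimal sketch in Python (my own illustration, not part of the Wardle and Derakhshan report) that encodes the framework's two axes, falseness and intent to harm, and maps them onto the three categories:

    from dataclasses import dataclass

    @dataclass
    class InformationItem:
        # The framework's two axes: is the content false, and was it
        # shared with an intent to cause harm?
        is_false: bool
        intends_harm: bool

    def classify(item: InformationItem) -> str:
        # Map the two axes onto the categories of information disorder.
        if item.is_false and item.intends_harm:
            return "disinformation"  # false and intended to harm
        if item.is_false:
            return "misinformation"  # false, but shared in good faith
        if item.intends_harm:
            return "malinformation"  # genuine, but weaponized (e.g., leaked e-mails)
        return "information"         # genuine and benign

    # Disinformation becomes misinformation when reshared unknowingly:
    print(classify(InformationItem(is_false=True, intends_harm=True)))   # disinformation
    print(classify(InformationItem(is_false=True, intends_harm=False)))  # misinformation

The point of the encoding is the last transition: the same false content changes category, but not its potential for harm, once the intent drops away.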

Having monitored misinformation in eight elections around the world since 2016, I have observed a shift in tactics and techniques.
The most effective disinformation has always been that which has a
kernel of truth to it, and indeed most of the content being
disseminated now is not fake—it is misleading. Instead of wholly
fabricated stories, influence agents are reframing genuine content
and using hyperbolic headlines. The strategy involves connecting
genuine content with polarizing topics or people. Because bad
actors are always one step (or many steps) ahead of platform
moderation, they are relabeling emotive disinformation as satire so
that it will not get picked up by fact-checking processes. In these
efforts, context, rather than content, is being weaponized. The
result is intentional chaos.

Take, for example, the edited video of House Speaker Nancy Pelosi
that circulated this past May. It was a genuine video, but an agent
of disinformation slowed down the video and then posted that clip
to make it seem that Pelosi was slurring her words. Just as
intended, some viewers immediately began speculating that Pelosi
was drunk, and the video spread on social media. Then the
mainstream media picked it up, which undoubtedly made many
more people aware of the video than would have originally
encountered it.

Research has found that traditional reporting on misleading content can potentially cause more harm. Our brains are wired to
rely on heuristics, or mental shortcuts, to help us judge credibility.
As a result, repetition and familiarity are two of the most effective
mechanisms for ingraining misleading narratives, even when
viewers have received contextual information explaining why they
should know a narrative is not true.

Bad actors know this: In 2018 media scholar Whitney Phillips published a report for the Data & Society Research Institute that
explores how those attempting to push false and misleading
narratives use techniques to encourage reporters to cover their
narratives. Yet another recent report from the Institute for the
Future found that only 15 percent of U.S. journalists had been
trained in how to report on misinformation more responsibly. A
central challenge now for reporters and fact checkers—and anyone
with substantial reach, such as politicians and influencers—is how
to untangle and debunk falsehoods such as the Pelosi video without
giving the initial piece of content more oxygen.

MEMES: A MISINFORMATION POWERHOUSE

In January 2017 the NPR radio show This American Life interviewed a handful of Trump supporters at one of his
inaugural events called the DeploraBall. These people had been
heavily involved in using social media to advocate for the president.
Of Trump's surprising ascendance, one of the interviewees
explained: “We memed him into power.... We directed the culture.”

The word “meme” was first used by evolutionary biologist Richard Dawkins in his
1976 book, The Selfish Gene, to describe “a unit of cultural
transmission or a unit of imitation,” an idea, behavior or style that
spreads quickly throughout a culture. During the past several
decades the word has been appropriated to describe a type of
online content that is usually visual and takes on a particular
aesthetic design, combining colorful, striking images with block
text. It often refers to other cultural and media events, sometimes
explicitly but mostly implicitly.

This characteristic of implicit logic—a nod and wink to shared knowledge about an event or person—is what makes memes
impactful. Enthymemes are rhetorical devices where the argument
is made through the absence of the premise or conclusion. Often
key references (a recent news event, a statement by a political
figure, an advertising campaign or a wider cultural trend) are not
spelled out, forcing the viewer to connect the dots. This extra work
required of the viewer is a persuasive technique because it pulls an
individual into the feeling of being connected to others. If the meme
is poking fun or invoking outrage at the expense of another group,
those associations are reinforced even further.
The seemingly playful nature of these visual formats means that
memes have not been acknowledged by much of the research and
policy community as influential vehicles for disinformation,
conspiracy or hate. Yet the most effective misinformation is that
which will be shared, and memes tend to be much more shareable
than text. The entire narrative is visible in your feed; there is no
need to click on a link. A 2019 book by An Xiao Mina, Memes to
Movements, outlines how memes are changing social protests and
power dynamics, but this type of serious examination is relatively
rare.

Indeed, of the Russian-created posts and ads on Facebook related to the 2016 election, many were memes. They focused on polarizing
candidates such as Bernie Sanders, Hillary Clinton or Donald
Trump and on polarizing policies such as gun rights and
immigration. Russian efforts often targeted groups based on race
or religion, such as Black Lives Matter or Evangelical Christians.
When the Facebook archive of Russian-generated memes was
released, some of the commentary at the time centered on the lack
of sophistication of the memes and their impact. But research has
shown that when people are fearful, oversimplified narratives,
conspiratorial explanations, and messages that demonize others
become far more effective. These memes did just enough to drive
people to click the share button.

Technology platforms such as Facebook, Instagram, Twitter and Pinterest play a significant role in encouraging this human behavior
because they are designed to be performative in nature. Slowing
down to check whether content is true before sharing it is far less
compelling than reinforcing to your “audience” on these platforms
that you love or hate a certain policy. The business model for so
many of these platforms is attached to this identity performance
because it encourages you to spend more time on their sites.

Researchers are now building monitoring technologies to track memes across different social platforms. But they can investigate
only what they can access, and the data from visual posts on many
social platforms are not made available to researchers.
Additionally, techniques for studying text such as natural-language
processing are far more advanced than techniques for studying
images or videos. That means the research behind solutions being
rolled out is disproportionately skewed toward text-based tweets,
Web sites or articles published via URLs and fact-checking of
claims by politicians in speeches.
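
One common building block for such monitoring tools is perceptual hashing, which matches near-duplicate images even after resizing or recompression. A minimal sketch using the open-source Pillow and ImageHash libraries (the file names are placeholders):

    # Requires: pip install Pillow ImageHash
    from PIL import Image
    import imagehash

    # Perceptual hashes are stable under resizing, recompression and small
    # edits, so the same meme collected from two platforms hashes similarly.
    hash_a = imagehash.phash(Image.open("meme_from_platform_a.png"))
    hash_b = imagehash.phash(Image.open("meme_from_platform_b.jpg"))

    # Subtracting two hashes returns the Hamming distance between them;
    # a small distance suggests the images are near-duplicates.
    distance = hash_a - hash_b
    if distance <= 8:  # threshold chosen empirically; tune per corpus
        print(f"Likely the same meme (distance {distance})")
    else:
        print(f"Probably different images (distance {distance})")

Hash-based matching of this kind covers only exact or near-exact copies; detecting re-captioned or remixed memes requires the more advanced image-understanding techniques that, as noted above, still lag behind text analysis.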

Although plenty of blame has been placed on the technology companies—and for legitimate reasons—they are also products of
the commercial context in which they operate. No algorithmic
tweak, update to the platforms' content-moderation guidelines or
regulatory fine will alone improve our information ecosystem at the
level required.

PARTICIPATING IN THE SOLUTION

In a healthy information commons, people would still be free to express what they want—but information that is designed to
mislead, incite hatred, reinforce tribalism or cause physical harm
would not be amplified by algorithms. That means it would not be
allowed to trend on Twitter or in the YouTube content
recommender. Nor would it be chosen to appear in Facebook feeds,
Reddit searches or top Google results.

Until this amplification problem is resolved, it is precisely our willingness to share without thinking that agents of disinformation
will use as a weapon. Hence, a disordered information environment
requires that every person recognize how he or she, too, can
become a vector in the information wars and develop a set of skills
to navigate communication online as well as offline.

Currently, conversations about public awareness are often focused on media literacy, often with a paternalistic framing that the
public simply needs to be taught how to be smarter consumers of
information. Instead online users would be better taught to develop
cognitive “muscles” in emotional skepticism and trained to
withstand the onslaught of content designed to trigger base fears
and prejudices.
Credit: Jen Christiansen; Source: Information Disorder: Toward an Interdisciplinary
Framework for Research and Policymaking, by Claire Wardle and Hossein
Derakhshan. Council of Europe, October 2017

Anyone who uses Web sites that facilitate social interaction would
do well to learn how they work—and especially how algorithms
determine what users see by “prioritiz[ing] posts that spark
conversations and meaningful interactions between people,” in the
case of a January 2018 Facebook update about its rankings. I would
also recommend that everyone try to buy an advertisement on
Facebook at least once. The process of setting up a campaign helps
to drive understanding of the granularity of information available.
You can choose to target a subcategory of people as specific as
women, aged between 32 and 42, who live in the Raleigh-Durham
area of North Carolina, have preschoolers, have a graduate degree,
are Jewish and like Kamala Harris. The company even permits you
to test these ads in environments that allow you to fail privately.
These “dark ads” let organizations target posts at certain people,
but they do not sit on that organization's main page. This makes it
difficult for researchers or journalists to track what posts are being
targeted at different groups of people, which is particularly
concerning during elections.
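
To illustrate that granularity, the audience described above could be written out as a targeting specification along these lines (a simplified, hypothetical structure; the field names are illustrative and do not correspond to Facebook's actual advertising API):

    # A hypothetical targeting specification; field names are invented
    # for illustration, not taken from Facebook's real API.
    dark_ad_audience = {
        "genders": ["female"],
        "age_min": 32,
        "age_max": 42,
        "geo_locations": {"regions": ["Raleigh-Durham, North Carolina"]},
        "family_statuses": ["parents of preschoolers"],
        "education_levels": ["graduate degree"],
        "interests": ["Judaism", "Kamala Harris"],
        # A "dark" ad is delivered only to this audience and never appears
        # on the organization's public page, which is why researchers and
        # journalists struggle to audit it.
        "published_to_page": False,
    }

Each additional key narrows the audience further, which is what makes dark ads both effective for advertisers and nearly invisible to outside scrutiny.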

Facebook events are another conduit for manipulation. One of the most alarming examples of foreign interference in a U.S. election
was a protest that took place in Houston, Tex., yet was entirely
orchestrated by trolls based in Russia. They had set up two
Facebook pages that looked authentically American. One was
named “Heart of Texas” and supported secession; it created an
“event” for May 21, 2016, labeled “Stop Islamification of Texas.”
The other page, “United Muslims of America,” advertised its own
protest, entitled “Save Islamic Knowledge,” for the exact same time
and location. The result was that two groups of people came out to
protest each other, while the real creators of the protest celebrated
the success at amplifying existing tensions in Houston.

Another popular tactic of disinformation agents is dubbed “astroturfing.” The term was initially connected to people who
wrote fake reviews for products online or tried to make it appear
that a fan community was larger than it really was. Now automated
campaigns use bots or the sophisticated coordination of passionate
supporters and paid trolls, or a combination of both, to make it
appear that a person or policy has considerable grassroots support.
By making certain hashtags trend on Twitter, they hope that
particular messaging will get picked up by the professional media
and direct the amplification to bully specific people or
organizations into silence.
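
Researchers who study astroturfing look for exactly this kind of coordination. A minimal sketch of one common heuristic, flagging accounts that post identical text within a short window (the sample data and thresholds are invented for illustration):

    from collections import defaultdict

    # (account, unix_timestamp, text) triples; invented sample data.
    posts = [
        ("bot_001", 1000, "#ExamplePolicy is a disaster! Share if you agree"),
        ("bot_002", 1002, "#ExamplePolicy is a disaster! Share if you agree"),
        ("bot_003", 1003, "#ExamplePolicy is a disaster! Share if you agree"),
        ("real_user", 5400, "Mixed feelings about #ExamplePolicy, honestly"),
    ]

    WINDOW_SECONDS = 60   # identical posts this close together look coordinated
    MIN_ACCOUNTS = 3      # require several accounts before flagging

    # Group posts by their exact text, then check how tightly they cluster in time.
    by_text = defaultdict(list)
    for account, ts, text in posts:
        by_text[text].append((ts, account))

    for text, hits in by_text.items():
        if len(hits) < MIN_ACCOUNTS:
            continue
        hits.sort()
        span = hits[-1][0] - hits[0][0]
        if span <= WINDOW_SECONDS:
            accounts = [account for _, account in hits]
            print(f"Possible astroturfing: {accounts} posted identical text within {span}s")

Real detection systems combine many more signals, such as account age, follower graphs and posting cadence, but the core idea is the same: authentic enthusiasm rarely arrives in lockstep.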

Understanding how each one of us is subject to such campaigns—and might unwittingly participate in them—is a crucial first step to
fighting back against those who seek to upend a sense of shared
reality. Perhaps most important, though, accepting how vulnerable
our society is to manufactured amplification needs to be done
sensibly and calmly. Fearmongering will only fuel more conspiracy
and continue to drive down trust in quality-information sources and
institutions of democracy. There are no permanent solutions to
weaponized narratives. Instead we need to adapt to this new
normal. Building resiliency against a disordered information environment should be thought of in the same vein as putting on sunscreen: a habit that society developed over time and then adjusted as additional scientific research became available.

This article was originally published with the title "A New World
Disorder" in Scientific American 321, 3, 88-93 (September 2019)
doi:10.1038/scientificamerican0919-88
