Breaking The Algorithm Presentation

The document discusses algorithmic bias and its influence on news and public opinion, highlighting how algorithms prioritize content based on user behavior, which can reinforce stereotypes and misinformation. It contrasts traditional and digital media, emphasizing the role of language and algorithms in shaping perceptions and creating echo chambers. The document calls for strategies to mitigate bias, including transparency, media literacy, and international cooperation to foster fairer algorithms and protect users' rights.

Uploaded by Nikola Smakic

Breaking the Algorithm:
The Power of Language and Bias in Shaping News
Imagine you're using your favorite social media app, and it knows you pretty well, maybe a little too well.

It starts showing you posts about fitness, healthy meals, and motivational quotes because you've liked a couple of those in the past. But suddenly, you get an ad for a five-day juice cleanse.

You think, "Wait, I've never looked for anything like that!"

But the algorithm insists you might need it. It's like a friend who is just a bit too eager to give advice based on your past choices.
So, what's happening here? This is algorithmic bias in action.

The algorithm takes your past behavior and makes assumptions about your future decisions, sometimes with surprising (and a little annoying) results. Let's explore how this hidden bias can influence more than just your juice cleanse choices.
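The scenario above can be sketched as a toy recommender. This is a minimal illustration of the mechanism, not any real platform's code; the posts, tags, and scoring rule are all invented for the example.

```python
from collections import Counter

def recommend(past_likes, candidates):
    """Rank candidate posts by how many tags they share with past likes."""
    liked_tags = Counter(tag for post in past_likes for tag in post["tags"])
    # A candidate scores one point per overlap with each past liked tag;
    # tags the user never engaged with contribute nothing.
    return sorted(candidates,
                  key=lambda p: sum(liked_tags[t] for t in p["tags"]),
                  reverse=True)

past_likes = [
    {"id": "p1", "tags": ["fitness", "motivation"]},
    {"id": "p2", "tags": ["healthy-meals", "fitness"]},
]
candidates = [
    {"id": "juice-cleanse-ad", "tags": ["fitness", "healthy-meals", "detox"]},
    {"id": "news-article", "tags": ["politics"]},
]
ranked = recommend(past_likes, candidates)
# The juice-cleanse ad ranks first on tag overlap alone, even though
# the user never searched for a cleanse: past behavior drives the guess.
```

The bias here is structural: any candidate outside the user's past tags scores zero, so it can never outrank content resembling what was already liked.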
First, language is how humans communicate, express ideas, and shape perceptions of the world.

Media systems use language to frame narratives, shape public opinion, and reinforce or challenge societal biases, often influencing how issues, groups, and events are perceived.

Language reflects and reinforces societal biases, which AI systems learn from, leading to algorithmic bias in word associations, translations, and speech recognition.
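One way to see how language skew becomes algorithmic bias in word associations: association scores learned from co-occurrence simply mirror the corpus. The six-sentence corpus below is invented and deliberately skewed, as a sketch of the mechanism rather than a real training set.

```python
from collections import Counter
from itertools import combinations

# Deliberately skewed toy corpus (invented for illustration).
corpus = [
    "doctor he surgery", "doctor he clinic", "doctor she clinic",
    "nurse she ward", "nurse she shift", "nurse he ward",
]

# Count how often each pair of words appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        pair_counts[frozenset((a, b))] += 1

def association(w1, w2):
    """Co-occurrence count: a crude stand-in for learned association strength."""
    return pair_counts[frozenset((w1, w2))]

# The model "learns" the corpus's skew, nothing more:
print(association("doctor", "he"), association("doctor", "she"))  # 2 1
print(association("nurse", "she"), association("nurse", "he"))    # 2 1
```

Nothing in the code is prejudiced; the skew lives entirely in the text it counts, which is exactly how biased word associations arise in real systems trained on biased language.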
Short history

The impact of media systems on beliefs dates back to the 19th century, when newspapers were the primary source of information and were often used as political tools to shape public opinion and promote ideological views.

At that time, the press was far from neutral; publishers openly supported specific political ideologies and elite interests, often manipulating information to strengthen the positions of powerful groups.
Further

In the 20th century, with the development of radio and television, media influence expanded even further. While these media brought a homogenization of information at the national level, centralized control over content and a limited number of channels allowed governments and corporations to shape public opinion to suit their own interests.

Media powerhouses often pushed agendas favoring specific political views or economic interests, creating polarized reactions within society, especially during times of political crises or wars (Chomsky, 2002).
The difference between traditional and digital media in the context of creating public opinion lies in how information is distributed, how it interacts with the audience, the speed of publication, and the content filtering mechanisms.

Traditional media have centralized control and provide one-way communication, where a small number of editors and media owners decide which information will be conveyed to the public, while digital media enable decentralized content creation and sharing, where every user can become a producer of information and directly contribute to public discourse (Jenkins, 2006).
Digital media use algorithms that favor emotionally engaging content, creating spaces that reinforce divisions, while traditional media typically offer more balanced information and are less interactive.
This accelerated flow of information means that people have less time to critically assess or verify the accuracy of what they read, which can lead to misunderstandings, misinterpretations, and overreactions.

The speed at which information circulates creates pressure on individuals to respond immediately, which sometimes leads to premature conclusions and negatively impacts public discourse and informed decision-making.
Also, this decentralization has led to the fragmentation of the media space, where individuals increasingly reside in "echo chambers," surrounded by like-minded people and information that confirms their views, while opposing opinions are ignored or distorted.

Echo chambers and epistemic bubbles represent phenomena in which individuals avoid certain information or sources that could challenge their beliefs, thus limiting their access to more comprehensive knowledge.
This bias in content presentation can further intensify structural polarization, as users become increasingly isolated within their groups, and the content they are shown reinforces their pre-existing beliefs and views rather than exposing them to more diverse perspectives.

It can also affect non-structural polarization through content personalization. When algorithms analyze users' past behavior and tailor content to their interests, they can influence how users' opinions and views are shaped.
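The personalization loop just described can be simulated in a few lines. The dynamics here are assumed purely for illustration: each round, the feed shows the user's most-engaged topic, and each view reinforces that topic further.

```python
def simulate_feed(engagement, rounds=5):
    """Show the top topic each round; viewing it reinforces the profile."""
    history = []
    for _ in range(rounds):
        shown = max(engagement, key=engagement.get)  # feed picks the leading topic
        engagement[shown] += 1                       # exposure deepens the lead
        history.append(shown)
    return history

# Start almost balanced: a single extra interaction with "topic_a".
history = simulate_feed({"topic_a": 3, "topic_b": 2})
print(history)  # ['topic_a', 'topic_a', 'topic_a', 'topic_a', 'topic_a']
```

One early tilt compounds round after round, and "topic_b" is never shown again. Real recommender systems are vastly more complex, but this self-reinforcing structure is the core of the filter-bubble concern.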
Word "algorithm"

The word "algorithm" comes from the name of the Persian mathematician Al-Khwarizmi, who lived around the year 800. He wrote a book that explained how to solve math problems using step-by-step methods.

When his book was translated into Latin, the methods came to be called "algoritmi" (from his name). Over time, "algorithm" came to mean any set of instructions or steps for solving a problem.

So "algorithm" is named after Al-Khwarizmi.
What is Algorithmic Bias?

 - Algorithms prioritize certain content based on patterns, historical data, and user behavior.
 - Bias occurs when these patterns reinforce stereotypes, misinformation, or unequal representation.

Why Does This Matter?

 - Affects the flow of global information.
 - Influences public opinion, diplomatic relations, and international policy-making.
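A minimal sketch of the first point: a ranker that optimizes only for engagement. The weighting scheme and the two example posts are assumptions made for illustration, not any platform's actual formula.

```python
def rank_by_engagement(posts):
    # Weight comments and shares above likes, as engagement-driven feeds
    # commonly do; the exact weights (1, 3, 5) are invented here.
    def engagement(p):
        return p["likes"] + 3 * p["comments"] + 5 * p["shares"]
    return sorted(posts, key=engagement, reverse=True)

posts = [
    {"title": "Measured policy analysis", "likes": 120, "comments": 10, "shares": 5},
    {"title": "Outrage-bait hot take",    "likes": 80,  "comments": 90, "shares": 60},
]
feed = rank_by_engagement(posts)
# The outrage post wins (80 + 270 + 300 = 650 vs. 120 + 30 + 25 = 175):
# the metric rewards reaction volume, not accuracy or balance.
```

Nothing in the scoring function mentions content quality; the bias toward inflammatory material falls out of optimizing a proxy (interaction) for a goal (informing users) it only loosely tracks.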
What is Algorithmic Bias?

 1. Algorithms Prioritize Certain Content
Example: In 2020, Facebook's algorithm was found to prioritize posts with high engagement rates, which often included inflammatory or misleading content. Divisive political posts by extremist groups frequently appeared in users' feeds due to their high interaction levels.

 2. Reinforcement of Stereotypes
Example: In 2018, Amazon abandoned an AI recruiting tool after it was discovered to be biased against women. The system, trained on resumes submitted over a decade (most from men), penalized resumes with terms like "women's chess club" or graduation from all-women's colleges.
 3. Amplification of Misinformation
Example: YouTube’s recommendation algorithm has
been criticized for promoting conspiracy theories.
During the COVID-19 pandemic, videos with false
claims about vaccines or the virus's origin were often
recommended to users repeatedly engaging with
similar content.

 4. Unequal Representation
Example: A 2015 study by Carnegie Mellon researchers showed that Google's ad system displayed higher-paying job advertisements more frequently to men than to women, highlighting systemic inequities in how algorithms present opportunities.
Why Does This Matter?

 Affects the Flow of Global Information
Example: During the 2020 Nagorno-Karabakh conflict, algorithms on social media platforms like Twitter and Facebook often amplified content from influential accounts or state-backed narratives, sidelining grassroots perspectives from affected civilians.

 Influences Public Opinion
Example: In 2016, Facebook's algorithm played a key role in spreading false news stories during the U.S. presidential election. Stories like "Pope Endorses Trump" gained millions of views, misleading voters and shaping their opinions.
 Diplomatic Relations
Example: In 2021, the government of Myanmar attempted to block social media platforms like Facebook and Twitter following the military coup. The platforms were used by the public to organize protests and share information about the situation. The suppression of these platforms led to international condemnation and strained relations between Myanmar's military government and other nations, particularly Western democracies, which criticized the restrictions on free speech and access to information.
 International Policy-Making
Example: Algorithms during the European refugee crisis
(2015) amplified stories of crime allegedly linked to
refugees. These narratives influenced public sentiment,
pressuring policymakers in countries like Hungary and
Italy to adopt stricter immigration policies.
The News Ecosystem: Role of Algorithms

Key Functions of Algorithms in News Delivery
 - Personalization: Tailored news feeds.
 - Filtering: Highlighting or omitting content.
 - Ranking: Determining what is most relevant or popular.

Impact on Media Consumption
 - Echo chambers and filter bubbles.
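The three functions listed above can be composed as a tiny pipeline. The field names, articles, and blocked-source list below are invented; this is a sketch of the structure, not any platform's actual logic.

```python
def personalize(articles, interests):
    """Keep only articles matching the user's inferred interests."""
    return [a for a in articles if a["topic"] in interests]

def filter_sources(articles, blocked):
    """Omit content from blocked or demoted sources."""
    return [a for a in articles if a["source"] not in blocked]

def rank(articles):
    """Order what remains by a popularity signal (clicks here)."""
    return sorted(articles, key=lambda a: a["clicks"], reverse=True)

articles = [
    {"title": "A", "topic": "sports",   "source": "s1", "clicks": 900},
    {"title": "B", "topic": "politics", "source": "s2", "clicks": 400},
    {"title": "C", "topic": "politics", "source": "s3", "clicks": 700},
]
feed = rank(filter_sources(personalize(articles, {"politics"}), {"s2"}))
# Only "C" survives: personalization and filtering silently narrow the
# candidate pool before ranking ever runs; the user never sees A or B.
```

The order of the stages matters: by the time ranking happens, most of the editorial decisions have already been made invisibly, which is why echo chambers form without any single "biased" ranking step.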
Algorithmic Bias and International Relations

Case Studies
 - Misinformation campaigns during elections (e.g., 2016
U.S. elections).
 - Biased reporting on geopolitical conflicts (e.g., Middle
East, Ukraine).
 - Algorithmic censorship of political narratives (e.g.,
feminist or minority perspectives).
Consequences for Diplomacy
 - Distrust among nations.
 - Challenges to public diplomacy and soft power
strategies.
Strategies for Mitigating Bias

 Algorithmic Transparency
 - Demanding clarity on how platforms rank and prioritize
content.
 Promoting Media Literacy
 - Empowering users to identify bias and verify sources.
 International Collaboration
 - Agreements on ethical AI standards in media platforms.
 Policy Recommendations
 - Regulation of algorithmic practices to reduce harmful bias.
Call to Action for International Cooperation

 Fostering Global Alliances
 - Regional agreements on cross-border misinformation.
 Encouraging Academic Partnerships
 - Research on algorithmic bias and its international effects.
 Role of NGOs and Civil Society
 - Advocating for unbiased media practices.
Conclusion

Algorithmic content moderation has a significant impact on societal polarization, with corporate interests often outweighing ethical principles.

While technical solutions to reduce bias and increase transparency are possible, their implementation requires resources, regulatory frameworks, and interdisciplinary collaboration.

Responsibility for fairer algorithms lies not only with tech companies but also with the academic community, legislators, and civil society.
Conclusion

It is crucial to establish a balance between automation and human control to ensure the plurality of opinions and the protection of users' fundamental rights.

Finally, only by actively involving the public in shaping rules and ethical guidelines can a digital environment be created that serves people, not commercial interests.
Q&A

 Questions and Discussion
