
2025

ETHICAL USE OF ARTIFICIAL INTELLIGENCE

SECTOR: PROGRAMMING AND TECHNOLOGY

UKHONAYE POTYO | COMPUTER APPLICATION TECHNOLOGY

BERLIN HIGH SCHOOL
HEADINGS
TABLE OF CONTENTS
TABLE OF FIGURES
INTRODUCTION
TASK DEFINITION
FOCUS QUESTION
DISCUSSION AND ANALYSIS
FINDINGS
CONCLUSION
BIBLIOGRAPHY AND APPENDICES

INTRODUCTION

The ethical use of AI systems in programming and technology is a crucial topic in today's rapidly advancing tech landscape. As AI continues to transform industries and society, ensuring its responsible application is essential to avoid unintended consequences, bias, and harm. The core principles of ethical AI include fairness, transparency, accountability, privacy, inclusivity, safety, and sustainability.

1. Fairness: AI systems should be designed to treat all individuals
and groups equally. This means avoiding biased data that may
lead to discriminatory outcomes. For example, AI algorithms in
hiring or lending must be trained on diverse data to prevent bias
against certain demographics; a small worked check of this idea
appears after this list.
2. Transparency: AI systems should operate in a transparent
manner, allowing users to understand how decisions are made.
This includes making the workings of the algorithms explainable to
ensure trust in the technology. Users should be informed about the
data being used and how it impacts the outcomes.
3. Accountability: Those who design and deploy AI systems must
be held accountable for their impacts. If an AI system causes harm
or makes erroneous decisions, clear lines of responsibility must
exist to address the issue and rectify it.
4. Privacy: AI systems should respect individual privacy rights.
Personal data used to train AI models must be collected and
handled in accordance with legal and ethical standards. Data
protection mechanisms should be built into AI systems to
safeguard users' information from misuse or exploitation.
5. Inclusivity: AI should be developed to benefit all segments of
society, ensuring that marginalized or underrepresented groups
are not left behind. It is important that AI tools are accessible and
do not perpetuate societal inequalities.
6. Safety and Security: AI systems should be rigorously tested for
safety to prevent unintended consequences, such as making
harmful decisions or being vulnerable to malicious attacks. The
systems should also be designed to be robust and resilient to
manipulation.
7. Sustainability: Ethical AI also involves considering the long-term
impact of technology on society and the environment. For
example, the environmental impact of training AI models, including
energy consumption, should be minimized where possible.
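To make the fairness principle concrete, here is a minimal sketch of one way a developer might check a model's predictions for demographic parity, i.e. whether favourable outcomes (such as loan approvals) occur at similar rates across groups. The data and group names below are hypothetical illustrations, not a complete fairness audit.

```python
# Minimal sketch: checking hypothetical loan-approval predictions for
# demographic parity (similar approval rates across groups).
from collections import defaultdict

# Hypothetical (group, approved) pairs, e.g. from a model's test predictions.
predictions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in predictions:
    totals[group] += 1
    if approved:
        approvals[group] += 1

rates = {g: approvals[g] / totals[g] for g in totals}
for group, rate in rates.items():
    print(f"{group}: approval rate = {rate:.0%}")

# A large gap between groups signals that the training data or model
# may be biased and should be investigated before deployment.
gap = max(rates.values()) - min(rates.values())
print(f"Approval-rate gap: {gap:.0%}")
```

In practice, demographic parity is only one of several competing fairness measures, and which measure is appropriate depends on the application.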

The field of AI ethics is still evolving, and frameworks are continuously being developed by organizations, governments, and researchers to ensure that AI technologies are used in ways that align with the values of fairness, respect, and human well-being.

Incorporating ethical practices into AI development is not just about adhering to regulations but also about fostering trust in technology, creating a positive societal impact, and addressing the potential risks that come with its power.

TABLE OF CONTENTS

INTRODUCTION
REPORT
FINDINGS
CONCLUSION

REPORT

The ethics of artificial intelligence covers a broad range of topics within AI that are considered to have particular ethical stakes. This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics (how to make machines that behave ethically), lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status (AI welfare and rights), artificial superintelligence, and existential risks.

There are discussions on creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and that the requirement for an AI to pass the test is too low. A proposed alternative is the Ethical Turing Test, which would improve on the current test by having multiple judges decide whether the AI's decision is ethical or unethical. Neuromorphic AI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons. Similarly, whole-brain emulation (scanning a brain and simulating it on digital hardware) could also in principle lead to human-like robots capable of moral actions. And large language models are capable of approximating human moral judgments. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or whether they would end up developing human 'weaknesses' as well: selfishness, pro-survival attitudes, inconsistency, scale insensitivity, etc.

The philosophy of sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligences show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.
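As an illustration of the Ethical Turing Test described above, the sketch below implements only its voting step: several human judges independently label an AI's decision, and the majority verdict stands. The judge labels and the five-judge panel are invented for illustration; a real study would need a far more careful protocol.

```python
# Minimal sketch of the multiple-judge step of an "Ethical Turing Test":
# each judge labels a decision as ethical or unethical, and the majority rules.
from collections import Counter

def majority_verdict(judgements: list[str]) -> str:
    """Return the label chosen by the most judges ('ethical' or 'unethical')."""
    counts = Counter(judgements)
    label, _ = counts.most_common(1)[0]
    return label

# Hypothetical verdicts from five judges on one AI decision.
judgements = ["ethical", "ethical", "unethical", "ethical", "unethical"]
print(majority_verdict(judgements))  # -> "ethical" (3 of 5 judges)
```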

AI ethics are the set of guiding principles that stakeholders (from engineers to government officials) use to ensure artificial intelligence technology is developed and used responsibly. This means taking a safe, secure, humane, and environmentally friendly approach to AI. AI ethics are important because AI technology is meant to augment or replace human intelligence, but when technology is designed to replicate human life, the same issues that can cloud human judgment can seep into the technology.

AI projects built on biased or inaccurate data can have harmful consequences, particularly for underrepresented or marginalized groups and individuals. Further, if AI algorithms and machine learning models are built too hastily, it can become difficult for engineers and product managers to correct learned biases. It is easier to incorporate a code of ethics during the development process to mitigate future risks; one such mitigation is sketched below.
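One concrete way to address learned bias during development is to reweight the training data so that underrepresented groups are not drowned out by the majority. The sketch below uses the standard balanced-weighting formula (total examples divided by the number of groups times each group's count); the group labels and sizes are hypothetical.

```python
# Minimal sketch: reweighting training examples so that an
# underrepresented group contributes equally during training.
from collections import Counter

# Hypothetical group label for each training example.
groups = ["majority"] * 90 + ["minority"] * 10

counts = Counter(groups)
n_groups = len(counts)
n_total = len(groups)

# Weight each example so that every group has the same total weight.
weights = [n_total / (n_groups * counts[g]) for g in groups]

print(counts)                   # Counter({'majority': 90, 'minority': 10})
print(weights[0], weights[-1])  # ~0.56 per majority example, 5.0 per minority example
```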

As AI has grown in recent years, influencing nearly every industry and having a strong positive impact on fields like health care, the topic of AI ethics has become even more salient. How do we ensure bias-free AI? What can be done to mitigate risks in the future? There are many potential solutions, but stakeholders must act responsibly and collaboratively to ensure positive outcomes across the globe.

FINDINGS

[Figure 1: Gender of respondents – bar chart: 20 male, 5 female (25 respondents in total).]

[Figure 2: Age of respondents – bar chart of ages by individual respondent.]
CONCLUSION
