
Project 3: Machine/Artificial Intelligence and their Technologies

Cheong Ming Xian, A0223140B


Jin Yong Hao Jordan, A0218384Y
Luo Senyu, A0211245W
Yan Chenyu, A0222805M

A FINAL REPORT SUBMITTED


FOR EG2401A ENGINEERING PROFESSIONALISM
NATIONAL UNIVERSITY OF SINGAPORE
AY 2022/2023 SEMESTER 2
EG2401A Engineering Professionalism

Group Projects - Honour Pledge

Year: 2022-2023
Semester: 2
Project Title: Project 3: Machine/Artificial Intelligence and their Technologies
Tutor: Prof Zhang Yong

We, the following students of NUS, upon our honour, hereby confirm that we have
neither received nor given any unauthorized help on the EG2401A group project
carried out by us.
The project report and the presentation reflect truly our own efforts. In all cases where
material from other sources such as books, articles, notes and websites have been used,
we have taken care to provide clear and unambiguous references to the same.
We confirm that we have not provided unauthorized help to other groups doing the
same project. Furthermore, we also confirm that we will not pass on our research
materials, report and presentation materials to other students who may take this
module in later semesters.
In addition, this project report has been prepared and submitted by us only as a part of
an academic exercise. Its contents are not meant for publication in any manner.

Sl.No.  Matric No.   Name                  Signature   Date
1       A0223140B    Cheong Ming Xian                  28 March 2023
2       A0218384Y    Jin Yong Hao Jordan               28 March 2023
3       A0211245W    Luo Senyu                         28 March 2023
4       A0222805M    Yan Chenyu                        28 March 2023

Table of Contents
1. Introduction
2. Application 1: Academic Support by AI
2.1. Background
2.2. Ethical Dilemmas
2.3. Ethical Solutions
2.4. Evaluation of Present and Future Impact
2.5. Summary
3. Application 2: Self-Driving by AI
3.1. Background
3.2. Ethical Dilemma
3.3. Ethical Solutions
3.4. Evaluation of Present and Future Impacts
3.5. Summary
4. Application 3: Digital Human
4.1. Background
4.2. Ethical Dilemmas
4.3. Ethical Solutions
4.4. Evaluation of Present and Future Impacts
4.5. Summary
5. Conclusion
6. References
1. Introduction

Machine/Artificial Intelligence (AI) and its technologies have significantly transformed many
aspects of society. They have provided people with new ways of integrating information,
analysing data, and utilising insights to enhance their decision-making abilities [1]. With the
use of sophisticated algorithms and machine learning, AI systems are constantly evolving,
allowing them to perform tasks that were once believed to be exclusive to humans. As a
result, AI has been increasingly adopted in various industries, contributing positively to
society.

However, the development and the use of AI have raised several ethical concerns such as the
potential replacement of human jobs, the delegation of decision-making to AI, the potential
for AI to foster laziness, and the threat to personal privacy posed by AI [2].

This report delves into the ethical dilemmas presented in the following three applications of
AI and proposes possible solutions to mitigate the risks and challenges associated with them.
Additionally, this report will assess both the current and future impact of these issues.

2. Application 1: Academic Support by AI


2.1. Background
ChatGPT, developed by OpenAI, is an AI-powered chatbot that can provide detailed solutions
and answers to most general questions asked by its users. Within a few days of its launch on
30 November 2022, it had already acquired more than 1 million users, and today its user base
exceeds 100 million [3]. ChatGPT is a useful tool for research, helping students to find
information and papers efficiently, thereby accelerating their progress. However, some
students abuse ChatGPT to write their essays, leading to academic dishonesty [4]. In one
survey, 89% of students admitted to using ChatGPT for their homework [5].

One instance of academic dishonesty occurred at Northern Michigan University, where a
student submitted an essay on burqa bans generated by ChatGPT and was caught by the
philosophy professor, Antony Aumann [6]. The essay was unusually well structured and
coherent, raising the professor's suspicion; he ran the student's paper through an AI text
detector, which returned a 99% match [7]. The student eventually admitted to using ChatGPT
to write the essay.

2.2. Ethical Dilemmas

Table 1: Ethics Analysis of Academic Dishonesty using ChatGPT

Action: Academic Dishonesty using ChatGPT

Duty Ethics
Party #1: Student: Duty ethics was not satisfied. It is a student's duty to submit original work that reflects their own thoughts and ideas. By using ChatGPT to generate the essay, the student violated this duty to produce original work and therefore did not fulfil his duty as a student.
Party #2: Professor: Duty ethics was satisfied. It is the professor's duty to uphold academic integrity and ensure that students produce original work. By catching the student who had submitted an essay generated by ChatGPT, he fulfilled his duty to maintain the academic standards of the university and ensure that students were learning and growing ethically.

Right Ethics
Party #1: Student: Right ethics was not satisfied. The student violated the principles of honesty and integrity by using ChatGPT to generate an essay and submitting it as his own work. This deprived him of the opportunity to learn and develop his own critical thinking and writing skills, and it undermined the value and integrity of the academic degree that he was pursuing.
Party #2: Professor: Right ethics was satisfied. Students have an inherent right to fair assessment of their own work, which can be violated by academic dishonesty. By discovering the use of ChatGPT in the student's essay, the professor protected the right of all students to a fair assessment of their own work.

Virtue Ethics
Party #1: Student: Virtue ethics was not satisfied. The student's act of using ChatGPT and passing its output off as his own work violates the virtues of honesty and integrity. Academic integrity was also violated, as the student did not take responsibility for his own work.
Party #2: Professor: Virtue ethics was satisfied. Honesty, integrity and fairness were shown by the professor when he ran the student's paper through the AI detector and confronted the student.

Act-Utilitarianism
Party #1: Student: Act-utilitarianism was not satisfied. In the short term, the student may experience pleasure from not having to write the essay. However, in the long run, once caught, the student faces consequences such as failing the assignment, disciplinary action, and harm to his future academic and professional opportunities.
Party #2: Professor: Act-utilitarianism was satisfied. The professor prevented the act from going unpunished, setting an example for other students and thereby having a positive impact on the overall academic integrity of the university and maintaining the value of the degrees earned by students.

Rule-Utilitarianism
Party #1: Student: Rule-utilitarianism was not satisfied. The student cheated by using ChatGPT to write his essay, which is against the rules.
Party #2: Professor: Rule-utilitarianism was satisfied. The professor followed a rule that promotes academic integrity and honesty.

Table 2: Ethics Line Diagram from Point of View of Student

Point (Location from Left): Action
NP (Left Point): Student submitted an essay generated using ChatGPT and claimed it as his own work. Even when confronted, the student did not admit to it.
PP (Right Point): Student took the time to write the essay by himself and cited all necessary sources.
P1 (3/10): Student submitted an essay generated using ChatGPT and admitted to it when confronted.
SC1 (4/10): Student generated an essay using ChatGPT and made minor amendments to it.
SC2 (8/10): Student completed the majority of the essay by himself and only sought minor assistance from ChatGPT.

Table 3: Ethics Line Diagram from Point of View of Professor

Point (Location from Left): Action
NP (Left Point): Despite finding the essay suspicious, the professor did not conduct any further investigation and graded the student based on the submitted work.
PP (Right Point): The professor found the essay suspicious, investigated, and imposed the necessary punishment on the student.
P1 (9/10): The professor investigated the suspicious essay and confronted the student.
SC1 (2/10): The professor did not impose the necessary punishment despite discovering the essay was written by an AI, and graded it normally.
SC2 (5/10): After discovering the essay was written by an AI, the professor gave the student a second chance to rewrite the essay instead of imposing a harsh punishment immediately.

2.3. Ethical Solutions

Technological solutions to academic dishonesty involving ChatGPT have been developed.
One example is GPTZero, software that can differentiate between human-written and
computer-generated text with high accuracy [8]. Alternatively, students can be educated on
the importance of academic integrity and the consequences of plagiarism. Strict academic
integrity policies that clearly outline these consequences can also be enforced to deter
students from committing academic dishonesty.

2.4. Evaluation of Present and Future Impact

In the short run, ChatGPT can be instrumental in helping students find quick answers to their
questions, leading to increased efficiency in their studies, which can in turn improve
academic performance [9]. However, ChatGPT may not always be a reliable source, as the
accuracy of the information it generates cannot be guaranteed, which can lead to
misinformation [10].

In the long run, students may become overly dependent on ChatGPT. The ease of access to
information may lead to a lack of effort in research and critical thinking, eroding students'
willingness to develop these skills, so long-term learning outcomes can be compromised and
students can become lazy thinkers. Academic dishonesty can also become a major issue as
more students pass off work generated by AI as their own, undermining the integrity of the
education system, the learning process [10], and the hard work of other students. The
boundaries of plagiarism have also been blurred, and many universities face the challenge of
redefining plagiarism and deciding whether to accept work produced by a bot [10].

However, this does not mean that ChatGPT should be banned completely as it can be used as
a pedagogical tool. Students can be taught to ask ethically correct questions and to verify
information obtained from ChatGPT with other sources [10].

2.5. Summary
In conclusion, ChatGPT should be used appropriately so as to maximise its benefits while
minimising its drawbacks. The impact of ChatGPT on student learning ultimately depends on
how it is used. It is important to address the potential negative impacts to ensure that
ChatGPT is used in an ethical and responsible manner.

3. Application 2: Self-Driving by AI
3.1. Background
Self-driving by AI refers to the capability of a vehicle to operate on its own without human
intervention. Self-driving cars use a combination of sensors, cameras, GPS, and machine
learning algorithms to navigate and respond to the environment around them [11]. Self-
driving cars aim to enhance the safety, efficiency, and convenience on the road [12]. While
they have the potential to bring significant benefits, important ethical dilemmas have also
surfaced.
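
As a rough illustration of the pipeline described above (sensors, cameras, GPS and machine-learning models feeding a driving decision), the sketch below shows a greatly simplified perception-plan-act step. All names, thresholds and data here are illustrative assumptions, not any vendor's actual system.

```python
# Hypothetical, highly simplified sketch of one perception-plan-act step in a
# self-driving stack. Names, thresholds and data are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str                   # e.g. "pedestrian", "vehicle"
    time_to_collision_s: float  # estimated time until a collision would occur

def plan(detections: list[Detection]) -> str:
    """Crude rule-based planner: brake for anything on a near collision course."""
    for d in detections:
        if d.time_to_collision_s < 3.0:  # assumed safety margin
            return "emergency_brake"
    return "keep_lane"

# Simulated output of the perception stage for a single frame (a real system
# would fuse camera, lidar and GPS data through machine-learning models).
frame = [Detection("vehicle", 8.5), Detection("pedestrian", 2.1)]
print(plan(frame))  # -> emergency_brake
```

The ethical questions discussed next arise precisely inside rules like the one in plan(): whose safety the rule prioritises, and who is accountable when it fails.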

Ethical issues arise when accidents occur. When faced with a dangerous situation on the road,
such as an impending collision, a self-driving car must be able to respond instantaneously.
However, it is highly debatable how the AI should prioritise safety: the safety of the
occupants of the car, or that of others on the road [13]. In the event of an accident, liability
also becomes an issue: should the car manufacturer, the software developer, or the owner of
the car be held responsible? [11]

One significant example is the Uber self-driving car accident in Arizona. In March 2018, the
autonomous vehicle failed to respond to a pedestrian who was crossing outside a designated
crosswalk and collided with her, resulting in a fatal crash. An investigation conducted by the
National Transportation Safety Board (NTSB) found that the Uber vehicle's software had
detected the pedestrian six seconds before the collision but did not take corrective action to
avoid hitting her. It was also found that the safety driver was distracted by her phone at the
time of the accident, and it was concluded that the crash could have been avoided if she had
taken control of the vehicle in the emergency. As a result, the safety driver was charged [14].
3.2. Ethical Dilemma
Table 4: Ethics Analysis of Self-driving Cars faced with a Dangerous Situation

Action: When faced with a dangerous situation

Duty Ethics
Party #1: Safety Driver: Duty ethics was not satisfied. The safety driver was responsible for monitoring the autonomous vehicle and taking over control if necessary to ensure the safety of the vehicle's passengers and other road users. However, the safety driver was distracted by her phone at the time of the crash, which prevented her from reacting quickly and taking over control of the vehicle in time to prevent the collision. Hence, it can be said that the safety driver failed to fulfil her moral obligation to perform her duties responsibly, which contributed to the accident. Therefore, it can be argued that duty ethics was not fulfilled in this case.
Party #2: Uber: Duty ethics was not satisfied. The Uber company was responsible for ensuring the safety of its autonomous vehicles and the people who interact with them. This includes ensuring that the vehicles are properly tested, monitored, and maintained to prevent accidents. An investigation found that Uber had disabled the emergency braking system on the autonomous vehicle at the time of the accident, which meant that the vehicle was unable to automatically brake to avoid the pedestrian. Additionally, Uber's autonomous vehicle testing program was found to have inadequate safety protocols and training for safety drivers. Based on these findings, it can be argued that Uber failed to fulfil its moral obligation to ensure the safety of its autonomous vehicles and the people who interact with them. The company's decisions and actions contributed to the accident, and therefore, duty ethics was not fulfilled in this case.

Right Ethics
Party #1: Safety Driver: Right ethics was not satisfied. The safety driver was distracted by her phone at the time of the crash, preventing her from reacting quickly and taking over control of the vehicle in time to prevent the collision. As a result, the pedestrian's inherent right to safety was not protected and she was killed in the accident. It can be argued that the failure of the safety driver to monitor the vehicle effectively and take over control in a timely manner contributed to the accident and resulted in the violation of the pedestrian's inherent right to safety. Therefore, right ethics was not fulfilled in this case.
Party #2: Uber: Right ethics was not satisfied. An investigation found that the autonomous vehicle operated by Uber failed to detect the pedestrian and did not apply the brakes to avoid hitting her. The pedestrian's right to safety was not respected and protected, and this led to her death. From a rights ethics perspective, it can be argued that Uber failed to fulfil its moral obligation to protect the inherent rights of the pedestrian who was killed. The company's decisions and actions contributed to the accident, and therefore, right ethics was not fulfilled in this case.

Virtue Ethics
Party #1: Safety Driver: Virtue ethics was not satisfied. An investigation found that the safety driver was distracted by her phone at the time of the crash, which prevented her from reacting quickly and taking over control of the vehicle in time to prevent the collision. This suggests that the safety driver lacked the virtue of attentiveness and may have been motivated by personal distractions instead of her duty to monitor the vehicle. From a virtue ethics perspective, it can be argued that the safety driver failed to fulfil her moral obligation to demonstrate the virtue of attentiveness and act in accordance with her duties. Her lack of attentiveness and distraction by personal activities indicate a lack of virtue, and her actions contributed to the accident. Therefore, virtue ethics was not fulfilled in this case. The safety driver's character traits and motivations did not align with the virtues necessary to fulfil her responsibilities and protect the safety of others.
Party #2: Uber: Virtue ethics was not satisfied. The Uber company has a moral responsibility to cultivate and embody virtues that promote the safety and well-being of its users and the general public. This includes promoting virtues such as responsibility, integrity, and respect for the inherent worth of human life. An investigation found that Uber's autonomous vehicle testing program had inadequate safety protocols and training for safety drivers, and that the emergency braking system on the autonomous vehicle had been disabled at the time of the accident. This suggests that Uber failed to cultivate and embody the virtues necessary to ensure the safety of its users and the public. Therefore, it can be argued that virtue ethics was not fulfilled by Uber in this case. The company failed to cultivate and embody the virtues necessary to promote the safety and well-being of its users and the public, and its actions contributed to the tragic accident.

Act-Utilitarianism
Party #1: Safety Driver: Act-utilitarianism was not satisfied. An investigation revealed that the safety driver was distracted by her phone at the time of the accident and failed to take control of the vehicle in time to prevent the collision. Her failure to act in accordance with her duties and responsibilities contributed to the tragic outcome of the accident, causing harm and suffering to the victim and her loved ones, as well as causing emotional distress for the safety driver herself and negatively impacting public trust in autonomous vehicle technology. Therefore, it can be argued that act-utilitarianism was not fulfilled by the safety driver in this case. Her actions failed to prioritise the overall well-being of individuals and the public, and instead contributed to harm and suffering.
Party #2: Uber: Act-utilitarianism was not satisfied. The accident resulted in the tragic death of a pedestrian, which caused significant harm and suffering for her and her loved ones. While the development of autonomous vehicle technology has the potential to provide benefits such as increased safety and efficiency, the accident demonstrated the risks involved in testing and implementing such technology. Furthermore, the investigation revealed that Uber's autonomous vehicle program had inadequate safety protocols and training for safety drivers, and that the emergency braking system on the autonomous vehicle had been disabled at the time of the accident. These actions were taken in pursuit of cost-cutting measures and faster development, rather than prioritising safety and minimising harm to individuals. Therefore, act-utilitarianism was not fulfilled by the Uber company in this case. The harm caused by the accident outweighed any potential benefits that may have been gained from developing and testing autonomous vehicle technology under inadequate safety protocols and with disabled safety features. The company's actions did not prioritise the overall well-being of individuals and the public, and as a result, caused significant harm and suffering.

Rule-Utilitarianism
Party #1: Safety Driver: Rule-utilitarianism was not satisfied. The safety driver's actions of being distracted by her phone and failing to take control of the vehicle in time to prevent the accident can be evaluated as a violation of the rule that safety drivers should remain vigilant and attentive at all times in order to prevent accidents and protect the safety of pedestrians and other road users. Therefore, the safety driver's actions can be considered immoral as they violated a rule that, if followed, would promote the overall well-being of society by reducing the risk of harm caused by autonomous vehicles.
Party #2: Uber: Rule-utilitarianism was not satisfied. The investigation revealed that the Uber company had inadequate safety procedures in place, failed to adequately train safety drivers, and failed to properly monitor their safety driver's behaviour on the road. These actions violated the rule that companies should prioritise safety and the well-being of their customers and the public. If companies consistently followed this rule, it would likely lead to a reduction in the number of accidents and injuries caused by autonomous vehicles and promote public trust and confidence in the technology. Therefore, the Uber company's actions can be considered immoral as they violated a rule that, if followed, would promote the overall well-being of society by reducing the risk of harm caused by autonomous vehicles.

Table 5: Ethics Line Diagram from Point of View of Safety Driver

Point (Location from Left): Action
NP (Left Point): Safety driver was constantly distracted and did not notice the safety hazards, and hence did not take any action to prevent the accident.
PP (Right Point): Safety driver was constantly on high alert, looking out for potential safety hazards and taking appropriate action to prevent the accident from happening.
P1 (1/10): Safety driver was distracted by her phone at the time of the incident and hence was unable to react in time.
SC1 (2/10): The safety driver was aware of the safety hazard but was too frightened and hence did not react.
SC2 (3/10): The safety driver was distracted but managed to react in time.

Table 6: Ethics Line Diagram from Point of View of Uber Company

Point (Location from Left): Action
NP (Left Point): Insufficient testing was done on the autonomous vehicle and insufficient training was given to the safety drivers.
PP (Right Point): Sufficient testing was done on the autonomous vehicle and sufficient training was given to the safety drivers.
P1 (1/10): The autonomous vehicle detected the presence of the pedestrian but did not alert the safety driver.
SC1 (2/10): Sufficient testing was done on the autonomous vehicle but insufficient training was given to the safety drivers.
SC2 (3/10): Insufficient testing was done on the autonomous vehicle but sufficient training was given to the safety drivers.

3.3. Ethical Solutions


Firstly, companies that develop self-driving cars should conduct sufficient and rigorous
testing to ensure the safety and reliability of their vehicles. Clear ethical guidelines and
regulations should also be developed for the use and programming of self-driving cars,
covering how they should respond to different scenarios on the road and who bears
accountability and responsibility in the event of an accident.

3.4. Evaluation of Present and Future Impacts


As self-driving cars are still in the early stages of development and deployment, their impact
is relatively limited at present. One such impact is a reduced number of accidents on the road
[15], as self-driving cars are not subject to human error, distraction, or fatigue [16].

With more time given to the development of self-driving cars, more rigorous testing can be
done to ensure the safety of such vehicles, which can make them a reliable option in the future.

However, as self-driving cars become more widely adopted in the future, certain jobs in the
transportation industry may become obsolete, displacing many workers. On the other hand,
jobs in areas such as software development and maintenance can be made available.
3.5. Summary
In conclusion, self-driving cars have the potential to greatly improve road safety and reduce
traffic congestion. However, there are ethical concerns that need to be addressed. Overall,
self-driving cars should be developed, deployed, and regulated carefully in order to maximise
their benefits for future generations.
4. Application 3: Digital Human
4.1. Background
AI has advanced to a point where it can imitate human behaviours such as writing, speaking,
and thinking. It is now possible to recreate digital replicas of individuals in a variety of ways.
One such method is to create a virtual avatar or chatbot simulating a person’s personality and
appearance based on the existing data of that person, such as texts, videos, and photos [17].
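
A minimal sketch of the text side of this idea appears below: a persona prompt is assembled from a person's existing messages and handed to a language model. The generate function is a placeholder for whatever chatbot backend is used, and the sample data are invented; both are assumptions for illustration only.

```python
# Illustrative sketch: building a "digital human" persona prompt from a person's
# existing texts. `generate` is a placeholder for a real chatbot/LLM backend.
from typing import Callable

def build_persona_prompt(name: str, sample_messages: list[str]) -> str:
    examples = "\n".join(f"- {m}" for m in sample_messages)
    return (f"You are a chatbot imitating {name}. "
            f"Match the tone and phrasing of these example messages:\n{examples}\n")

def reply_as_persona(generate: Callable[[str], str],
                     persona_prompt: str, user_msg: str) -> str:
    return generate(f"{persona_prompt}\nUser: {user_msg}\nReply:")

# Example usage with a trivial stand-in for a real model:
prompt = build_persona_prompt("Alex", ["See you at rehearsal!", "Coffee first, always."])
print(reply_as_persona(lambda p: "(model output would appear here)", prompt, "How was your day?"))
```

Whether the person ever consented to their data being used this way is exactly the dilemma analysed in Section 4.2.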

Artists are also being 'resurrected' through the use of holograms and artificial voices. One
example is the hologram of Michael Jackson moonwalking at the 2014 Billboard Music
Awards, even though he passed away in 2009 [18].

4.2. Ethical Dilemmas

Table 7: Ethics Analysis of Recreating a Deceased Person

Party #1: Creators of the digital recreation

Duty Ethics: Duty ethics was satisfied. If fans of the deceased feel that they can receive comfort and solace by seeing their idol once again, the creators can be seen as fulfilling a duty to those fans.
Right Ethics: Right ethics was not satisfied. The deceased has the right to regulate the use of their image after death, and they may not have wanted to be recreated.
Virtue Ethics: Virtue ethics was not satisfied. The creators may be exploiting a deceased celebrity's likeness for monetary gain.
Act-Utilitarianism: Act-utilitarianism was satisfied. If recreating the deceased brings comfort and enjoyment to a large number of fans, the overall happiness produced may outweigh the harm done.
Rule-Utilitarianism: Rule-utilitarianism was not satisfied.


Table 8: Ethics Line Diagram from Point of View of the Creators of the Deceased

Point (Location from Left): Action
NP (Left Point): The deceased is recreated without permission even though the deceased had indicated that he did not want his image to be recreated after death.
PP (Right Point): The deceased is recreated only after having openly given permission for his image to be used after death.
P1:
SC1:
SC2:

4.3. Ethical Solutions


One solution is for people with influence to state in their will whether they consent to their
image being recreated after their death.

4.4. Evaluation of Present and Future Impacts

As further research and development in this field of AI is conducted, we may have more
realistic interaction experiences with digital humans and discover more ways of applying the
technology. With the ability to create digital replicas of people, we might even be able to
bring famous figures in history back to life and have interesting dialogues with them.
Additionally, by combining digital human technology with virtual assistant technology, we
might let our deceased loved ones 'live' again as virtual assistants that take care of everyone
in the house as they used to, helping us prepare food, reminding us of important
appointments, waking us up, and making coffee through the assistance of smart household
appliances [19][20]. However, in the long run, users may come to prefer interacting with
digital humans over real humans, and this poses significant social and psychological risks,
potentially leading to social isolation and loneliness.

4.5. Summary
In summary, digital human technology opens up new ways to remember and interact with
people, but recreating individuals, particularly without their consent, raises concerns about
rights, exploitation, and social isolation that should be addressed as the technology matures.
5. Conclusion
In conclusion, Machine/Artificial Intelligence and its technologies have significantly
transformed the way people collect and analyse data, operate and interact with one another.
These technologies have the potential to revolutionise the world in numerous ways. However,
the development of these technologies should be approached with critical thinking and
caution as they may bring about unintended consequences such as job displacement and other
ethical concerns. Therefore, it is of utmost importance that AI is developed and implemented
in a responsible and ethical manner. Policies and regulations should be put in place to create
a safe and effective environment for AI development and deployment. 
6. References
[1] D. M. West and J. R. Allen, "How artificial intelligence is transforming the world," Brookings, 09-Mar-2022. [Online]. Available: https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/. [Accessed: 28-Mar-2023].

[2] D. Zaichenko, "11 artificial intelligence issues you should worry about," Coupler.io Blog, 16-Feb-2023. [Online]. Available: https://blog.coupler.io/artificial-intelligence-issues/. [Accessed: 28-Mar-2023].

[3] K. Hu, "ChatGPT sets record for fastest-growing user base - analyst note," Reuters, 02-Feb-2023. [Online]. Available: https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/. [Accessed: 28-Mar-2023].

[4] A. Mitchell, "Professor catches student cheating with ChatGPT: 'I feel abject terror'," New York Post, 05-Jan-2023. [Online]. Available: https://nypost.com/2022/12/26/students-using-chatgpt-to-cheat-professor-warns/. [Accessed: 28-Mar-2023].

[5] C. Westfall, "Educators battle plagiarism as 89% of students admit to using OpenAI's ChatGPT for homework," Forbes, 30-Jan-2023. [Online]. Available: https://www.forbes.com/sites/chriswestfall/2023/01/28/educators-battle-plagiarism-as-89-of-students-admit-to-using-open-ais-chatgpt-for-homework/?sh=5366b561750d. [Accessed: 28-Mar-2023].

[6] V. Tangermann, "College student caught submitting paper using ChatGPT," Futurism, 18-Jan-2023. [Online]. Available: https://futurism.com/college-student-caught-writing-paper-chatgpt. [Accessed: 29-Mar-2023].

[7] T. Kendrick, "Educators weigh in on the use of artificial intelligence in the classroom," uppermichiganssource.com. [Online]. Available: https://www.uppermichiganssource.com/2023/01/27/educators-weigh-use-artificial-intelligence-classroom/. [Accessed: 29-Mar-2023].

[8] T. Keith, "Combating academic dishonesty, part 6: ChatGPT, AI, and academic integrity," Academic Technology Solutions. [Online]. Available: https://academictech.uchicago.edu/2023/01/23/combating-academic-dishonesty-part-6-chatgpt-ai-and-academic-integrity/. [Accessed: 29-Mar-2023].

[9] J. Mallow, "ChatGPT for students: How AI chatbots are revolutionizing education," eLearning Industry, 28-Feb-2023. [Online]. Available: https://elearningindustry.com/chatgpt-for-students-how-ai-chatbots-are-revolutionizing-education. [Accessed: 29-Mar-2023].

[10] M. Huzaif, "The impact of ChatGPT on student learning," The Companion, 01-Feb-2023. [Online]. Available: https://thecompanion.in/the-impact-of-chatgpt-on-student-learning. [Accessed: 29-Mar-2023].

[11] B. Lutkevich, "What are self-driving cars and how do they work?," TechTarget Enterprise AI, 2023. [Online]. Available: https://www.techtarget.com/searchenterpriseai/definition/driverless-car#:~:text=What%20is%20a%20self%2Ddriving,destinations%20without%20a%20human%20operator. [Accessed: 21-Feb-2023].

[12] B. Gringer, "History of the autonomous car," TitleMax, 2020. [Online]. Available: https://www.titlemax.com/resources/history-of-the-autonomous-car/#:~:text=Autonomous%20vehicles%20are%20expected%20to,high%20like%20human%20drivers%20can. [Accessed: 21-Feb-2023].

[13] S. Koh, "The moral dilemmas behind developing self-driving vehicles," KrASIA, 2022. [Online]. Available: https://kr-asia.com/the-moral-dilemmas-behind-developing-self-driving-vehicles. [Accessed: 21-Feb-2023].

[14] "Uber's self-driving operator charged over fatal crash," BBC News, 16-Sep-2020. [Online]. Available: https://www.bbc.com/news/technology-54175359. [Accessed: 29-Mar-2023].

[15] "2022 self-driving car accident statistics," 1-800-INJURED, 2022. [Online]. Available: https://1800injured.care/self-driving-car-accident-statistics/#:~:text=In%202021%2C%20over%2080%20firms,36%20states%20and%20Washington%2C%20DC.&text=Overall%2C%20autonomous%20vehicles%20(AVs),to%204.1%20for%20conventional%20cars. [Accessed: 21-Feb-2023].

[16] S. Szymkowski, "Self-driving cars won't eradicate the car crash, study says," CNET, 08-Jun-2020. [Online]. Available: https://www.cnet.com/roadshow/news/self-driving-cars-crash-study/. [Accessed: 29-Mar-2023].

[17] "Could AI keep people 'alive' after death?," The Wall Street Journal, 08-Jul-2021. [Online]. Available: https://www.wsj.com/articles/could-ai-keep-people-alive-after-death-11625317200. [Accessed: 29-Mar-2023].

[18] "Dead celebrities are being digitally resurrected - and the ethics are murky," CBC News, 08-Aug-2021. [Online]. Available: https://www.cbc.ca/news/entertainment/dead-celebrities-digital-resurrection-1.6132738. [Accessed: 29-Mar-2023].

[19] H. Chawre, "How AI-powered virtual assistants are transforming industries," Turing, 08-Aug-2022. [Online]. Available: https://www.turing.com/resources/how-ai-powered-virtual-assistants-are-transforming-industries. [Accessed: 29-Mar-2023].

[20] A. Hayes, "Smart home: Definition, how they work, pros and cons," Investopedia, 19-Dec-2022. [Online]. Available: https://www.investopedia.com/terms/s/smart-home.asp. [Accessed: 29-Mar-2023].
