Ethics Unit-2-Notes
All of the initiatives listed above agree that AI should be researched, developed, designed, deployed,
monitored, and used in an ethical manner – but each has different areas of priority. This section will
include analysis and grouping of the initiatives above by the types of issue they aim to address, and
then outline some of the proposed approaches and solutions to protect against harm.
A number of key issues emerge from the initiatives, which can be broadly split into the following
categories:
1. Human rights and well-being
2. Emotional harm
3. Accountability and responsibility
4. Security, privacy, accessibility, and transparency
5. Safety and trust
6. Social harm and social justice
7. Financial harm
8. Lawfulness and justice
9. Control and the ethical use – or misuse – of AI
10. Environmental harm and sustainability
11. Informed use
12. Existential risk
Overall, these initiatives all aim to identify and form ethical frameworks and systems that
establish human beneficence at the highest levels, prioritise benefit to both human society
and the environment and mitigate the risks and negative impacts associated with AI — with a
focus on ensuring that AI is accountable and transparent.
The IEEE's 'Ethically Aligned Design: A Vision for Prioritising Human Well-being with Autonomous and
Intelligent Systems' is one of the most substantial documents published to date on the ethical issues
that AI may raise — and the various proposed means of mitigating these.
2.2.1 Harms in detail
Taking each of these harms in turn, this section explores how they are being conceptualised
by initiatives and some of the challenges that remain.
Human rights and well-being
All initiatives adhere to the view that AI must not impinge on basic and fundamental human rights,
such as human dignity, security, privacy, freedom of expression and information, protection of
personal data, equality, solidarity and justice.
In order to ensure that human rights are protected, the IEEE recommends new
governance frameworks, standards, and regulatory bodies which oversee the use of AI; translating
existing legal obligations into informed policy, allowing for cultural norms and legal
frameworks; and always maintaining complete human control over AI, without granting them
rights or privileges equal to those of humans. To safeguard human well-being, defined as 'human
satisfaction with life and the conditions of life, as well as an appropriate balance between positive
and negative affect', the IEEE suggest prioritising human well-being throughout the design
phase, and using the best and most widely-accepted available metrics to clearly measure the
societal success of an AI.
According to the Foundation for Responsible Robotics, AI must be ethically developed with
human rights in mind to achieve its goal of 'responsible robotics', which relies upon proactive
innovation to uphold societal values like safety, security, privacy, and well-being. The Foundation
engages with policymakers, organises and hosts events, publishes consultation documents to
educate policymakers and the public, and creates public-private collaborations to bridge the gap
between industry and consumers, to create greater transparency. It calls for ethical decision-
making right from the research and development phase, greater consumer education, and
responsible law- and policymaking – made before AI is released and put into use.
The Future of Life Institute defines a number of principles, ethics, and values for consideration
in the development of AI, including the need to design and operate AI in a way that is
compatible with the ideals of human dignity, rights, freedoms, and cultural diversity. This is echoed
by the Japanese Society for AI Ethical Guidelines, which places the utmost importance on AI being
realised in a way that is beneficial to humanity, and in line with the ethics, conscience, and
competence of both its researchers and society as a whole. AI must contribute to the peace,
safety, welfare, and public interest of society, says the Society, and protect human rights.
The Future Society's Law and Society Initiative emphasises that human beings are equal in rights,
dignity, and freedom to flourish, and are entitled to their human rights. For example, could AI
'judges' in the legal profession be more efficient, equitable, uniform, and cost-saving than
human ones? The Montréal Declaration aims to clarify this somewhat, by pulling together
an ethical framework that promotes internationally recognised human rights in fields affected by
the rollout of AI: 'The principles of the current declaration rest on the common belief that human
beings seek to grow as social beings endowed with sensations, thoughts and feelings, and strive
to fulfil their potential by freely exercising their emotional, moral and intellectual capacities.' In
other words, AI must not only not disrupt human well-being, but it must also proactively
encourage and support it to improve and grow.
Some approach AI from a more specific viewpoint – such as the UNI Global Union, which
strives to protect an individual's right to work. Over half of the work currently done by people
could be done faster and more efficiently in an automated way, says the Union. This identifies a
prominent harm that AI may cause in the realm of human employment. The Union states that we
must ensure that AI serves people and the planet, and both protects and increases fundamental
human rights, human dignity, integrity, freedom, privacy, and cultural and gender diversity.
Emotional harm
AI will interact with and have an impact on the human emotional experience in ways that
have not yet been qualified; humans are susceptible to emotional influence both positively and
negatively, and 'affect' – how emotion and desire influence behaviour – is a core part of intelligence. Affect
varies across cultures, and, given different cultural sensitivities and ways of interacting, affective and
influential AI could begin to influence how people view society itself. The IEEE recommend
various ways to mitigate this risk, including the ability to adapt and update AI norms and values
according to who they are engaging with, and the sensitivities of the culture in which they are
operating.
There are various ways in which AI could inflict emotional harm, including false
intimacy, over-attachment, objectification and commodification of the body, and social or sexual
isolation. These are covered by various of the aforementioned ethical initiatives, including the
Foundation for Responsible Robotics, Partnership on AI, the AI Now institute, the Montréal
Declaration, and the European Robotics Research Network (EURON) Roadmap.
These possible harms come to the fore when considering the development of an
intimate relationship with an AI, for example in the sex industry. Intimate systems, as the IEEE call
them, must not contribute to sexism, racial inequality, or negative body image stereotypes; must be
for positive and therapeutic use; must avoid sexual or psychological manipulation of users
without consent; should not be designed in a way that contributes to user isolation from human
companionship; must be designed in a way that is transparent about the effect they may have on
human relationship dynamics and jealousy; must not foster deviant or criminal behaviour, or
normalise illegal sexual practices such as paedophilia or rape; and must not be marketed
commercially as a person.
Affective AI is also open to the possibility of deceiving and coercing its users –
researchers have defined 'nudging' as the act of an AI subtly modifying behaviour by emotionally
manipulating and influencing its user through the affective system. While this may be
useful in some ways – for example, tackling drug dependency or encouraging healthy eating – it could also trigger behaviours that
worsen human health. Systematic analyses must examine the ethics of affective design prior to
deployment; users must be educated on how to recognise and distinguish between nudges;
users must have an opt-in system for autonomous nudging systems; and vulnerable
populations that cannot give informed consent, such as children, must be subject to additional
protection. In general, stakeholders must discuss the question of whether or not the nudging
design pathway for AI, which lends itself well to selfish or detrimental uses, is an ethical one to
pursue.
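As an illustration of the opt-in requirement described above, the following sketch gates any nudge behind explicitly recorded consent and refuses by default for users who cannot give informed consent. It is a minimal Python sketch; the class names, the age threshold, and the consent field are assumptions made for illustration, not drawn from any initiative's text.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    age: int
    opted_in_to_nudging: bool = False  # opt-in by default: no recorded consent means no nudging

MINIMUM_CONSENT_AGE = 18  # assumption: minors are treated as unable to give informed consent

def may_deliver_nudge(user: User) -> bool:
    """Allow an affective nudge only for users who can consent and have opted in."""
    if user.age < MINIMUM_CONSENT_AGE:
        return False  # vulnerable populations receive additional protection
    return user.opted_in_to_nudging

# Usage: the nudge is suppressed unless explicit, valid consent was recorded.
alice = User("alice", age=34, opted_in_to_nudging=True)
bob = User("bob", age=15, opted_in_to_nudging=True)   # consent recorded, but a minor
print(may_deliver_nudge(alice))  # True
print(may_deliver_nudge(bob))    # False
```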
As raised by the IEEE, nudging may be used by governments and other entities to influence public
behaviour. We must pursue full transparency regarding the beneficiaries of such behaviour, say
the IEEE, due to the potential for misuse. Other issues include technology addiction and emotional
harm due to societal or gender bias.
Accountability and responsibility
The vast majority of initiatives mandate that AI must be auditable, in order to assure that the
designers, manufacturers, owners, and operators of AI are held accountable for the technology or
system's actions, and are thus considered responsible for any potential harm it might cause.
According to the IEEE, this could be achieved by the courts clarifying issues of culpability and
liability during the development and deployment phases where possible, so that those involved
understand their obligations and rights; by designers and developers taking into account the
diversity of existing cultural norms among various user groups; by establishing multi-stakeholder

Sex and Robots

In July of 2017, the Foundation for Responsible Robotics published a report on 'Our Sexual Future
with Robots' (Foundation for Responsible Robotics, 2019). This aimed to present an objective
summary of the various issues and opinions surrounding our intimate association with technology.
Many countries are developing robots for sexual gratification; these largely tend to be pornographic
representations of the human body – and are mostly female. These representations, when
accompanied by human anthropomorphism, may cause robots to be perceived as somewhere
between living and inanimate, especially when sexual gratification is combined with elements of
intimacy, companionship and conversation. Robots may also affect societal perceptions of gender or
body stereotypes, erode human connection and intimacy and lead to greater social isolation.
However, there is also some potential for robots to be of emotional sexual benefit to humans, for
example by helping to reduce sex crime, and to rehabilitate victims of rape or sexual abuse via
inclusion in healing therapies.
digital shadow of their physical self'. Individuals may lack the appropriate tools to control and
cultivate their unique identity and manage the associated ethical implications of the use of their
data. Without clarity and education, many users of AI will remain unaware of the digital footprint
they are creating, and the information they are putting out into the world. Systems must be put in
place for users to control, interact with and access their data, and give them agency over their
digital personas.
The Future of Life Institute's Asilomar Principles agree with the IEEE on the importance of
transparency and privacy across various aspects: failure transparency (if an AI fails, it must be
possible to figure out why), judicial transparency (any AI involved in judicial decision-making must
provide a satisfactory explanation to a human), personal privacy (people must have the right to
access, manage, and control the data AI gather and create), and liberty and privacy (AI must
not unreasonably curtail people's real or perceived liberties). Saidot takes a slightly wider approach
and strongly emphasises the importance of AI that are transparent, accountable, and trustworthy,
where people, organisations, and smart systems are openly connected and collaborative in order to
foster cooperation, progress, and innovation.
All of the initiatives surveyed identify transparency and accountability of AI as an important
issue. This balance underpins many other concerns – such as legal and judicial fairness,
worker compensation and rights, security of data and systems, public trust, and social harm.
Safety and trust
Where AI is used to supplement or
An ‘ethical black box’
replace human decision-making, there is
consensus that it must be safe, trustworthy, and Ini琀椀a琀椀ves including the UNI Global Union
reliable, and act with integrity. and IEEE suggest equipping AI systems with an
‘ethical black box’: a device that can record
The IEEE propose cultivating a 'safety informa琀椀on about said system to ensure its
mindset' among researchers, to 'identify and accountability and transparency, but that also
pre-empt unintended and unanticipated includes clear data on the ethical considera琀椀on
behaviors in their systems' and to develop built into the system from the beginning (UNI
systems which are 'safe by design'; setting up Global Union, n.d.).
review boards at institutions as a resource and
means of evaluating projects and their progress;
encouraging a community of sharing, to
spread the word on safety-related developments, research, and tools. The Future of Life Institute's
Asilomar principles indicate that all involved in developing and deploying AI should be mission-
led, adopting the norm that AI 'should only be developed in the service of widely shared ethical
ideals, and for the benefit of all humanity rather than one state or organisation'. This approach
would build public trust in AI, something that is key to its successful integration into society.
The Japanese Society for AI proposes that AI should act with integrity at all times, and that AI and
society should earnestly seek to learn from and communicate with one another. 'Consistent and
effective communication' will strengthen mutual understanding, says the Society, and '[contribute]
to the overall peace and happiness of mankind'. The Partnership on AI agrees, and strives to ensure
AI is trustworthy and to create a culture of cooperation, trust, and openness among AI scientists and
engineers. The Institute for Ethical AI & Machine Learning also emphasises the importance of
dialogue; it ties together the issues of trust and privacy in its eight core tenets, mandating that
AI technologists communicate with stakeholders about the processes and data involved to build
trust and spread understanding throughout society.
Social harm and social justice: inclusivity, bias, and discrimination
AI development requires a diversity of viewpoints. Several organisations establish that AI systems
must be in line with community viewpoints and align with social norms, values, ethics,
and preferences, that biases and assumptions must not be built into data or systems, and that AI
should be aligned with public values, goals, and behaviours, respecting cultural diversity.
Initiatives also argue that all should have access to the benefits of AI, and it should work for the
common good. In other words, developers and implementers of AI have a social responsibility
to embed the right values into AI and ensure that they do not cause or exacerbate any existing
or future harm to any part of society.
The IEEE suggest first identifying social and moral norms of the specific community in
which an AI will be deployed, and those around the specific task or service it will offer; designing
AI with the idea of 'norm updating' in mind, given that norms are not static and AI must change
dynamically and transparently alongside culture; and identifying the ways in which people
resolve norm conflicts, and equipping AI with a system in which to do so in a similar and
transparent way. This should be done collaboratively and across diverse research efforts, with
care taken to evaluate and assess potential biases that disadvantage specific social groups.
Several initiatives – such as AI4All and the AI Now Institute – explicitly advocate for fair, diverse,
equitable, and non-discriminatory inclusion in AI at all stages, with a focus on support for
under-represented groups. Currently, AI-related degree programmes do not equip aspiring
developers and designers with an appropriate knowledge of ethics, and corporate environments
and business practices are not ethically empowering, lacking senior-ethicist roles that could
steer and support value-based innovation.
On a global scale, the inequality gap between developed and developing nations is
significant. While AI may have considerable usefulness in a humanitarian sense, it must not
widen this gap or exacerbate poverty, illiteracy, gender and ethnic inequality, or
disproportionately disrupt employment and labour. The IEEE suggests taking action and
investing to mitigate the inequality gap; integrating corporate social responsibility (CSR) into
development and marketing; developing transparent power structures; facilitating and sharing
robotics and AI knowledge and research; and generally keeping AI in line with the UN
Sustainable Development Goals. AI technology should be made equally available worldwide
via global standardisation and open-source software, and interdisciplinary discussion should be
held on effective AI education and training.
A set of ethical guidelines published by the Japanese Society for AI emphasises, among other
considerations, the importance of a) contribution to humanity, and b) social responsibility. AI must
act in the public interest, respect cultural diversity, and always be used in a fair and equal manner.
The Foundation for Responsible Robotics includes a Commitment to Diversity in its push for
responsible AI; the Partnership on AI cautions about the 'serious blind spots' of ignoring the
presence of biases and assumptions hidden within data; Saidot aims to ensure that, although our
social values are now 'increasingly mediated by algorithms', AI remains human-centric; the
Future of Life Institute highlights a need for AI imbued with human values of cultural diversity and
human rights; and the Institute for Ethical AI & Machine Learning includes 'bias evaluation' for
monitoring bias in AI development and production. The dangers of human bias and assumption
are a frequently identified risk that will accompany the ongoing development of AI.
Financial harm: Economic opportunity and employment
AI may disrupt the economy, leading to job losses or work disruption for many
people, and will have an impact on workers' rights and displacement strategies as many
types of work become automated.
Additionally, rather than just focusing on the number of jobs lost or gained, traditional
employment structures will need to be changed to mitigate the effects of automation and take into
account the complexities of employment. Technological change is happening too fast for the
traditional workforce to keep pace without retraining. Workers must train for adaptability and new
skill sets, says the IEEE, with fallback strategies put in place for those who cannot be retrained,
and training programmes implemented at the level of high school or earlier to increase
access to future employment. The UNI Global Union call for multi-stakeholder ethical AI
governance bodies on global and regional levels, bringing together designers, manufacturers,
developers, researchers, trade unions, lawyers, CSOs, owners, and employers. AI must benefit and
empower people broadly and equally, with policies put in place to bridge the economic,
technological, and social digital divides, and ensure a just transition with support for fundamental
freedoms and rights.
The AI Now Institute works with diverse stakeholder groups to better understand the
implications that AI will have for labour and work, including automation and early-stage
integration of AI changing the nature of employment and working conditions in various sectors.
AI in the workplace will affect far more than workers' finances, and may offer various
positive opportunities. As laid out by the IEEE, AI may offer potential solutions to workplace bias
– if it is developed with this in mind, as mentioned above – and reveal deficiencies in
product development, allowing proactive improvement in the design phase.
'RRI is a transparent, interactive process by which societal actors and innovators become mutually
responsive to each other with a view to the (ethical) acceptability, sustainability and societal
desirability of the innovation process and its marketable products (in order to allow a proper
embedding of scientific and technological advances in our society).'

Responsible research and innovation (RRI)

RRI is a growing area, especially in the EU, that draws from classical ethics to provide tools with
which to address ethical concerns from the very outset of a project. When incorporated into a
project's design phase, RRI increases the chances of design being both relevant and strong in terms
of ethical alignment. Many research funders and organisations include RRI in their mission
statements and within their research and innovation efforts (IEEE, 2019).

Lawfulness and justice

Several initiatives address the need for AI to be lawful, equitable, fair, just and subject to
appropriate, pre-emptive governance and regulation. The many complex ethical problems
surrounding AI translate directly and indirectly into discrete legal challenges.
The IEEE conclude that AI should not be granted any level of 'personhood', and
that, while development, design and distribution of AI should fully comply with all applicable
international and domestic law, there is much work to be done in defining and implementing the
relevant legislation. Legal issues fall into a few categories: legal status, governmental use, legal
accountability for harm, and transparency, accountability, and verifiability. The IEEE suggest that
AI should remain subject to the applicable regimes of property law; that stakeholders should
identify the types of decisions that should never be delegated to AI, and ensure effective human
control over those decisions via rules and standards; that existing laws should be scrutinised and
reviewed for mechanisms that could practically give AI legal autonomy; and that manufacturers and
operators should be required to comply with the applicable laws of all jurisdictions in which an
AI could operate. They also recommend that governments reassess the legal status for AI as
they become more sophisticated, and work closely with regulators, societal and industry actors and
other stakeholders to ensure that the interests of humanity – and not the development of
systems themselves – remain the guiding principle.
Control and the ethical use – or misuse – of AI
With more sophisticated and complex new AI come more sophisticated and complex
possibilities for misuse. Personal data may be used maliciously or for profit, systems are at risk of
hacking, and technology may be used exploitatively. This ties into informed use and public
awareness: as we enter a new age of AI, with new systems and technology emerging that have
never before been implemented, citizens must be kept up to date on the risks that may come
with either the use or misuse of these.
The IEEE suggests new ways of educating the public on ethics and security issues, for example a
'data privacy' warning on smart devices that collect personal data; delivering this education in
scalable, effective ways; and educating government, lawmakers, and enforcement agencies
surrounding these issues, so they can work collaboratively with citizens – in a similar way to police
officers providing safety lectures in schools – and avoid fear and confusion.

Personhood and AI

The issue of whether or not an AI deserves 'personhood' ties into debates surrounding
accountability, autonomy, and responsibility: is it the AI itself that is responsible for its actions and
consequences, or the person(s) who built them? This concept, rather than allowing robots to be
considered people in a human sense, would place robots on the same legal level as corporations. It
is worth noting that corporations' legal personhood can currently shield the natural persons behind
them from the implications of the law. However, the UNI Global Union asserts that legal
responsibility lies with the creator, not the robot itself, and calls for a ban on attributing
responsibility to robots.

Other issues include manipulation of behaviour and data. Humans must retain control over AI and
oppose subversion. Most initiatives reviewed flag this as a potential issue facing AI as it develops,
and flag that AI must behave in a way that is predictable and reliable, with appropriate means for
redress, and be subject to validation and testing. AI must also work for the good of humankind,
must not exploit people, and be regularly reviewed by human experts.
Environmental harm and sustainability
The production, management, and implementation of AI must be sustainable and
avoid environmental harm. This also ties in to the concept of well-being; a key recognised aspect of
well-being is environmental, concerning the air, biodiversity, climate change, soil and water quality,
and so on. The IEEE state that AI must do no harm to Earth's natural systems or exacerbate their
degradation, and contribute to realising sustainable stewardship, preservation, and/or the
restoration of Earth's natural systems. The UNI Global Union state that AI must put people and the
planet first, striving to protect and even enhance our planet's biodiversity and ecosystems. The
Foundation for Responsible Robotics identifies a number of potential uses for AI in coming years,
from agricultural and farming roles to monitoring of climate change and protection of endangered
species. These require responsible, informed policies to govern AI and robotics, say the
Foundation, to mitigate risk and support ongoing innovation and development.
Informed use: public education and awareness
Members of the public must be educated on the use, misuse, and potential harms of AI,
via civic participation, communication, and dialogue with the public. The issue of consent – and
how much an individual may reasonably and knowingly give – is core to this. For example, the
IEEE raise several instances in which consent is less clear-cut than might be ethical: what if
one's personal data are used to make inferences they are uncomfortable with or unaware of?
Can consent be given when a system does not directly interact with an individual? This latter
issue has been named the 'Internet of Other People's Things'. Corporate environments also
raise the issue of power imbalance; many employees have not given clear consent to how their
personal data – including data on health – are used by their employer. To remedy this, the IEEE
suggest employee data impact assessments to deal with these corporate nuances and ensure that
no data is collected without employee consent. Data must also be only gathered and used for
specific, explicitly stated, legitimate purposes, kept up-to-date, lawfully processed, and not
kept for a longer period than necessary. In cases where subjects do not have a direct relationship
with the system gathering data, consent must be dynamic, and the system designed to interpret
data preferences and limitations on collection and use.
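To make the purpose-limitation and retention rules above concrete, here is a small, hypothetical Python sketch that flags records for deletion; the purposes, retention periods, and field names are invented for illustration and are not taken from the IEEE text.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class DataRecord:
    subject_id: str
    purpose: str            # explicit, legitimate purpose stated at collection time
    collected_at: datetime
    consent_given: bool

# Assumed retention limits per stated purpose (illustrative values only).
RETENTION_LIMITS = {
    "appointment_scheduling": timedelta(days=90),
    "service_improvement": timedelta(days=365),
}

def records_to_delete(records: list, now: datetime) -> list:
    """Flag records collected without consent, for an unrecognised purpose,
    or kept longer than their stated purpose requires."""
    stale = []
    for r in records:
        limit = RETENTION_LIMITS.get(r.purpose)
        if not r.consent_given or limit is None or now - r.collected_at > limit:
            stale.append(r)
    return stale

# Usage: a consented record past its retention window is flagged for deletion.
old = DataRecord("p1", "appointment_scheduling", datetime(2023, 1, 1), consent_given=True)
print(len(records_to_delete([old], datetime(2024, 1, 1))))  # 1
```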
To increase awareness and understanding of AI, undergraduate and postgraduate students
must be educated on AI and its relationship to sustainable human development, say the IEEE.
Specifically, curriculum and core competencies should be defined and prepared; degree
programmes focusing on engineering in international development and humanitarian relief
should be exposed to the potential of AI applications; and awareness should be increased of the
opportunities and risks faced by Lower Middle Income Countries in the implementation of AI in
humanitarian efforts across the globe.
Many initiatives focus on this, including the Foundation for Responsible Robotics, Partnership on AI,
Japanese Society for AI Ethical Guidelines, Future Society and AI Now Institute; these and others maintain
that clear, open and transparent dialogue between AI and society is key to the creation of
understanding, acceptance, and trust.
Existential risk
According to the Future of Life Institute, the main existential issue surrounding AI
'is not malevolence, but competence' – AI will continually learn as they interact with others and
gather data, leading them to gain intelligence over time and potentially develop aims that are at odds
with those of humans.
'You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a
hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the
ants. A key goal of AI safety research is to never place humanity in the position of those ants.'
AI also poses a threat in the form of autonomous weapons systems (AWS). As these are designed
to cause physical harm, they raise numerous ethical quandaries. The IEEE lays out a number of
recommendations to ensure that AWS are subject to meaningful human control: they suggest
audit trails to guarantee accountability and control; adaptive learning systems that can explain their
reasoning in a transparent, understandable way; that human operators of autonomous systems
are identifiable, held responsible, and aware of the implications of their work; that autonomous
behaviour is predictable; and that professional codes of ethics are developed to address the
development of autonomous systems – especially those intended to cause harm. The pursuit of
AWS may lead to an international arms race and geopolitical instability; as such, the IEEE maintain
that systems designed to act outside the boundaries of human control or judgement are unethical
and violate fundamental human rights and legal accountability for weapons use.
Given their potential to seriously harm society, these concerns must be controlled for and
regulated pre-emptively, says the Foundation for Responsible Robotics. Other initiatives that cover this
risk explicitly include the UNI Global Union and the Future of Life Institute, the latter of which cautions
against an arms race in lethal autonomous weapons, and calls for planning and mitigation
efforts for possible longer-term risks. We must avoid strong assumptions on the upper limits of
future AI capabilities, assert the FLI's Asilomar Principles, and recognise that advanced AI
represents a profound change in the history of life on Earth.
In the near future, robots will help in the diagnosis of patients; the performance of simple
surgeries; and the monitoring of patients' health and mental wellness in short and long-term
care facilities. They may also provide basic physical interventions, work as companion carers,
remind patients to take their medications, or help patients with their mobility.
In some fundamental areas of medicine, such as medical image diagnostics, machine
learning has been proven to match or even surpass our ability to detect illnesses.
Embodied AI, or robots, are already involved in a number of functions that affect
people's physical safety. In June 2005, a surgical robot at a hospital in Philadelphia
malfunctioned during prostate surgery, injuring the patient. In June 2015, a worker at a
Volkswagen plant in Germany was crushed to death by a robot on the production line. In June
2016, a Tesla car operating in autopilot mode collided with a large truck, killing the car's
passenger.
As robots become more prevalent, the potential for future harm will increase,
particularly in the case of driverless cars, assistive robots and drones, which will face
decisions that have real consequences for human safety and well-being. The stakes are much
higher with embodied AI than with mere software, as robots have moving parts in physical
space. Any robot with moving physical parts poses a risk, especially to vulnerable people
such as children and the elderly.
Safety
It is vital that robots should not harm people, and that they should be safe to work
with. This point is especially important in areas of healthcare that deal with vulnerable
people, such as the ill, elderly, and children.
Digital healthcare technologies offer the potential to improve accuracy of diagnosis
and treatments, but to thoroughly establish a technology's long-term safety and performance,
investment in clinical trials is required. The debilitating side-effects of vaginal mesh implants, and
the continued legal battles against manufacturers, stand as a warning against
shortcutting testing, despite the delays this introduces to healthcare innovation. Investment in
clinical trials will be essential to safely implement the healthcare innovations that AI systems
offer.
User understanding
The correct application of AI by a healthcare professional is important to ensure
patient safety. For instance, the precise surgical robotic assistant 'the da Vinci' has proven a
useful tool in minimising surgical recovery times, but it requires a trained operator.
A shift in the balance of skills in the medical workforce is required, and healthcare providers
are preparing to develop the digital literacy of their staff over the next two decades. With
genomics and machine learning becoming embedded in diagnoses and medical decision-
making, healthcare professionals need to become digitally literate to understand each
technological tool and use it appropriately. It is important for users to trust the AI presented to
them, but to be aware of each tool's strengths and weaknesses, recognising when validation is
necessary. For instance, a generally accurate machine learning study to predict the risk of
complications in patients with pneumonia erroneously considered those with asthma to be at
low risk. It reached this conclusion because asthmatic pneumonia patients were taken directly
to intensive care, and this higher-level care circumvented complications. The inaccurate
recommendation from the algorithm was thus overruled.
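The pneumonia example above is a textbook case of a confounder hidden in training data: asthmatic patients appeared low-risk only because they already received intensive care. The toy simulation below, with invented numbers and no real clinical data, shows how labels recorded under such a care policy would teach a naive model exactly that mistaken rule.

```python
import random

random.seed(0)

def observed_complication(asthma: bool) -> bool:
    """Simulate recorded outcomes: asthmatic pneumonia patients are routed straight
    to intensive care, so their *observed* complication rate is low even though
    their underlying risk is higher."""
    underlying_risk = 0.30 if asthma else 0.10
    if asthma:                      # the hidden confounder: extra care, not lower risk
        underlying_risk *= 0.2      # intensive care prevents most complications
    return random.random() < underlying_risk

patients = [{"asthma": i % 2 == 0} for i in range(10_000)]
for p in patients:
    p["complication"] = observed_complication(p["asthma"])

def rate(group):
    return sum(p["complication"] for p in group) / len(group)

asthmatic = [p for p in patients if p["asthma"]]
others = [p for p in patients if not p["asthma"]]
print(f"observed complication rate, asthma:    {rate(asthmatic):.1%}")  # roughly 6%
print(f"observed complication rate, no asthma: {rate(others):.1%}")     # roughly 10%
# A model fitted to these labels would score asthmatic patients as *lower* risk --
# the inaccurate recommendation that clinicians had to overrule.
```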
To ensure the most accurate diagnoses are presented to people of all ethnicities,
algorithmic biases must be identified and understood. Even with a clear understanding of
model design this is a difficult task because of the aforementioned 'black box' nature of
machine learning. However, various codes of conduct and initiatives have been introduced to
spot biases earlier. For instance, the Partnership on AI, an ethics-focused industry group, was
launched by Google, Facebook, Amazon, IBM and Microsoft — although, worryingly, this
board is not very diverse.
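One concrete form of the 'bias evaluation' these initiatives call for is to disaggregate a model's error rates by group before deployment. The following generic Python sketch (group labels and sample data are invented) computes per-group false-negative rates, the kind of check that can reveal whether a diagnostic model misses illness more often for some groups than others.

```python
from collections import defaultdict

def false_negative_rates(records):
    """records: iterable of (group, true_label, predicted_label) with binary labels.
    Returns the per-group false-negative rate, a common disaggregated bias check."""
    missed = defaultdict(int)
    positives = defaultdict(int)
    for group, truth, prediction in records:
        if truth == 1:
            positives[group] += 1
            if prediction == 0:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

# Usage: flag the model for review if any group's miss rate is markedly higher.
sample = [("A", 1, 1), ("A", 1, 0), ("B", 1, 0), ("B", 1, 0), ("B", 1, 1)]
print(false_negative_rates(sample))  # {'A': 0.5, 'B': 0.666...}
```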
Equality of access
Digital health technologies, such as fitness trackers and insulin pumps, provide
patients with the opportunity to actively participate in their own healthcare. Some hope that
these technologies will help to redress health inequalities caused by poor education,
unemployment, and so on. However, there is a risk that individuals who cannot afford the
necessary technologies or do not have the required 'digital literacy' will be excluded, so
reinforcing existing health inequalities.
The UK's National Health Service's Widening Digital Participation programme is one
example of how a healthcare service has tried to reduce health inequalities, by helping
millions of people in the UK who lack the skills to access digital health services. Programmes
such as this will be critical in ensuring equality of access to healthcare, but also in increasing
the data from minority groups needed to prevent the biases in healthcare algorithms discussed
above.
Quality of care
'There is remarkable potential for digital healthcare technologies to improve accuracy
of diagnoses and treatments, the efficiency of care, and workflow for healthcare
professionals'.
If introduced with careful thought and guidelines, companion and care robots, for example,
could improve the lives of the elderly, reducing their dependence, and creating more
opportunities for social interaction. Imagine a home-care robot that could: remind you to take
your medications; fetch items for you if you are too tired or are already in bed; perform
simple cleaning tasks; and help you stay in contact with your family, friends and healthcare
provider via video link.
However, questions have been raised over whether a 'cold', emotionless robot can
really substitute for a human's empathetic touch. This is particularly the case in long-term
caring of vulnerable and often lonely populations, who derive basic companionship from
caregivers. Human interaction is particularly important for older people, as research suggests
that an extensive social network offers protection against dementia. At present, robots are far
from being real companions. Although they can interact with people, and even show
simulated emotions, their conversational ability is still extremely limited, and they are no
replacement for human love and attention. Some might go as far as saying that depriving the
elderly of human contact is unethical, and even a form of cruelty.
It's vital that robots don't make elderly people feel like objects, or leave them with even less
control over their lives than when they were dependent on humans — otherwise they may
feel like they are 'lumps of dead matter: to be pushed, lifted, pumped or drained, without
proper reference to the fact that they are sentient beings'.
To ensure their charge's safety, robots might sometimes need to act as supervisors,
restricting their freedoms. For example, a robot could be trained to intervene if the cooker
was left on, or the bath was overflowing. Robots might even need to restrain elderly people
from carrying out potentially dangerous actions, such as climbing up on a chair to get
something from a cupboard. Smart homes with sensors could be used to detect that a person
is attempting to leave their room, and lock the door, or call staff — but in so doing the elderly
person would be imprisoned.
Moral agency
'There's very exciting work where the brain can be used to control things, like maybe
they've lost the use of an arm…where I think the real concerns lie is with things like
behavioural targeting: going straight to the hippocampus and people pressing 'consent', like
we do now, for data access'. (John Havens)
Robots do not have the capacity for ethical reflection or a moral basis for decision-
making, and thus humans must currently hold ultimate control over any decision-making. An
example of ethical reasoning in a robot can be found in the 2004 dystopian film 'I, Robot',
where Will Smith's character disagreed with how the robots of the fictional time used cold
logic to save his life over that of a child. If more automated healthcare is pursued, then the
question of moral agency will require closer attention. Ethical reasoning is being built into
robots, but moral responsibility is about more than the application of ethics — and it is
unclear whether robots of the future will be able to handle the complex moral issues in
healthcare .
Trust
Larosa and Danks write that AI may affect human-human interactions and
relationships within the healthcare domain, particularly that between patient and doctor, and
potentially disrupt the trust we place in our doctor.
'Psychology research shows people mistrust those who make moral decisions by
calculating costs and benefits — like computers do'. Our distrust of robots may also come
from the number of robots running amok in dystopian science fiction. News stories of
computer mistakes — for instance, of an image-identifying algorithm mistaking a turtle for a
gun — alongside worries over the unknown, privacy and safety are all reasons for resistance
against the uptake of AI.
Firstly, doctors are explicitly certified and licensed to practice medicine, and their
license indicates that they have specific skills, knowledge, and values such as 'do no harm'. If
a robot replaces a doctor for a particular treatment or diagnostic task, this could potentially
threaten patient-doctor trust, as the patient now needs to know whether the system is
appropriately approved or 'licensed' for the functions it performs.
Secondly, patients trust doctors because they view them as paragons of expertise. If
doctors were seen as 'mere users' of the AI, we would expect their role to be downgraded in
the public's eye, undermining trust.
Thirdly, a patient's experiences with their doctor are a significant driver of trust. If a
patient has an open line of communication with their doctor, and engages in conversation
about care and treatment, then the patient will trust the doctor. Inversely, if the doctor
repeatedly ignores the patient's wishes, then these actions will have a negative impact on
trust. Introducing AI into this dynamic could increase trust — if the AI reduced the likelihood
of misdiagnosis, for example, or improved patient care. However, AI could also decrease trust
if the doctor delegated too much diagnostic or decision-making authority to the AI,
undercutting the position of the doctor as an authority on medical matters.
As the body of evidence grows to support the therapeutic benefits of each
technological approach, and as more robotic interacting systems enter the marketplace,
trust in robots is likely to increase. This has already happened for robotic healthcare systems
such as the da Vinci surgical robotic assistant.
Employment replacement
As in other industries, there is a fear that emerging technologies may threaten
employment; for instance, there are carebots now available that can perform up to a third of
nurses' work. Despite these fears, the NHS' Topol Review concluded that 'these technologies
will not replace healthcare professionals but will enhance them ('augment them'), giving them
more time to care for patients'. The review also outlined how the UK's NHS will nurture a
learning environment to ensure digitally capable employees.
2.5 Autonomous Vehicles
Autonomous Vehicles (AVs) are vehicles that are capable of sensing their
environment and operating with little to no input from a human driver. While the idea of self-
driving cars has been around since at least the 1920s, it is only in recent years that technology
has developed to a point where AVs are appearing on public roads.
According to automotive standardisation body SAE International (2018), there are six
levels of driving automation:
1 – Hands on: The driver and automated system share control of the vehicle. For example, the
automated system may control engine power to maintain a set speed (e.g. Cruise Control), engine
and brake power to maintain and vary speed (e.g. Adaptive Cruise Control), or steering (e.g.
Parking Assistance). The driver must be ready to retake full control at any time.

4 – Minds off: As level 3, but no driver attention is ever required for safety, meaning the driver can
safely go to sleep or leave the driver's seat.
Some of the lower levels of automation are already well-established and on the market, while
higher level AVs are undergoing development and testing. However, as we transition up the
levels and put more responsibility on the automated system than the human driver, a number of
ethical issues emerge.
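Viewed as a data structure, the SAE scheme is simply an ordered scale, which is how driver-monitoring logic can treat it. The sketch below is an illustrative encoding only: levels 1 and 4 follow the table above, the remaining level names use the commonly quoted SAE shorthand, and the attention rule is a deliberate simplification.

```python
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0
    HANDS_ON = 1                 # driver and system share control (see table above)
    HANDS_OFF = 2
    EYES_OFF = 3
    MINDS_OFF = 4                # no driver attention ever required for safety
    STEERING_WHEEL_OPTIONAL = 5

def driver_attention_required(level: SAELevel) -> bool:
    """Simplified rule: below level 4, a human must remain available to take over."""
    return level < SAELevel.MINDS_OFF

print(driver_attention_required(SAELevel.HANDS_ON))   # True
print(driver_attention_required(SAELevel.MINDS_OFF))  # False
```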
Societal and Ethical Impacts of AVs
'We cannot build these tools saying, 'we know that humans act a certain way, we're going to kill
them – here's what to do'.' (John Havens)
Public safety and the ethics of testing on public roads
At present, cars with 'assisted driving' functions are legal in most countries. Notably,
some Tesla models have an Autopilot function, which provides level 2 automation (Tesla, n.d.).
Drivers are legally allowed to use assisted driving functions on public roads provided they
remain in charge of the vehicle at all times. However, many of these assisted driving functions
have not yet been subject to independent safety certification, and as such may pose a risk to
drivers and other road users. In Germany, a report published by the Ethics Commission on
Automated Driving highlights that it is the public sector's responsibility to guarantee the safety
of AV systems introduced and licensed on public roads, and recommends that all AV driving
systems be subject to official licensing and monitoring.
In addition, it has been suggested that the AV industry is entering its most dangerous
phase, with cars being not yet fully autonomous but human operators not being fully engaged.
The risks this poses have been brought to widespread attention following the first pedestrian fatality
involving an autonomous car. The tragedy took place in Arizona, USA, in March 2018, when a
level 3 AV being tested by Uber collided with 49-year-old Elaine Herzberg as she was walking
her bike across a street one night. It was determined that Uber was 'not criminally liable' by
prosecutors, and the US National Transportation Safety Board's preliminary report, which drew no
conclusions about the cause, said that all elements of the self-driving system were operating
normally at the time of the crash. Uber said that the driver is relied upon to intervene and take
action in situations requiring emergency braking – leading some commentators to call out
the misleading communication to consumers around the terms 'self-driving cars' and 'autopilot'.
The accident also caused some to condemn the practice of testing AV systems on public roads as
dangerous and unethical, and led Uber to temporarily suspend its self-driving programme.
This issue of human safety — of both public and passenger — is emerging as a key issue
concerning self-driving cars. Major companies — Nissan, Toyota, Tesla, Uber, Volkswagen —
are developing autonomous vehicles capable of operating in complex, unpredictable
environments without direct human control, and capable of learning, inferring, planning and
making decisions.
Self-driving vehicles could offer multiple benefits: statistics show you're almost certainly
safer in a car driven by a computer than one driven by a human. They could also ease
congestion in cities, reduce pollution, reduce travel and commute times, and enable people to
use their time more productively. However, they won't mean the end of road traffic accidents. Even
if a self-driving car has the best software and hardware available, there is still a collision risk. An
autonomous car could be surprised, say by a child emerging from behind a parked vehicle, and there
is always the question of how such cars should be programmed when they must decide whose
safety to prioritise.
Driverless cars may also have to choose between the safety of passengers and other road
users. Say that a car travels around a corner where a group of school children are playing; there
is not enough time to stop, and the only way the car can avoid hitting the children is to swerve
into a brick wall — endangering the passenger. Whose safety should the car prioritise: the
children's, or the passenger's?
In January 2016, 23-year-old Gao Yaning died when his Tesla Model S crashed into the
back of a road-sweeping truck on a highway in Hebei, China. The family believe
Autopilot was engaged when the accident occurred and accuse Tesla of
exaggerating the system's capabilities. Tesla state that the damage to the vehicle made
it impossible to determine whether Autopilot was engaged and, if so, whether it
malfunctioned. A civil case into the crash is ongoing, with a third-party appraiser
reviewing data from the vehicle.
In May 2016, 40-year-old Joshua Brown died when his Tesla Model S collided with a
truck while Autopilot was engaged in Florida, USA. An investigation by the National Highway
Traffic Safety Administration found that the driver, and not Tesla, was at fault.
However, the National Transportation Safety Board later determined that
both Autopilot and over-reliance by the motorist on Tesla's driving aids were to blame.
In March 2018, Wei Huang was killed when his Tesla Model X crashed into a highway
safety barrier in California, USA. According to Tesla, the severity of the accident
was 'unprecedented'. The National Transportation Safety Board later published a
report attributing the crash to an Autopilot navigation mistake. Tesla is now being sued
by the victim's family.
Unfortunately, efforts to investigate these accidents have been stymied by the fact that standards,
processes, and regulatory frameworks for investigating accidents involving AVs have not yet
been developed or adopted. In addition, the proprietary data logging systems currently installed
in AVs mean that accident investigators rely heavily on the cooperation of manufacturers to provide
critical data on the events leading up to an accident.
One solution is to fit all future AVs with industry standard event data recorders — a so-called
'ethical black box' — that independent accident investigators could access. This would mirror
the model already in place for air accident investigations.
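A minimal sketch of what such an event data recorder might log is given below. The fields (timestamp, control mode, speed, event type) are assumptions chosen to mirror the accountability goals described, not an industry-standard format.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class BlackBoxEvent:
    timestamp: float        # seconds since epoch
    control_mode: str       # e.g. "autonomous" or "human"
    speed_kmh: float
    event: str              # e.g. "disengagement", "emergency_brake", "sensor_fault"
    detail: str = ""

class EthicalBlackBox:
    """Append-only recorder whose log an independent investigator could read back."""
    def __init__(self, path: str):
        self.path = path

    def record(self, event: BlackBoxEvent) -> None:
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(event)) + "\n")

# Usage: log a handover from the automated system to the human driver.
recorder = EthicalBlackBox("av_blackbox.log")
recorder.record(BlackBoxEvent(time.time(), "autonomous", 42.0,
                              "disengagement", "driver took control"))
```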
Near-miss accidents
At present, there is no system in place for the systematic collection of near-miss accidents.
While it is possible that manufacturers are collecting this data already, they are not under any
obligation to do so — or to share the data. The only exception at the moment is the US state of
California, which requires all companies that are actively testing AVs on public roads to
disclose the frequency at which human drivers were forced to take control of the vehicle for
safety reasons (known as 'disengagement').
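The disengagement figure California requires reduces to a simple rate per distance driven; the short calculation below uses invented figures purely for illustration.

```python
def disengagements_per_1000_miles(disengagement_count: int, miles_driven: float) -> float:
    """Normalise reported takeovers by distance so different fleets can be compared."""
    return 1000 * disengagement_count / miles_driven

# e.g. a fleet reporting 120 safety-related takeovers over 450,000 test miles
print(round(disengagements_per_1000_miles(120, 450_000), 3))  # 0.267
```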
Without access to this type of data, policymakers cannot account for the frequency and
significance of near-miss accidents, or assess the steps taken by manufacturers as a result of these
near-misses. Again, lessons could be learned from the model followed in air accident investigations,
in which all near misses are thoroughly logged and independently investigated.
Policymakers require comprehensive statistics on all accidents and near-misses in order to inform
regulation.
Data privacy
It is becoming clear that manufacturers collect significant amounts of data from AVs. As
these vehicles become increasingly common on our roads, the question emerges: to what extent
are these data compromising the privacy and data protection rights of drivers and passengers?
Already, data management and privacy issues have appeared, with some raising concerns
about the potential misuse of AV data for advertising purposes. Tesla have also come under fire for
the unethical use of AV data logs. In an investigation by The Guardian, the newspaper found
multiple instances where the company shared drivers' private data with the media following crashes,
without their permission, to prove that its technology was not responsible. At the same time,
Tesla does not allow customers to see their own data logs.
One solution, proposed by the German Ethics Commission on Automated Driving, is to
ensure that all AV drivers are given full data sovereignty (Ethics Commission, 2017). This
would allow them to control how their data is used.
Employment
The growth of AVs is likely to put certain jobs — most pertinently bus, taxi, and truck
drivers — at risk. In the medium term, truck drivers face the greatest risk, as long-distance trucks are
at the forefront of AV technology. In 2016, the first commercial delivery of beer was made
using a self-driving truck, in a journey covering 120 miles and involving no human action. Last year
saw the first fully driverless trip in a self-driving truck, with the AV travelling seven miles without a
single human on board.
Looking further forward, bus drivers are also likely to lose jobs as more and more buses
become driverless. Numerous cities across the world have announced plans to introduce self-driving
shuttles in the future, including Edinburgh, New York and Singapore. In some places, this vision has
already become a reality; the Las Vegas shuttle famously got off to a bumpy start when it was
involved in a collision on its first day of operation, and tourists in the small Swiss town of
Neuhausen am Rheinfall can now hop on a self-driving bus to visit the nearby waterfalls. In the
medium term, driverless buses will likely be limited to routes that travel along 100% dedicated bus
lanes. Nonetheless, the advance of self-driving shuttles has already created tensions with
organised labour and city officials in the USA. Last year, the Transport Workers Union of
America formed a coalition in an attempt to stop autonomous buses from hitting the streets of
Ohio.
Fully autonomous taxis will likely only become realistic in the long term, once AV
technology has been fully tested and proven at levels 4 and 5. Nonetheless, with plans to introduce
self-driving taxis in London by 2021 and an automated taxi service already available in Arizona,
USA, it is easy to see why taxi drivers are uneasy.
The quality of urban environments
In the long-term, AVs have the potential to reshape our urban environment. Some of
these changes may have negative consequences for pedestrians, cyclists and locals. As driving
becomes more automated, there will likely be a need for additional infrastructure (e.g. AV-only
lanes). There may also be more far-reaching effects for urban planning, with automation shaping
the planning of everything from traffic congestion and parking to green spaces and lobbies. The
rollout of AVs will also require that 5G network coverage is extended significantly. Options raised
for handling such programming decisions include asking users directly for their input, or informing
the user ahead of time how the car is programmed to behave in certain situations.
2.6 Warfare and weaponization
Although partially autonomous and intelligent systems have been used in military
technology since at least the Second World War, advances in machine learning and AI signify
a turning point in the use of automation in warfare.
AI is already sufficiently advanced and sophisticated to be used in areas such as
satellite imagery analysis and cyber defence, but the true scope of applications has yet to be
fully realised. A recent report concludes that AI technology has the potential to transform
warfare to the same, or perhaps even a greater, extent than the advent of nuclear weapons,
aircraft, computers and biotechnology (Allen and Chan, 2017). Some key ways in which AI
will impact militaries are outlined below.
Lethal autonomous weapons
As automatic and autonomous systems have become more capable, militaries have
become more willing to delegate authority to them. This is likely to continue with the
widespread adoption of AI, leading to an AI-inspired arms race. The Russian Military
Industrial Committee has already approved an aggressive plan whereby 30% of Russian
combat power will consist of entirely remote-controlled and autonomous robotic platforms by
2030. Other countries are likely to set similar goals. While the United States Department of
Defense has enacted restrictions on the use of autonomous and semi-autonomous systems
wielding lethal force, other countries and non-state actors may not exercise such self-
restraint.
Drone technologies
Standard military aircraft can cost more than US$100 million per unit; a high-quality
quadcopter Unmanned Aerial Vehicle, however, currently costs roughly US$1,000, meaning
that for the price of a single high-end aircraft (US$100,000,000 ÷ US$1,000 per drone), a military
could acquire roughly 100,000 drones.
Although current commercial drones have limited range, in the future they could have similar
ranges to ballistic missiles, thus rendering existing platforms obsolete.
Robotic assassination
Widespread availability of low-cost, highly-capable, lethal, and autonomous robots
could make targeted assassination more widespread and more difficult to attribute. Automatic
sniping robots could assassinate targets from afar.
Mobile-robotic-Improvised Explosive Devices
As commercial robotic and autonomous vehicle technologies become widespread,
some groups will leverage this to make more advanced Improvised Explosive Devices
(IEDs). Currently, the technological capability to rapidly deliver explosives to a precise target
from many miles away is restricted to powerful nation states. However, if long distance
package delivery by drone becomes a reality, the cost of precisely delivering explosives from
afar would fall from millions of dollars to thousands or even hundreds. Similarly, self-driving
cars could make suicide car bombs more frequent and devastating since they no longer
require a suicidal driver.
Hallaq et al. (2017) also highlight key areas in which machine learning is likely to
affect warfare. They describe an example where a Commanding Officer (CO) could employ
an Intelligent Virtual Assistant (IVA) within a fluid battlefield environment that automatically
scanned satellite imagery to detect specific vehicle types, helping to identify threats in
advance. It could also predict the enemy's intent, and compare situational data to a stored
database of hundreds of previous wargame exercises and live engagements, providing the CO
with access to a level of accumulated knowledge that would otherwise be impossible to
accrue.
Employing AI in warfare raises several legal and ethical questions. One concern is
that automated weapon systems that exclude human judgment could violate International
Humanitarian Law, and threaten our fundamental right to life and the principle of human
dignity. AI could also lower the threshold of going to war, affecting global stability.
International Humanitarian Law stipulates that any attack needs to distinguish
between combatants and non-combatants, be proportional and must not target civilians or
civilian objects. Also, no attack should unnecessarily aggravate the suffering of combatants.
AI may be unable to fulfil these principles without the involvement of human judgment. In
particular, many researchers are concerned that Lethal Autonomous Weapon Systems
(LAWS) — a type of autonomous military robot that can independently search for and
'engage' targets using lethal force — may not meet the standards set by International
Humanitarian Law, as they are not able to distinguish civilians from combatants, and would
not be able to judge whether the force of the attack was proportional given the civilian
damage it would incur.
Amoroso and Tamburrini argue that: '[LAWS must be] capable of respecting the
principles of distinction and proportionality at least as well as a competent and conscientious
human soldier'. However, Lim (2019) points out that, while LAWS that fail to meet these
requirements should not be deployed, LAWS will one day be sophisticated enough to meet
the requirements of distinction and proportionality. Meanwhile, Asaro (2012) argues that it
doesn't matter how good LAWS get; it is a moral requirement that only a human should
initiate lethal force, and it is simply morally wrong to delegate life or death decisions to
machines.
Some argue that delegating the decision to kill a human to a machine is an
infringement of basic human dignity, as robots don't feel emotion, and can have no notion of
sacrifice and what it means to take a life. As Lim et al (2019) explain, 'a machine, bloodless
and without morality or mortality, cannot fathom the significance of using force against a
human being and cannot do justice to the gravity of the decision'.
Robots also have no concept of what it means to kill the 'wrong' person. 'It is only
because humans can feel the rage and agony that accompanies the killing of humans that they
can understand sacrifice and the use of force against a human. Only then can they realise the
'gravity of the decision' to kill'.
However, others argue that there is no particular reason why being killed by a
machine would be a subjectively worse, or less dignified, experience than being killed by a
cruise missile strike. 'What matters is whether the victim experiences a sense of humiliation
in the process of getting killed. Victims being threatened with a potential bombing will not
care whether the bomb is dropped by a human or a robot'. In addition, not all humans have
the emotional capacity to conceptualise sacrifice or the relevant emotions that accompany
risk. In the heat of battle, soldiers rarely have time to think about the concept of sacrifice, or
generate the relevant emotions to make informed decisions each time they deploy lethal
force.
Additionally, who should be held accountable for the actions of autonomous systems
— the commander, programmer, or the operator of the system? Schmitt (2013) argues that the
responsibility for committing war crimes should fall on both the individual who programmed
the AI, and the commander or supervisor (assuming that they knew, or should have known,
the autonomous weapon system had been programmed and employed in a war crime, and that
they did nothing to stop it from happening).