Contents lists available at ScienceDirect

Data and Information Management


journal homepage: www.journals.elsevier.com/data-and-information-management

Human-AI interaction research agenda: A user-centered perspective


Tingting Jiang a,b,*, Zhumo Sun a, Shiting Fu a,**, Yan Lv a

a School of Information Management, Wuhan University, Wuhan, Hubei, China
b Center for Studies of Information Resources, Wuhan University, Wuhan, Hubei, China

A R T I C L E  I N F O

Keywords:
Human-AI interaction
Human-AI collaboration
Human-AI competition
Human-AI conflict
Human-AI symbiosis

A B S T R A C T

The rapid growth of artificial intelligence (AI) has given rise to the field of Human-AI Interaction (HAII). This study meticulously reviewed the research themes, theoretical foundations, and methodological frameworks of the HAII field, aiming to construct a comprehensive overview of this field and provide robust support for future investigations. HAII research themes include human-AI collaboration, competition, conflict, and symbiosis. Theories drawn from communication, psychology, and sociology support these studies, while the employed methods include both self-reporting and observational approaches commonly utilized in user studies. It is suggested that future research should broaden its focus to encompass diverse user groups, AI roles, and tasks. Moreover, it is necessary to develop multi-disciplinary theories and integrate multi-level research methods to support the sustained development of the field. This study not only furnishes indispensable theoretical and practical insights for forthcoming research endeavors but also catalyzes the realization of a future distinguished by seamless interaction between humans and AI.

1. Introduction

Since Alan Turing posed the famous question, "Can machines think?" in 1950, a new technology, Artificial Intelligence (AI), has emerged to simulate and expand human intelligence. In the subsequent decades, AI technology has experienced rapid advancement, exerting a profound influence on diverse industries and reshaping societal structures. Due to its autonomy and anthropomorphic attributes, the interaction between humans and AI is markedly distinct from traditional human-computer interaction. AI has evolved beyond being a mere tool and is gradually becoming a companion, partner, friend, and even an opponent. Scenarios such as competition and conflict, originally confined to interpersonal interactions, also manifest in human-AI interaction. Meanwhile, the extraordinary capabilities of AI have also sparked public concerns regarding issues such as privacy breaches, algorithmic discrimination, misinformation, and digital divides (UNESCO, 2022).

Driven by AI technology, the focus of HCI work is transitioning from human interaction with non-AI computing systems to interaction with AI systems, which has given rise to an emerging field known as Human-AI Interaction (HAII) (Sun et al., 2023). Currently experiencing rapid growth, the HAII field has attracted scholars from information science, computer science, psychology, and other disciplines, leading to a proliferation of related studies. However, this field grapples with unclear concepts and inconsistent terminology, hindering the establishment of a cohesive global viewpoint. Moreover, current research is scattered across different disciplines, leading to isolated investigations within each field. This overlooks the essential need for interdisciplinary collaboration to tackle complex and long-term issues. To address these challenges, this study undertakes a comprehensive review of existing HAII research, aiming to establish a holistic view of the field. It begins by revisiting and categorizing common AI-infused systems. Subsequently, it explores cutting-edge research themes in HAII, extracting theoretical foundations and research methods from diverse disciplinary domains. Finally, insights into future research trends are presented. This study contributes to laying a solid foundation for the theoretical development and practical advancement of the field of HAII.

* Corresponding author. School of Information Management, Wuhan University, Wuhan, Hubei, China.
** Corresponding author.
E-mail addresses: [email protected] (T. Jiang), [email protected] (Z. Sun), [email protected] (S. Fu), [email protected] (Y. Lv).

https://doi.org/10.1016/j.dim.2024.100078
Received 5 February 2024; Received in revised form 2 July 2024; Accepted 14 July 2024
Available online 15 July 2024
2543-9251/© 2024 The Authors. Published by Elsevier Ltd on behalf of School of Information Management Wuhan University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Please cite this article as: Tingting Jiang et al., https://doi.org/10.1016/j.dim.2024.100078

2. Understanding AI systems

2.1. Basic AI technologies

Machine learning (ML) is the driving force behind the development of AI. The ML process involves selecting and applying appropriate algorithms to train models to learn patterns and relationships from data,
enabling the models to make predictions on new data (Tyagi, 2019). ML algorithms can be broadly classified into supervised learning (e.g., Linear Regression, Decision Trees, and Support Vector Machines), unsupervised learning (e.g., K-Means Clustering and Hierarchical Clustering), and reinforcement learning (e.g., Q-Learning and Deep Q-Networks). Deep learning (DL) is a subset of ML that utilizes artificial neural networks to learn hierarchical and intricate patterns automatically from raw data. These networks, inspired by the structure of the human brain, consist of interconnected neurons organized in layers, including multiple hidden layers. Various DL models (e.g., Convolutional Neural Networks, Recurrent Neural Networks, and Generative Adversarial Networks) have been designed for processing different types of data and have become powerful tools for such complex tasks as language translation, image classification, and speech recognition (Schultz et al., 2021).

ML plays a crucial role in advancing both natural language processing (NLP) and computer vision (CV), the two essential subfields of AI. NLP techniques allow AI systems to comprehend, decipher, and generate human language, thus bridging the gap in human-AI communication. NLP often involves the analysis and extraction of meaning from text data, performing sentiment analysis and language translation, as well as producing responses that resemble those of humans (Hirschberg & Manning, 2015). Automatic speech recognition (ASR), an important component of NLP, is responsible for converting spoken language into text. This process usually comprises analyzing audio signals, recognizing phonemes and words, and generating a textual representation of the speech (Aldarmaki et al., 2022). On the other hand, computer vision techniques, such as image classification, segmentation, generation, and captioning, focus on processing images, videos, and other visual data and extracting meaningful insights based on visual input. They enable AI systems to interpret visual scenes like the human visual system does, such as detecting objects, tracking motions, recognizing human facial features, and determining human poses, which is indispensable to human-AI interaction in the physical world (Voulodimos et al., 2018).

In addition to the above-mentioned technologies, robotics, knowledge representation and reasoning, cognitive computing, and other building blocks of AI are combined and applied in various ways to create intelligent systems that exhibit human-like cognitive abilities and behaviors. The overall goal is to empower AI systems to solve complex problems, make informed decisions, and interact with humans and the world in a more natural and sophisticated manner.

AI-infused systems are built from the ground up with AI in mind, aiming to optimize and enhance the system's functionalities by leveraging the above basic AI technologies. AI infusion implies that AI is a fundamental and intrinsic component of the system (Ueno et al., 2022). There are also AI-enabled systems that incorporate AI capabilities as an augmentation to their existing functionalities.

2.2. Classification of AI systems

Two basic dimensions, presence and embodiment, can be taken into consideration in the classification of AI systems (Fig. 1). While presence refers to whether AI is presented in physical or electronic proximity to the user, embodiment refers to whether AI appears in an anthropomorphic morphology or not (Li, 2015). Chatbots, voice assistants, personalized recommenders, and virtual humans are all telepresent AI, as users interact with them through desktop or mobile devices. Autonomous vehicles and service robots are typical copresent AI for being touchable in the real world. Under the embodied category, virtual humans usually have a highly realistic human appearance, whereas service robots can be in different human-like forms. The rest of the major AI-infused systems are unembodied.

Fig. 1. The classification of AI systems.

Chatbots are conversational agents created to simulate natural language interactions with users mainly via text. There are scripted chatbots programmed to respond to specific user inputs with predetermined responses by following a set of rules. A more advanced form, intelligent chatbots, are powered mainly by ML and NLP and able to understand user intent, generate more natural, personalized, and sophisticated responses, and adapt to changing user needs. AI-powered chatbots are gaining popularity in a variety of domains, such as customer service,
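As a concrete illustration of the algorithm families listed in Section 2.1, the sketch below implements the K-Means Clustering named there under unsupervised learning in plain Python. The toy 2-D points, the naive "first k points" initialization, and the fixed iteration count are illustrative assumptions, not details from the text.

```python
# Minimal K-Means sketch: alternate an assignment step (attach each point
# to its nearest centroid) and an update step (move each centroid to the
# mean of its cluster) until the partition stabilizes.
def kmeans(points, k, iters=10):
    centroids = list(points[:k])  # naive initialization: first k points
    clusters = []
    for _ in range(iters):
        # Assignment step.
        clusters = [[] for _ in range(k)]
        for p in points:
            d2 = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centroids]
            clusters[d2.index(min(d2))].append(p)
        # Update step (keep the old centroid if a cluster emptied out).
        centroids = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated toy blobs; K-Means should recover one cluster per blob.
points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
centroids, clusters = kmeans(points, k=2)
```

Because no labels are given and the grouping emerges purely from distances in the data, this is unsupervised in exactly the sense used above; a supervised counterpart would instead fit a mapping from labeled inputs to outputs.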
healthcare, and education, for their effectiveness in handling user inquiries, automating routine tasks, and providing tailored recommendations. Due to their limited abilities to interpret social cues or subtle nuances of human language, however, chatbots can give users an impression of being robotic and impersonal. It is important to increase social presence, i.e., the "sense of being with another", in human-chatbot interaction (Jin & Youn, 2023).

Voice assistants are virtual assistants that are able to understand and respond to voice commands and queries. They leverage ASR to convert spoken words into text and then ML and NLP to identify the user's intent and determine what action should be taken. Amazon Alexa, Apple Siri, Microsoft Cortana, and Google Assistant are among the popular voice assistants that have already been widely embraced by consumers and businesses. They are typically integrated into mobile or wearable devices, smart speakers, in-car systems, or home automation systems, and used to execute commands (e.g., controlling lights, playing music, and ordering products) or provide information (e.g., finding directions, checking the news, and searching the Web). A wake-up word is often needed to activate a voice assistant, such as "Alexa" and "Hey Siri". The future development of voice assistants will continue to focus on improving voice recognition accuracy and the understanding of context, as well as reducing response time and privacy risks (Zwakman et al., 2021).

Personalized recommenders are information filters that make data-driven predictions about individual users' preferences and recommend relevant items they may like based on ML, NLP, and data mining (Isinkaye et al., 2015). They are built upon various ML algorithms, including content-based filtering and collaborative filtering. The former focuses on the similarity between items and recommends items with attributes similar to the items that a user has liked, while the latter focuses on the similarity between users and recommends items that similar users have liked to the target user. There are also hybrid recommenders that combine multiple algorithms to increase the accuracy of recommendation. It is believed that personalized recommenders may give rise to filter bubbles in which individuals are exposed to homogeneous information (Pariser, 2011). This would aggravate the negative social impact of information cocoons and echo chambers.

Virtual humans are virtual avatars created with a combination of ML, NLP, ASR, CV, 3D modelling, animation, and motion capture to imitate human appearance and to act and interact with users in a lifelike manner (Gratch et al., 2002). They are different from digital doubles, i.e., replicas of real-life people in digital form (Domingos & Veve, 2018). Service-oriented virtual humans have emerged as virtual instructors or trainers, health consultants, tour guides, banking representatives, and shopping assistants, with a superiority in engendering engaging and immersive user experiences. In recent years, virtual idols, i.e., virtual characters appearing as singers or performers with distinct appearances and personalities, are becoming increasingly popular in East Asia, such as Luo Tianyi (China), Hatsune Miku (Japan), and K/DA (South Korea). With great efforts devoted to addressing the uncanny valley, a phenomenon in which something that is almost but not fully human-like causes feelings of unease or revulsion in the observer (Mori et al., 2012), steady progress has been made towards generating realistic facial features and expressions, voices, gestures, and movements for virtual humans.

Autonomous vehicles, also known as self-driving cars, are capable of sensing their environment and navigating without human input. They depend mainly on computer vision and sensor fusion to perceive and understand the environment, and self-driving is made possible through the integration of the localization, path planning, and control modules (Mohamed et al., 2018). Human-vehicle interaction involves both in-vehicle and external interfaces. With an aim to ensure a safe and comfortable driver/passenger experience, the intelligent cockpit provides an in-vehicle living space where multimodal interaction is enabled with such components as head-up displays, streaming rearview mirrors, in-vehicle voice assistants, and infotainment systems. Meanwhile, the external human-machine interfaces use visual (light-based or textual messages) or auditory cues (pure tones or spoken words) to communicate with pedestrians. Due to a lack of trust, however, public acceptance of and consumer readiness for autonomous vehicles remain low at present (Alawadhi et al., 2020).

Unlike industrial robots that are programmed to perform simple repetitive tasks, e.g., welding and assembly, service robots (e.g., Plato, NAO, and Pepper) are embodied robots designed to interact with and provide personalized services to humans with a high degree of autonomy (Jörling et al., 2019). Computer vision and robotics engineering enable some service robots to provide labor-intensive services through particular physical capabilities, e.g., moving and carrying. For example, catering service robots are used in restaurants to take orders and serve food. In contrast, social interaction-oriented service robots are further enabled by NLP and ML to understand social cues and respond in a meaningful way, making them adept at such services as providing information, entertainment, companionship, and emotional support, as well as assisting with activities in retail, education, healthcare, and other settings. With the wide adoption of service robots, several major ethical concerns arise, including privacy, dehumanization, social deprivation, and disempowerment (Čaić et al., 2019).

3. Human-AI interaction research themes

The main research themes of human-AI interaction include human-AI collaboration, human-AI competition, human-AI conflict, and human-AI symbiosis, as shown in Fig. 2.

Fig. 2. Research themes of human-AI interaction.

3.1. Human-AI collaboration

Human-AI collaboration is a joint effort of humans and AI in which a common goal is pursued. The aim is to create synergistic relationships where humans and AI collaboratively contribute to successful outcomes in various domains (Cañas Delgado, 2022). Such collaboration can leverage the strengths of both parties, combining humans' cognitive abilities, domain expertise, creative thinking, contextual understanding, and ethical judgement with AI's efficiency in identifying patterns, extracting insights from data, and making data-driven predictions or recommendations, to tackle complex problems, make informed decisions, and drive innovation. Abundant evidence derived from empirical research and real-world applications has demonstrated that human-AI collaboration undeniably yields superior outcomes compared to scenarios involving only humans or AI (Kahn et al., 2020).

Human-AI collaboration has been revolutionizing the medical and healthcare fields. AI can help with disease diagnosis by analyzing patient
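The distinction between content-based and collaborative filtering drawn in the personalized recommenders paragraph can be made concrete. The sketch below is a minimal user-based collaborative filtering example in plain Python; the ratings matrix, user names, and item names are invented for illustration.

```python
from math import sqrt

# Hypothetical user -> {item: rating} data; "alice" is the target user.
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 5, "book_b": 3, "book_d": 5},
    "carol": {"book_b": 1, "book_c": 2, "book_d": 1},
}

def cosine(u, v):
    """Cosine similarity between two sparse rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(r * r for r in u.values()))
    norm_v = sqrt(sum(r * r for r in v.values()))
    return dot / (norm_u * norm_v)

def recommend(target, ratings):
    # Score each unseen item by the similarity-weighted ratings of other
    # users: "recommend items that similar users have liked to the target".
    scores = {}
    for other, other_ratings in ratings.items():
        if other == target:
            continue
        sim = cosine(ratings[target], other_ratings)
        for item, r in other_ratings.items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)

recs = recommend("alice", ratings)
```

A content-based variant would instead compare item attribute vectors against the attributes of items the target has already liked; hybrid recommenders, as noted above, combine both kinds of score.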
data and support the development of personalized treatment plans. Medical imaging analysis is one of the common applications in which AI algorithms can improve radiologists' accuracy and efficiency in identifying tumors, lesions, or fractures from medical images (Rajpurkar et al., 2022). AI-enabled devices have been used in remote patient monitoring. They can collect and analyze real-time data on vital signs, symptoms, and disease progression to detect any abnormalities and trigger alerts, enabling proactive interventions (Lee et al., 2021). AI-powered robotic systems can assist surgeons during complex procedures, increasing precision and reducing errors (Lai et al., 2021). Virtual assistants and chatbots can offer patients basic symptom analysis and medical advice and direct them to appropriate healthcare resources (Roca et al., 2021). In addition, AI has the potential to enhance drug discovery and development, healthcare resource management, mental health support, and so on (D'Alfonso, 2020; Paul et al., 2021).

Modern battlefields often involve multi-domain operations and/or coalition operations, presenting unprecedented complexity and uncertainty. Military operations have become increasingly dependent on the strong computational capabilities of AI. With the collection and analysis of battlefield information, surveillance data, intelligence reports, and historical records, AI can promote situational awareness and assist commanders in strategic planning, risk assessment, operational decision-making, and resource allocation (Hung et al., 2021). Moreover, AI-powered autonomous military systems, e.g., unmanned aerial or ground vehicles, can perform tasks such as reconnaissance, surveillance, and logistics, reducing the threats to human personnel (Johnson, 2019).

Human-AI co-creation is believed to be a new strategy for creative processes, with amplified creativity and increased productivity (Wu et al., 2021). AI has been introduced to a wide range of creative fields, such as painting (Oh, Bailenson, & Welch, 2018), storytelling (Zhang et al., 2021), music composition (Louie et al., 2020), fashion design (Zhao & Ma, 2018), and game design (Guzdial et al., 2019). By analyzing vast amounts of existing creative works, AI can generate novel ideas and insights to inspire artists, writers, and designers or help them explore new frontiers and create original content.

In addition, it has been found that the routine work of peer reviewers (Bharti et al., 2021), teachers (Ng et al., 2020), truck drivers (Loske & Klumpp, 2021), and manufacturing workers (Mantravadi et al., 2020) can be enhanced through human-AI collaboration, either enabling humans to focus on more cognitively challenging or creative tasks or releasing them from dangerous, physically demanding, or monotonous tasks.

The collaboration between humans and AI in the above scenarios varies in the level of human control and oversight. Four different modes of human-AI collaboration can be inferred, with each mode highlighting a specific range of research foci.

⋅ Assisted intelligence: humans use AI as an assistant to offer information or perform specific tasks. Existing related studies have focused on personalizing AI assistants and improving multi-modal interaction techniques to increase their effectiveness and efficiency in task automation (Islas-Cota et al., 2022; Varshan V et al., 2023).
⋅ Augmented intelligence: humans use AI as a supporter to amplify their own abilities. It has been widely investigated how to integrate AI into cognitive tasks and support decision-making and problem solving with insights, recommendations, and predictions derived from the analysis of medical, customer, or social media data (Sadiku & Musa, 2021). The recent outburst of generative AI services, e.g., ChatGPT and Midjourney, has resulted in a rapid increase in research exploring the ways to augment users in creative tasks with AI-generated content. Much attention has been attracted to prompt engineering, which involves the intentional design and formulation of queries to elicit specific and desired responses from AI models (Liu & Chilton, 2022).
⋅ Cooperative intelligence: humans work with AI as a team to jointly create solutions to complex tasks. Researchers have devoted efforts to fostering alignment of mental models as well as cognitive and emotional styles, developing effective communication strategies in human-AI teams, enhancing humans' understanding of and trust in their AI teammates' decisions and actions, and seeking methods for role allocation and team performance assessment (Tabrez et al., 2020; Zhang et al., 2023).
⋅ Autonomous intelligence: AI operates independently and makes decisions without continuous human intervention, e.g., autonomous vehicles and robots. Key research topics specific to this scenario include safety and reliability, adaptivity to changing conditions in the environments, cybersecurity challenges, the balance between autonomy and human control in critical situations, and liability and accountability standards (Huang et al., 2023).

3.2. Human-AI competition

Human-AI competition, in a narrow sense, refers to the contest between human and AI players in the context of game playing. IBM's Deep Blue and Google's AlphaGo are among the famous AI players that have defeated the world's top human experts in traditional tabletop games like chess, Go, and Texas hold'em. In more challenging online video games, such as Dota 2 and StarCraft II, AI players have also reached master levels with victories over most professional human players (Canaan et al., 2019).

Games have a long history of being used as AI testbeds and benchmarks. Human-AI competition in games enables AI to learn humans' strategies of thinking, deciding, and acting, which is an important approach to developing human-like AI. It also gives rise to more objective methods of measuring AI performance by involving a large number of referees in real decision-making situations (Świechowski, 2020). Meanwhile, humans can also benefit from playing games with AI. Stronger AI opponents can help humans improve mental capabilities and skills that have potential applicability to a variety of real-world games such as business negotiation, political campaigns, and medical treatment planning (Sandholm, 2017).

Some basic issues need to be addressed to ensure benign competition between humans and AI in gaming (Canaan et al., 2019; Świechowski, 2020): (1) fairness – given that AI players are enabled by incomparable data and computing resources, what is the fair way to compare human and AI performance on a game? (2) transparency – how can we explain AI players' highly accurate decisions and actions to humans and help humans understand the roles and limitations of AI in the game? (3) challenge-skill balance – since both unbeatable and incompetent AI players are undesirable, what level of difficulty is appropriate for a game that offers both challenges and entertainment?

There is a growing concern about human-AI competition in general, especially with regard to job opportunities. Due to its superior productivity, accuracy, availability, cost-efficiency, and learnability, AI is increasingly replacing humans in relatively simple customer service tasks, and it is also extensively used to supplement the work of professionals such as medical practitioners, lawyers, software engineers, and financial advisors (Frey & Osborne, 2017).

More and more people have come to view AI as a job competitor to humans and as a potential threat to human uniqueness and control over the world. The greater the autonomy of AI, the more pronounced the perception of its threat to humans. This may lead to negative attitudes towards AI, resistance to AI research, and a rejection of services rendered by AI agents (Złotowski et al., 2017). In particular, Western cultures tend to treat AI agents as pragmatic assistants, showing more ambivalent attitudes towards AI than East Asian cultures (Dang & Liu, 2022).

However, the refusal to embrace AI development is not a feasible solution to job competition between humans and AI. Humans should make better use of their invaluable expertise in creative approaches, emotional intelligence, and complex problem-solving, while AI can be leveraged for its incredible computational capabilities for pattern
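Game-playing AI of the kind surveyed in Section 3.2 is ultimately grounded in game-tree search. As a minimal illustration (the subtraction game below is a stand-in chosen for brevity, not a game from the text, and real systems add learning, pruning, and evaluation heuristics at vastly larger scale), exhaustive minimax labels each position as winning or losing for the player to move:

```python
from functools import lru_cache

# Subtraction game: players alternately take 1-3 sticks from a pile, and
# whoever takes the last stick wins. A position is winning if some legal
# move leaves the opponent in a losing position.
@lru_cache(maxsize=None)
def is_winning(pile):
    return any(not is_winning(pile - take) for take in (1, 2, 3) if take <= pile)

def best_move(pile):
    # Play the first move that leaves the opponent losing, if one exists.
    for take in (1, 2, 3):
        if take <= pile and not is_winning(pile - take):
            return take
    return 1  # lost position: no winning move exists, so just take one stick
```

For this particular game the losing positions turn out to be the multiples of 4, so the optimal move from a pile of n sticks is to take n mod 4 whenever that is legal.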
recognition, reasoning, prediction, and decision making. It is important to promote a healthy job market in which the full potential of both humans and AI is unlocked through their collaboration rather than competition.

3.3. Human-AI conflict

Human-AI conflict is a state of incompatibility, disagreement, or opposition between humans and AI systems (Flemisch et al., 2020). Such tensions may occur during human-AI collaboration or competition. Task conflict and relationship conflict are the two major types of conflict. The former often involves concrete issues in which resources are limited or individuals have different goals, opinions, motivations, approaches, or decisions, while the latter can be attributed to negative feelings as well as differences in personality, values, expectations, and style (De Dreu & Weingart, 2003).

Prior studies of human-AI conflict are mainly interested in task conflict. For example, a human and a robot need to pass a doorway or use an elevator at the same time (Thomas & Vaughan, 2018); self-driving or autopilot systems may sometimes operate unexpectedly and/or get out of the control of human drivers or pilots, such as "phantom braking" and "automation surprise" (Wen et al., 2022); and human participants and AI propose different solutions to a collaborative task, e.g., human-agent cooperation in desert survival (Takayama et al., 2009). When AI, often regarded as a "machine", is adapted to perform tasks that are "proper for humans", e.g., babysitting and hairdressing, users show a low level of trust and hold negative expectations for the outcomes. This has been investigated as the "human-machine trans-roles conflict" (Modliński et al., 2023), a special kind of relationship conflict.

Human-AI conflict is a double-edged sword. The interference of AI can prevent humans from making mistakes, e.g., dangerous driving, and vice versa. The conflict resolution process can spark engagement and innovation (Jung & Yoon, 2018). However, it is also possible that human-AI conflict leads to lower task performance, undermines humans' perception of AI's trustworthiness, or even does harm to humans mentally or physically (Esterwood & Robert, 2021). Hence, there is an urgent need to avoid and/or resolve human-AI conflict. By making AI's decision processes more visible, understandable, and controllable to humans, human-centered design of AI systems is crucial to reducing the occurrence of human-AI conflict. Higher AI adaptiveness is also desirable so that the level of automation and authority can be modified when potential conflict is detected.

When it comes to conflict resolution, both submissive and persuasive strategies have been widely explored. AI's submission to humans can take the form of apology, promise, and gratitude, etc. (Esterwood & Robert, 2021), which helps repair or recover humans' trust in AI. Robots can also use non-verbal gestures or take actions (e.g., changing path, waiting, and backing off) to show their submission (Kamezaki et al., 2020). When persuading humans, AI may leverage explanation, appeal, and even command or threat, with an emphasis on the benefits of cooperation (Babel et al., 2022). Humor, empathy, politeness, and other verbal techniques can be employed to enhance AI's persuasiveness, but their effectiveness is affected by a variety of factors, such as task urgency, type of context, and the robot's appearance (Babel et al., 2021).

3.4. Human-AI symbiosis

Human-AI symbiosis is the most recent version of "man-computer symbiosis", a concept coined in 1960 to envision the close coupling between humans and electronic computers (Licklider, 1960). Symbiosis, rather than a specific form of interaction, refers to a mutually beneficial relationship. Human-AI symbiosis emphasizes the enhancement of both humans and AI. On the one hand, humans' information processing,

AI's computational power and analytical capabilities. On the other hand, humans can bring contextual understanding, intuition, empathy, and judgement to improve the accuracy, adaptability, and ethical sensitivity of AI. Overall, human-AI symbiosis describes a desirable future featuring harmonious collaboration and benign competition between humans and AI without being hindered by conflicts.

The low trust in and acceptance of AI among humans nowadays, however, indicates that we are still far from achieving human-AI symbiosis. A fundamental obstacle lies in the fact that many AI algorithms operate as black boxes, making it difficult for humans to understand the reasoning behind their decisions. Communication breakdowns and frustrating user experiences are prevalent in human-AI interfaces. The lack of ethical guidelines or ethical assessment has led to concerns about job displacement, loss of autonomy and control, and even adversarial attacks (Huang et al., 2023).

A novel concept, known as human-centered AI, has been put forth to address the above challenges. It emphasizes that the design, development, and deployment of AI technologies and systems should focus on the needs, values, and well-being of humans, with an aim to empower individuals and promote positive outcomes for society (Shneiderman, 2022). Human-centered AI requires multidisciplinary research that combines expertise from fields such as computer science, human-computer interaction, cognitive science, psychology, sociology, and ethics. As shown in Fig. 3, existing efforts have taken different approaches to actualizing human-centered AI.

⋅ Human-in-the-loop: a feedback loop of machine learning in which human input or oversight is incorporated to improve the performance of an AI system. Typical tasks featuring human involvement include training data annotation, model validation and evaluation, and algorithmic decision support (Wu et al., 2022).
⋅ Explainable AI: in order to provide insights into how AI models make decisions and generate outputs, previous research has proposed techniques to explain black-box models, i.e., uncovering the key features or factors influencing a complex model's predictions (Xu et al., 2019), as well as attempted to create white-box models, such as decision trees and linear and rule-based models that are inherently interpretable (Shakerin & Gupta, 2020).
⋅ User-friendly AI: in order to support natural and effective communication and interaction, researchers have considered enhancing AI systems' abilities to understand natural language and contextual information, respond to human emotions and social situations, adapt to individual users' preferences and needs, and handle user errors and misunderstandings; it is also useful to anthropomorphize AI when necessary through more human-like appearances, voices and speech, conversational styles, and/or non-verbal cues (Salles et al., 2020).
⋅ Responsible AI: the importance of establishing ethical principles and legal and regulatory frameworks has been widely recognized, so as to engender a more trustworthy, inclusive, and beneficial AI ecosystem that prioritizes human well-being, maintains human dominance, and avoids human capacity diminution; in particular, various solutions have been proposed to address such ethical issues as data bias and fairness, privacy and security, AI divides and equality, and so on (Cheng et al., 2021).

AI literacy also plays an indispensable part in fostering human-AI symbiosis by enabling humans to understand, interact with, and effectively utilize AI systems. As shown in Fig. 4, it consists of a set of basic competencies: (1) knowledge of AI capabilities and limitations – a basic understanding of what AI can and cannot do, enabling individuals to decide when and how to use AI; (2) effective communication and interaction with AI – the knowledge and skills needed to utilize AI systems' features and interface elements, provide AI-understandable inputs, and extract meaningful insights from AI outputs; (3) critical
problem-solving, and decision-making abilities can be augmented with assessment of AI reliability and credibility – the ability to evaluate the

5
T. Jiang et al. Data and Information Management xxx (xxxx) xxx

potential benefits and risks of AI recommendations or predictions for making informed decisions and maintaining human control; (4) responsible use of AI – the awareness of ethical and legal considerations and the efforts to contribute to addressing fairness, security, equality, and other ethical challenges; and (5) adaptability to the rapid evolvement of AI – the readiness to embrace new AI tools and applications, keeping pace with the changing technological landscape (Süße et al., 2021).

Fig. 3. Approaches to actualizing human-centered AI.

Fig. 4. Basic competencies of AI literacy.

4. Human-AI interaction theoretical foundations

4.1. Media equation theory/Computers Are Social Actors

The Media Equation Theory (MET) suggests that humans tend to treat media like real people rather than tools (Reeves & Nass, 1996). The essence of the theory consists in the Computers Are Social Actors (CASA) paradigm, i.e., users perceiving computers as if they were social beings that possess human-like qualities, and applying social rules, such as politeness, in their interaction with computers (Nass et al., 1994). The CASA paradigm has been increasingly used as a fundamental theoretical framework for understanding users' social responses to various AI systems. Users have a natural inclination to treat AI systems as social actors given their stronger abilities to process and generate information than traditional computers. Moreover, anthropomorphic AI agents are designed with rich social cues, including human-like appearances, voices, names, and even personalities (Liew & Tan, 2021). Despite the lack of genuine social cognition, AI systems can be driven by algorithms to adhere to social norms and exhibit behaviors that are consistent with social expectations in HAII (Ribino, 2023).

4.2. Social presence theory

Social presence, a concept closely related to the CASA paradigm, is a psychological state in which virtual social actors are experienced as actual social actors (Lee, 2006). The Social Presence Theory (SPT) originally explores how communication technologies vary in their abilities to convey a sense of interpersonal connection and immediacy. The SPT contends that users are motivated to choose media where they perceive a higher level of social presence because of their inherent desire to be present with others. It has been found that perceptions of social presence can be influenced by individual, technological, and contextual characteristics (Oh, Bailenson, & Welch, 2018). Existing HAII research has widely investigated social presence as a mediating variable between various AI-related factors, such as design elements (e.g., anthropomorphism), conversational cues (e.g., responsiveness), and interaction patterns (e.g., proactivity), and users' attitudes, experiences, and behaviors (Chien et al., 2022). The SPT can provide useful insights into the underlying mechanism of HAII and inform the design and development of AI systems as effective social actors that align with human expectations.

4.3. Para-Social Relationship theory

Another related theory deserving attention is the Para-Social Relationship (PSR) theory, which can be used to explain how users view their relationships with AI systems. The PSR is a term that originated in the 1950s, referring to the "illusion of face-to-face relationship" that spectators develop with performers on mass media (Horton & Richard, 1956). Similarly, HAII has a bidirectional relation with PSRs in which users sense a degree of closeness and intimacy with AI systems and even consider them as friends and companions. It has been revealed for chatbots, voice assistants, and social robots that the greater their human-likeness perceived by users, the stronger the PSRs users developed with them (Tsai et al., 2021; Whang & Im, 2021). In turn, PSRs can contribute to the establishment of emotional attachment and social bonds, leading to increased user engagement, satisfaction, and more positive attitudes (Tsai et al., 2021).
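Studies in this vein often test whether a construct such as perceived social presence or PSR strength mediates the link between AI design cues and user outcomes. The sketch below illustrates the regression-based logic behind such a mediation check on simulated data; all variable names, effect sizes, and the data itself are hypothetical, and this is only a minimal illustration, not a prescribed analysis procedure.

```python
import random

random.seed(42)

# Hypothetical simulated data: X = anthropomorphism cue strength,
# M = perceived social presence (mediator), Y = user engagement.
n = 1000
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.6 * x + random.gauss(0, 1) for x in X]                       # a-path: X -> M
Y = [0.5 * m + 0.2 * x + random.gauss(0, 1) for x, m in zip(X, M)]  # b-path plus direct effect

def slope(x, y):
    """OLS slope of y on a single predictor x (with intercept)."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

def two_predictor_slopes(x1, x2, y):
    """OLS slopes of y on x1 and x2 jointly, via centered normal equations."""
    m1, m2, my = (sum(v) / len(v) for v in (x1, x2, y))
    c1 = [v - m1 for v in x1]
    c2 = [v - m2 for v in x2]
    cy = [v - my for v in y]
    s11 = sum(v * v for v in c1)
    s22 = sum(v * v for v in c2)
    s12 = sum(a * b for a, b in zip(c1, c2))
    s1y = sum(a * b for a, b in zip(c1, cy))
    s2y = sum(a * b for a, b in zip(c2, cy))
    det = s11 * s22 - s12 * s12
    return (s1y * s22 - s2y * s12) / det, (s2y * s11 - s1y * s12) / det

a = slope(X, M)                             # a-path: cue -> social presence
total = slope(X, Y)                         # total effect: cue -> engagement
direct, b = two_predictor_slopes(X, M, Y)   # direct effect and b-path
indirect = a * b                            # mediated (indirect) effect

# For OLS, the decomposition total = direct + indirect holds exactly in-sample.
print(f"a={a:.3f}, b={b:.3f}, direct={direct:.3f}, indirect={indirect:.3f}, total={total:.3f}")
```

In practice, the significance of the indirect effect is usually assessed with bootstrap confidence intervals rather than point estimates alone.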

T. Jiang et al. Data and Information Management xxx (xxxx) xxx

4.4. Uncanny valley hypothesis

"Uncanny valley" is a concept originating in robotics, used to describe people's reactions to robots that look and act almost like humans (Mori et al., 2012). The uncanny valley hypothesis includes three stages demonstrating different patterns of change in users' emotional responses. First, as robots or virtual characters become more human-like in appearance, users' affinity or emotional connection increases correspondingly. Second, when the resemblance reaches a certain level but subtle imperfections still exist, such anomalies can be detected by users and evoke a feeling of eeriness or discomfort, known as the "valley". Third, as the humanoid entities continue to improve and become highly realistic, users' positive emotions return (Ho & MacDorman, 2010). The uncanny valley hypothesis has useful implications for designing the appearances and movements of embodied AI systems as well as the voices and speaking styles of voice assistants. According to some preliminary empirical evidence, it is important to strike a balance between human- and machine-likeness in the design of AI systems in order to enhance users' trust, acceptance, and engagement in HAII (Ciechanowski et al., 2019).

4.5. Technology Acceptance Model and related models

Building on the general framework provided by the Theory of Reasoned Action (TRA) and the Theory of Planned Behavior (TPB) for understanding human behavior, the Technology Acceptance Model (TAM) suggests that users are more likely to accept and adopt a new technology when they perceive it as useful and easy to use (Davis, 1989). The integration and extension of these theories and models led to the Unified Theory of Acceptance and Use of Technology (UTAUT), which identifies four key factors with direct influence on users' behavioral intention, i.e., performance expectancy, effort expectancy, social influence, and facilitating conditions, and considers several moderating factors, e.g., gender, age, experience, and voluntariness of use (Venkatesh et al., 2003). The subsequent UTAUT2 introduces hedonic motivation, price value, and habit as additional constructs and expands the list of moderating factors (Venkatesh et al., 2012). Previous studies of HAII have applied the TAM, UTAUT, or UTAUT2 to predict and explain users' acceptance and adoption of various AI systems, e.g., voice assistants, chatbots, and service robots, with the aim of guiding the user experience design of these systems.

However, it has been argued that the models mentioned above have limited applicability to HAII research as they focus on users' acceptance and adoption of "unintelligent functional technologies". In contrast, AI systems have the capabilities of interacting with users in a more natural manner, such as incorporating voice and gesture modalities, and performing complex cognitive processing tasks (Lu et al., 2019). Hence, Gursoy et al. (2019) proposed and tested an AI-specific model, i.e., Artificially Intelligent Device Use Acceptance (AIDUA). The new model features a three-stage process: (1) primary appraisal – social influence, hedonic motivation, and anthropomorphism are determinants of users' perceived performance/effort expectancy of AI systems; (2) secondary appraisal – performance/effort expectancy influences users' emotions towards the use of AI systems; and (3) outcome – positive/negative emotions predict users' acceptance/rejection of AI systems.

5. Human-AI interaction research methods

The existing human-AI interaction research has been made possible through a variety of research methods that have been frequently applied in the investigation of information behavior and user experience. These methods help researchers generate useful insights to inform the design, development, and evaluation of AI systems, leading to user-centered improvements and a better understanding of human-AI interaction dynamics.

Questionnaire surveys provide a structured approach to gathering large-scale quantitative data from a wide audience and enable researchers to analyze trends and patterns across different user groups. Questionnaires can include scales that are rigorously developed and validated measurement tools for assessing single constructs or variables in a standardized manner. Surveys have been applied in human-AI interaction research to capture users' trust in and acceptance of AI systems, ethical concerns, perceptions of AI systems' usability, usefulness, and emotional impact, as well as the levels of engagement and satisfaction during human-AI interaction (Villacis Calderon et al., 2023).

Interviews are a qualitative method that enables researchers to gain rich insights and subjective perspectives through in-depth conversations with users. They provide a platform for users to articulate their needs and expectations, describe the contexts in which they interact with AI and their overall impressions of the experience, and uncover the frustrations, surprises, and other nuances of human-AI interaction. The great flexibility of interviews supports deeper probing into the underlying reasons for users' trust, acceptance, perceptions, attitudes, and concerns in relation to AI systems in general (Zhu et al., 2023).

Field studies involve observing how AI is integrated into users' daily lives, work environments, or social interactions, which ensures a higher level of ecological validity than controlled laboratory settings. In field studies, researchers can understand the contextual factors influencing human-AI interaction, identify the practical challenges and opportunities users encounter when using AI, and even compare different user groups and assess the long-term impact of AI (Schlomann et al., 2021). Field studies are a useful complement to other methods by providing valuable insights into the complexities and nuances of human-AI interaction in real-world contexts.

Experimental studies allow for the investigation of how specific factors influence user experiences in human-AI interaction, with the aim of obtaining generalizable findings about the relationships between variables. In a controlled experiment, researchers may compare different AI algorithms, system features, design elements, and interaction techniques to measure their effects on users' objective performance and subjective evaluation. An experimental design often involves carefully manipulating the independent variables, randomizing participant assignments, measuring the dependent variables using various instruments and techniques, and adopting appropriate statistical tests to determine the significance of observed effects. In addition to building fully automated AI systems, previous experiments have employed the Wizard of Oz technique, in which system functionalities, such as providing information, recommendations, decisions, and physical assistance, are simulated by human operators to trigger human-AI interaction in various tasks (Riek, 2012). Physiological measurements of heart rates, electrodermal activities, eye movements, and EEG have been introduced to human-AI interaction experiments to supplement psychological measurements and behavioral observation (Zheng et al., 2019). Experimental studies contribute to evidence-based decisions in AI design and development by providing concrete, meaningful conclusions.

Usability testing is a prevalent user-centered method used to evaluate the usability and user interface of an AI system or AI-enabled product (Lam et al., 2023). In usability testing, researchers observe and collect quantitative and qualitative feedback from representative users as they perform naturalistic tasks using the given system or product. The aim is to identify usability issues, understand user behavior, and gather context-specific feedback to improve the design, functionality, and user experience of the system or product. Usability testing is often conducted at different stages of an iterative design process in which human-AI interaction is continuously improved (Amershi et al., 2019).

Data-driven research, featuring the automated collection and analysis of large-scale user-centered data, is gaining popularity in the field of human-AI interaction. Chat logs are a typical type of AI system usage data stored on servers, capturing all the interactions between users and chatbots or other conversational agents as well as the messages exchanged during their conversations. Chat log analysis is useful for characterizing user behavior and evaluating chatbot performance (Gao & Jiang, 2021).
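As a toy illustration of this kind of chat log analysis, the sketch below parses a tiny invented log into turns and derives a per-speaker turn count plus a naive lexicon-based sentiment score for user messages. The log format, the lexicon, and all messages are hypothetical; real studies rely on far richer text mining techniques and validated sentiment tools.

```python
from collections import Counter

# Hypothetical chat-log excerpt: "timestamp<TAB>speaker<TAB>message" per line.
raw_log = (
    "2024-03-01T09:00:02\tuser\tHi, I need help resetting my password.\n"
    "2024-03-01T09:00:04\tbot\tSure, I can help with that. What is your account email?\n"
    "2024-03-01T09:00:21\tuser\tIt worked, thanks! Great service.\n"
    "2024-03-01T09:00:22\tbot\tHappy to help!"
)

# Toy sentiment lexicons (illustrative only).
POSITIVE = {"thanks", "great", "happy", "worked"}
NEGATIVE = {"wrong", "bad", "useless", "failed"}

# Parse each log line into (speaker, message) turns.
turns = []
for line in raw_log.splitlines():
    _timestamp, speaker, message = line.split("\t", 2)
    turns.append((speaker, message))

# Turn counts characterize conversational balance between user and chatbot.
turn_counts = Counter(speaker for speaker, _ in turns)

def naive_sentiment(message):
    """Toy lexicon score: +1 per positive token, -1 per negative token."""
    tokens = [t.strip(".,!?").lower() for t in message.split()]
    return sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)

user_sentiment = sum(naive_sentiment(m) for s, m in turns if s == "user")

print(dict(turn_counts))   # {'user': 2, 'bot': 2}
print(user_sentiment)      # 3
```

The same parse-then-aggregate pattern scales to server-side logs, where the aggregates feed evaluation metrics such as conversation length, turn-taking balance, or trends in user sentiment over time.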


Meanwhile, a substantial volume of user comments regarding AI systems have been posted to social media or online review platforms, describing users' relevant experiences, opinions, concerns, and expectations. Topic modelling, sentiment analysis, and other text mining techniques have been applied to analyze both conversation messages and user reviews, enabling researchers to recognize user intentions, understand user preferences, and determine user satisfaction. The insights extracted from such user-centered data are valuable to the continuous improvement of AI systems and the enhancement of user experience (Siemon et al., 2022).

6. Future trends of Human-AI interaction research

More diverse user groups. Prior HAII studies often involve general users or domain-specific users who exhibit higher willingness and abilities to utilize AI systems. As human-centered AI emphasizes the broader social impacts and ethical concerns, future research should consider various demographic or cultural user groups, especially technologically disadvantaged groups. For example, there is a rising interest in delivering age-friendly AI services and investigating the ways in which older adults can use AI assistive technologies to enhance their independent living and social and cognitive activities; AI applications have been integrated into smart classrooms and online gaming to amplify children's enthusiasm for learning, fostering the development of their cognitive capacities and physical skills; and it is also necessary to leverage the potential of AI technologies to improve the lives of people with disabilities, such as providing automated speech recognition tools for individuals with hearing impairments. Understanding the diverse needs and preferences of all user groups helps create AI systems that offer an inclusive user experience, preventing the exacerbation of the digital divide.

More diverse AI roles. Following the traditions in HCI research, existing HAII studies have often centered around augmenting humans with AI systems to accomplish their goals, highlighting AI roles as assistants or even tools. As evidenced by the wide adoption of the CASA paradigm, however, there is a growing inclination to perceive AI systems as social actors, especially those with embodied forms. Anthropomorphic design, which aims to enhance the human-likeness of AI in terms of appearance, communication, and behavior, has emerged as a promising avenue of research. Humans are also enabled to engage in more natural multi-modal interaction with AI, such as using speech, text, touch, gestures, and even facial expressions. Moreover, the literature has witnessed a rise in efforts to create empathetic AI or emotionally intelligent systems, investigating how AI systems can recognize, understand, respond to, and influence human emotions. An increasing variety of human-AI relationships are emerging, ranging from teammates and companions to, intriguingly, romantic partners. A shift of attention is anticipated, moving from the usability, usefulness, and ease of use of AI systems to considerations of anthropomorphism, hedonism, and socialization.

More diverse tasks. To elicit interaction between humans and AI systems, prior studies have mostly relied on simple, short, single conversational or gaming tasks, probably due to the constraints of AI capabilities. The introduction of the Wizard of Oz technique, i.e., having a human operator control or simulate the AI system's responses, helped address this challenge by making users believe they are interacting with a fully autonomous AI. However, researchers still need to consider the limited realism, operator bias, difficulty of simulation, and other concerns associated with this technique. The deployment of large language models has engendered new possibilities in HAII, showcasing remarkable abilities to process vast amounts of textual data and generate contextually relevant text. It is predicted that the forthcoming burst of large vision models will offer further improvement in understanding and interpreting complex visual data. As a result, future HAII research will be empowered to explore more complex or longitudinal tasks within realistic scenarios such as public services, business decisions, creative processes, and disease diagnosis and treatment.

Multi-disciplinary theoretical development. The disciplines of communication and psychology have contributed the most theories to HAII research, including MET, SPT, and PSR. These theories explain how humans perceive AI and their relationships with AI, enabling researchers to explore AI systems as social beings. The uncanny valley hypothesis, also with a psychological basis, further suggests how humans feel about embodied AI. TAM and related models, derived from the information systems literature, provide useful frameworks for research designs examining users' acceptance of AI systems as tools. The existing theoretical foundations described in Section 4 will continue to support HAII research in the future, highlighting different concerns for developing functional AI and social AI. Future research should seamlessly integrate interdisciplinary theories. Incorporating principles from computational law and intelligent law is crucial for establishing standardized governance in areas such as privacy and security, intellectual property, and the allocation of rights and responsibilities. Philosophical theories, including ontology, epistemology, and ethics, can deepen human understanding of AI and facilitate the alignment of AI with human values. In addition, educational concepts such as critical thinking, creative thinking, and experiential learning theory will contribute to the development of a modern education system tailored to the age of artificial intelligence, strengthening human core competencies that are distinct from AI.

Multi-level methodological integration. Current research methods in HAII tend to be singular, and future integration should occur at multiple levels. Firstly, data collection should be more thorough. The integration of qualitative and quantitative data enables the simultaneous analysis of quantitative metrics and subjective experiences. Utilizing both large- and small-scale data allows the discovery of universal patterns within large datasets and enables nuanced analysis to uncover underlying reasons. Combining trace and response data means that researchers can capture the most natural interactions unobtrusively and can also manipulate specific variables for real-time recording of user reactions. Secondly, user observation dimensions should be more comprehensive, covering cognitive, emotional, and behavioral aspects for a holistic understanding. Lastly, tool selection should be diversified, moving beyond traditional psychological measures to incorporate behavior tracking, micro-expression capture, and neuroscientific tools such as EEG and fNIRS. This integration would capture implicit, objective responses, offering more comprehensive data support and enhancing the rigor of research conclusions.

7. Conclusion

The rapid advancement of AI technology continues to significantly impact various aspects of human life, creating extensive research opportunities for HAII. This study reviews existing HAII research, extracting four main research themes (human-AI collaboration, competition, conflict, and symbiosis) and outlining the theoretical and methodological foundations. Based on the current landscape, the study envisions future research directions. By furnishing theoretical guidance and practical insights, this study contributes not only to ensuring the sustained and robust development of the HAII field but also to the realization of a harmonious symbiosis between humans and AI.

CRediT authorship contribution statement

Tingting Jiang: Writing – review & editing, Investigation, Funding acquisition, Conceptualization. Zhumo Sun: Writing – original draft, Methodology, Investigation. Shiting Fu: Writing – review & editing, Writing – original draft, Project administration, Investigation. Yan Lv: Writing – original draft.


Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This research has been made possible through the financial support of the National Social Science Fund of China under Grant No. 22&ZD325.

References

Alawadhi, M., Almazrouie, J., Kamil, M., & Khalil, K. A. (2020). A systematic literature review of the factors influencing the adoption of autonomous driving. International Journal of System Assurance Engineering and Management, 11(6), 1065–1082. https://doi.org/10.1007/s13198-020-00961-4
Aldarmaki, H., Ullah, A., Ram, S., & Zaki, N. (2022). Unsupervised automatic speech recognition: A review. Speech Communication, 139, 76–91. https://doi.org/10.1016/j.specom.2022.02.005
Amershi, S., Weld, D., Vorvoreanu, M., Fourney, A., Nushi, B., Collisson, P., Suh, J., Iqbal, S., Bennett, P. N., Inkpen, K., Teevan, J., Kikin-Gil, R., & Horvitz, E. (2019). Guidelines for human-AI interaction. In Proceedings of the 2019 CHI conference on human factors in computing systems (pp. 1–13). https://doi.org/10.1145/3290605.3300233
Babel, F., Kraus, J. M., & Baumann, M. (2021). Development and testing of psychological conflict resolution strategies for assertive robots to resolve human–robot goal conflict. Frontiers in Robotics and AI, 7, Article 591448.
Babel, F., Vogt, A., Hock, P., Kraus, J., Angerer, F., Seufert, T., & Baumann, M. (2022). Step aside! VR-based evaluation of adaptive robot conflict resolution strategies for domestic service robots. International Journal of Social Robotics, 14(5), 1239–1260. https://doi.org/10.1007/s12369-021-00858-7
Bharti, P. K., Ranjan, S., Ghosal, T., Agrawal, M., & Ekbal, A. (2021). PEERAssist: Leveraging on paper-review interactions to predict peer review decisions. In Towards open and trustworthy digital societies (pp. 421–435).
Čaić, M., Mahr, D., & Oderkerken-Schröder, G. (2019). Value of social robots in services: Social cognition perspective. Journal of Services Marketing, 33(4), 463–478. https://doi.org/10.1108/JSM-02-2018-0080
Canaan, R., Salge, C., Togelius, J., & Nealen, A. (2019). Leveling the playing field: Fairness in AI versus human game benchmarks. In Proceedings of the 14th international conference on the foundations of digital games (pp. 1–8).
Cañas Delgado, J. J. (2022). AI and ethics when human beings collaborate with AI agents. Frontiers in Psychology, 13, Article 836650. https://doi.org/10.3389/fpsyg.2022.836650
Cheng, L., Varshney, K. R., & Liu, H. (2021). Socially responsible AI algorithms: Issues, purposes, and challenges. Journal of Artificial Intelligence Research, 71, 1137–1181.
Chien, S.-Y., Lin, Y.-L., & Chang, B.-F. (2022). The effects of intimacy and proactivity on trust in human-humanoid robot interaction. Information Systems Frontiers, 26, 75–90. https://doi.org/10.1007/s10796-022-10324-y
Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539–548. https://doi.org/10.1016/j.future.2018.01.055
D'Alfonso, S. (2020). AI in mental health. Current Opinion in Psychology, 36, 112–117. https://doi.org/10.1016/j.copsyc.2020.04.005
Dang, J., & Liu, L. (2022). Implicit theories of the human mind predict competitive and cooperative responses to AI robots. Computers in Human Behavior, 134, Article 107300. https://doi.org/10.1016/j.chb.2022.107300
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. https://doi.org/10.2307/249008
De Dreu, C. K. W., & Weingart, L. R. (2003). Task versus relationship conflict, team performance, and team member satisfaction: A meta-analysis. Journal of Applied Psychology, 88(4), 741–749. https://doi.org/10.1037/0021-9010.88.4.741
Domingos, P., & Veve, A. (2018). Our digital doubles. Scientific American, 319(3), 88–93. https://www.jstor.org/stable/27173625
Esterwood, C., & Robert, L. P. (2021). Do you still trust me? Human-robot trust repair strategies. In 2021 30th IEEE international conference on robot & human interactive communication (RO-MAN) (pp. 183–188).
Flemisch, F. O., Pacaux-Lemoine, M.-P., Vanderhaegen, F., Itoh, M., Saito, Y., Herzberger, N., Wasser, J., Grislin, E., & Baltzer, M. (2020). Conflicts in human-machine systems as an intersection of bio- and technosphere: Cooperation and interaction patterns for human and machine interference and conflict resolution. In 2020 IEEE international conference on human-machine systems (ICHMS) (pp. 1–6).
Frey, C. B., & Osborne, M. A. (2017). The future of employment: How susceptible are jobs to computerisation? Technological Forecasting and Social Change, 114, 254–280.
Gratch, J., Rickel, J., Andre, E., Cassell, J., Petajan, E., & Badler, N. (2002). Creating interactive virtual humans: Some assembly required. IEEE Intelligent Systems, 17(4), 54–63. https://doi.org/10.1109/MIS.2002.1024753
Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169. https://doi.org/10.1016/j.ijinfomgt.2019.03.008
Guzdial, M., Liao, N., Chen, J., Chen, S.-Y., Shah, S., Shah, V., Reno, J., Smith, G., & Riedl, M. O. (2019). Friend, collaborator, student, manager: How design of an AI-driven game level editor affects creators. In Proceedings of the 2019 CHI conference on human factors in computing systems (pp. 1–13).
Hirschberg, J., & Manning, C. D. (2015). Advances in natural language processing. Science, 349(6245), 261–266. https://doi.org/10.1126/science.aaa8685
Ho, C.-C., & MacDorman, K. F. (2010). Revisiting the uncanny valley theory: Developing and validating an alternative to the Godspeed indices. Computers in Human Behavior, 26(6), 1508–1518. https://doi.org/10.1016/j.chb.2010.05.015
Horton, D., & Richard, W. R. (1956). Mass communication and para-social interaction. Psychiatry, 19(3), 215–229. https://doi.org/10.1080/00332747.1956.11023049
Huang, C., Zhang, Z., Mao, B., & Yao, X. (2023). An overview of artificial intelligence ethics. IEEE Transactions on Artificial Intelligence, 4(4), 799–819. https://doi.org/10.1109/TAI.2022.3194503
Hung, C., Choi, J., Gutstein, S., Jaswa, M., & Rexwinkle, J. (2021). Soldier-led adaptation of autonomous agents (SLA3). Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, III, 743–754. https://doi.org/10.1117/12.2585828
Isinkaye, F. O., Folajimi, Y. O., & Ojokoh, B. A. (2015). Recommendation systems: Principles, methods and evaluation. Egyptian Informatics Journal, 16(3), 261–273. https://doi.org/10.1016/j.eij.2015.06.005
Islas-Cota, E., Gutierrez-Garcia, J. O., Acosta, C. O., & Rodríguez, L.-F. (2022). A systematic review of intelligent assistants. Future Generation Computer Systems, 128, 45–62. https://doi.org/10.1016/j.future.2021.09.035
Jin, S. V., & Youn, S. (2023). Social presence and imagery processing as predictors of chatbot continuance intention in human-AI-interaction. International Journal of Human-Computer Interaction, 39(9), 1874–1886. https://doi.org/10.1080/10447318.2022.2129277
Johnson, J. (2019). The AI-cyber nexus: Implications for military escalation, deterrence and strategic stability. Journal of Cyber Policy, 4(3), 442–460. https://doi.org/10.1080/23738871.2019.1701693
Jörling, M., Böhm, R., & Paluch, S. (2019). Service robots: Drivers of perceived responsibility for service outcomes. Journal of Service Research, 22(4), 404–420. https://doi.org/10.1177/1094670519842334
Jung, H. S., & Yoon, H. H. (2018). Improving frontline service employees' innovative behavior using conflict management in the hospitality industry: The mediating role of engagement. Tourism Management, 69, 498–507. https://doi.org/10.1016/j.tourman.2018.06.035
Kahn, L. H., Savas, O., Morrison, A., Shaffer, K. A., & Zapata, L. (2020). Modelling hybrid human-artificial intelligence cooperation: A call center customer service case study. In 2020 IEEE international conference on big data (big data) (pp. 3072–3075).
Kamezaki, M., Kobayashi, A., Yokoyama, Y., Yanagawa, H., Shrestha, M., & Sugano, S. (2020). A preliminary study of interactive navigation framework with situation-adaptive multimodal inducement: Pass-by scenario. International Journal of Social Robotics, 12(2), 567–588.
Lai, Y., Kankanhalli, A., & Ong, D. (2021). Human-AI collaboration in healthcare: A review and research agenda. In The 54th Hawaii international conference on system sciences (HICSS-54).
Lam, S. C. J., Ali, A., Abdalla, M., & Fine, B. (2023). U"AI" testing: User interface and usability testing of a chest X-ray AI tool in a simulated real-world workflow. Canadian Association of Radiologists Journal, 74(2), 314–325. https://doi.org/10.1177/08465371221131200
Lee, K. M. (2006). Presence, explicated. Communication Theory, 14(1), 27–50. https://doi.org/10.1111/j.1468-2885.2004.tb00302.x
Lee, M. H., Siewiorek, D. P., Smailagic, A., Bernardino, A., & Bermúdez i Badia, S. (2021). A human-AI collaborative approach for clinical decision making on rehabilitation assessment. In Proceedings of the 2021 CHI conference on human factors in computing systems (pp. 1–14).
Li, J. (2015). The benefit of being physically present: A survey of experimental works comparing copresent robots, telepresent robots and virtual agents. International Journal of Human-Computer Studies, 77, 23–37. https://doi.org/10.1016/j.ijhcs.2015.01.001
Licklider, J. C. R. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1(1), 4–11.
Liew, T. W., & Tan, S.-M. (2021). Social cues and implications for designing expert and competent artificial agents: A systematic review. Telematics and Informatics, 65, Article 101721. https://doi.org/10.1016/j.tele.2021.101721
Liu, V., & Chilton, L. B. (2022). Design guidelines for prompt engineering text-to-image generative models. In Proceedings of the 2022 CHI conference on human factors in computing systems (pp. 1–23). https://doi.org/10.1145/3491102.3501825
Loske, D., & Klumpp, M. (2021). Intelligent and efficient? An empirical analysis of human–AI collaboration for truck drivers in retail logistics. International Journal of Logistics Management, 32(4), 1356–1383. https://doi.org/10.1108/IJLM-03-2020-0149
Louie, R., Coenen, A., Huang, C. Z., Terry, M., & Cai, C. J. (2020). Novice-AI music co-creation via AI-steering tools for deep generative models. In Proceedings of the 2020 CHI conference on human factors in computing systems (pp. 1–13).
https://doi.org/10.1016/j.techfore.2016.08.019
Lu, L., Cai, R., & Gursoy, D. (2019). Developing and validating a service robot integration
Gao, Z., & Jiang, J. (2021). Evaluating human-AI hybrid conversational systems with
willingness scale. International Journal of Hospitality Management, 80, 36–51. https://
chatbot message suggestions. In Proceedings of the 30th ACM international conference
doi.org/10.1016/j.ijhm.2019.01.005
on information & knowledge management (pp. 534–544). https://doi.org/10.1145/
3459637.3482340

T. Jiang et al. Data and Information Management xxx (xxxx) xxx

Mantravadi, S., Jansson, A. D., & Møller, C. (2020). User-friendly MES interfaces: Recommendations for an AI-based chatbot assistance in industry 4.0 shop floors. In Intelligent information and database systems (pp. 189–201).
Modliński, A., Fortuna, P., & Rożnowski, B. (2023). Human–machine trans roles conflict in the organization: How sensitive are customers to intelligent robots replacing the human workforce? International Journal of Consumer Studies, 47(1), 100–117. https://doi.org/10.1111/ijcs.12811
Mohamed, A., Ren, J., El-Gindy, M., Lang, H., & Ouda, A. N. (2018). Literature survey for autonomous vehicles: Sensor fusion, computer vision, system identification and fault tolerance. International Journal of Automation and Control, 12(4), 555–581. https://doi.org/10.1504/IJAAC.2018.095104
Mori, M., MacDorman, K. F., & Kageki, N. (2012). The uncanny valley [from the field]. IEEE Robotics and Automation Magazine, 19(2), 98–100. https://doi.org/10.1109/MRA.2012.2192811
Nass, C., Steuer, J., & Tauber, E. R. (1994). Computers are social actors. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 72–78).
Ng, F., Suh, J., & Ramos, G. (2020). Understanding and supporting knowledge decomposition for machine teaching. In Proceedings of the 2020 ACM designing interactive systems conference (pp. 1183–1194).
Oh, C. S., Bailenson, J. N., & Welch, G. F. (2018). A systematic review of social presence: Definition, antecedents, and implications. Frontiers in Robotics and AI, 5, 114. https://doi.org/10.3389/frobt.2018.00114
Oh, C., Song, J., Choi, J., Kim, S., Lee, S., & Suh, B. (2018). I lead, you help but only with enough details: Understanding user experience of co-creation with artificial intelligence. In Proceedings of the 2018 CHI conference on human factors in computing systems (pp. 1–13).
Pariser, E. (2011). The filter bubble: What the Internet is hiding from you. Penguin UK.
Paul, D., Sanap, G., Shenoy, S., Kalyane, D., Kalia, K., & Tekade, R. K. (2021). Artificial intelligence in drug discovery and development. Drug Discovery Today, 26(1), 80–93. https://doi.org/10.1016/j.drudis.2020.10.010
Rajpurkar, P., Chen, E., Banerjee, O., & Topol, E. J. (2022). AI in health and medicine. Nature Medicine, 28(1), 31–38. https://doi.org/10.1038/s41591-021-01614-0
Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
Ribino, P. (2023). The role of politeness in human–machine interactions: A systematic literature review and future perspectives. Artificial Intelligence Review, 56(1), 445–482. https://doi.org/10.1007/s10462-023-10540-1
Riek, L. D. (2012). Wizard of Oz studies in HRI: A systematic review and new reporting guidelines. Journal of Human-Robot Interaction, 1(1), 119–136. https://doi.org/10.5898/JHRI.1.1.Riek
Roca, S., Lozano, M. L., García, J., & Alesanco, Á. (2021). Validation of a virtual assistant for improving medication adherence in patients with comorbid type 2 diabetes mellitus and depressive disorder. International Journal of Environmental Research and Public Health, 18(22), Article 12056. https://www.mdpi.com/1660-4601/18/22/12056
Sadiku, M. N. O., & Musa, S. M. (2021). Augmented intelligence. In A primer on multiple intelligences (pp. 191–199). Springer International Publishing. https://doi.org/10.1007/978-3-030-77584-1_15
Salles, A., Evers, K., & Farisco, M. (2020). Anthropomorphism in AI. AJOB Neuroscience, 11(2), 88–95. https://doi.org/10.1080/21507740.2020.1740350
Sandholm, T. (2017). Super-human AI for strategic reasoning: Beating top pros in heads-up no-limit Texas hold'em. In Proceedings of the twenty-sixth international joint conference on artificial intelligence (IJCAI-17) (pp. 24–25).
Schlomann, A., Wahl, H.-W., Zentel, P., Heyl, V., Knapp, L., Opfermann, C., Krämer, T., & Rietz, C. (2021). Potential and pitfalls of digital voice assistants in older adults with and without intellectual disabilities: Relevance of participatory design elements and ecologically valid field studies. Frontiers in Psychology, 12, Article 684012. https://doi.org/10.3389/fpsyg.2021.684012
Schultz, M. G., Betancourt, C., Gong, B., Kleinert, F., Langguth, M., Leufen, L. H., Mozaffari, A., & Stadtler, S. (2021). Can deep learning beat numerical weather prediction? Philosophical Transactions of the Royal Society A: Mathematical, Physical & Engineering Sciences, 379(2194), Article 20200097. https://doi.org/10.1098/rsta.2020.0097
Shakerin, F., & Gupta, G. (2020). White-box induction from SVM models: Explainable AI with logic programming. Theory and Practice of Logic Programming, 20(5), 656–670. https://doi.org/10.1017/S1471068420000356
Shneiderman, B. (2022). Human-centered AI. Oxford University Press.
Siemon, D., Strohmann, T., Khosrawi-Rad, B., Vreede, T. d., Elshan, E., & Meyer, M. (2022). Why do we turn to virtual companions? A text mining analysis of Replika reviews. In AMCIS 2022 proceedings (pp. 1–10). https://www.alexandria.unisg.ch/handle/20.500.14171/109536
Sun, Y., Shen, X.-L., & Zhang, K. Z. K. (2023). Human-AI interaction. Data and Information Management, 7(3), Article 100048. https://doi.org/10.1016/j.dim.2023.100048
Süße, T., Kobert, M., & Kries, C. (2021). Antecedents of constructive human-AI collaboration: An exploration of human actors' key competencies. In Smart and sustainable collaborative networks 4.0 (pp. 113–124).
Świechowski, M. (2020). Game AI competitions: Motivation for the imitation game-playing competition. In 2020 15th conference on computer science and information systems (FedCSIS) (pp. 155–160).
Tabrez, A., Luebbers, M. B., & Hayes, B. (2020). A survey of mental modeling techniques in human–robot teaming. Current Robotics Reports, 1(4), 259–267. https://doi.org/10.1007/s43154-020-00019-0
Takayama, L., Groom, V., & Nass, C. (2009). I'm sorry, Dave: I'm afraid I won't do that: Social aspects of human-agent conflict. In Proceedings of the SIGCHI conference on human factors in computing systems (pp. 2099–2108).
Thomas, J., & Vaughan, R. (2018). After you: Doorway negotiation for human-robot and robot-robot interaction. In 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 3387–3394).
Tsai, W.-H. S., Liu, Y., & Chuan, C.-H. (2021). How chatbots' social presence communication enhances consumer engagement: The mediating role of parasocial interaction and dialogue. Journal of Research in Interactive Marketing, 15(3), 460–482. https://doi.org/10.1108/JRIM-12-2019-0200
Tyagi, A. K. (2019). Machine learning with big data. In Proceedings of international conference on sustainable computing in science, technology and management (SUSCOM) (pp. 1011–1020).
Ueno, T., Sawa, Y., Kim, Y., Urakami, J., Oura, H., & Seaborn, K. (2022). Trust in human-AI interaction: Scoping out models, measures, and methods. In CHI conference on human factors in computing systems extended abstracts (pp. 1–7). https://doi.org/10.1145/3491101.3519772
UNESCO. (2022). Recommendation on the ethics of artificial intelligence. https://unesdoc.unesco.org/notice?id=p::usmarcdef_0000381137
Varshan, V. S., Rajakumaran, G., Usharani, S., & Vincent, R. (2023). A multimodal architecture with visual-level framework for virtual assistant. International Journal of Intelligent Systems and Applications in Engineering, 11(2), 1004–1012. https://www.ijisae.org/index.php/IJISAE/article/view/2983
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478.
Venkatesh, V., Thong, J. Y., & Xu, X. (2012). Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Quarterly, 36(1), 157–178.
Villacis Calderon, E. D., James, T. L., & Lowry, P. B. (2023). How Facebook's newsfeed algorithm shapes childhood vaccine hesitancy: An algorithmic fairness, accountability, and transparency (FAT) perspective. Data and Information Management, 7(3), Article 100042. https://doi.org/10.1016/j.dim.2023.100042
Voulodimos, A., Doulamis, N., Doulamis, A., & Protopapadakis, E. (2018). Deep learning for computer vision: A brief review. Computational Intelligence and Neuroscience, 2018, Article 7068349. https://doi.org/10.1155/2018/7068349
Wen, H., Amin, M. T., Khan, F., Ahmed, S., Imtiaz, S., & Pistikopoulos, S. (2022). A methodology to assess human-automated system conflict from safety perspective. Computers & Chemical Engineering, 165, Article 107939.
Whang, C., & Im, H. (2021). 'I like your suggestion!' The role of humanlikeness and parasocial relationship on the website versus voice shopper's perception of recommendations. Psychology and Marketing, 38(4), 581–595. https://doi.org/10.1002/mar.21437
Wu, Z., Ji, D., Yu, K., Zeng, X., Wu, D., & Shidujaman, M. (2021). AI creativity and the human-AI co-creation model. In HCII 2021: Human-computer interaction. Theory, methods and tools (pp. 171–190).
Wu, X., Xiao, L., Sun, Y., Zhang, J., Ma, T., & He, L. (2022). A survey of human-in-the-loop for machine learning. Future Generation Computer Systems, 135, 364–381. https://doi.org/10.1016/j.future.2022.05.014
Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., & Zhu, J. (2019). Explainable AI: A brief survey on history, research areas, approaches and challenges. In Natural language processing and Chinese computing: 8th CCF international conference, NLPCC 2019 (pp. 563–574).
Zhang, R., Duan, W., Flathmann, C., McNeese, N., Freeman, G., & Williams, A. (2023). Investigating AI teammate communication strategies and their impact in human-AI teams for effective teamwork. Proceedings of the ACM on Human-Computer Interaction, 7(CSCW2), 1–31. https://doi.org/10.1145/3610072
Zhang, C., Yao, C., Liu, J., Zhou, Z., Zhang, W., Liu, L., Ying, F., Zhao, Y., & Wang, G. (2021). StoryDrawer: A co-creative agent supporting children's storytelling through collaborative drawing. In Extended abstracts of the 2021 CHI conference on human factors in computing systems (pp. 1–6).
Zhao, Z., & Ma, X. (2018). A compensation method of two-stage image generation for human-AI collaborated in-situ fashion design in augmented reality environment. In 2018 IEEE international conference on artificial intelligence and virtual reality (AIVR) (pp. 76–83).
Zheng, W. L., Liu, W., Lu, Y., Lu, B. L., & Cichocki, A. (2019). EmotionMeter: A multimodal framework for recognizing human emotions. IEEE Transactions on Cybernetics, 49(3), 1110–1122. https://doi.org/10.1109/TCYB.2018.2797176
Zhu, H., Sallnäs Pysander, E.-L., & Söderberg, I.-L. (2023). Not transparent and incomprehensible: A qualitative user study of an AI-empowered financial advisory system. Data and Information Management, 7(3), Article 100041. https://doi.org/10.1016/j.dim.2023.100041
Złotowski, J., Yogeeswaran, K., & Bartneck, C. (2017). Can we control it? Autonomous robots threaten human identity, uniqueness, safety, and resources. International Journal of Human-Computer Studies, 100, 48–54. https://doi.org/10.1016/j.ijhcs.2016.12.008
Zwakman, D. S., Pal, D., & Arpnikanondt, C. (2021). Usability evaluation of artificial intelligence-based voice assistants: The case of Amazon Alexa. SN Computer Science, 2(1), 28. https://doi.org/10.1007/s42979-020-00424-4
