


When and How Artificial Intelligence Augments Employee Creativity

Nan Jia
Marshall School of Business, University of Southern California
[email protected]

Xueming Luo
Fox School of Business, Temple University
[email protected]

Zheng Fang*
Business School, Sichuan University
[email protected]

Chengcheng Liao*
Business School, Sichuan University
[email protected]

Forthcoming, Academy of Management Journal

Abstract

Can artificial intelligence (AI) assist human employees in increasing employee creativity?
Drawing on research on AI-human collaboration, job design, and employee creativity, we examine
AI assistance in the form of a sequential division of labor within organizations: in a task, AI
handles the initial portion which is well-codified and repetitive, and employees focus on the
subsequent portion involving higher-level problem-solving. First, we provide causal evidence
from a field experiment conducted at a telemarketing company. We find that AI assistance in
generating sales leads, on average, increases employees’ creativity in answering customers’
questions during subsequent sales persuasion. Enhanced creativity leads to increased sales.
However, this effect is much more pronounced for higher-skilled employees. Next, we conducted
a qualitative study using semi-structured interviews with the employees. We found that AI
assistance changes job design by intensifying employees’ interactions with more serious customers.
This change enables higher-skilled employees to generate innovative scripts and develop positive
emotions at work, which are conducive to creativity. By contrast, with AI assistance, lower-skilled
employees make limited improvements to scripts and experience negative emotions at work. We
conclude that employees can achieve AI-augmented creativity, but this desirable outcome is skill-
biased, favoring experts with greater job skills.
*Corresponding authors.

Author Bio:

Nan Jia ([email protected]) is the Dean's Associate Professor in Business Administration
at the Marshall School of Business, University of Southern California. She holds a PhD degree in Strategic
Management from the Rotman School of Management, University of Toronto. Her research interests
include corporate political strategy, business-government relationships, and applications of Artificial
Intelligence technologies in management.

Xueming Luo ([email protected]) is the Charles Gilliland Distinguished Chair Professor of


Marketing, Professor of Strategy and MIS, and Founder/ Director of the Global Institute for Artificial
Intelligence & Business Analytics in the Fox School of Business at Temple University. His research is
quantitative in nature and focuses on integrating artificial intelligence, 5G/AR/VR metaverse technologies,
big data machine learning, and field experiments to model, explain, and optimize customer behaviors,
company strategies, platform designs, and creator & sharing economy.

Zheng Fang ([email protected]) is a professor of Marketing, Information Systems and Strategic


Management at the Business School of Sichuan University. He received his PhD from Sichuan University. His
research interests include artificial intelligence (AI), machine learning (ML), and new-generation information
technology (NGIT) in business administration.

Chengcheng Liao ([email protected]) is a postdoctoral researcher in Marketing and Information Systems at


the Business School of Sichuan University. She received her PhD from Sichuan University. Her research
interests include artificial intelligence (AI) and machine learning (ML) in business administration.

“We need to acquaint a generation of workers with technologies to take on the more mundane,
repetitive portions of their jobs, and in turn elevate their decision-making roles within enterprises.”
Forbes Magazine, February 13, 20211

INTRODUCTION

In modern organizations, employee creativity, which enables employees to solve problems whose

solutions have not yet been established (e.g., Zhou and George 2001), plays a vital role in increasing

employee productivity and, thus, organizational performance (e.g., Anderson, Potocnik, and Zhou 2014).

Can AI technology increase the creativity of human employees, thus generating “AI-augmented employee

creativity”? This question has attracted significant interest from industry experts in terms of both

technology adoption and human resource management. A promising vision for organizations is to use AI

to assist human employees by “freeing [them] for higher-level” problem solving (Wilson and Daugherty

2018, p. 6). One speculation is that AI can increase employee creativity as “humans and bots work

together, with bots taking care of the heavy lifting, so humans can focus on the creative [part]” (O’Carroll

2017). However, there is a lack of theoretical foundation and systematic empirical evidence of whether

AI can indeed assist employees in creatively solving higher-level problems and the critical conditions that

must be met for this desirable outcome.

We address this gap by studying AI-human collaboration in a sequential division of labor (Puranam

2020), which entails assigning the initial well-codified and repetitive portions of a task to AI and the

subsequent higher-level problem-solving portions of the same task to human employees. First, we argue

that this form of AI assistance changes the human employees' job design. On this basis, we draw on the

insights developed by job characteristic models (e.g., Hackman, Oldham, Janson, and Purdy 1975; Elsbach

and Hargadon 2006; Chae and Choi 2018) and research on employee creativity (e.g., Amabile, Conti, Coon,

Lazenby, and Herron 1996; Zhou and Shalley 2003) to develop two competing implications for whether

this form of AI assistance increases employee creativity. We then identify employees’ job skills, i.e.,

1 https://www.forbes.com/sites/joemckendrick/2021/02/13/needed-people-to-put-the-intelligence-in-artificial-
intelligence/?sh=434a86533160&mod=djemAIPro.
domain expertise in carrying out the tasks required by their jobs, as a critical condition. Thus, we

hypothesize that this form of AI assistance is more likely to enable higher-skilled employees to find new,

useful solutions for higher-level problems, thus demonstrating greater creativity. (It is important to clarify

that we distinguish high- and low-skilled employees based on their varying expertise in performing the

same task; we do not refer to employees who perform different tasks that require greater or lesser training.)

We also argue that increasing employee creativity is critical for AI assistance to enhance organizational

performance.

We employed a mixed-methods empirical approach. First, we provided causal evidence from a field

experiment conducted by a major telemarketing company. The task involved making outbound calls to

sell credit cards to customers, which comprised generating sales leads, a well-codified activity that an

established AI conversational bot technology could perform, and subsequently persuading these sales

leads to make purchases, which human sales agents performed. We used double randomization: 3,144

customers were randomly assigned to be served by AI-human teams or human agents alone, and 40 human

agents were randomly assigned to work in AI-human teams or independently. To measure employee

creativity, we used voice recognition and text mining analysis to process the audio recordings of agents’

conversations during sales persuasion, identifying whether customer questions fell outside the scope of

agents’ training and whether agents successfully answered these untrained questions. We also observed

whether customers applied for credit cards after the sales calls. The results show that, on average, agents

with AI assistance were 2.33 times as successful in solving untrained questions as those without AI

assistance, but the magnitude of this increase was much more pronounced for top agents—2.81 times that

of bottom agents. Further, causal mediation analysis demonstrates that increased success in answering

untrained questions is critical for AI-assisted agents to achieve higher customer purchase rates than those

obtained independently.

Subsequently, we conducted semi-structured interviews with the 28 sales agents involved in the

field experiment. The agents confirmed that AI assistance changed their job design by screening out

uninterested customers, thereby intensifying their interactions with more serious customers. This change
impacted agents’ skills and psychology but with a distinct divergence based on the agent’s job skills.

Higher-skilled agents discussed several paths through which such a change enabled them to produce more

innovative scripts to address untrained questions from customers. This change also engenders positive

psychological outcomes for higher-skilled agents, including better mood, higher morale, a greater sense

of freedom in their position, and a more positive view of the firm. In contrast, lower-skilled agents

expressed that they had limited abilities to take advantage of the opportunities presented by this change to

solve untrained questions and reported greater stress, a stronger sense of defeat, and lower morale. The

findings corroborate and enrich our theory by generating deeper and more nuanced insights into the

underlying mechanisms through which AI assistance affects employee creativity.

This paper makes the following contributions. First, by demonstrating that it is possible to use AI-

human collaboration to increase employee creativity, thus achieving “AI-augmented employee creativity,”

this paper contributes to the fast-growing literature of “augmented intelligence,” which focuses on creating

complementarity between AI and humans (Davenport and Kirby 2016; Raisch and Krakowski 2021;

Lebovitz, Lifshitz-Assaf, and Levina, 2022). Despite being attractive, the idea of using AI and humans

together for “co-creation” (e.g., Daugherty and Wilson 2018; Puranam 2020; Wilson and Daugherty 2018)

remains broad and vague. We developed a concrete design using the sequential division of labor to create

a complementary relationship between AI and human employees. In the literature on employee creativity,

our study adds AI assistance to the toolbox that enables employees to solve problems in a novel and useful

manner, thereby improving creativity at work. Moreover, based on theories of AI-human collaboration

and job design, our analysis identified a deeper theoretical tension in generating AI-augmented employee

creativity. This tension, and the critical scope conditions, extend what we previously knew about how

AI may affect employees’ creativity at work.

Second, by delineating the critical conditions concerning employees’ job skills, we highlight that

AI-augmented creativity is skill-biased. Economic theory on skill-biased technology establishes that

production technologies enable skilled labor to become more productive than unskilled labor (Card and

DiNardo 2002). We reveal that AI technologies in non-production-related organizational contexts also


exhibit this feature, but for different theoretical reasons—employees’ job skills critically shape their

cognitive abilities and psychological outcomes, which are conducive to creativity. This is a novel insight

into employee creativity literature. Moreover, whereas the literature on AI adoption commonly considers

human employees’ job skills to enable them to compete in a “horse race” with AI (Choudhury, Starr, and

Agarwal 2020), we expand this view by showing that job skills also shape how employees benefit from

AI assistance. Finally, despite its benefits, skill-biased AI-augmented intelligence raises serious

questions about the dark side of AI adoption in organizations (Kellogg, Valentine, and Christin 2020).

Overall, the novel findings contribute to scholars’ understanding of human-AI collaboration—

namely, how redesigning a job to incorporate AI has implications for the creativity of a human worker.

The findings are based on mixed methods: quantitative, experimental methods to identify a meaningful

effect in the field, paired with qualitative methods for elucidating mechanisms.

THEORY AND HYPOTHESES

AI-Human Collaboration Changes Job Design

AI technologies have been implemented to conduct a fast-growing set of economic activities with

higher accuracy, reliability, and scalability than humans (e.g., Balasubramanian et al. 2018;

Balasubramanian et al. 2020). Current AI technologies are particularly effective in performing repetitive,

well-codified tasks that follow clear, specific protocols and scripts (Choudhury et al. 2020; Berente, Gu,

Recker, and Santhanam 2021). However, the current form of AI faces limitations in handling unscripted,

higher-level, unstructured tasks (Brynjolfsson and McAfee 2014; Berente et al. 2021), partially because

it does not have human-like creativity in generating new answers to unknown problems (O’Carroll 2017;

Wilson and Daugherty 2018).

A division of labor between AI and humans may be plausible to leverage the relative strengths of AI

and human employees (Puranam, 2020). Any successful division of labor entails breaking down the goal

of achieving a particular organizational task into interdependent work sub-clusters (e.g., Milgrom and

Roberts 1990). Such interdependence can manifest sequentially when the outcome of one subtask becomes
the input for another or in parallel when subtasks jointly produce a common outcome (Christensen and

Knudsen 2020).2

In this study, we focus on the sequential division of labor; specifically, that AI handles the well-

codified, repetitive portion of the task, whose outcomes are necessary inputs for subsequent higher-level

problem-solving portions of the same task. This is a form of AI-human collaboration. In reality, many

sequential multistage tasks exist. For example, in making sales, the sequential portions involve vetting

potential customers' interests to generate sales leads and then persuading the leads to make a purchase

(e.g., Debecker 2019; Kasan 2020a; Sabnis et al. 2013). In managing patients, the sequential components

involve triaging by collecting information on patients to sort them into categories so that medical doctors

can subsequently conduct more focused examinations and diagnoses (Longoni, Bonezzi, and Morewedge

2019). In the recruiting task, the sequential components entail prescreening to identify and exclude any

poor fit for the firm based on basic information. Thus, subsequent in-depth interviews can focus on a

smaller set of more targeted candidates (Chen and Li 2018; van den Broek et al. 2021).

Introducing such a division of labor between AI and human employees results in a structural change

in the job design of human employees, with two key features: it lowers the workload on employees to

perform repetitive, well-codified, and structured work (which AI now handles), whereas it intensifies

employees’ workload of handling unstructured, high-level problem-solving.

Is AI theoretically important to this design? AI is essential to this division of labor, and hence to the job

design changes, for two reasons. First, if the initial portion of the task is delegated to other employees, two

problems emerge. The first problem pertains to the interdependence between the two tasks. AI performs

repetitive, well-codified work with greater reliability and consistency than human workers, whose

performance can be negatively affected by boredom and fatigue (Barbalet 1999, Casilli and Posada 2019).

Therefore, employees handling the subsequent portion of the work may experience negative spillovers if

2
In studying the division of labor between AI and humans in organizations, Jia et al. (2021) examined a parallel
division of labor when AI and managers use their respective strengths to jointly produce feedback for employees in
the process of creating complementarity greater than what each can generate independently.
the delegated portion is not performed consistently by other human employees. This negative experience

or concern may undermine focal employees’ perceived autonomy or personal responsibility for their work

(Hackman et al., 1975), thus reducing their intrinsic motivation at work, including being creative (Zhou

and Shalley, 2003). Increased task interdependence constrains employees and complicates their behavior

at work (Chae and Choi, 2018).

The second problem is that this alternative design represents a regression of the established insights

of the job enrichment model (Hackman et al., 1975; Hackman and Oldham, 1980), which advocates for

making work more “interesting, challenging, and intrinsically motivating” (Elsbach and Hargadon, 2006,

Page 470). These features are the opposite of the well-codified, repetitive work that AI can handle, so

hypothetical employees to whom, instead of AI, this work is delegated will experience the known negative

consequences that scholars of job design have strived to avoid.

Second, AI is the most advanced among existing technological tools for handling well-codified and

repetitive work. For example, conversational AI is a mature technology widely adopted by many industries

(Debecker 2019). Compared with alternative technologies, such as previous-generation rule-based pre-

recorded systems, AI conversational bots are more powerful. This is because alternative technologies

cannot engage in human-like conversations, resulting in less effective information exchange and a higher

likelihood of triggering customer aversion (Luo et al. 2019).

Theoretical Tension Over AI-Augmented Employee Creativity

In organizational research, employee creativity refers to intentionally generating new and useful

ideas, methods, or practices (for a comprehensive review, see Anderson et al. 2014).3 Employee creativity

is a context-based concept that refers to employees’ generation of new ideas to solve problems in an

organizational context. An employee’s new idea to address a problem in the focal organization may

resemble ideas already developed elsewhere. However, to the extent that the focal organization does not

3
A closely related concept is innovation. Organizational scholars distinguish innovation as “both the production of
creative ideas as the first stage and their implementation as the second stage” (Anderson et al. 2014, p. 1298).
Further, it is important to distinguish creativity from novelty or radicalness—some new ideas are more radical or
novel, whereas other new ideas are less novel and more incremental (Anderson et al. 2014; Zhou and George 2001).
know those existing ideas and thus faces a problem that would otherwise not be solved had the employee

not generated the new idea, we consider this employee to have demonstrated creativity. Therefore, an

employee does not have to be the first or only one in the world to generate ideas that are considered

creative. Thus, employee creativity is a within-organizational phenomenon that varies at lower levels,

such as individuals, teams, and tasks (Anderson et al. 2014).

Based on research on job characteristic models and employee creativity, there exists theoretical

tension regarding whether the changes in job design induced by the sequential division of labor between

AI and human employees increase employee creativity, thus generating AI-augmented employee

creativity.

There are two reasons for this positive effect. First, the changed job design helps with the

conservation of resources (Hobfoll, Shirom, and Golembiewski, 2000) needed by employees for higher-

level problem-solving. When employees can use the output generated by the well-codified and repetitive

work carried out by AI without having to conduct these activities themselves, they can invest more of their

conserved cognitive, mental, and emotional resources in solving subsequent higher-level problems than

when they have to handle the initial work themselves. With greater conservation of such resources,

employees are more likely to stay focused on the work at hand, which increases the likelihood of achieving

creative outcomes (Oldham & Cummings, 1996).

Second, the changed job design, by eliminating the well-codified and repetitive portions of tasks

from employee responsibilities, increases the overall complexity and challenges of the problems that

employees are exposed to at work. Research has found that, when facing complex and challenging jobs,

employees become more motivated, proactive, excited, and entrepreneurial, which increases the number

of new ideas they generate (Oldham & Cummings, 1996; Chae & Choi, 2018; Shalley, Gilson, & Blum,

2009). A critical mechanism is the sense of autonomy: employees report achieving more creative

outcomes when they consider working on challenging assignments that offer them more freedom

(Amabile and Gryskiewicz, 1989; Hatcher et al., 1989). Another mechanism is that when employees are
more excited about work, they have a higher intrinsic motivation to explore new paths and find new

solutions, which critically increases their creativity (Amabile et al. 1990).

However, an opposite outcome may occur for two reasons. First, increased workload pressure

constitutes an obstacle to employee creativity (Amabile et al., 1996; Perlow, 2001; Elsbach and Hargadon,

2006). High job demands heighten workload pressure to overcome cognitive challenges and cope with

time pressure, and lower job control over timing, pacing, and quality of work (Elsbach and Hargadon,

2006). For employees, the focal job design, while alleviating workload pressure from the repetitive, well-

codified portion of the task, intensifies the workload pressure of solving higher-level problems. Solving

higher-level problems is more cognitively challenging than performing well-codified work, and it does

not always result in the luxury of greater control over the timing, pace, and quality of work. If the AI-

induced change in job design increases the overall workload pressure, it threatens to reduce employee

creativity.

Second, in addition to tangible resources provided by organizations that enable employees to search

for new solutions (for a review, see Anderson et al. 2014), the perception of greater organizational support

is more likely to increase employee motivation to search for newer and better solutions (Liu et al. 2017;

Farmer, Tierney, Kung-McIntyre, 2003), thus increasing employee creativity (Zhou and George, 2001;

Shalley, Gilson, and Blum, 2009). While employees may view organizations’ use of AI in the sequential

division of labor as aiding them in their jobs, hence increasing organizational support, there also exists a

salient concern that AI adoption triggers employees’ fear of job displacement by technology (Tong, Jia,

Luo, and Fang, 2021), which can be perceived as lowered organizational support for employees.

Despite the competing implications for the effect of a sequential division of labor between AI and

human employees on employee creativity, our core argument is that employees’ job skills constitute a

critical condition for delineating the competing effects, which we illustrate next.

Critical Condition of Job Skills: Skill-Biased AI-Augmented Employee Creativity

We consider employees’ job skills to be a critical condition that delineates two competing

implications for AI-augmented employee creativity. Employees’ job skills are their domain expertise or
knowledge of how to carry out the tasks required by their jobs. Job skills determine how well employees

can overcome challenges encountered at work; thus, these abilities matter to how well they perform

relevant tasks (MacInnis, Moorman, and Jaworski 1991).

Recall that the arguments for a positive effect produced by the changed job design as a result of a

sequential division between AI and human employees on employee creativity include that it conserves

the cognitive resources of employees and that it increases the complexity of the work handled by

employees, both of which are conducive to employee creativity. These mechanisms are stronger for

employees with higher job skills because they need to possess sufficient domain-relevant expertise to

capture the opportunities to demonstrate the creativity presented to them by complex tasks (Liu et al.

2017) or conserved resources. That is, without a sufficient amount of domain knowledge needed to

develop new solutions, employees will not be able to achieve creativity, no matter how motivated or

energized they are to do so. Several existing theories have highlighted the importance of expertise in

creating new ideas. The componential theory of employee creativity holds that domain-relevant skills are

important drivers of employee creativity in that domain (Amabile, 1996). As supported by absorptive

capacity theory (wherein greater existing knowledge enables individuals and organizations to learn new

knowledge better; Cohen and Levinthal 1990) and the theory of innovation (wherein new knowledge

results from the recombination of prior knowledge; Fleming et al. 2007), greater existing skills of

employees in the job-related domain improve their ability to find new solutions to problems. Therefore, the

greater an employee’s job skills, the more likely they are to take advantage of the cognitive resources

conserved, and the higher the job complexity generated by the AI-induced changes to the job design,

both of which are conducive to workplace creativity.

Recall that countervailing arguments are rooted in concerns over the changed job design

intensifying the workload pressure of employees or being perceived by employees as reduced

organizational support owing to their fear of displacement by AI. While both concerns are obstacles to

employee creativity, we argue that they are alleviated to a greater extent for employees with higher

skills. First, although the changed job design increases employees’ workload of engaging in high-level
problem-solving, higher job skills enable employees to solve problems in their job domain, so the

overall workload pressure is less likely to increase, or increases to a lesser extent, for higher-skilled

employees. Second, when employees have higher job skills, they develop greater advantages in a “horse

race” with AI, which alleviates their concerns about being replaced by AI (Qin, Jia, Luo, and Liao,

2022).

Consequently, the positive impact of the changed job design resulting from a sequential division

of labor between AI and human employees on employee creativity is amplified when employees with

higher job skills are involved. In contrast, the potential negative impact is alleviated for higher-skilled

employees. As a result, we conclude that the sequential division of labor between AI and human

employees is more likely to increase the creativity of employees with greater job skills. In other words,

AI-augmented employee creativity is more likely for employees with better job skills; hence, it is skill-

biased. This conclusion is captured in the following hypothesis.

H1: In an organizational task with a sequential division of labor between AI and human employees, AI
assistance with handling the initial well-codified, repetitive portion of the task is more likely to increase
employees’ creativity in solving subsequent higher-level problems of the task when these employees have
higher job skills.

Figure 1 summarizes the chain of logic employed to generate this conclusion.

[INSERT FIGURE 1 HERE]

Performance Implications

By demonstrating greater creativity at work, employees generate new and useful solutions for

problems whose answers have not been developed, which increases the likelihood of completing tasks and

thus achieving better job performance (Gray, Knight, and Baer, 2020). It has been demonstrated that higher

employee creativity is a critical path to increasing job performance (Anderson et al., 2014; Thomke and

Fujimoto 2000; Zhou and George 2001; Chae and Choi, 2018). Therefore, if giving AI assistance to

employees increases their creativity, then it follows that AI assistance ultimately increases employee job
performance. Thus, we hypothesized that AI assistance could improve employee job performance by

increasing creativity at work, as captured by the following hypothesis:4

H2: If AI assistance with handling the initial well-codified, repetitive portion of the task indeed increases
employees’ creativity in solving subsequent higher-level problems of the task, then increased employee
creativity is a path for AI assistance to increase employees’ performance.

STUDY 1: A FIELD EXPERIMENT

Field Experiment Setting

We conducted a randomized field experiment in a large telemarketing company in Asia, the name

of which will remain confidential owing to company preferences. This company specializes in selling a

wide variety of products and services to more than 30 million customers across multiple industries,

including telecom, retail, fintech, and real estate. At the time of the experiment, the company was

preparing to launch a new business line for marketing credit cards in partnership with a major bank. None

of the employees had prior experience selling credit cards before the launch. Our experiment was

conducted at the beginning of the new business launch after employees received basic training on selling

credit cards with relevant scripts. This ensured that all employees had equal prior exposure and

knowledge specific to credit card sales.

The company has adopted the common practice of designing sales tasks as two sequential

components. In the first stage, employees call customers to introduce general information about the

product and probe the initial interest of customers to generate “sales leads,” described as customers who

are interested in learning more about the product (without yet committing to make a purchase). Customers

who were not interested were filtered out. The sales lead generation was a well-codified activity for which

the company provided numerous protocols and scripts. The second stage pertained to sales persuasion,

wherein employees continued serving the leads by finding out more about their needs, trying to match

their needs with the product, and convincing the lead to make a purchase (i.e., to apply for a credit card in

4
H2 does not hinge on distinguishing employees’ job skills, because for the mediation hypothesis in H2 to hold, we
only need to satisfy the condition that AI assistance indeed increases employee creativity.
our setting). Sales persuasion was considered a much less structured activity than sales lead generation.

While the company provided training to employees with a knowledge bank, unexpected questions

commonly occurred, and the knowledge bank needed to be updated.5 However, these two stages were

closely connected as a single sales task because the initial lead generation critically enhanced the

effectiveness of subsequent second-stage sales persuasion by saving effort and mental power that would

otherwise be wasted on trying to persuade customers who are inherently uninterested in the product

(Sabnis et al. 2013).

The company used AI conversational bot technology to generate sales leads and reduce labor costs.

The AI conversational bot was empowered by cutting-edge deep learning neural networks, voice

recognition algorithms, and natural language understanding via bidirectional encoder representations from

transformers (Brynjolfsson and McAfee 2014; Davenport, Guha, and Grewal 2021; Luo et al. 2021). It

was trained with terabytes of telemarketing call data and could engage in natural, human-like

conversations with customers. Its “speech-to-text” process recognized human language and converted

audio data to a machine-understandable language. Moreover, “grammatical parts-of-speech tagging”

identified each word in the corpus based on its definition and context.

Furthermore, the AI conversational bot applied deep learning algorithms to dynamically

understand the answers to customer questions based on both correct answers (positive samples), which

increased the probability of a sale, and incorrect answers (negative samples). Via the “text-to-speech”

function, the trained AI conversational bot could understand customer questions and communicate correct

answers drawn from the knowledge bank to the customers in natural conversations. According to the

company’s records, the AI conversational bot passed the Turing test because, during the short (two–three

minute) phone conversation, nearly 97% of the customers failed to distinguish the AI conversational bot

from human agents. A high-tech firm developed and commercialized the AI technology before licensing it to the

focal company.6
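To make the scripted answer retrieval concrete, here is a toy Python sketch under stated assumptions: it takes an already transcribed question (speech-to-text is treated as an upstream step) and a made-up two-entry knowledge bank. The company's bot used proprietary deep learning models rather than the simple string similarity shown here; the point is only to illustrate when a question would count as falling outside the knowledge bank.

```python
from difflib import SequenceMatcher

# Hypothetical knowledge bank: canonical customer questions mapped to approved answers.
KNOWLEDGE_BANK = {
    "what is the annual fee": "The card has no annual fee in the first year.",
    "how do i apply": "I can send you an application link by text message right now.",
}

def best_scripted_answer(transcribed_question: str, threshold: float = 0.6):
    """Return the scripted answer whose canonical question best matches the transcribed
    question, or None when nothing is close enough (an 'untrained' question)."""
    q = transcribed_question.lower().strip("?! .")
    score, match = max((SequenceMatcher(None, q, known).ratio(), known)
                       for known in KNOWLEDGE_BANK)
    return KNOWLEDGE_BANK[match] if score >= threshold else None

print(best_scripted_answer("What's the annual fee?"))         # returns the scripted answer
print(best_scripted_answer("Can I use it abroad in euros?"))  # None -> untrained question
```

In the second stage of the focal task, questions of the latter kind are exactly those that the human agent must answer without a script.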

5
Our semi-structured interviews in Study 2 provide more evidence of divergent features of work at the two stages.
6
Similar to the AI conversational bot used here, many other AI conversational bots such as Cogito, Amelia, and
Amazon Lex have been widely adopted across several industries, such as advertising, airlines, automobiles, banking,
finance, healthcare, and retail (Luo et al. 2019; Gossett 2021). Over 80% of Fortune 500 companies use AI
conversational bots in their call centers to save on labor cost and improve customer services (Insider Intelligence
2021).
As the sales lead generation is well codified and highly scripted, the trained AI conversational bot

can handle this work proficiently, for which we provide more evidence in the next section. However,

current forms of AI technology are less effective in conducting sales persuasion, which is less codifiable

and requires conversations that exceed the scope of scripts. This is because customers may have

numerous idiosyncratic reasons for not applying for a credit card even after they are confirmed as leads

(Raisch and Krakowski 2021). Owing to these common features of current AI conversational bots,

industry experts have recognized that customer service jobs that involve a human following a script to

interact with customers are at the greatest risk of being replaced by AI. Human interactions that require

real, unscripted conversations are not at risk (McKendrick, 2021).7

Experimental Design

The task is to sell credit cards to customers through outbound sales calls. The field experiment

followed a two (with AI assistance versus no AI assistance) by two (top versus bottom employees) full-

factorial design, resulting in four experimental groups.

The first factor pertains to the presence of AI assistance. In the groups with AI assistance, the AI

conversational bot made outbound calls (the AI's identity was not revealed to customers). If the

customer indicated an interest in learning more about the credit cards (i.e., the customer was confirmed

as a lead), the conversational bot would thank the customer and hand over the call to a human employee,

who was referred to as “an exclusive VIP client manager,” to serve the lead in the sales presentation.

In groups without AI assistance, an employee made the outbound calls to generate sales leads. If the

customer indicated an interest, the same employee would thank the customer and say they would

become the “exclusive VIP client manager” to continue the call and engage in a sales presentation.

7
We also have one experimental group in which the AI conversational bot engaged in both lead generation and sales
persuasion, without any human agents involved. Because of the absence of human involvement, insights developed
from this treatment group do not directly help us address the research question of this paper, which hinges on human
agents’ creativity demonstrated during sales persuasion. Nonetheless, we find that, compared with human employees
conducting those activities, the AI conversational bot was as effective as human employees in generating sales leads
(this result is consistent with those shown in this paper) but lagged behind top-skilled human employees in sales
persuasion. More details are included in Appendix 11.

Human employees and the AI conversational bot followed the same protocols to generate sales leads.

We restricted the AI conversational bot to following common protocols rather than allowing it to

dynamically learn customer preferences and personalize the conversation with each customer (which is

more difficult for human agents). This restriction likely makes our results more conservative.

Furthermore, the partnering bank developed and provided common protocols and scripts used by

employees and the AI conversational bot. Appendix 1 provides an example of the protocol used. A

typical protocol includes several steps: opening, product introduction, elaboration, and ending/transfer.

AI and human agents called all customers during the same period (1–3 pm on a Wednesday).

The second factor of the experimental design pertained to employees’ expertise. We distinguish

top employees from bottom employees based on their prior performance. Specifically, we ranked

employees based on their sales volumes of other products for the previous months. We identified

employees whose performance was in the lowest and highest terciles (i.e., the bottom 33% and top 33%,

respectively) and then selected 20 employees from each tercile whose performance was closest to one

another. Using this approach, we maximized the performance difference between top and bottom

employees and minimized the performance variation within each group. This group assignment is

supported by the employees’ performance in a pretest in which we had these 40 employees sell another

product (results are available upon request).

We randomly assigned 40 top and bottom employees to be assisted by AI or work independently

and randomly assigned 3,144 customers to each of the four experimental groups.8 Figure 2 illustrates the

design of the treatment groups.

[INSERT FIGURE 2 HERE]

8
Within each group, we also stratified the customer sample based on whether customers had previously inquired
about the credit cards with the partnering bank (cold call versus warm call customers) for validity check, as
illustrated later.
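To make the double randomization concrete, the following is a minimal Python sketch under stated assumptions: the variable names (top_agent, ai_hybrid, warm_call, cell), the even split of agents within each tercile, and the seed are hypothetical illustrations, not the company's actual assignment procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)  # arbitrary seed, for reproducibility of the sketch

# Hypothetical rosters: 20 top-tercile and 20 bottom-tercile agents, 3,144 customers.
agents = pd.DataFrame({"agent_id": range(40), "top_agent": [1] * 20 + [0] * 20})
customers = pd.DataFrame({"customer_id": range(3144),
                          "warm_call": rng.integers(0, 2, size=3144)})

# Randomize agents to AI assistance within each skill tercile.
agents["ai_hybrid"] = 0
for _, idx in agents.groupby("top_agent").groups.items():
    treated = rng.choice(idx, size=len(idx) // 2, replace=False)
    agents.loc[treated, "ai_hybrid"] = 1

# Randomize customers to the four experimental cells, stratified by warm/cold call status.
cells = ["hybrid_top", "hybrid_bottom", "human_top", "human_bottom"]
customers["cell"] = ""
for _, idx in customers.groupby("warm_call").groups.items():
    customers.loc[idx, "cell"] = rng.permutation(np.resize(cells, len(idx)))
```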
Data, Variables, and Randomization Checks

A customer-level dataset was constructed; customers constitute an important level of analysis

because each customer essentially represents a “sales task” for employees to handle. We analyzed this

dataset and focused on how well each sales task was performed. We captured two outcomes, creativity

and performance, achieved by employees (called sales agents in this context) in serving each

customer during sales persuasion.

To measure an employee’s creativity during sales persuasion (i.e., sales presentation to confirmed

leads), we analyzed the audio data of all calls in our experiment. A dedicated AI algorithm first identified

whether each question asked by customers during sales persuasion was within the scope of the knowledge

bank that was used to train employees before the experiment. For questions that fell outside the scope of

the knowledge bank (i.e., “untrained” questions”), an experienced manager determined whether

employees successfully answered them.9 Employee creativity is measured by solving outside-knowledge-

bank questions, which is the ratio of untrained questions successfully resolved by the employee to all

questions asked by the focal customer.10 Thus, not all employees’ answers to untrained customer questions

are automatically counted as creative. Instead, an answer had to solve the “new” question (i.e.,

questions that exceed the scope of the knowledge bank), thus becoming both novel and useful—two key

criteria for determining employee creativity (Liu et al. 2017). Thus, our measure of employee creativity is

objective and draws on “hard data” on employee behavior, extending the common approach of measuring

9
Two pieces of critical information contribute to our confidence in the measure. To enhance their competitive
advantage, the company considered it an important strategy to continuously identify untrained questions that
emerged during the focal time window, create scripts for those questions, and use this information to expand their
knowledge bank. The company undertook two actions. The first was to regularly use the aforementioned advanced
AI algorithm to compare recorded sales calls with the existing knowledge bank, to identify untrained questions. This
application is primarily enabled by ASR (automatic speech recognition) and NLP (Natural Language Processing),
and the algorithms that support each reached over 98% accuracy. These algorithms have also been widely
commercialized, including being adopted by the largest e-commerce platform of the country. Second, the company
hosted regular meetings with multiple domain experts and experienced managers to assess how agents handled
untrained questions and develop scripts for them. In the meeting that occurred after the experiment, the focal
manager’s judgements about whether each untrained question was successfully answered or not were all confirmed
by this group of experts.
10
We use the proportion to account for the workload of serving the given customer. We note that customers asked
substantially fewer questions outside the scope of the knowledge bank than those within the scope.
employee creativity based on recalls and perceptions (e.g., Anderson et al. 2014; Liu et al. 2017; Zhou

and George 2001).
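As a sketch of how the measure can be computed from the coded calls, the snippet below assumes a per-question table with hypothetical column names (customer_id, outside_knowledge_bank, answered_successfully); in the study itself this coding combined the AI algorithm and the manager's judgment described above.

```python
import pandas as pd

# Toy per-question records: one row per customer question during sales persuasion.
questions = pd.DataFrame({
    "customer_id":            [1, 1, 1, 2, 2],
    "outside_knowledge_bank": [True, False, True, False, False],
    "answered_successfully":  [True, False, False, True, True],
})

# Solving outside-knowledge-bank questions: untrained questions the agent solved,
# divided by all questions asked by the focal customer.
solved_untrained = questions["outside_knowledge_bank"] & questions["answered_successfully"]
creativity = (solved_untrained.groupby(questions["customer_id"]).sum()
              / questions.groupby("customer_id").size())
print(creativity)  # customer 1: 1/3 ~= 0.33, customer 2: 0.0
```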

Employee performance is captured by customer purchase, a binary variable indicating whether

customers used the link sent to them to open a credit card within 24 hours after the sales call.

Approximately 4% of all customers who received outbound calls ended up applying for credit cards;

among the customers confirmed as sales leads, the purchase rate was 8%. Nearly half of the customers

who received outbound calls were confirmed as sales leads.

Based on the four treatment groups, we constructed AI-Human Hybrids, which equals 1 for the

two experimental groups that involve AI assistance to either type of employee and 0 for the remaining

two experimental groups that do not involve AI assistance. We constructed Top Agents, which equals 1 for

the two experimental groups that involve top employees with or without AI assistance, and 0 for the

remaining two experimental groups that involve bottom employees with or without AI assistance.

We also collected information on customer demographics to provide more information on the sales

tasks faced by employees. On average, customers receiving outbound calls in the experiment were 31

years old, and approximately half were men. They were relatively well educated, with approximately 70%

holding undergraduate or postgraduate degrees. Approximately 25% of customers owned at least one

credit card before the call. To address potential confounding effects, we also surveyed sales leads on the

extent to which they perceived being handed over for a sales presentation as a disturbance. Panel A of

Table 1 presents the summary statistics of all variables and pairwise correlations.

[INSERT TABLE 1 HERE]

Randomization checks were performed on the task heterogeneity (customer demographics) across

the four experimental groups. Table 2 presents the results. Across all covariates, F- and chi-square tests

indicated that the differences in the mean values across the experimental groups were jointly insignificant;

thus, our data passed the randomization checks.

[INSERT TABLE 2 HERE]
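A sketch of such balance checks, assuming the customer-level frame carries a group label for the four cells and hypothetical covariate names (age, male); the paper's Table 2 reports the corresponding joint tests.

```python
import pandas as pd
from scipy import stats

def randomization_checks(df: pd.DataFrame) -> dict:
    """Check that customer covariates are balanced across the four experimental cells."""
    # One-way ANOVA (F-test) for a continuous covariate across the groups.
    age_by_group = [g["age"].to_numpy() for _, g in df.groupby("group")]
    _, age_p = stats.f_oneway(*age_by_group)

    # Chi-square test of independence for a binary covariate.
    _, male_p, _, _ = stats.chi2_contingency(pd.crosstab(df["group"], df["male"]))

    # Large p-values indicate that the covariate does not differ across the cells.
    return {"age_p": age_p, "male_p": male_p}
```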


Results on Employee Creativity

First, we present the model-free evidence. Figure 3 demonstrates the means and standard errors of

solving outside-knowledge-bank questions for the two aggregate experimental groups that differed in

terms of whether AI assistance was used for lead generation. With AI assistance, an average agent was

2.33 times as successful in answering questions for which they had not been trained, and thus had to

generate answers, as their counterparts without AI assistance. This difference was statistically

significant (p < 0.05). These results indicate that the overall effect of AI assistance on the creativity of an

average employee is positive. However, to test our hypothesis that employees’ job skills constitute a

critical condition, we differentiate treatment groups based on whether they involve top or bottom agents.

Figure 4 shows the means and standard errors of solving outside-knowledge-bank questions for all four

experimental groups. Although AI assistance helps both agents at the top and bottom to demonstrate

greater creativity than without AI, the magnitude of this increase is much more pronounced among top

agents than among bottom agents (p < 0.01). Specifically, for top agents, the magnitude of the increased

success in answering untrained questions due to receiving AI assistance, compared with no AI assistance,

was 2.81 times that of bottom agents. Thus, these results support H1.

[INSERT FIGURES 3 AND 4 HERE]

We also validated these findings using regression results. We regress solving outside-knowledge-

bank questions on indicators of AI-Human hybrids, top agents, their interaction term, and all control

variables, with standard errors clustered at the agent level. Table 3 presents the results. Model 1 in Table

3 contains all control variables. Model 2 in Table 3 adds AI-Human Hybrids as a covariate. This shows

that all else being equal, an average agent with AI assistance demonstrates greater creativity in solving

outside-knowledge-bank questions than the average agent without AI assistance by a degree that amounts

to 10% of the mean value of solving outside-knowledge-bank questions. This is a substantial increase.

Model 4 in Table 3 shows a positive interaction effect produced by AI-Human hybrids and top

agents. This indicates that the enhanced creativity demonstrated by agents as a result of AI assistance in

lead generation is even more pronounced for highly skilled agents than for bottom agents with lower job
skills. As an alternative way to demonstrate the moderating effect of an agent’s job skills, Models 6 and 7

in Table 3 examine the main effect of AI-Human Hybrids on employee creativity in the subsamples of top

and bottom agents, respectively. While agents with AI assistance demonstrate greater creativity than those

without such assistance in both subsamples, the magnitude of the estimated effect of AI assistance among

top agents is six times that of the bottom agents, further substantiating H1. To address the concern that

agent-level variation (top vs. bottom agents) moderates the effect of task-level variation (customers served

by AI-human teams versus humans on their own) on task outcomes (customers’ questions answered), we

also employ multilevel models, which generate consistent results, as reported in Appendix 2.

[INSERT TABLE 3 HERE]
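The regression just described can be sketched as follows, with hypothetical column names (solve_okb for solving outside-knowledge-bank questions, ai_hybrid, top_agent, agent_id, and customer controls); the exact set of controls follows Table 3 and is not fully reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

def creativity_regression(df: pd.DataFrame):
    """OLS of the creativity measure on AI assistance, agent skill, their interaction,
    and customer controls, with standard errors clustered at the agent level."""
    model = smf.ols("solve_okb ~ ai_hybrid * top_agent + age + male", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["agent_id"]})

# The ai_hybrid:top_agent coefficient corresponds to the interaction effect of interest.
```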

Alternative Explanations. As a result of our experimental design involving a two-stage sequential

labor division, we checked whether a selection bias existed in the first-stage sales lead generation, which

may confound second-stage sales persuasion outcomes, including employee creativity and performance,

which we will discuss next. Therefore, we examine the effect of AI assistance on the likelihood of

generating sales leads in the first stage. As Appendix 3 shows, all four treatment groups performed

similarly in generating sales leads. This finding also confirms the key assumption we made in the theory

development that first-stage sales lead generation is repetitive, well-defined, and scripted work that does not

leverage much sales capability. Thus, employees with lower expertise levels perform as well as those with

higher expertise levels, and AI performs as proficiently as human employees. The results confirmed that

no selection occurred during the first-stage lead generation.

Furthermore, sales leads confirmed in the first stage by different experimental groups were

homogeneous. We examined the demographic characteristics of sales leads generated by the four

experimental groups. Appendix 4 shows that the one-way analysis of variance (ANOVA) and chi-square

test fail to reject the null hypothesis that the mean values of these variables for sales leads are not

different among the four experimental conditions. These results further alleviate concerns that the

heterogeneity of sales leads might confound the finding that the performance and employee creativity

demonstrated in persuading sales leads to make a purchase vary across the treatment groups.
Finally, a necessary condition for successfully addressing outside-knowledge-bank questions is that

such questions must be asked by customers to agents first. Therefore, we examined whether there was a

systematic difference in the availability of such questions between the treatment groups. Appendix 5

shows that, among the four treatment groups, customers asked a similar number of questions outside the

scope of the knowledge bank. Thus, we can alleviate the concern that heterogeneous questions asked by

consumers drive the findings on employee creativity.

Validity Check. If the theory that AI assistance enables employees to demonstrate a higher level of

creativity holds true, this causal relationship should manifest more strongly in situations that create greater

opportunities to demonstrate creativity. Specifically, when serving customers who ask more outside-

knowledge-bank questions, the theorized relationship should be more pronounced than when serving

customers who ask fewer questions. This is because, in the latter case, there is less need to create answers

to outside-knowledge bank questions.

Customer heterogeneity creates the opportunity to conduct a validity check. Some customers had

inquired about the credit cards with the bank before the bank entered a partnership with the focal

telemarketing firm to sell the credit cards, which we call ‘warm call’ customers. Otherwise, they are

referred to as ‘cold call’ customers. Customers who inquired about this product may have a greater need

for it; thus, they could be more easily persuaded by agents to purchase during the sales persuasion stage

(Alwitt and Pitts 1996; Tauber 1973). Indeed, Appendix 6 shows that the average “warm call” customer

is almost twice as likely to make a purchase as an average “cold call” customer. One may speculate that

customers who had inquired about credit cards might have possessed more knowledge and thus might

demand more sophisticated services before making purchase decisions, making them more difficult to sell.

However, this speculation is not consistent with our data. Appendix 7 shows that the outside-knowledge-

bank questions asked by “cold call” customers are more than four times as numerous as those asked by “warm call”

customers. To summarize, “cold call” customers asked more outside-knowledge-bank questions and were

less likely to make purchases than “warm call” customers.


We used stratified sampling to ensure that each treatment group served approximately the same

number of customers of each type. Figure 5 presents solving outside-knowledge-bank questions for eight

groups, with each of the four treatment groups further divided into “cold call” and “warm call” customers.

The results show that both the positive effect of AI assistance on solving questions outside the knowledge

bank and the enhanced moderating effect of top agents are present among “cold call” customers (who ask

more questions outside the knowledge bank), as indicated by the gray shades. However, these effects are

nearly absent among “warm call” customers (who ask few questions outside the knowledge bank). Thus,

these results validated H1.

[INSERT FIGURE 5 HERE]

Results on Employee Performance

H2 posits that providing AI assistance to employees improves their performance by increasing their

creativity to solve problems at work, particularly when higher-skilled employees are involved. First, we

compared the customer purchase rates for the experimental groups with and without AI assistance, as

shown in Figure 6. We find that customers served by an AI-assisted sales agent are almost twice as likely

to make a purchase as those served by sales agents alone, and this difference is statistically significant

(p < 0.05).

[INSERT FIGURE 6 HERE]

However, the unconditional mean comparison does not directly test H2. We conducted a causal

mediation analysis for randomized experimental data (Imai, Keele, and Tingley, 2010) with 1,000

bootstrap replications (Preacher and Hayes 2004). We use Customer Purchase as the dependent variable,

AI-Human Hybrids as the independent variable, Solving Outside-knowledge-bank Questions as the

mediator, and all control variables. We report the regression results in Appendix 8 and summarize the key

estimates in Figure 7. The results show that agents with AI assistance, compared to those without, are

more successful in solving outside-knowledge-bank questions (which confirms the results mentioned

above). In turn, this enhanced success further increases customer purchases. These results support H2.

[INSERT FIGURE 7 HERE]
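As a sketch, a mediation analysis of this kind can be run with the statsmodels implementation of the Imai, Keele, and Tingley (2010) approach, again with hypothetical column names (purchase, ai_hybrid, solve_okb) and an illustrative set of controls; the exact specification used for Appendix 8 is not reproduced here.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.mediation import Mediation

def mediation_analysis(df: pd.DataFrame):
    """Treatment: ai_hybrid; mediator: solve_okb; outcome: purchase (binary)."""
    outcome_model = sm.GLM.from_formula(
        "purchase ~ ai_hybrid + solve_okb + age + male",
        data=df, family=sm.families.Binomial())
    mediator_model = sm.OLS.from_formula("solve_okb ~ ai_hybrid + age + male", data=df)
    med = Mediation(outcome_model, mediator_model, "ai_hybrid", "solve_okb")
    return med.fit(method="bootstrap", n_rep=1000)  # 1,000 bootstrap replications

# The returned results object's summary() reports the ACME (the indirect effect running
# through the creativity measure) and the ADE (the direct effect of AI assistance).
```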


STUDY 2: SEMI-STRUCTURED INTERVIEWS

Motivation of Mixed Methods

Mixed-methods approaches provide “triangulation” to improve the confidence that results are not

produced by some artifacts of a particular data source or method (Jick, 1979) and generate opportunities

to enrich explanations (Creswell and Creswell, 2017). In our research, the two studies had distinct strengths and

weaknesses. The field experiment provides causal identification of objective outcomes based on actual

workplace behavior when real economic interests are at stake (Harrison and List, 2004), but it does not

directly reveal the mechanisms. Semi-structured interviews do not produce causal relationships and are

subjective but enrich our knowledge of the behavioral and psychological processes that constitute

mechanisms. Using these two methods together compensates for the weaknesses of each method

(Sieber, 1973; Small, 2011), thus creating complementarity (Small, 2011). We used a field experiment as

the primary study to establish causal evidence, followed by an inductive, qualitative study to enrich our

understanding of the mechanisms.

Data and Analysis Strategy

To further unpack the theoretical mechanisms through which AI assistance shapes employee

behavior and performance at work, we conducted 28 semi-structured interviews with a random sample of

sales agents who participated in the experiment, stratified between agents who received AI assistance and

those who did not, and between the top and bottom agents (i.e., we had the same number of interviewees

from each of the four treatment groups). This nested design enables researchers to penetrate deeper into

the subjects of study (Small, 2011).

All interviews were conducted in person and lasted between 50 and 90 minutes. Although we used

an interview protocol as a guide (Appendix 9), we encouraged interviewees to talk freely about their

experiences, share stories and examples, and express their feelings and emotions so that they could discuss

what they considered relevant and important (Weiss 1994).

Subsequently, we followed a grounded theory approach to engage in inductive data analysis, which

comprises three stages (Strauss and Corbin 1994). In the first stage, we conducted open coding to generate
common themes as first-order codes and assigned tentative categories. For example, one first-order code captured highly skilled agents expressing happier and more relaxed feelings when serving the sales leads generated by AI. As the data analysis progressed, some tentative categories were preserved, whereas

others that did not fit the data well were revised or abandoned. In the second stage, we consolidated first-

order codes into second-order theoretical categories and focused on the contrasts and connections among

the categories (Gioia, Corley, and Hamilton, 2013). For example, interviewees’ descriptions of the positive

feelings they experienced from serving the sales leads generated by AI, including better mood, high

morale, and a sense of freedom, pointed to a common theme of positive emotions experienced while

performing specific tasks. Descriptions offered by interviewees on the pride and honor they felt about

their firm’s adoption of AI, the feelings of greater recognition of their work, and greater organizational

support pointed to a related (also psychological) but distinct category of enhanced organizational commitment that is not task specific. In the third stage, we generated aggregated theoretical dimensions based on the second-order categories. In doing so, we drew on previous research that distinguishes cognitive skills from psychological factors as distinct determinants of employee performance (e.g., Duckworth et al., 2019). Figure 8 summarizes the data

analysis process and the resulting framework of analysis, which we will elaborate on next.

[INSERT FIGURE 8 HERE]

Finding 1: AI Assistance Alters Job Design for Employees

All agents pointed to the design of having AI make outbound calls to customers to probe initial

interest (lead generation) and pass confirmed leads only to human agents for a sales presentation as

creating critical change to their work mode. First, the nature of lead generation was considered by agents

to be “hard labor…[because] if you reach out to customers by yourself, there will be many situations

including connection failures, customers hanging up on you, and customers scolding you upon picking up

the calls” (#H5)11 and “you rarely have real communication with customers” (#H1). Thus, multiple interviewees described lead generation as boring, “requiring no skills” (#L1), and “highly-frequent but minimally-effective communication” (#H13).

11 The letter and number in parentheses identify the interviewee who provided each quote: “H” indicates an interviewee from the highly skilled agent sample, and “L” indicates an interviewee from the lower-skilled agent sample.

Second, with AI assistance screening out customers who had minimal intention to purchase, a

significantly larger proportion of customers who were passed to agents “had clear ideas about what they

want” (#H9), “were truly willing to listen to [agents’] introduction [of the product]” (#H6), and were thus of “high value” (#H13). Another change was that agents’ “likelihood of actually engaging in conversations

with customers was almost 100%” (#L3). However, without AI screening, such conversations were

“seldom” (#L5) because agents spent most of their time “trying to get connected, [dealing with] hang-ups,

and having very short conversations if customers even picked up the calls” (#L13). These changes have

“increased the intensity and challenges of [agents’] work” (#L14). Thus, these findings corroborate the first step of our theoretical framework: a sequential division of labor between AI and human employees changes job design by alleviating employees’ burden of handling well-codified, standard work while intensifying the need for them to solve more complex, higher-level problems.

Overall divergence. It is interesting to note that highly skilled and lower-skilled agents reported a

drastically different impact produced by this change on their work performance and psychological well-

being in general. Although highly skilled agents also recognized that with AI assistance, “the difficulty of

the content of [their] work has increased” (#H7), they considered it “a good thing for [their] efficiency

and performance” (#H7) and commonly reported feeling “elated…because this changed work mode

significantly helps and improves [their] work” (#H13). Specifically, they expressed relief at not

having to generate sales leads and considered that more intensive interactions with sales leads offered

greater career opportunities.

“After all, for us, dealing with those boring, non-technical things [lead generation] every day is a
bit overkill. We should be assigned to a more difficult business.” (#H12)

“I have no problem dealing with questions that I have been trained on, but I think we need to have
more opportunities to contact customers. Only after communicating with customers can I know what
problems exist, how to deal with those problems, and then think about it repeatedly to further
improve my scripts, invent new scripts, and better deal with these problems when I encounter them
again in the future.” (#H11)

Conversely, lower-skilled agents considered this change to have lowered their speed and efficiency

at work because AI assistance increased the likelihood that they encountered customers who were serious

about the products and were thus more challenging to serve. Owing to their limited abilities,

they experienced many difficulties in persuading customers to move forward with a purchase.

“[AI assistance] only reduces our work efficiency because AI has processed all the simple and
unskilled tasks, and all the subsequent cases require a certain level of skills in [using the right]
scripts, so we will naturally be much slower to process the cases. The duration of serving each
customer increases, and the number of customers we can serve is significantly reduced.” (#L2)

Moreover, lower-skilled agents described the increased pressure they felt from serving a larger

number of challenging customers, as the following quotes vividly demonstrate.

“Although the frequency of communication with customers increases, the difficulty of customers’
questions raised during the communication process also increases. I am more likely to become stuck
without knowing how to answer the questions raised by customers. It may make customers feel that
I am less professional; thus, for me, the sense of pressure is multiplied, and I must learn as soon as
possible to catch up.” (#L10)

“It’s like putting a doctor who only sees outpatients in the ICU to care for patients. There is a
feeling of driving the duck on a perch. [In the local language, this is a proverb meaning to force
someone to do something beyond their capability.] First and foremost, I do not have strong skills,
and I easily become nervous when I encounter difficult clients. When I become nervous, I do not
know what to do next. So, I am a little worried about failing to serve potential customers well.”
(#L7)

Although these findings are consistent with our theory of job skills as a differentiating factor, a deeper analysis delineates several mechanisms that explain why such divergence occurred, as illustrated next.

Finding 2: AI Assisted with Development of Cognitive Skills Conducive to Creativity

We find that key changes to the mode of work induced by AI assistance, on the one hand, affected

the actual outcomes agents achieved in developing new scripts and improving existing scripts, both of

which were considered “innovations” in this work context, albeit with a sharp divergence between highly

skilled and lower-skilled agents. On the other hand, higher- and lower-skilled agents agreed that changes
to their work mode should enhance their abilities to demonstrate greater creativity in the longer term. Each

point is illustrated below.

Divergence between high- and low-skilled employees. AI assistance increases agent exposure to

customers with a confirmed interest in the product. With this change, highly skilled employees reported

developing enhanced job skills, which enabled them to generate new and improved answers to customer

questions through two primary channels (denoted as [1] in Figure 8). In the first channel, highly skilled

agents thought that by spending less time and effort on generating sales leads, AI assistance enabled them

to “devote more time and stay more concentrated on thinking about how to resolve questions [that were

challenging]” (#H9). Thus, they both developed new scripts for challenging questions for which no pre-existing script was available and improved existing scripts to make them more effective, both of which they called “innovations”:

“AI assistance freed up more time for us to think more about how to overcome some difficulties.
For example, when there was no AI assistance, about half of our day was spent dialing numbers
and dealing with no answers, hang-ups, short conversations, and so on. Thus, we could not handle
many real cases in one day. However, after AI intervenes, we can also handle the same number of
cases in one day as we previously did but have a lot more time to think.” (#H6)

“When I have sufficient time, I can think more comprehensively, and the answers to the questions
are better… when I can concentrate better, my thinking will be more focused, and my answers to
some on-the-spot questions should be more accurate.” (#H1)

“With the assistance of AI, we are liberated from tedious and repetitive calls to better focus on
serving willing customers. We have more time and freedom to improve our skills and innovate our
scripts continuously.” (#H3)

In the second channel, highly skilled employees discussed how more challenging cases called for newer and better answers to customers’ questions. Therefore, their increased exposure to these types of cases

provided greater opportunities for them to develop new and improved scripts.

“There is a saying that ‘knowledge comes from practice.’ By constantly encountering problems in
real businesses, solving them, and accumulating experience from serving challenging customers,
we can continuously improve and innovate the content of scripts. Without AI assistance, we will not
have that much time to interact with these valuable customers to update our answers to questions.”
(#H9)

“[AI assistance] stimulates my creativity because I now more frequently encounter important and
difficult problems. For the problems that we have been trained for, I can provide different solutions,
continue to innovate them, and replace existing solutions with better ones.” (#H5)
We also collected several specific examples provided by highly skilled agents on new scripts they

developed for untrained questions and better scripts to improve upon existing ones for trained questions.

Conversely, lower-skilled agents diverged markedly. Although these agents

also agreed that they saved time and energy and had increased opportunities to interact with customers

from AI assistance, they felt that these changes did not make a difference in helping them find new or

better answers because of their limited abilities and thus reported limited innovative outcomes (denoted

as [2] in Figure 8).

“Paying more attention and spending more time [on solving questions] probably do not make a
difference; I can’t think of a better solution.” (#L5)

“Even with more time, I am not sure if I can find a better solution because solving some problems
does not necessarily hinge on spending more time to think but on my limited abilities.” (#L6)

“I have low ability and a weak foundation, and it is difficult for me to innovate when encountering
challenging cases.” (#L1)

Nevertheless, lower-skilled agents observed their highly skilled colleagues solving challenging questions by developing innovative new answers:

“[I benefit from] the scripts developed by higher-performing colleagues. As AI manages cases that
do not require skills, the remaining cases passed to humans are relatively more difficult. It is
difficult for us to innovate for these cases, but my higher-performing colleagues can continue to
break through and innovate, and it will also benefit us.” (#L14)

“In fact, [AI assistance] can indeed help us to explore and see if we can innovate the answers to the
problems for which we have been trained. Although I cannot do that myself, I have seen some
outstanding colleagues coming up with new answers.”

These findings show that job skills or domain knowledge are important to creativity (Simon, 1985) because “creativity requires a complex thought process” (Shalley et al., 2009: 492). Moreover, these findings directly support our theoretical argument that, without the necessary domain knowledge or job skills, conserved cognitive resources and strengthened motivation alone cannot generate new ideas.

Convergence among employees. Beyond generating innovative solutions to specific cases that had

occurred, agents discussed how this change generated by AI assistance could help them expand their

general experiences and skills germane to developing better and newer solutions in the future. Both highly skilled and lower-skilled agents converged on how these general future benefits can be created by AI

assistance. These general benefits fall into the following four categories.

The first category (denoted as [3] in Figure 8) is that increased interactions with customers can

enable agents to collect customer feedback and reaction to the answers provided by agents, based on which

agents can further adjust, adapt, and revise their answers: “The more we contact customers who are willing

to communicate, the greater the amount of information we obtain from them, and the more we can review

and innovate our business through iterations” (#H2). Greater awareness and willingness to respond to

customers’ reactions increases the likelihood that agents generate solutions that are more appropriate for

the context than existing answers, which can lead to the innovation of new scripts.

The second category (denoted as [4] in Figure 8) pertains to agents’ increased ability to distinguish

opportunities to innovate scripts and to understand additional ways of improving current scripts. For

example, “we can accumulate more experience of serving complicated cases, which provides ideas for

how to innovate in the future: the more customers we serve, the more we can judge whether there may be

room to adjust the existing, trained way of solving problems” (#L14).

The third category (denoted as [5] in Figure 8) pertains to an increased ability to handle cases with

greater flexibility following the idiosyncrasies of the situation instead of sticking to pre-determined,

trained solutions: “After all, I have gained more practical experience and thus can handle problems more

flexibly” (#H5). A greater willingness to adapt to a situation enables agents to deviate from existing scripts,

thus increasing the likelihood that they will generate better and newer solutions.

The last category (denoted as [6] in Figure 8) pertains to how increased exposure to real customers,

because of AI assistance, enables the agents to stay calm and prepare to “play on the spot” in response to

unexpected situations. For example, “[t]he key to dealing with problems on the spot is to have enough

actual ‘combat experience’ so that we do not panic, and we can readily use our skills to ‘get the man.’

Therefore, to deal with these problems well, we need to accumulate rich experience” (#L2). As deviations

from trained situations offer opportunities for new scripts or to improve existing scripts, better mental and

cognitive preparation to “play on the spot” in such situations should increase the likelihood of generating
newer and better scripts.

Thus, employees agree that four abilities are enabled by AI assistance: to use feedback from the

customer to create innovative scripts; to identify opportunities to innovate scripts; to interact with clients

more flexibly, thus calling for innovations with scripts; and to “play on the spot” to generate innovative scripts. All these functions are enabled by resources freed up by AI assistance, thus enriching the process through which job design and the conservation of resources shape creativity (Chae and Choi, 2018; Elsbach and Hargadon, 2006; Shalley et al., 2009).

Finding 3: AI Assisted with Development of Psychological Outcomes Conducive to Creativity

We found that the changed mode of work, and the fact that AI assistance was its source, produced various psychological consequences that may affect agent creativity. On the one hand, highly skilled agents experienced more positive emotions when performing the changed tasks, including a better mood, increased morale, and greater passion, whereas lower-skilled agents reported negative emotions, including nervousness, demoralization, and feelings of rejection. A better mood is important for the performance of service agents (Rothbard and Wilk, 2011), and positive emotions lead to greater creativity according to the broaden-and-build theory (Fredrickson, 2004). On the other hand, both highly skilled and lower-skilled agents reported positive sentiments about the firm’s adoption of AI assistance and greater organizational commitment. In the following paragraphs, we elaborate on each of these categories.

Divergence between high- and low-skilled employees. Diverging emotions associated with

performing tasks at work include the following. First, with the changed work mode, highly skilled agents

expressed three positive psychological outcomes while performing their tasks, whereas lower-skilled

agents expressed the opposite (denoted as [7] in Figure 8). The first positive psychological outcome is “a

greater sense of relief” and “a better mood” (#H6) for highly skilled agents, primarily for two reasons.

The first reason was that they no longer needed to conduct “repetitive, tedious, and meaningless” (#H5)

phone calls for lead generation, which they found “frustrating” (#H7): “The work in the previous stage is

always hard labor. If processed by humans, emotional fluctuations are inevitably generated. …it is a waste

of my time and energy” (#H5). Second, they felt more relaxed and happier engaging with clients willing
to purchase.

“[With AI assistance,] all the customers whom I handle have intentions [to buy] and are willing to
listen to my introduction. When chatting with them, I feel much more relaxed, and I am in a much
better mood; thus, naturally, I feel that pressure at work is much reduced, and I feel greater relief
and pleasure.” (#H6)

In stark contrast, dealing with such clients increased the pressure felt by lower-skilled agents who

reported feelings of “depression and distress” (#L1):

“I don’t feel relieved. However, it makes life more stressful because I have to deal with many more
complex businesses. They give me headaches throughout the day; how can I be more relaxed?”
(#L9)

Second, as AI assistance exposed agents to the more challenging tasks of a sales presentation and

persuasion, highly skilled agents commonly described a boosted “morale of combat” or “fighting spirit”

(e.g., #H5, #H13). This resulted from being given additional opportunities to handle more important and

challenging businesses and greater success in overcoming these challenges by having productive

communication with customers that resulted in purchases (denoted as [8] in Figure 8). For example,

“This approach [using AI assistance] makes me feel that our work is quite challenging. After
adopting AI assistance, we focus on dealing with more difficult problems, but the more challenging
the customers are, the more motivated we are, and the more work we want to do.” (#H3)

The opposite effect was observed for lower-skilled agents. They first discussed how lowered efficiency

and reduced speed of serving more challenging customers “interfered with [their] mentality at work,

making it more difficult for [them] to do business in the future” (#L11). They also described how

experiencing a larger number of occurrences in which they failed to persuade challenging customers

reduced morale:

“Judging from my current performance, the customers who have passed the AI screening are a big
challenge for me, and I am not always able to ‘overcome’ them. Thus, it is difficult to stimulate my
fighting spirit. Conversely, it can make me lose confidence in my work.” (#L14)

“In addition to amping up pressure, [a change of work mode] gradually destroys my fighting spirit
because the difficulty of the cases is too great, my progress is too slow, and the outcomes are not
desirable. Thus, the work becomes increasingly less engaging.” (#L2)

Third, highly skilled agents described increased “work motivation” (#H9) and greater “passion”

(#H6) from serving customers with real intentions to purchase during sales presentations than reaching
out to random customers to generate sales leads. Interestingly, they also discussed how the changed mode

of work gave them a greater sense of freedom to innovate.

“We have more opportunities to encounter difficult and challenging questions from customers, some
of which are not in the knowledge bank; without the restrictions of the knowledge bank, we have
more freedom to be innovative [with new scripts].” (#H2)

Conversely, lower-skilled agents discussed being demoralized, as previously reported. Instead of feeling

that dealing with new questions gave them the freedom to be innovative, they wished that “[i]t would be

best if there was a standard answer to every question encountered” (#L9). (This contrast is denoted as [9]

in Figure 8.)

Thus far, we have found a divergence between high-skilled and lower-skilled employees in terms

of mood, morale, and passion resulting from receiving AI assistance. Positive emotions contribute to

desirable outcomes at work, including employee creativity (Fredrickson, 2004). Therefore, such

divergence can explain the diverging effects produced by AI assistance in achieving creative outcomes

between the two groups because psychological states and emotions not only affect the sales process

(Sutton and Rafaeli, 1988) but also critically shape employees’ creativity at work (Elsbach and Hargadon,

2006; Knight, 2015; Zhang and Bartol, 2010).

Convergence among employees. Highly skilled and lower-skilled agents converged in expressing positive sentiments about the firm’s adoption of AI to assist agents. The first sentiment is “a sense of pride”

and “a sense of honor” (e.g., #L3, #L14, #H6, #H13) to work for a firm that adopts the latest AI

technologies, as the agents considered such adoption to be “trendy” and a demonstration of “strategic

vision” for the firm (#L11) (denoted as [10] in Figure 8). One agent elaborates as follows:

“[AI assistance] makes me feel more superior because AI is a big trend now, and the company is
constantly innovating. Working in such a company makes me feel like I am at the frontier of our
time, being fashionable and not rustic, just like buying the latest mobile phone models. My sense of
pride naturally arises. Although other companies, such as Railway Construction Corporation and
Sanitation Company, are large in scale and have state-owned enterprise backgrounds, they sound
like yokels. I can even show off to my friends. I feel enthusiastic about working in such a work
environment.” (#H9)

Second, all agents, regardless of their skills to persuade challenging customers, felt that increased

opportunities for them to serve challenging customers demonstrated the firm’s “recognition of [their] job
skills” (#H33) and that the firm considered them to be “an indispensable part of the business” (#H33).

Even lower-skilled agents who complained about their lack of ability to handle challenging customers

thought that using AI assistance meant that “[t]he company still thinks highly of [their] work skills” so

that they felt “proud to be assigned such complex and difficult cases” (#L4). (This point is denoted as [11]

in Figure 8.)

Third, the agents considered the adoption of AI assistance to indicate that the company was willing

to provide stronger technical support to better prepare agents for their communication with customers

(denoted as [12] in Figure 8). For example, one explained that by adopting AI, the company “certainly

gave us more support in business and laid the foundation for us to communicate with customers. At least

the purpose of our communication does not need to be explained [to customers], so customers may become

more likely to cooperate with us” (#L2). Moreover, some agents interpreted the firm’s intention to adopt AI as providing employees with greater organizational support.12

“I think [by adopting AI] our firm wants us to have more opportunities to meet and communicate
with real customers, so we don’t have to repeat those high-frequency scripts.” (#L10)

Finally, it is interesting that all agents recognized the threat of being replaced by AI in the future,

but they did not blame the company for it. While highly skilled agents described mixed feelings, appreciating the help they received from AI but remaining concerned about being replaced by AI in the future, they explicitly said that the company adopted the right strategy (denoted as [13] in Figure 8). For example,

“In the short term, [AI assistance] is a good thing, but in the long run, there will be threats. Although
AI assistance with our work has helped improve work efficiency and capabilities, will it replace us
when AI becomes even more mature in the future? Everyone understands this possibility, but current
AI technology has been widely used, and it is correct for the company to adopt AI; otherwise, the
company itself may be eliminated. Even if this stage [AI displacing agents] is reached in the future,
it will be a necessary decision by the company for technological progress. If there are opportunities,
we can choose to transfer them to different positions.” (#H13)

12 Although we have developed theoretical reasons why higher-skilled employees are more likely than lower-skilled employees to view the deployment of AI assistance as the firm rendering stronger organizational support (because the latter have greater job-displacement concerns), in the interview data we do not observe top and bottom agents expressing different views on organizational support. This outcome may be explained by the subsequent finding that, while bottom agents are concerned about job displacement, they blamed the technology more than the firm for this risk.
Even lower-skilled agents considered the company’s adoption of AI “an inevitable trend” (#L4) that

was imperative for “the company’s own survival” (#L1); thus, “the company’s thinking is reasonable: the

company needs to grow in the long run, the employees need to continuously make progress, and the new

technologies need to continuously expand” (#L9). Lower-skilled agents called for more training and

sharing experiences with highly skilled colleagues to help them adapt.

Against the backdrop of labor’s historical rage against technologies and firm owners (as demonstrated by famous historical events such as the Luddite and Captain Swing riots; for example, Mokyr, Vickers, and Ziebarth [2015]), it is interesting that in this particular context of AI adoption by the focal telemarketing firm, employees appear to be understanding and supportive of technology adoption by their employers.

Finding 4: Performance Consequences and Suggested Changes to AI Adoption

Highly skilled agents commonly stated that AI assistance increases their ability to meet key performance indicators (KPIs). One agent compared their work post- and pre-AI adoption as “high

efficiency and high quality versus low efficiency and low quality” (#H13). While lower-skilled agents

generally complained about their lower likelihood of successfully persuading sales leads to purchase, some

acknowledged that the net impact on their performance might not necessarily be negative: “In the past,

the volume of business was large, and the success rate was low. Now, although the volume of business [that I can handle] is reduced, the success rate can be much higher” (#L1).

Consistently, most highly skilled agents advocated expanding the use of AI to handle more

“ineffective calls that otherwise occupied too much of our time” (#H2). They also advocated using their

innovations in scripts to actively update AI’s knowledge bank:

“[AI involvement] can be maintained [at the current level] for now and gradually increased. As
some of the difficult problems that we have encountered are not in the AI knowledge bank, [the
company] needs us to continue summarizing our experiences and iteratively update the knowledge
in the AI library” (#H11).

Many lower-skilled agents advocated for maintaining the current level of AI assistance without

expanding it: “if it continues to increase, the jobs that are left for us to handle will become even more
difficult, and I am not sure if I can complete them” (#L3). Some lower-skilled agents acknowledged the

tension between the company’s interests and their own.

“From the company’s perspective, [the use of AI assistance] should increase. As previously
mentioned, this is a trend. From my personal perspective, it is good to maintain [my current level].
After all, those of us with poor performance need to be left with some work to do.” (#L10)

Employees’ perceptions of their changed performance are consistent with our analytical results of

their sales performance, although the latter may be seen as more objective and accurate. Their suggestions

on whether AI adoption should be expanded are consistent with their positive and negative experiences.

Delegating AI’s Task to Human Workers? We have provided a theoretical discussion on whether sales lead generation could be delegated to lower-cost human employees to produce the same results. We previously argued that, unlike humans, AI does not suffer a potential decline in productivity because of boredom and fatigue when performing lead generation, which is standardized, scripted, and repetitive work,

so it is less likely to generate negative spillovers to human employees handling subsequent sales

persuasion. The interview data corroborated this point. When asked about a hypothetical case of sales lead

generation being taken over by other human employees instead of AI, agents expressed concerns about

such negative spillovers from lead generation to sales presentations. They did not trust human employees

to be as consistent as AI in handling the lead generation in compliance with instructions and training. They

expected to expend more resources to fix customer relations if other human employees mishandled

customers during the lead generation. As one agent expressed, “Humans have emotions, especially when

working on high-frequency and boring tasks, and are prone to developing unpleasant tones. Once the

front-end [agents who generate sales leads] leaves a customer with a bad impression, it will be difficult

for us to communicate with the customer afterward. The customer will trust us less, and we may even lose

this customer” (#L1). Therefore, it is theoretically important to involve AI in handling codified and

repetitive activities to boost employees' creativity in subsequent higher-level problem-solving.

Summary

Semi-structured interviews corroborated our theory and contributed even richer, deeper, and more

nuanced insights. First, the addition of AI assistance resulted in significant changes in the mode of work
of human agents. Human agents no longer handle lead generation, which is well-scripted, repetitive, and

mostly involves customers with a minimal willingness to communicate. They are more frequently exposed

to serious customers who are interested but can ask tough questions.

Second, consistent with our theory, this change in work mode saves agents time and energy, enabling highly

skilled agents to generate new answers to untrained questions and/or improve existing answers to trained

questions, thus demonstrating greater creativity at work. However, lower-skilled agents cannot do so.

Beyond our theory, this change also increased agents’ ability to (a) use feedback from clients to create

innovative scripts, (b) identify opportunities to innovate scripts, (c) deal with clients more flexibly, which

calls for innovations with scripts, and (d) “play on the spot” to generate innovative scripts. Thus, AI

assistance generates positive cognitive outcomes (job skills) conducive to workplace innovation and more

so for highly skilled agents.

Finally, beyond our theory, this change in work mode results in a better mood, increased morale,

and greater passion for highly skilled agents, all of which are conducive to workplace creativity.

Conversely, opposite changes occurred for lower-skilled agents, including greater pressure and lower

morale. As our theory predicted, all agents expressed greater perceived organizational support, with nuances beyond our theory, including pride and a sense of honor, the perceived good intentions of the firm, and avoidance of blaming the firm for potential future job displacement by AI. Thus, AI assistance also generates positive psychological outcomes conducive to workplace innovation, even more so for highly skilled agents. All the quotes are summarized in Appendix 10. These insights further enrich the mechanisms of the theoretical framework indicated in Figure 1.

DISCUSSION

This study examines the organizational design of using AI in a sequential division of labor by

assigning the repetitive, well-codified portion of a task to AI to generate the input needed to perform the

portion of the task that requires higher-level problem-solving by human employees. Drawing on multiple

theories, including AI-human collaboration, the job characteristics model, and employee creativity, and using mixed methods, we analyze whether AI assistance enables employees to solve higher-order problems more creatively, thus generating AI-augmented employee creativity. Results from a field experiment conducted in a telemarketing company provide causal evidence that such AI-augmented creativity is skill-biased: it arises particularly for agents with higher job skills. By handling lead generation, AI assistance enhanced agents’ success in developing new answers to customers’ questions that exceeded the scope of the company’s knowledge bank, thus demonstrating greater creativity than these agents achieved independently.

Findings from semi-structured interviews with agents revealed more nuanced insights into multiple

channels through which AI enhanced the creativity of higher-skilled agents at work by improving their

cognitive skills and psychological outcomes. In contrast, these benefits were limited for lower-skilled

agents.

Contributions to the Bright Side of AI Augmentation

How organizations leverage the rapid development of AI technologies to create value has become

an increasingly pressing question for researchers and practitioners. Academic research has commonly

focused on organizations’ use of AI to replace employees in performing various tasks (e.g., Felten et al.

2018). An attractive idea is to allow AI and employees to compensate for each other’s weaknesses to

generate complementarity, thus creating “augmented intelligence” (e.g., Choudhury et al. 2020; Daugherty and Wilson 2018; Kannan and Bernoff 2019; Kesavan and Kushwaha 2020; Lebovitz et al. 2022; Puranam 2021; Raisch and Krakowski 2021). This study contributes to this burgeoning literature by showing that

even a simple sequential division of labor can create synergies between AI and the human employees

involved. Without employees, weak AI faces limitations in independently handling high-level problem

solving (Berente et al., 2021); without AI assistance, employees are distracted and demoralized by simple,

repetitive work, whereas they desire interesting and creative work (Fleming and Sturdy 2011). The AI-

human collaboration thus “kills two birds with one stone.”

This study deepens the conventional vision that using AI to handle tedious and repetitive jobs will

allow human employees to focus on developing more creative outcomes (Kasan 2020b; McKendrick

2021; O’Carroll 2017). We show that this desirable speculation does not always hold; theoretical tension
exists over whether such AI assistance indeed increases employee creativity, but employees’ domain

expertise or job skills constitute a critical condition that reconciles this tension. Employees with higher

job expertise benefit more from AI assistance in developing creative solutions. Thus, we enrich the

theoretical insights and provide empirical evidence for this vision.

Finally, employee boredom at work is a common and problematic experience that dates back to the

industrial age. The advancement of technology exacerbates this concern because it leads to further

fragmentation of the work (Casilli and Posada 2019). However, this study shows that because AI

technologies can effectively perform well-codified, repetitive work, they can enable employees to remain

focused on more interesting work, which may result in a more meaningful work experience. Enhanced

employee creativity in response to AI assistance thus contributes a new channel to the literature on improving employee creativity at work (Anderson et al., 2014; Fredrickson, 2004).

Contribution to the Dark Side of AI Augmentation

Researchers are keenly aware of the “dark side” of data science and algorithms, or the potential

negative consequences of these practices for workers and customers (e.g., Berente et al. 2021; Fast and

Jago, 2020). Common concerns arise when AI technologies replace human workers (Felten et al. 2018;

Tong et al. 2021) or render employees to carry out fragmented jobs, causing dehumanization (Kellogg et

al 2020). However, we show that even when AI takes over menial, drudgery work to facilitate human

employees with work that calls for creative solutions and is thus potentially interesting, concerns over

the well-being and welfare of low-skilled employees heightened. These employees experienced negative

emotions as they felt more tension, greater pressure, and more nervousness after receiving AI assistance

in serving customers. They also experienced lower morale from failing to overcome challenges and

frequent failure, making their task less interesting. They did not experience a sense of freedom to create

new scripts, instead wishing for standard answers. Although they hoped for additional help from their

higher-skilled colleagues, without deliberate job designs, including giving financial incentives or

building a team-oriented culture (Gray et al, 2020), it is unclear why higher-skilled colleagues would

devote time to providing them with the desired assistance.


Second, we highlight that AI technology’s “dark side” affects human workers in a starkly unequal

fashion due to the skill-biased outcomes of AI augmentation. Human workers with lower job skills

experience a “double loss”: from the lack of job skills per se and from the negative experience of working in a team with AI. Thus, it is important not to ignore low-skilled employees because they are members of

society, and many of them can grow to develop greater job skills in the future. We suggest directions for

future research to investigate ways to manage AI and related processes to ensure that low-skilled

employees are not left behind.

Generalizability

The sequential division of labor that we examine in the sales context is common in many other

business contexts. For example, recruiters have increasingly used AI to make the first cut in the initial

screening stage before human experts handle the subsequent interview stage (van den Broek, Sergeeva, and Huysman 2021). It has been reported that 67% of hiring managers surveyed by LinkedIn considered AI a

time-saving tool (Heilweil 2019). In healthcare services that commonly use a triage process to categorize

patients, AI has assisted employees with medical chart coding (Wang, Gao, and Agarwal 2019).

Therefore, contexts in which AI can successfully perform lower-level preparatory work as input for subsequent higher-level, more interesting work managed by human experts offer the necessary conditions for the predictions generated in this paper to prevail.

Employee performance is important to service organizations (Fuller & Smith, 1991; Holman et al., 2009; Lee, Batt, & Moynihan, 2019). However, in generating the theoretical framework, we did not rely on

any assumption idiosyncratic to the context of telemarketing. Instead, we drew heavily on the interaction

of three general streams of literature (AI-human collaboration, the job characteristics model, and employee creativity) whose logic applies to various organizations. Thus, we expect the theoretical mechanisms

developed in this context to be relevant to many other organizational contexts that can implement the

abovementioned division of labor between AI and human employees.

Limitations and Future Research

More sophisticated forms of AI-human collaboration may involve allowing humans and AI to use
or modify the content of others’ work (e.g., Choudhury et al. 2020, Jussupow et al. 2021; Kesavan and

Kushwaha 2020). Research on such direct interactions between human workers and AI in organizations

focuses on what human workers do with AI-generated information (Lebovitz, Lifshitz-Assaf, and Levina

2021; Waardenburg, Huysman, and Sergeeva 2022) and how developers and users of AI co-create

technology (van den Broek, Sergeeva, and Huysman 2021; Singer, Kellogg, Galper, and Viola 2022). Can

direct interaction between AI and human workers enhance employees’ creative or innovative outcomes?

On the one hand, human workers can “stand on the shoulders” of AI-generated predictions about the

structured components of their tasks, which could enable them to explore new ideas. On the other hand,

human workers have limitations in fully understanding AI’s work (Waardenburg et al. 2022) and may remain unengaged with it (Lebovitz et al. 2021), thus hindering their exploration. This tension warrants further investigation.

Although both the domain knowledge and psychology of human workers are important to how

they understand, use, and interact with AI (Singer et al. 2022; van den Broek et al. 2021; Lebovitz et al. 2021), we show that human workers’ domain knowledge or job skills critically condition AI’s effect on

their psychological outcomes. Which is more critical to the productive adoption of AI in organizations—

training human workers with proper domain knowledge or helping them tackle psychological obstacles?

Furthermore, does the preparation of necessary domain knowledge need to be coordinated with the right

mentality for human workers in AI adoption? These questions deserve further attention.

Finally, curbing the dark side of AI adoption requires firms and managers to better design the

adoption of AI (Kellogg et al., 2020). For example, complementary investments accompanying AI

adoption may be warranted in the form of extra training for lower-skilled employees. Since low job skills lead to a “double loss,” elevating job skills generates dual returns for employees and the firm. Another type of complementary policy is to create incentives and a culture that encourage the creative outcomes generated by higher-skilled employees under AI assistance to spread more widely.

CONCLUSION
AI technologies may assist employees in becoming more creative by generating new and useful ideas at work, but they do so starkly more for employees with higher job skills. Thus, AI-augmented employee

creativity is skill-biased. We provide causal evidence from a field experiment in a telemarketing firm and

enrich the theoretical mechanisms through a qualitative study using semi-structured interviews. We

highlight the value created by AI-human collaboration in enhancing human creativity (which is key to the

Fourth Industrial Revolution) and the unequal effects on human workers with varying existing job skills,

which deserve greater attention from scholars, practitioners, and policymakers.

REFERENCES

Alwitt, L. F., & Pitts, R. E. 1996. Predicting purchase intentions for an environmentally sensitive product.
Journal of Consumer Psychology, 5(1): 49-64.
Amabile, T. M. 1996. Creativity and innovation in organizations (Vol. 5). Boston: Harvard Business
School Press.
Amabile, T. M., & Gryskiewicz, N. D. 1989. The creative environment scales: Work environment
inventory. Creativity Research Journal, 2(4): 231-253.
Amabile, T. M., Conti, R., Coon, H., Lazenby, J., & Herron, M. 1996. Assessing the work environment
for creativity. Academy of management journal, 39(5): 1154-1184.
Amazon. 2021 Amazon lex customers. Retrieved from https://aws.amazon.com/lex/customers. Accessed
February 12, 2021.
Anderson, N., Potočnik, K., & Zhou, J. 2014. Innovation and creativity in organizations: A state-of-the-
science review, prospective commentary, and guiding framework. Journal of Management, 40(5):
1297-1333.
Anicich, E. M. 2022. Flexing and floundering in the on-demand economy: Narrative identity construction
under algorithmic management. Organizational Behavior and Human Decision Processes, 169:
104138.
Balasubramanian, N., Ye, Y., & Xu, M. 2022. Substituting human decision-making with machine
learning: Implications for organizational learning. Academy of Management Review, 47(3): 448-
465.
Barbalet, J. M. 1999. Boredom and social meaning. The British Journal of Sociology, 50(4): 631-646.
Berente, N., Gu, B., Recker, J., & Santhanam, R. 2021. Managing artificial intelligence. MIS Quarterly,
45(3): 1433-1450.
Brynjolfsson, E., & McAfee, A. 2014. The second machine age: Work, progress, and prosperity in a
time of brilliant technologies. New York: WW Norton & Company Press.
Brynjolfsson, E., Rock, D., & Syverson, C. 2021. The productivity J-curve: How intangibles complement
general purpose technologies. American Economic Journal: Macroeconomics, 13(1): 333-72.
Card, D., & DiNardo, J. E. 2002. Skill-biased technological change and rising wage inequality: Some
problems and puzzles. Journal of Labor Economics, 20(4): 733-783.
Chae, H., & Choi, J. N. 2018. Contextualizing the effects of job complexity on creativity and task
performance: Extending job design theory with social and contextual contingencies. Journal of
Occupational and Organizational Psychology, 91(2): 316-339.
Chen, B. R., & Li, S. 2018. Prehire screening and subjective performance evaluations. Management
Science, 64(10): 4953-4965.
Choudhury, P., Starr, E., & Agarwal, R. 2020. Machine learning and human capital complementarities:
Experimental evidence on bias mitigation. Strategic Management Journal, 41(8): 1381-1411.
Christensen, M., & Knudsen, T. 2020. Division of roles and endogenous specialization. Industrial and
Corporate Change, 29(1): 105-124.
Cohen, W. M., & Levinthal, D. A. 1990. Absorptive capacity: A new perspective on learning and
innovation. Administrative Science Quarterly, 128-152.
Creswell, J. W., & Creswell, J. D. 2017. Research design: Qualitative, quantitative, and mixed methods
approaches. India: Sage publications Press.
Daugherty, P. R., & Wilson, H. J. 2018. Human+ machine: Reimagining work in the age of AI. Boston:
Harvard Business Press.
Davenport, T. H., & Kirby, J. 2016. Only humans need apply: Winners and losers in the age of smart
machines. New York: Harper Business Press.
Davenport, T., Guha, A., & Grewal, D. 2021. How to Design an AI Marketing Strategy: What the
Technology Can Do Today—and What’s Next. Harvard Business Review, 99: 42-47.
Debecker, A. 2019 Chatbots as a CRO tool: How conversational AI helps convert more leads. Retrieved
from https://www.convert.com/blog/optimization/chatbots-conversational-ai-cro-tool/. Accessed
February 12, 2021.
Duckworth, A. L., Quirk, A., Gallop, R., Hoyle, R. H., Kelly, D. R., & Matthews, M. D. 2019. Cognitive
and noncognitive predictors of success. Proceedings of the National Academy of Sciences, 116(47):
23499-23504.
Elsbach, K. D., & Hargadon, A. B. 2006. Enhancing creativity through “mindless” work: A framework of
workday design. Organization Science, 17(4): 470-483.
Farmer, S. M., Tierney, P., & Kung-McIntyre, K. 2003. Employee creativity in Taiwan: An application of
role identity theory. Academy of Management Journal, 46(5): 618-630.
Fast, N. J., Jago, A.S. 2020. Privacy matters… or does it? Algorithms, rationalization, and the erosion of
concern for privacy. Current opinion in psychology, 31: 44-48.
Felten, E. W., Raj, M., & Seamans, R. 2018. A method to link advances in artificial intelligence to
occupational abilities. AEA Papers and Proceedings, 108: 54-57.
Fleming, L., Mingo, S., & Chen, D. 2007. Collaborative brokerage, generative creativity, and creative
success. Administrative Science Quarterly, 52(3): 443-475.
Fleming, P., & Sturdy, A. 2011. ‘Being yourself’ in the electronic sweatshop: New forms of normative
control. Human Relations, 64(2): 177-200.
Fredrickson, B. L. 2004. The broaden-and-build theory of positive emotions. Philosophical Transactions of the Royal Society of London Series B: Biological Sciences, 359: 1367-1377.
Fuller, L., & Smith, V. 1991. Consumers' reports: Management by customers in a changing economy. Work, Employment and Society, 5(1): 1-16.
Gioia, D. A., Corley, K. G., & Hamilton, A. L. 2013. Seeking qualitative rigor in inductive research:
Notes on the Gioia methodology. Organizational Research Methods, 16(1): 15-31.
Glikson, E., & Woolley, A. W. 2020. Human trust in artificial intelligence: Review of empirical research.
Academy of Management Annals, 14(2): 627-660.
Graham, M., & Dutton, W. H. (Eds.). 2019. Society and the internet: How networks of information and
communication are changing our lives. New York: Oxford University Press.
Gray, S. M., Knight, A. P., & Baer, M. 2020. On the emergence of collective psychological ownership in
new creative teams. Organization Science, 31(1): 141-164.
Hackman, J. R. 1980. Work redesign and motivation. Professional Psychology, 11(3): 445.
Hackman, J. R., Oldham, G., Janson, R., & Purdy, K. 1975. A new strategy for job enrichment.
California Management Review, 17(4): 57-71.
Harrison, G. W., & List, J. A. 2004. Field experiments. Journal of Economic literature, 42(4): 1009-
1055.
Hatcher, L., Ross, T. L., & Collins, D. 1989. Prosocial behavior, job complexity, and suggestion
contribution under gainsharing plans. The Journal of Applied Behavioral Science, 25(3): 231-248.
Heilweil, R. 2019. Artificial intelligence will help determine if you get your next job. Retrieved from
https://www.vox.com/recode/2019/12/12/20993665/artificial-intelligence-ai-job-screen. Accessed
February 12, 2021.
Hobfoll, S. E., Shirom, A., & Golembiewski, R. 2000. Conservation of resources theory. In R. T. Golembiewski (Ed.), Handbook of Organizational Behavior, Revised and Expanded: 57-80. New
York: Routledge.
Holman, D., Frenkel, S., Sørensen, O., & Wood, S. 2009. Work design variation and outcomes in call centers: Strategic choice and institutional explanations. ILR Review, 62(4): 510-532.
IBM 2021 AI for customer service. Retrieved from https://www.ibm.com/cloud/ai/customer-service.
Accessed February 12, 2021.
Imai, K., Keele, L., & Tingley, D. 2010. A general approach to causal mediation analysis. Psychological
Methods, 15(4): 309.
Insider Intelligence. 2021 Chatbot market in 2021: Stats, trends, and companies in the growing AI chatbot
industry. Retrieved from https://www.businessinsider.com/80-of-businesses-want-chatbots-by-2020-
2016-12. Accessed February 12, 2021.
Jia, N., Luo, X., & Fang, Z. 2020. Can artificial intelligence (AI) substitute or complement managers?
Divergent outcomes for transformational and transactional managers in a field experiment.
Working Paper.
Jick, T. D. 1979. Mixing qualitative and quantitative methods: Triangulation in action. Administrative
Science Quarterly, 24(4): 602-611.
Jussupow, E., Spohrer, K., Heinzl, A., & Gawlitza, J. 2021. Augmenting medical diagnosis decisions? An
investigation into physicians’ decision-making process with artificial intelligence. Information
Systems Research, 32(3): 713-735.
Kannan, P. V., & Bernoff, J. 2019. The future of customer service is AI-Human collaboration. MIT Sloan
Management Review. Available at https://sloanreview.mit.edu/article/the-future- of-customer-
service-is-ai-human-collaboration/.
Kasan. 2020a Stop your leaky funnel with conversational lead nurturing. Retrieved from
https://exceed.ai/conversational-lead-nurturing/. Accessed February 12, 2021.
Kasan. 2020b Conversational AI can supercharge your lead conversion process: Here’s how. Retrieved
from https://medium.com/swlh/conversational-ai-can-supercharge-your-lead-conversion-process-heres-how-79c91b41e9ce. Accessed February 25, 2021.
Kellogg, K. C., Valentine, M. A., & Christin, A. 2020. Algorithms at work: The new contested terrain of
control. Academy of Management Annals, 14(1): 366-410.
Kesavan, S., & Kushwaha, T. 2020. Field experiment on the profit implications of merchants’
discretionary power to override data-driven decision-making tools. Management Science, 66(11):
5182-5190.
Knight, A. P. 2015. Mood at the midpoint: Affect and change in exploratory search over time in teams
that face a deadline. Organization Science, 26(1): 99-118.
Lebovitz, S., Lifshitz-Assaf, H., & Levina, N. 2022. To engage or not to engage with AI for critical
judgments: How professionals deal with opacity when using AI for medical diagnosis. Organization
Science, 33(1): 126-148.
Lee, J. E., Batt, R., & Moynihan, L. M. 2019. Strategic dilemmas: How managers use HR practices to meet multiple goals. British Journal of Industrial Relations, 57(3): 513-539.
Liu, D., Gong, Y., Zhou, J., & Huang, J. C. 2017. Human resource systems, employee creativity, and firm
innovation: The moderating role of firm ownership. Academy of Management Journal, 60(3):
1164-1188.
Longoni, C., Bonezzi, A., & Morewedge, C. K. 2019. Resistance to medical artificial intelligence.
Journal of Consumer Research, 46(4): 629-650.
Luo, X., Qin, M. S., Fang, Z., & Qu, Z. 2021. Artificial intelligence coaches for sales agents: Caveats and
solutions. Journal of Marketing, 85(2): 14-32.
Luo, X., Tong, S., Fang, Z., & Qu, Z. 2019. Frontiers: Machines vs. humans: The impact of artificial
intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6): 937-947.
MacInnis, D. J., Moorman, C., & Jaworski, B. J. 1991. Enhancing and measuring consumers' motivation,
opportunity, and ability to process brand information from ads. Journal of Marketing, 55(4): 32-53.
McKendrick, J. 2021 Needed: People to put the intelligence in artificial Intelligence. Retrieved from
https://www.forbes.com/sites/joemckendrick/2021/02/13/needed-people-to-put-the-intelligence-in-
artificial-intelligence/?sh=7e55d7283160. Accessed February 27, 2021.
Milgrom, P., & Roberts, J. 1990. The economics of modern manufacturing: Technology, strategy, and
organization. The American Economic Review, 80(3): 511-528.
Mokyr, J., Vickers, C., & Ziebarth, N. L. 2015. The history of technological anxiety and the future of
economic growth: Is this time different?. Journal of Economic Perspectives, 29(3): 31-50.
Newman, D. T., Fast, N. J., & Harmon, D. J. 2020. When eliminating bias isn’t fair: Algorithmic
reductionism and procedural justice in human resource decisions. Organizational Behavior and
Human Decision Processes, 160: 149-167.
O’Carroll, B. 2017. What are the 3 types of AI? A guide to narrow, general, and super artificial
intelligence. Retrieved from https://codebots.com/artificial-intelligence/the-3-types-of-ai-is-the-
third-even-possible. Accessed February 12, 2021.
Oldham, G. R., & Cummings, A. 1996. Employee creativity: Personal and contextual factors at work.
Academy of Management Journal, 39(3): 607-634.
Parth, S. 2020. Chatbot to human handoff: Best practices for human takeover in a hybrid solution.
Retrieved from https://chatbotslife.com/chatbot-to-human-handoff-best-practices-for-human-
takeover-in-a-hybrid-solution-7cf1c3e396ec. Accessed February 12, 2021.
Perlow, L. A. 2001. Time to coordinate: Toward an understanding of work-time standards and norms in a
multicountry study of software engineers. Work and Occupations, 28(1): 91-111.
Pettersen, K. 2021. How customer service chatbots are redefining customer engagement with AI?
Retrieved from https://www.intercom.com/blog/customer-service-chatbots. Accessed February 12,
2021.
Preacher, K. J., & Hayes, A. F. 2004. SPSS and SAS procedures for estimating indirect effects in simple
mediation models. Behavior Research Methods, Instruments, & Computers, 36(4): 717-731.
Press, G. 2020. AI stats news: Only 14.6% of firms have deployed AI capabilities in production. Retrieved
from https://www.forbes.com/sites/gilpress/2020/01/13/ai-stats-news-only-146-of-firms-have-
deployed-ai-capabilities-in-production/?sh=1175ff212650. Accessed February 12, 2021.
Puranam, P. 2018. The microstructure of organizations. New York: Oxford University Press.
Puranam, P. 2021. Human–AI collaborative decision-making as an organization design problem. Journal
of Organization Design, 10(2): 75-80.
Qin, S., Jia, N., Luo, X., & Liao, C. 2022. How can artificial intelligence technologies be a “useful
servant” for managers? A case of employee performance evaluation and feedback. Working
paper.
Raisch, S., & Krakowski, S. 2021. Artificial intelligence and management: The automation–augmentation
paradox. Academy of Management Review, 46(1): 192-210.
Ranganathan, A., & Benson, A. 2020. A numbers game: Quantification of work, auto-gamification, and
worker productivity. American Sociological Review, 85(4): 573-609.
Rothbard, N. P., & Wilk, S. L. 2011. Waking up on the right or wrong side of the bed: Start-of-workday
mood, work events, employee affect, and performance. Academy of Management Journal, 54(5): 959-980.
Sabnis, G., Chatterjee, S. C., Grewal, R., & Lilien, G. L. 2013. The sales lead black hole: On sales reps'
follow-up of marketing leads. Journal of Marketing, 77(1): 52-67.
Shalley, C. E., Gilson, L. L., & Blum, T. C. 2009. Interactive effects of growth need strength, work
context, and job complexity on self-reported creative performance. Academy of Management
Journal, 52(3): 489-505.
Sieber, S. D. 1973. The integration of fieldwork and survey methods. American Journal of Sociology,
78(6): 1335-1359.
Simon, H. A. 1985. What we know about the creative process. In R. L. Kuhn (Ed.), Frontiers in creative
and innovative management: 3–20. Cambridge, MA: Ballinger.
Singer, S. J., Kellogg, K. C., Galper, A. B., & Viola, D. 2022. Enhancing the value to users of machine
learning-based clinical decision support tools: A framework for iterative, collaborative development
and implementation. Health Care Management Review, 47(2): E21-E31.
Small, M. L. 2011. How to conduct a mixed methods study: Recent trends in a rapidly growing literature.
Annual Review of Sociology, 37: 57-86.
Strauss, A., & Corbin, J. 1994. Grounded theory methodology. Handbook of Qualitative Research,
17(1): 273–285.
Tauber, E. M. 1973. Reduce new product failures: measure needs as well as purchase interest. Journal of
Marketing, 37(3): 61-64.
Thomke, S., & Fujimoto, T. 2000. The effect of “front‐loading” problem‐solving on product development
performance. Journal of Product Innovation Management: An International Publication of the
Product Development & Management Association, 17(2): 128-142.
Tong, S., Jia, N., Luo, X., & Fang, Z. 2021. The Janus face of artificial intelligence feedback:
Deployment versus disclosure effects on employee performance. Strategic Management Journal,
42(9): 1600-1631.
van den Broek, E., Sergeeva, A., & Huysman, M. 2021. When the Machine Meets the Expert: An
Ethnography of Developing AI for Hiring. MIS Quarterly, 45(3).
Waardenburg, L., Huysman, M., & Sergeeva, A. V. 2022. In the land of the blind, the one-eyed man is
king: Knowledge brokerage in the age of learning algorithms. Organization Science, 33(1): 59-82.
Wang, W., Gao, G. G., & Agarwal, R. 2019. Friend or foe? The interaction between human and artificial
intelligence on performance in medical chart coding. Working paper. Available at
SSRN: https://ssrn.com/abstract=3405759
Weiss, R. S. 1994. Learning from Strangers: The Art and Method of Qualitative Interview Studies.
New York: The Free Press.
Wilson, H. J., & Daugherty, P. R. 2018. Collaborative intelligence: Humans and AI are joining forces.
Harvard Business Review, 96(4): 114-123.
Zhang, X., & Bartol, K. M. 2010. Linking empowering leadership and employee creativity: The influence
of psychological empowerment, intrinsic motivation, and creative process engagement. Academy of
Management Journal, 53(1): 107-128.
Zhou, J., & George, J. M. 2001. When job dissatisfaction leads to creativity: Encouraging the expression
of voice. Academy of Management Journal, 44(4): 682– 696.
Zhou, J., & Shalley, C. E. 2003. Research on employee creativity: A critical review and directions for
future research. Research in Personnel and Human Resources Management, 22: 165–217.
Figure 1. Theoretical Framework
Figure 2. Experimental design

Figure 3. Comparison of agents and AI-assisted agents in solving outside-knowledge-bank questions
[Bar chart. Y-axis: rate of solving outside-knowledge-bank questions. Agents on their own = 0.003; agents with AI assistance = 0.007; D1 > 0 (p = 0.024).]
Figure 4. Comparison of top vs. bottom agents, with and without AI assistance in solving outside-knowledge-bank questions
[Bar chart. Y-axis: rate of solving outside-knowledge-bank questions. Top agents on their own = 0.005; AI-top-agent hybrids = 0.011 (D2 > 0, p = 0.012); bottom agents on their own = 0.001; AI-bottom-agent hybrids = 0.003 (D3 > 0, p = 0.053).]
Figure 5. Validity check: solving outside-knowledge-bank questions for cold-call vs warm-call customers
[Bar chart. Y-axis: rate of solving outside-knowledge-bank questions. Bars compare the four experimental groups (top agents on their own, AI-top-agent hybrids, bottom agents on their own, AI-bottom-agent hybrids) separately for customers reached by cold calls and by warm calls.]
Figure 6. Comparison of performance outcomes
[Bar chart. Y-axis: customer purchase rate. Human agents on their own = 0.028; AI-human hybrids = 0.045; D4 > 0 (p = 0.012).]
Figure 7. Summary of causal mediation analysis results
Figure 8. Data Analysis of Semi-structured Interviews

First-Order Codes → Second-Order Themes → Aggregate Theoretical Dimensions

Context (AI Assistance Changed Job Design): H-skill and L-skill agents all identify the following changes to the content of their job as a result of AI assistance.
• Nature of Calls Handled by AI: Most of the calls are hung up, not picked up, or picked up by customers who lack interest in the product and are unwilling to have conversations.
• Changes to the Customer Types Served by Agents: Sales leads handed over by AI to agents have real intentions to buy, have clear goals and preferences, and are willing to learn more about the product and to have conversations with agents.

First-order codes:
[1] H: More time and energy to focus on creating newer and better answers for trained and untrained questions
[2] H: More exposure to clients creates opportunities to generate newer and better answers; L: More time, energy and exposure to clients make little difference; unable to create newer and better answers
[3] H & L: Serving more clients → using more client feedback to develop newer and better answers
[4] H & L: Serving more clients → better judgement for room of improvement and ways to find new answers
[5] H & L: Serving more clients → greater flexibility in conversing with clients by adapting answers to situations
[6] H & L: Serving more clients → greater abilities to “play on the spot” and generate newer and better answers
[7] H: Better mood, more relaxed in serving customers; L: Tenser, greater pressure, more nervous in serving customers
[8] H: Higher morale from answering challenges and working on interesting tasks; L: Lower morale from failing to overcome challenges; frequent failure makes task uninteresting
[9] H: Greater sense of freedom to create newer and better scripts; greater passion; L: Do not cherish sense of freedom to create new scripts; wished for standard answers
[10] H & L: Greater pride and sense of honor in the firm because using AI signals strategic vision and innovativeness
[11] H & L: Using AI in leads generation indicates that the firm recognizes their importance in sales persuasion
[12] H & L: Using AI in leads generation indicates that the firm lends more support to their job; L: Calls for more help with overcoming challenges in serving a larger number of “real” customers
[13] H & L: Consider job displacement risk by AI inevitable but do not blame the firm for it; the firm helps them
Additional codes: Suggested Adjustment to AI adoption (H-skill agents desire increasing the proportion of work assigned to AI and updating AI’s knowledge bank; most L-skill agents desire maintaining the current level of AI usage without further expansion; some L-skill agents call for more knowledge sharing with them by H-skill colleagues along with AI adoption); Performance (H-skill agents consider AI assistance to improve their KPI but L-skill agents have mixed assertions).

Second-order themes:
• Demonstrated Innovation Skills (codes [1]-[2]; Findings 2 and 4): H-skill agents more creatively answer customer questions and more effectively communicate with clients than L-skill agents (divergence).
• Future Innovation Skills (codes [3]-[6]; Finding 1): H-skill agents improve general skills that enable them to creatively answer unexpected questions; L-skill agents expect to develop more skills in the longer term (convergence).
• Positive Emotions at Performing Tasks (codes [7]-[9]; Finding 3): H-skill agents experience positive emotions in performing sales persuasion tasks; L-skill agents experience negative emotions (divergence).
• Organizational Commitment (codes [10]-[13]): All agents express appreciation of the firm’s strategy of adopting AI; L-skill agents call for additional assistance (convergence).

Aggregate theoretical dimensions:
• Improved Cognitive Skills Conducive to Creativity (with variation): Greater immediate benefits for H-skill agents than for L-skill agents; L-skill agents expect to achieve benefits in principle and in the longer term.
• Improved Psychological Outcomes Conducive to Creativity (with variation): More positive emotions experienced by H-skill agents than by L-skill agents, but both express support for the firm.

Notes: “H” and “H-skill agents” stand for high-skilled sales agents; “L” and “L-skill agents” stand for low-skilled sales agents.
Table 1: Summary Statistics and Pairwise Correlations
Panel A Obs Mean Std Min Max (1) (2) (3) (4) (5) (6) (7) (8)
(1) Solving Outside-knowledge-bank Questions 3,144 0.04 0.19 0 1 1
(2) Customer Purchase 1,528 0.00 0.03 0 0.29 0.07 1
(3) AI-Human Hybrid 3,144 0.50 0.50 0 1 0.08 0.05 1
(4) Top Agent 3,144 0.50 0.50 0 1 0.12 0.05 0.00 1
(5) Age 3,144 30.88 6.50 19 55 0.00 0.02 0.00 -0.01 1
(6) Education 3,144 2.79 0.83 1 4 0.03 0.01 0.00 0.01 -0.02 1
(7) Gender 3,144 0.51 0.50 0 1 -0.01 -0.02 0.02 0.00 -0.02 0.01 1
(8) Other Credit Cards 3,144 0.25 0.43 0 1 0.00 0.01 0.01 0.00 0.02 -0.05 -0.03 1
(9) Disturbed by Transition 1,487 0.27 0.44 0 1 0.00 -0.01 -0.01 0.02 0.03 -0.04 0.02 0.02

Table 2: Randomization Check


Panel B N Age Education Gender Other Credit Cards Disturbed by Transition
Served by top agents on their own 783 30.89 2.79 0.50 0.25 0.26
Served by AI-top-agent hybrid 776 30.76 2.81 0.52 0.25 0.30
Served by bottom agents on their own 798 30.81 2.79 0.50 0.25 0.29
Served by AI-bottom-agent hybrid 787 31.05 2.77 0.51 0.26 0.23
Prob > F 0.83 0.90 0.75 0.93 0.12
Prob > Chi Square 0.18 0.35 1.00 0.96 0.33
Note: Age is the age of the customer; Education refers to the customer’s highest degree attained (1 = high school degree, 2 = junior or community college degree, 3 = bachelor’s
degree, 4 = postgraduate degree); Gender refers to the customer’s gender (0 = female; 1 = male); Other Credit Cards indicates whether the customer owns other credit cards (0 = no; 1 =
yes); Disturbed by Transition indicates whether the customer reports feeling disturbed by the experience of being handed over to another agent for sales persuasion (0 = no; 1 = yes).
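A minimal sketch of how a randomization check of this kind might be computed, assuming a pandas DataFrame with one row per customer and illustrative column names (group, age, education, gender, other_cards, disturbed) that are not the study's actual variable names:

import pandas as pd
from scipy import stats

def randomization_check(df: pd.DataFrame) -> None:
    # One-way ANOVA across the four experimental groups (the "Prob > F" row).
    for var in ["age", "education", "gender", "other_cards", "disturbed"]:
        samples = [g[var].dropna() for _, g in df.groupby("group")]
        _, p_f = stats.f_oneway(*samples)
        print(f"{var}: ANOVA p = {p_f:.2f}")
    # Chi-square tests of independence for the categorical covariates
    # (the "Prob > Chi Square" row).
    for var in ["education", "gender", "other_cards", "disturbed"]:
        table = pd.crosstab(df["group"], df[var])
        _, p_chi, _, _ = stats.chi2_contingency(table)
        print(f"{var}: chi-square p = {p_chi:.2f}")

Covariates that do not differ across groups (large p-values, as in Table 2) are consistent with successful randomization.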
Table 3. Employee Creativity Outcomes

DV: Solving outside-knowledge-bank questions


(1) (2) (3) (4) (6) (7)
Sample All All All All Sub-Sample: Sub-Sample:
Involving top Involving bottom
agents only agents only
AI-Human Hybrids 0.004** 0.004*** 0.002** 0.006** 0.001*
(0.002) (0.001) (0.001) (0.002) (0.001)
Top Agents 0.007*** 0.004**
(0.001) (0.002)
AI-Human Hybrids * Top Agents 0.005*
(0.003)
Age 0.000 0.000 0.000 0.000 -0.000 0.000*
(0.000) (0.000) (0.000) (0.000) (0.000) (0.000)
Other Credit Cards -0.000 -0.000 -0.000 -0.000 0.000 -0.000
(0.001) (0.001) (0.001) (0.001) (0.003) (0.001)
Education 0.001 0.001 0.001 0.001 0.002 -0.001
(0.001) (0.001) (0.001) (0.001) (0.001) (0.001)
Gender -0.000 -0.001 -0.001 -0.001 0.001 -0.002*
(0.001) (0.001) (0.001) (0.001) (0.002) (0.001)
Disturbed by Transition 0.000 0.000 0.000 -0.000 -0.000 -0.000
(0.002) (0.002) (0.002) (0.002) (0.003) (0.001)
Constant 0.002 0.000 -0.003 -0.002 0.001 -0.001
(0.004) (0.003) (0.003) (0.003) (0.007) (0.003)
Observations 1487 1487 1487 1487 739 748
R2 0.001 0.007 0.024 0.026 0.012 0.015
Standard errors clustered at the agent level, reported in parentheses
* p < 0.1, ** p < 0.05, *** p < 0.01
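As a rough companion to Table 3, the sketch below shows one way a specification of this form might be estimated with standard errors clustered at the agent level; it is illustrative only, and the column names (outside_kb, hybrid, top_agent, agent_id, and the controls) are assumptions rather than the authors' actual variable names.

import pandas as pd
import statsmodels.formula.api as smf

def estimate_creativity_model(df: pd.DataFrame):
    # Linear probability model: solving an outside-knowledge-bank question on the
    # AI-human hybrid indicator, the top-agent indicator, their interaction,
    # and customer-level controls (column names illustrative).
    model = smf.ols(
        "outside_kb ~ hybrid * top_agent + age + other_cards + education"
        " + gender + disturbed",
        data=df,
    )
    # Cluster-robust standard errors at the agent level, as noted under Table 3.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["agent_id"]})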
Supplemental Materials for

When and How Artificial Intelligence Augments Employee Creativity

by

Nan Jia
Xueming Luo
Zheng Fang
Chengcheng Liao

Appendix 1: An example protocol of sales leads generation
Appendix 2: Results of Multilevel Models
Appendix 3: Addressing alternative explanations – sales leads generation among experimental groups
Appendix 4: Characteristics of sales leads (customers who confirmed their interest at the first stage) across four experimental groups: one-way analysis of variance (ANOVA)
Appendix 5: Addressing alternative explanations – Number of outside-knowledge-bank questions asked among experimental groups
Appendix 6: Validity check – Customer purchase rate by cold-call vs warm-call customers
Appendix 7: Validity check – Outside-knowledge-bank questions asked by cold-call vs warm-call customers
Appendix 8: Results of Causal Mediation Analysis
Appendix 9: Semi-Structured Interview Protocol
Appendix 10: Summary of Interview Quotes
Appendix 11: A different research question: A “horse race” between AI and human agents for sales persuasion

Appendix 1: An example protocol of sales leads generation

Appendix 2. Results of Multilevel Models

Multilevel regression with random effect at the agent level


DV: Solving outside-knowledge-bank questions
(1) (2) (3) (4) (6) (7)
Sample All All All All Sub-Sample: Sub-Sample:
Involving top Involving bottom
agents only agents only
AI-Human Hybrids 0.004** 0.004*** 0.002 0.006** 0.001
(0.002) (0.001) (0.002) (0.003) (0.001)
Top Agents 0.007*** 0.004**
(0.001) (0.002)
AI-Human Hybrids * Top Agents 0.005*
(0.003)
Age 0.000 0.000 0.000 0.000 -0.000 0.000*
(0.000) (0.000) (0.000) (0.000) (0.000) (0.000)
Other Credit Cards 0.000 0.000 -0.000 -0.000 0.000 -0.000
(0.002) (0.002) (0.002) (0.002) (0.003) (0.001)
Education 0.001 0.001 0.001 0.001 0.002 -0.001
(0.001) (0.001) (0.001) (0.001) (0.001) (0.001)
Gender -0.000 -0.001 -0.001 -0.001 0.001 -0.002*
(0.001) (0.001) (0.001) (0.001) (0.003) (0.001)
Disturbed by Transition 0.000 0.000 0.000 -0.000 -0.000 -0.000
(0.002) (0.002) (0.002) (0.002) (0.003) (0.001)
Constant 0.002 0.000 -0.003 -0.002 0.001 -0.001
(0.004) (0.004) (0.004) (0.004) (0.008) (0.003)
Observations 1487 1487 1487 1487 739 748
R2 0.001 0.007 0.024 0.026 0.012 0.015
Standard errors clustered at the agent level, reported in parentheses
* p < 0.1, ** p < 0.05, *** p < 0.01
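A sketch of how the multilevel specification above might be approximated with a random intercept for each agent; the column names are again illustrative assumptions, not the study's variable names.

import pandas as pd
import statsmodels.formula.api as smf

def estimate_multilevel_model(df: pd.DataFrame):
    # Linear mixed model with a random intercept at the agent level,
    # mirroring the "random effect at the agent level" noted above.
    model = smf.mixedlm(
        "outside_kb ~ hybrid * top_agent + age + other_cards + education"
        " + gender + disturbed",
        data=df,
        groups=df["agent_id"],
    )
    return model.fit()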

Appendix 3: Addressing alternative explanations – sales leads generation among experimental
groups

[Bar chart. Y-axis: sales leads generation rate. Top agents on their own = 0.492; AI-top-agent hybrids = 0.485; bottom agents on their own = 0.489; AI-bottom-agent hybrids = 0.479.]

Appendix 4: Characteristics of sales leads (customers who confirmed their interest at the first
stage) across four experimental groups: one-way analysis of variance (ANOVA)

N Age Education Gender Other Credit Cards Disturbed by Transition
Served by top agents on their own 385 30.76 2.78 0.51 0.25 0.26
Served by AI-top-agent hybrid 376 30.84 2.77 0.52 0.26 0.30
Served by bottom agents on their own 390 31.23 2.80 0.51 0.25 0.29
Served by AI-bottom-agent hybrid 377 31.11 2.70 0.53 0.28 0.23
Prob > F 0.74 0.36 0.92 0.74 0.12
Prob > Chi Square 0.20 0.56 1.00 0.87 0.33

Appendix 5: Addressing alternative explanations – Number of outside-knowledge-bank questions
asked among experimental groups

[Bar chart. Y-axis: number of outside-knowledge-bank questions asked. Top agents on their own = 0.122; AI-top-agent hybrids = 0.128; bottom agents on their own = 0.131; AI-bottom-agent hybrids = 0.130.]

Appendix 6: Validity check – Customer purchase rate by cold-call vs warm-call customers

[Bar chart. Y-axis: customer purchase rate. Warm-call customers = 0.049; cold-call customers = 0.025; D > 0 (p = 0.000).]

Appendix 7: Validity check – outside-knowledge-bank questions asked by cold-call vs warm-call
customers

[Bar chart. Y-axis: number of outside-knowledge-bank questions asked. Warm-call customers = 0.206; cold-call customers = 0.050; D > 0 (p = 0.000).]

Appendix 8. Results of Causal Mediation Analysis

Treatment Effect on the Mediator


AI-Human Hybrid 0.004***
(0.001)
Age 0.000
(0.000)
Other Credit Cards 0.000
(0.002)
Education 0.001
(0.001)
Gender -0.001
(0.001)
Disturbed by Transition 0.000
(0.002)
Constant 0.000
(0.004)

Mediated Effect on Customer Purchase


AI-Human Hybrid 0.470**
(0.206)
Solving Outside-knowledge-bank Questions 5.355**
(2.717)
Age 0.018
(0.014)
Other Credit Cards 0.016
(0.227)
Education 0.150
(0.124)
Gender -0.244
(0.202)
Disturbed by Transition -0.042
(0.229)
Constant -3.699***
(0.621)
N 1487
Total Effect Mediated 0.044

Standard errors in parentheses


* p < 0.1, ** p < 0.05, *** p < 0.01
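For intuition, the sketch below bootstraps an indirect effect of AI assistance on customer purchase through solving outside-knowledge-bank questions. It is a simplified linear illustration rather than the estimator used by the authors, and the column names are assumptions.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

CONTROLS = " + age + other_cards + education + gender + disturbed"

def bootstrap_indirect_effect(df: pd.DataFrame, n_boot: int = 1000, seed: int = 0):
    # Nonparametric bootstrap of the indirect effect a*b, where
    # a = effect of the AI-human hybrid treatment on the mediator, and
    # b = effect of the mediator on customer purchase, holding the treatment fixed.
    rng = np.random.default_rng(seed)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(df), len(df))
        sample = df.iloc[idx]
        a = smf.ols("outside_kb ~ hybrid" + CONTROLS, data=sample).fit().params["hybrid"]
        b = smf.ols("purchase ~ outside_kb + hybrid" + CONTROLS,
                    data=sample).fit().params["outside_kb"]
        draws.append(a * b)
    lower, upper = np.percentile(draws, [2.5, 97.5])
    return float(np.mean(draws)), (float(lower), float(upper))

A bootstrap confidence interval for the indirect effect that excludes zero would be consistent with mediation.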

Appendix 9. Semi-Structured Interview Protocol

The third and the fourth authors directly participated in and closely managed the interviews, by (a)
conducting initial interviews; (b) providing training to research assistants, who were PhD students at a
large research business school, to conduct interviews; (c) closely monitoring the interview process by
reviewing the transcript of each interview immediately after it was completed and providing feedback to
interviewers; and (d) providing answers and support to questions raised by interviewers.

The questions in the interview guide were not used as a questionnaire but as a guide to help interviewers
initiate conversations followed by questions to clarify or probe into the given answers. More questions
were asked than those included in the interview guide based on open dialogues and the interviewees’
responses. The interviewers emphasized building relationships with interviewees and encouraged them to
express their thoughts and emotions, and elaborate on specific examples and stories.

Protocols for Interviewers


Important notes for interviewers:

• It is important to cover each topic below (in red).


• However, within each topic, there is no need to ask every single one of the sample questions. The
sample questions are not meant to be an exhaustive list of all questions that interviewers could
and should ask (i.e., this is not a survey questionnaire). Instead, sample questions offer useful
suggestions on how to start the conversation on each topic. Some of them will become irrelevant
as the conversation proceeds, depending on interviewees’ responses. Moreover, spontaneous follow-
up questions that exceed the scope of the list of questions may be—and should be—asked (see
next point).
• Based on interviewees’ responses, follow-up questions always need to be asked; some of these
follow-up questions are spontaneous and are thus not included in this list of sample questions.
Please be prepared to capture the opportunities to learn about new issues on the fly. Please bear in
mind that due to the nature of semi-structured interviews, unanticipated and spontaneous
opportunities to ask unscripted questions often emerge.
• The focus here is to let interviewees express feelings and emotions, and to encourage them to use
examples and stories to explain how they feel.

Background questions

• How long have you been working for this firm? What products have you sold so far?
• Is selling credit cards similar to or different from selling other products, based on your own
experience? What are the similarities and/or differences?

Answering questions from customers that fall within or outside the scope of training
[The following two sets of questions are for interviewing all agents, including those who received or did
not receive AI assistance.]

• Did any customers ask any questions that you had not been trained for? Example(s)? How did
you answer them?
[Note to interviewer: try to ask and follow up with as many mentions of untrained questions as
possible because they occur less frequently than trained questions.]

o Do you think you successfully addressed the question(s)?


§ Looking back, would you have done differently? How? Why?
o What circumstances could enable you to address those questions more satisfactorily?

[Note to interviewer: We raise a couple of sample questions below. Do not lead with
these sample questions; use them as examples only if interviewees seem confused or
request clarification. Do not constrain interviewee with these questions. Instead,
encourage them to share and talk freely.]

§ For example, if you had more time to think about it, do you think you would have
addressed the question more effectively?
§ For example, if you were somehow able to concentrate better at that moment,
would you have responded differently?

• For questions that customers asked and you have been trained to answer, did you handle them
successfully? Any question that you would have handled differently, in retrospect? Example(s)?
o If there are questions that you wished you handled differently, looking back, what could
have enabled you to better handle it on the spot? Why? Example(s)?

[The following two sets of questions are only for interviewing agents who received AI assistance]

• How do you feel about AI assistance—that AI reaches out to customers and hands over confirmed
leads to you? Example(s)?
[Note to interviewer: Please cover all bullet points below but be prepared to follow up with
interviewees on what they say, by asking additional questions. Please note that one answer may
pertain to multiple bullet points. Do not feel obligated to cover these bullet points in the same
order as listed here. Encourage interviewees to share stories and feelings.]

o Do you prefer this way, or do you prefer to reach out to customers on your own to
generate sales leads, like you did in the past? Why? Example(s)?
o Does having AI generate sales leads (instead of yourself) somehow make your life
easier or more difficult? Why/why not? Example(s)?
o Do you have any concerns about using AI assistance in this way? Does it hinder your
work in any way? Does it help with your work in any way? Example(s)?
o How does having AI generate sales leads (instead of doing it on your own) affect the way
you address untrained questions? Why/why not? Example(s)?
§ Does it make it easier or more difficult for you to handle those questions?
o How does having AI generate sales leads (instead of doing it on your own) affect the way
you address trained questions? Why/why not? Example(s)?
§ Does it make it easier or more difficult for you to handle those questions?
o If sales leads were generated by other colleagues instead of the AI chatbot, would it make
a difference to you? Would it affect how you handled your part of the sales?

• How do you feel about the company’s practice of adopting AI to generate sales leads?

[Note to interviewer: Please cover all bullet points below, but it is very important to be open to
all sorts of feedback and feeling. Encourage interviewees to share stories and feelings.]

o Is it a good thing for employees that your company adopts these AI chatbots to generate
sales leads for employees to serve? Why/why not? Example(s)?
§ Are you happier to have AI assistance than no such assistance? Why/why not?
Example(s)?
§ Does the adoption of AI chatbots in this fashion concern you in any way?
Example(s)?
o Does this practice change how you feel about the company? How?
[Note to interviewer: We raise a sample question below. Do not lead with this sample
question; use it as an example only if interviewees seem confused or request
clarification. Do not constrain interviewee with this question. Instead, encourage them to
share and talk freely.]

§ E.g., Do you feel that with this practice, the company aims to give you more
support?

Future: open ended questions

• Do you prefer to continue to have the AI chatbot generate sales leads for you? Why/why not?
• Should the company increase, maintain, or decrease its use of the AI chatbot to generate sales leads?
Why?
• Would you use the AI chatbot in a different way? If so, what is your suggestion(s)?
• Other thoughts to share?

Appendix 10. Summary of Interview Quotes

Quotes from high-skilled (H) and low-skilled (L) agents, organized by theme

AI changes job design

Feature of lead generation:
"…is hard labor…[because] if you reach out to customers by yourself, there will be many situations including connection failures, customers hanging up on you, and customers scolding you upon picking up the calls" (#H5)
"you rarely have real communication with customers" (#H1)
Lead generation "requiring no skills" (#L1) and "highly-frequent but minimally-effective communication" (#H13)
Conversations with customers were "seldom" (#L5) because agents spent most of their time "trying to get connected, [dealing with] hang-ups, and having very short conversations if customers even picked up the calls" (#L13)

Feature of sales persuasion:
Customers "had clear ideas about what they want" (#H9), "were truly willing to listen to [agents'] introduction [of the product]" (#H6), and thus of "high value" (#H13)
Agents' "likelihood of actually engaging in conversations with customers is almost 100%" (#L3)
Handling sales persuasion without lead generation "increased the intensity and challenges of [agents'] work" (#L14)
Overall impact of changed job design
High-skilled agents: Although with AI assistance, "the difficulty of the content of [their] work has increased" (#H7), agents considered this change "a good thing for [their] efficiency and performance" (#H7) and commonly reported feeling "elated…because this changed work mode significantly helps and improves [their] work" (#H13). "After all, for us, dealing with those boring, non-technical things [lead generation] every day is a bit overkill. We should be assigned to a more difficult business." (#H12) "I have no problem dealing with questions that I have been trained on, but I think we need to have more opportunities to contact customers. Only after communicating with customers can I know what problems exist, how to deal with those problems, and then think about it repeatedly to further improve my scripts, invent new scripts, and better deal with these problems when I encounter them again in the future." (#H11)
Low-skilled agents: "[AI assistance] only reduces our work efficiency because AI has processed all the simple and unskilled tasks, and all the subsequent cases require a certain level of skills in [using the right] scripts, so we will naturally be much slower to process the cases. The duration of serving each customer increases, and the number of customers we can serve is significantly reduced." (#L2) "Although the frequency of communication with customers increases, the difficulty of customers' questions raised during the communication process also increases. I am more likely to become stuck without knowing how to answer the questions raised by customers. It may make customers feel that I am less professional; thus, for me, the sense of pressure is multiplied, and I must learn as soon as possible to catch up." (#L10) "It's like putting a doctor who only sees outpatients in the ICU to care for patients. There is a feeling of driving the duck on a perch. First and foremost, I do not have strong skills, and I easily become nervous when I encounter difficult clients. When I become nervous, I do not know what to do next. So, I am a little worried about failing to serve potential customers well." (#L7)

AI-Induced Development of Cognitive Skills Conducive to Creativity

Time and concentration
High-skilled agents: AI assistance enabled agents to "devote more time and stay more concentrated on thinking about how to resolve questions [that were challenging]" (#H9). "AI assistance freed up more time for us to think more about how to overcome some difficulties. For example, when there was no AI assistance, about half of our day was spent dialing numbers and dealing with no answers, hang-ups, short conversations, and so on. Thus, we could not handle many real cases in one day. However, after AI intervenes, we can also handle the same number of cases in one day as we previously did but have a lot more time to think." (#H6) "When I have sufficient time, I can think more comprehensively, and the answers to the questions are better… when I can concentrate better, my thinking will be more focused, and my answers to some on-the-spot questions should be more accurate." (#H1) "With the assistance of AI, we are liberated from tedious and repetitive calls to better focus on serving willing customers. We have more time and freedom to improve our skills and innovate our scripts continuously." (#H3)
Low-skilled agents: "Paying more attention and spending more time [on solving questions] probably do not make a difference; I can't think of a better solution." (#L5) "Even with more time, I am not sure if I can find a better solution because solving some problems does not necessarily hinge on spending more time to think but on my limited abilities." (#L6) "I have low ability and a weak foundation, and it is difficult for me to innovate when encountering challenging cases." (#L1)

Job complexity
High-skilled agents: "There is a saying that 'knowledge comes from practice.' By constantly encountering problems in real businesses, solving them, and accumulating experience from serving challenging customers, we can continuously improve and innovate the content of scripts. Without AI assistance, we will not have that much time to interact with these valuable customers to update our answers to questions." (#H9) "[AI assistance] stimulates my creativity because I now more frequently encounter important and difficult problems. For the problems that we have been trained for, I can provide different solutions, continue to innovate them, and replace existing solutions with better ones." (#H5)
Low-skilled agents (observing highly skilled colleagues solving challenging questions by developing new, innovative answers): "[I benefit from] the scripts developed by higher-performing colleagues. As AI manages cases that do not require skills, the remaining cases passed to humans are relatively more difficult. It is difficult for us to innovate for these cases, but my higher-performing colleagues can continue to break through and innovate, and it will also benefit us." (#L14) "In fact, [AI assistance] can indeed help us to explore and see if we can innovate the answers to the problems for which we have been trained. Although I cannot do that myself, I have seen some outstanding colleagues coming up with new answers."

Customer feedback: "The more we contact customers who are willing to communicate, the greater the amount of information we obtain from them, and the more we can review and innovate our business through iterations." (#H2)

Identify opportunities: "We can accumulate more experience of serving complicated cases, which provides ideas for how to innovate in the future; the more customers we serve, the more we can judge whether there may be room to adjust the existing, trained way of solving problems." (#L14)

Flexibility: "After all, I have gained more practical experience and thus can handle problems more flexibly." (#H5)

"Play on the spot": "The key to dealing with problems on the spot is to have enough actual 'combat experience' so that we do not panic, and we can readily use our skills to 'get the man.' Therefore, to deal with these problems well, we need to accumulate rich experience." (#L2)

AI-induced Psychological Outcomes Conducive to Creativity

Mood
High-skilled agents: They experience "a greater sense of relief" and "a better mood" (#H6) because they no longer needed to conduct "repetitive, tedious, and meaningless" (#H5) phone calls for lead generation, which they found "frustrating" (#H7): "The work in the previous stage is always hard labor. If processed by humans, emotional fluctuations are inevitably generated. …it is a waste of my time and energy" (#H5) "[With AI assistance,] all the customers whom I handle have intentions [to buy] and are willing to listen to my introduction. When chatting with them, I feel much more relaxed, and I am in a much better mood; thus, naturally, I feel that pressure at work is much reduced, and I feel greater relief and pleasure." (#H6)
Low-skilled agents: "I don't feel relieved. However, it makes life more stressful because I have to deal with many more complex businesses. They give me headaches throughout the day; how can I be more relaxed?" (#L9)

Morale
High-skilled agents: Boosted "morale of combat" or "fighting spirit" (e.g., #H5, #H13). "This approach [using AI assistance] makes me feel that our work is quite challenging. After adopting AI assistance, we focus on dealing with more difficult problems, but the more challenging the customers are, the more motivated we are, and the more work we want to do." (#H3)
Low-skilled agents: Serving more challenging customers "interfered with [their] mentality at work, making it more difficult for [them] to do business in the future" (#L11). "Judging from my current performance, the customers who have passed the AI screening are a big challenge for me, and I am not always able to 'overcome' them. Thus, it is difficult to stimulate my fighting spirit. Conversely, it can make me lose confidence in my work." (#L14) "In addition to amping up pressure, [a change of work mode] gradually destroys my fighting spirit because the difficulty of the cases is too great, my progress is too slow, and the outcomes are not desirable. Thus, the work becomes increasingly less engaging." (#L2)

Passion
High-skilled agents: Increased "work motivation" (#H9) and greater "passion" (#H6). "We have more opportunities to encounter difficult and challenging questions from customers, some of which are not in the knowledge bank; without the restrictions of the knowledge bank, we have more freedom to be innovative [with new scripts]." (#H2)
Low-skilled agents: "It would be best if there was a standard answer to every question encountered" (#L9).

Perception of firm's adoption of AI: "a sense of pride" and "a sense of honor" (e.g., #L3, #L14, #H6, #H13). "[AI assistance] makes me feel more superior because AI is a big trend now, and the company is constantly innovating. Working in such a company makes me feel like I am at the frontier of our time, being fashionable and not rustic, just like buying the latest mobile phone models. My sense of pride naturally arises. Although other companies, such as Railway Construction Corporation and Sanitation Company, are large in scale and have state-owned enterprise backgrounds, they sound like yokels. I can even show off to my friends. I feel enthusiastic about working in such a work environment." (#H9)

Firm's recognition of agents: Using AI to give assistance to agents shows firm's "recognition of [their] job skills" (#H33) and that the firm considered them to be "an indispensable part of the business" (#H33). It means that "[t]he company still thinks highly of [their] work skills" so that they felt "proud to be assigned such complex and difficult cases" (#L4).

Organizational support: The company "certainly gave us more support in business and laid the foundation for us to communicate with customers. At least the purpose of our communication does not need to be explained [to customers], so customers may become more likely to cooperate with us" (#L2). "I think [by adopting AI] our firm wants us to have more opportunities to meet and communicate with real customers, so we don't have to repeat those high-frequency scripts." (#L10)

Threat of substitution by AI
High-skilled agents: "In the short term, [AI assistance] is a good thing, but in the long run, there will be threats. Although AI assistance with our work has helped improve work efficiency and capabilities, will it replace us when AI becomes even more mature in the future? Everyone understands this possibility, but current AI technology has been widely used, and it is correct for the company to adopt AI; otherwise, the company itself may be eliminated. Even if this stage [AI displacing agents] is reached in the future, it will be a necessary decision by the company for technological progress. If there are opportunities, we can choose to transfer them to different positions." (#H13)
Low-skilled agents: They considered the company's adoption of AI "an inevitable trend" (#L4) that was imperative for "the company's own survival" (#L1); thus, "the company's thinking is reasonable: the company needs to grow in the long run, the employees need to continuously make progress, and the new technologies need to continuously expand" (#L9).

Performance Consequences
High-skilled agents: Contrast between their work post- and pre-AI adoption as one of "high efficiency and high quality versus low efficiency and low quality" (#H13)
Low-skilled agents: "In the past, the volume of business was large, and the success rate was low. Now, although the volume of business [that I can handle] reduced less, the success rate can be much higher" (#L1).

Suggested Changes to AI Adoption
High-skilled agents: Suggest expanding the use of AI to handle more "ineffective calls that otherwise occupied too much of our time" (#H2). "[AI involvement] can be maintained [at the current level] for now and gradually increased. As some of the difficult problems that we have encountered are not in the AI knowledge bank, [the company] needs us to continue summarizing our experiences and iteratively update the knowledge in the AI library" (#H11).
Low-skilled agents: "if it continues to increase, the jobs that are left for us to handle will become even more difficult, and I am not sure if I can complete them" (#L3). "From the company's perspective, [the use of AI assistance] should increase. As previously mentioned, this is a trend. From my personal perspective, it is good to maintain [my current level]. After all, those of us with poor performance need to be left with some work to do." (#L10)

Notes: The interviews were conducted in the local language. The first and second authors, who were native speakers of the local language, translated the transcripts into English and cross-checked the quality of the translation.

Appendix 11: A different research question: A “horse race” between AI and human agents for
sales persuasion

The main paper focuses on AI-human collaboration, with the research goal of examining how
AI’s assistance with lead generation, the initial stage of the sales task, affects employees’ creativity
demonstrated during sales persuasion, the subsequent stage of the same sales task. While in the main
paper we do not intend to study how AI compares with human agents in performing the subsequent stage
of sales persuasion, we indeed have an experimental group in which the AI chatbot engaged in both lead
generation and sales persuasion, without any human agents involved. We note, however, that because of
the absence of human involvement, insights generated by this treatment group do not directly help us
address the research question of the main paper, which hinges on human agents’ creativity demonstrated
during sales persuasion. Thus, the discussion below addresses a different research question—a “horse
race,” instead of collaboration, between AI and human agents. We have briefly discussed our findings in
Footnote 6 of the main text, and we report more details on our theoretical expectations and empirical
results.
As discussed in the main text, unscripted questions are more likely to occur during sales
persuasion than during lead generation. Current AI technologies have a weaker ability to solve unscripted questions, although they are competent in handling scripted questions on which they have been trained. In contrast, the greater job expertise of human agents enables them to develop creative solutions to address unfamiliar challenges (Amabile, 1996). Further, prior studies demonstrate that when customers’ questions are resolved in the selling process, customers are more likely to make a purchase (Sabnis et al., 2013; Pennachin & Goertzel, 2007). Thus, we expect that in a “horse race” between AI and human agents in performing the entire sales task (including lead generation and sales persuasion), AI agents are less effective than high-skilled human agents in sales persuasion, and thus in overall sales performance.
In this additional experimental group, 893 randomly selected customers were served by the AI
chatbot alone. That is, the AI made outbound sales calls to customers to sell the product (credit cards),
and performed both lead generation and sales persuasion without any human sales agents involved at any
stage. The AI followed the same protocols and training as the firm’s human agents, and made the calls during the same time slots as the four treatment groups reported in the main text. Randomization checks
show that across experimental groups, all covariates, including customer age, gender, education, and
number of credit cards owned, do not differ (p > 0.58).
The AI-alone group achieved a purchase rate of 0.021, which was similar to that achieved by bottom agents on their own (0.020) but roughly half the purchase rate achieved by top agents on their own (0.037). The performance of first-stage lead generation is the same across the three groups (consistent with Appendix 3); thus, the variation in the success of second-stage sales persuasion closely tracks the variation in the customer purchase rate. These results inform us that the AI chatbot on its own underperformed top-skilled sales agents on their own, not in lead generation but in sales persuasion. However, the AI chatbot on its own achieved performance parity with bottom-skilled sales agents. While these insights are less relevant to the research question of the main text, we report them here for full transparency.
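If one wished to formally compare purchase rates between two groups (for example, the AI-alone group versus top agents on their own), a two-proportion z-test is a natural starting point. The sketch below is illustrative only; the purchase counts would come from the raw experimental data, which are not reported here, so the example call uses hypothetical numbers.

import numpy as np
from statsmodels.stats.proportion import proportions_ztest

def compare_purchase_rates(purchases_a: int, n_a: int, purchases_b: int, n_b: int):
    # Two-sided z-test for equality of two purchase rates.
    stat, p_value = proportions_ztest(
        count=np.array([purchases_a, purchases_b]),
        nobs=np.array([n_a, n_b]),
    )
    return stat, p_value

# Hypothetical counts, not figures reported by the authors:
# compare_purchase_rates(purchases_a=19, n_a=893, purchases_b=29, n_b=783)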
