
Knowledge Management Research & Practice (2009) 7, 65–81

© 2009 Operational Research Society. All rights reserved 1477–8238/09


www.palgrave-journals.com/kmrp/

A measure of knowledge sharing behavior: scale development and validation

Jialin Yi1

1 Northwestern Memorial Hospital, 1315 Greenwood Avenue, Deerfield, IL, U.S.A.

Correspondence: Jialin Yi, 1315 Greenwood Avenue, Deerfield, IL 60015, U.S.A.
Tel: +1 224 515 6096

Abstract
The concept of knowledge sharing is getting more and more attention in the research and practice of knowledge management. It is necessary to develop relevant performance assessment and reward systems to encourage people's knowledge sharing behaviors (KSBs). Till now, little effort has been put into developing a valid and reliable measure of KSB. The primary purpose of this study is to develop a new measure of KSB with desirable psychometric properties – a well-developed KSB scale with a sufficient level of reliability and validity. This main objective was achieved by using the following procedures: (1) specify domain of construct, (2) generate scale items, (3) purify scale, and (4) validate scale. The new KSB scale developed in this study is a 4-dimensional, 28-item, 5-response choice frequency scale. The scale includes written contributions, organizational communications, personal interactions, and communities of practice dimensions. The results provided evidence of the dimensionality, reliability, and validity of the KSB scale.

Knowledge Management Research & Practice (2009) 7, 65–81. doi:10.1057/kmrp.2008.36

Keywords: knowledge sharing; measurement; performance management; knowledge management practice

Introduction
Knowledge is regarded more and more as the critical resource of firms and
economies (Quinn, 1992). Using knowledge and expertise to create a sustainable competitive advantage is considered a key part of corporate strategy in today's business environment. This is why knowledge management has become so popular although the field is only about 10 years old.
Knowledge management is a very broad research area that could be
explored from different aspects such as knowledge identification, creation,
organization, storage, sharing, use, and maintenance. Among these
aspects, knowledge sharing or knowledge transfer is becoming an
increasingly popular area of interest to researchers, especially when the
human factor of knowledge management is stressed (Dougherty, 1999;
Hislop, 2003). How knowledge can best be shared as a corporate asset is a
critical and challenging issue in knowledge management (Oh, 2000).
Knowledge sharing links individuals and organizations by transferring
knowledge from an individual to an organizational level, and hence it
brings competitive value for the organization (Ipe, 2003).

Received: 2 May 2006
Accepted: 15 October 2008

Research objectives
The concept of knowledge sharing is getting more and more attention in the research and practice of knowledge management because of its potential benefits to individuals and the organization. Thus, it is necessary to develop relevant performance assessment and reward systems to

encourage people's KSBs. Otherwise, employees will focus on measurable performances rather than immeasurable performances like sharing knowledge and experience with others (Currie & Kerrin, 2003).

If managers of a company believe knowledge sharing is beneficial to both individuals and the organization and would like to reward KSBs, it is necessary to evaluate and measure the behaviors first. A challenge associated with setting up knowledge sharing rewards is the difficulty in identifying and evaluating the knowledge sharing (especially the tacit knowledge) at the individual level (Ipe, 2003; Michailova & Husted, 2003). For example, tracking the contribution to an online knowledge sharing system is relatively easy; however, monitoring whether knowledge sharing takes place socially in situations like face-to-face conversation is very difficult.

Till now, little effort has been put into developing a valid and reliable measure of KSB. The measurement of KSB is still such a new area that no definitive measure of it exists. Based on the literature review, mainly three methods were used in quantitative studies to measure employees' KSBs: number counting, just asking, and taxonomy based on knowledge/technology type. These methods remain problematic, as discussed below.

Number counting
Huysman & de Wit (2002) argued that the need to measure often leads to the wrongly developed measurement. In the absence of good measuring tools, many companies use solutions such as counting the number of hits on personal postings, the number of documents submitted or consulted, the number of contributions to meetings, the number of written reports, the rate of contribution to knowledge databases, the number of new ideas, the number of improvement suggestions made, the number of presentations made, etc. (Liebowitz, 2002; Smith & McKeen, 2003).

Obviously, computer-based knowledge sharing is relatively easier to track because an individual's contributions to knowledge bases or online discussions are readily observable. For instance, Samsung Life Insurance rewards employees for their contribution to the knowledge sharing system through the design of a point system (Moon & Park, 2002). Using a point redemption system, the company awards points for knowledge flow each year. Each time an employee logs into the system, 10 points are given. Two hundred points are additionally granted for creating knowledge material and one point each for a content search. The reward points can be redeemed to go on overseas training or used as cash-in opportunities or accounts for 10% of an employee's bonus. Also the company nominated 'knowledge masters.'

However, these measures result in an emphasis on the product approach to knowledge, while the process approach is ignored (Huysman & de Wit, 2002). Other forms of knowledge sharing such as those taking place through informal conversations and personal networks are not considered, which might have greater impact on organizational performance. How frequently or how well employees share knowledge cannot easily and correctly be measured just by the number of 'posts' in a knowledge base (Harvard, 1997). The communication of informal interaction is important but not usually recorded, so the contributions are difficult to evaluate.

Just asking
Some empirical studies assessed KSB by simply asking people's perception of the degree of knowledge sharing in different scenarios, such as the studies of Zarraga & Bonache (2003) and Kamdar et al. (2004).

For instance, in Chow et al.'s (2000) experimental design to test the effects of national culture on level of knowledge sharing, KSB is measured by three scenarios describing day-to-day operation stories. Participants (middle-level managers) were asked to assess the extent to which a typical employee in their firm would reveal a past personal job-related mistake, ask a clarifying question, or express a contrary or challenging opinion in the context of a meeting.

Without discussing whether knowledge sharing scenarios are appropriate to evaluate KSB, this method is impossible to apply as an assessment tool in any job performance evaluation system. Its use is limited to research.

Taxonomy based on knowledge type or technology type
Some researchers assess KSB according to different taxonomies based on different knowledge types or technology types. Bock & Kim (2002) measured KSB by asking participants how frequently they share different knowledge (reports, official documents, manuals, methodologies, models, experience, know-how, know-where, know-whom, and expertise from education and training) with other members, and how frequently they use different information technology (bulletin board, e-mail, Webpage, chat room, e-document management system, and knowledge repository) to share their knowledge.

Cummings (2001) proposed a similar method to measure KSB: asking participants how often they shared each kind of knowledge (general overview, specific requirements, analytical techniques, progress reports, and project results) with group members or non-group employees inside the company. Likewise, Lee (2001) and Lin & Lee (2004) measured KSB by asking if employees in the company share know-how from work experience, share expertise from education and training, share business knowledge obtained informally (e.g., new stories, gossip) or from partners, etc.

The KSB taxonomy based on various knowledge types or types of technology used to obtain the knowledge is problematic. The argument in this research is that the knowledge or technology dimensions do not constitute different behaviors. To measure KSB, the questions should be about different kinds of KSBs by individuals in an organization.
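A scheme like the Samsung Life Insurance point system described above is trivial to automate, which is precisely why number counting appeals to companies. The following Python sketch uses the point values reported above; the event names and function are illustrative, not taken from the actual system:

```python
# Illustrative point-based reward scheme modeled on the description of
# the Samsung Life Insurance system (Moon & Park, 2002).
# Event names are hypothetical, not the actual system's.
POINTS = {
    "login": 10,             # 10 points each time an employee logs in
    "create_material": 200,  # 200 points for creating knowledge material
    "content_search": 1,     # 1 point for each content search
}

def score_events(events):
    """Sum reward points over a list of logged system events."""
    return sum(POINTS.get(event, 0) for event in events)

# One employee's logged activity for a period:
events = ["login", "content_search", "create_material", "login"]
print(score_events(events))  # 10 + 1 + 200 + 10 = 221
```

The sketch also makes the critique above concrete: only events the system records can earn points, so knowledge shared in a hallway conversation is invisible to the measure.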


The primary purpose of this study is to develop a new measure of KSB with desirable psychometric properties – a well-developed scale with a sufficient level of reliability and validity. This objective will be achieved by identifying the construct of KSB and categorizing different behaviors, developing the scale for each category, and empirically testing the scale's validity and reliability.

Research significance
Doing research on the measurement of KSB will make a significant contribution to the development of effective means of knowledge sharing. First, as noted above, little attention has been paid to the creation of a valid and reliable measurement instrument of KSB. The currently available instruments are questionable. A commonly existing problem in academic areas is, as Schwab (1980) argued, that measures are often used to empirically examine a hypothesized relationship between variables without adequate data supporting their reliability and validity. This problem causes difficulties in interpreting whether a statistical finding is believable or not because the measures may generate invalid data (Churchill, 1979; Hinkin, 1995). This study can contribute to the development and validation of a new scale of KSB with assured accuracy of measurement, which can be used as an instrument in relationships research related to KSB in future field studies conducted by researchers.

Second, up till now, not much effort has been made to overcome performance evaluation issues relevant to knowledge sharing in organizations. If an organization wants to encourage people's KSB, KSB should be explicitly recognized as part of the individual performance domain and should be linked to performance appraisal practices. This study could help support the design and development of an effective performance evaluation system to successfully facilitate knowledge sharing activities.

Research outline
A new scale of KSB was designed following typical procedures of scale development (Churchill, 1979; Spector, 1992), with some revisions. Specifically, this study involved four major steps (presented in Figure 1): (1) specify domain of construct: defining the construct clearly and precisely based on the literature, and proposing a new taxonomy of the construct based on the literature; (2) generate scale items: using a combination of deductive and inductive approaches to develop an item pool (multiple-item scales) to measure different dimensions of the construct, then asking experts to evaluate face and content validity of the scale; (3) purify scale: collecting empirical data to pre-test reliability and factor structure (scale dimensionality) of the scale; and (4) validate scale: providing new empirical data to assess this new scale's validity and reliability.

Figure 1 Procedures for developing a measure of KSB (steps, with techniques used):
1. Specify domain of construct (literature review)
2. Generate scale items (literature review; focus group interviews; expert item-evaluation)
3. Collect data
4. Purify scale (coefficient alpha; confirmatory factor analysis)
5. Collect new data
6. Assess reliability (coefficient alpha)
7. Assess validity (confirmatory factor analysis; structural equation modeling; correlation coefficient)

Specify domain of the construct

A definition of KSB
Knowledge could be shared at individual, unit or group, and organizational levels, within or across organizations (Ipe, 2003). This study focuses on the analysis of knowledge sharing at the individual level within an organization because knowledge sharing fundamentally takes place between individuals. Some research refers to knowledge sharing as the attitude or ability to share knowledge. In other research, knowledge sharing may mean KSB, or the term may mean both the ability to share knowledge and the action of sharing it. This study focuses on the KSB of individuals because the behavior is what an organization wants to evaluate, measure, and integrate into its performance evaluation system.

A scale cannot be developed to measure a construct unless the nature of that construct is clearly described. Without a well-defined construct, it is difficult to write good items and to validate the scale (Spector, 1992). The relevant literature demonstrated that there is no clearly defined KSB concept. The current definitions do not meet the criteria for a good definition of a construct and they are weak in at least one of the following criteria: specification of a common theme, use of unambiguous terms, contribution to overall understanding of the concept, and clear distinction from related concepts (Podsakoff, 2003). Some examples of KSB definitions are given below.

Knowledge sharing is defined as activities of transferring or disseminating knowledge from one person, group or organization to another. (Lee, 2001, p. 324)

People who share a common purpose and experience similar problems come together to exchange ideas and information. (MacNeil, 2003, p. 299)

Knowledge sharing is basically the act of making knowledge available to others within the organization. (Ipe, 2003, p. 32)

Knowledge sharing is the behavior of disseminating one's acquired knowledge with other members within one's organization. (Ryu et al., 2003, p. 113)
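Steps 4 and 6 of the scale-development procedure rely on coefficient alpha (Cronbach's alpha) to purify the scale and assess its reliability. As a minimal illustration of how that statistic is computed, here is a sketch on hypothetical 5-point frequency responses (not the study's actual data):

```python
# Coefficient (Cronbach's) alpha for a k-item scale:
# alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """rows: one list of item responses per respondent."""
    k = len(rows[0])            # number of items
    items = list(zip(*rows))    # transpose: one tuple per item
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical data: 5 respondents x 4 items on a 5-point frequency scale.
responses = [
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [3, 4, 3, 3],
    [5, 5, 4, 4],
    [1, 2, 2, 1],
]
print(round(cronbach_alpha(responses), 3))  # 0.962
```

An alpha this high suggests the items hang together as one dimension; during scale purification, items whose removal raises alpha are candidates for deletion.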


Knowledge sharing refers to the degree to which one actually shares knowledge with others. (Bock & Kim, 2002, p. 16; Lin & Lee, 2004, p. 115)

In this study a clearly defined new construct of KSB is proposed, which guides the subsequent scale development in this study.

Knowledge sharing behavior is a set of individual behaviors involving sharing one's work-related knowledge and expertise with other members within one's organization, which can contribute to the ultimate effectiveness of the organization.

Individual behaviors
Sharing of knowledge may occur at various levels in organizations such as at the individual, team, or department level, or at the level of the organization as a whole (Erhardt, 2003), but it starts with the individual (Gurteen, 1999). It relies on the behavioral choice of individuals (Dougherty, 1999). Therefore, this definition confines KSB to the individual level within an organization.

Sharing
Sharing here means the action moves from knowledge provider to knowledge recipient and does not include two-way knowledge exchanges between knowledge provider and knowledge recipient, which are defined as knowledge transfer or knowledge flow (Szulanski, 1996). In other words, KSB is limited to the behavior of knowledge providers, not the behavior of knowledge receivers. A knowledge provider's KSB is the performance that an organization wants to evaluate and reward.

Work-related knowledge and expertise
The knowledge that people share formally or informally is relevant to tasks performed. It is not only information-based or know-what knowledge. More important, it is know-how, know-why, experiences, ideas, skills, and expertise. The latter kind of knowledge is more tacit and harder to share, and it is constructed through social relationships and interactions. Therefore, knowledge here is defined as 'the explicit job-related information and implicit skills and experiences necessary to carry out tasks' (Kubo et al., 2001, p. 467).

Within one's organization
The KSB here refers to the behaviors that occur within one's organization, but not to inter-organizational knowledge sharing. Yet KSB could occur either within or between different teams, departments, or divisions.

Contribute to the organization
The KSB that occurs can bring value-added benefits to the organization and contribute to the ultimate effectiveness of the organization. Knowledge sharing can ultimately increase productivity, improve the work process, create new business opportunities, and help the organization to achieve its performance objectives.

Differences between KSB and other related concepts
There are some differences between KSB and other related organizational behavior concepts such as organizational citizenship behavior (OCB). As defined by Organ (1988), 'OCB represents individual behavior that is discretionary, not directly or explicitly recognized by the formal reward system, and that in the aggregate promotes the effective functioning of the organization' (p. 4). The behavioral contents of OCB include helping, volunteering, persisting with extra effort, following rules, not complaining, etc.

KSB and OCB are similar in some aspects. All of the behaviors can contribute to organizational successes such as productivity, efficiency, innovation, and retention of productive employees, though there is no direct effect on financial performance (Motowidlo, 2000). Since the behaviors described under these terms are beyond what is required for jobs, they may not be explicitly captured by an organization's performance management system. The rewards for the contribution may be indirect or uncertain, as compared to more formal contributions related to task performance.

KSB is different from OCB for two reasons. First, the concept of OCB is broader than the concept of KSB. It refers to 'all actions an organization would like to see their employees to perform but cannot require them to perform' (Motowidlo, 2000, p. 119). In other words, it refers to all behaviors except task performance. In contrast, KSB specifically refers to behaviors of sharing work-related knowledge and expertise, occurring either formally, in cases like a discussion in a meeting, or informally, in cases like a conversation in the hallway.

Second, an individual's decision about whether to share knowledge or not can be influenced by perceived benefits and costs of sharing. This is supported by the empirical data of Kamdar et al.'s (2004) experimental study on the relationship between OCB and KSB. The authors assumed that since sharing knowledge with co-workers enhances general productivity and is discretionary, it might be regarded as a form of OCB – a form of altruism or helping. Yet their findings showed no strong relationship between KSB and OCB ratings. They indicated that unlike forms of OCB, knowledge sharing requires a thorough cost–benefit analysis on the part of the employee. KSB requires that people be willing to give up the 'secret of fire' for the benefit of themselves, co-workers, the work unit, or the organization. This distinction can explain the varying relation between KSB and OCB.

Taxonomy of KSB
KSB is a difficult concept to define and measure. The measurement of KSB is still a very new practice. In order to design a new instrument to measure KSB, first it is necessary to examine the possible subsets of KSB.

Particularly, the KSB categories classified in this study, based on the literature, are similar to the four major mechanisms or modes for individuals to share their knowledge in organizations identified by Bartol & Srivastava (2002): 'contribution of knowledge to


organizational database, sharing knowledge in formal interactions within or across teams or work units, sharing knowledge in informal interactions, and sharing knowledge within communities of practice' (p. 65).

Bartol and Srivastava's classification is consistent with Hansen et al.'s (1999) statement that there are two major knowledge management strategies used in companies: codification strategy and personalization strategy. While knowledge contribution to a database falls under the codification strategy, the other three dimensions belong to the personalization strategy (Bartol & Srivastava, 2002). The four subsets of KSB proposed in this study are written contributions (WC), organizational communications (OC), personal interactions (PI), and communities of practice (CP).

Written contributions
This dimension of KSB includes behaviors of employees' contributing their ideas, information, and expertise through written documentation rather than dialogs, such as by posting ideas to an organizational database and submitting reports which can benefit other employees and the organization. Knowledge is shared through a person-to-document channel. The codification strategy of knowledge management and the sharing of explicit knowledge are emphasized for this type of knowledge sharing. Bartol & Srivastava (2002) argued that knowledge contribution to a database is easily associated with rewards to KSBs. The reason is that the contributions to the knowledge databases can be easily tracked, accessed, evaluated, and recorded, so employees are sure their knowledge sharing will not be ignored or devalued by the organization and therefore it will definitely be rewarded later. Thus, it is likely that the extrinsic motivation for sharing knowledge is generally high because knowledge sharing is perceived to be externally controlled (Kaser & Miles, 2001). Therefore, specifying the benefits of sharing knowledge in the incentive system is a useful means of promoting this type of KSB. Possible individual rewards for knowledge sharing could be either pay-based or recognition-based.

Organizational communications
This dimension of KSB includes behaviors of sharing knowledge in formal interactions within or across teams or work units. For example, working teams or project groups may have regular meetings for brainstorming or problem solving by seeking ideas from employees. Knowledge is shared through formal social interactions of a person-to-group channel. The personalization strategy of knowledge management and the sharing of tacit knowledge through formal face-to-face conversation are stressed for this type of knowledge sharing.

This subset of KSB is motivated by employees' willingness to contribute to the success of team and organization. In this situation, employees believe through knowledge sharing they can help the organization as a whole meet its business objectives, but not for their self-interests (Gurteen, 1999). Employees may believe that their contributions will be valuable to the organization, give themselves positive feelings of sociability or doing the right thing, or promote personal responsibility (Cabrera & Cabrera, 2002). Organizational commitment can positively affect employees' attitudes and behaviors of knowledge sharing (Hislop, 2003; MacNeil, 2003).

Since this type of KSB occurs in more formalized routines like formal meetings or workshops, the social interactions such as discussions in meetings or presentations in seminars are easily noticed and remembered by supervisors and colleagues. Thus, the behaviors are more likely to be considered and rewarded through a company's incentive systems. Rewards at different levels (individual rewards, rewards based on team performance, profit or gain sharing plans across teams) may enhance individual knowledge sharing within or across teams (Bartol & Srivastava, 2002).

Personal interactions
This dimension of KSB includes behaviors of sharing knowledge in informal interactions among individuals, such as chatting over lunch and helping other employees who approach them. Knowledge is shared through the informal social interactions of a person-to-person channel. The personalization strategy of knowledge management and the sharing of tacit knowledge through informal conversation are highlighted for this type of knowledge sharing.

This subset of KSB involves absolutely voluntary and natural behaviors. The aim of helping and assisting is to help other employees with specific problems, to help them work better and more efficiently, to help them avoid risks or trouble, or to let others share their genuine passion and excitement on some specific subject. Obviously, the larger the personal networks and the better the personal relationships an individual has, the greater the chance that the individual will share knowledge with people he or she knows in his or her social networks (Kubo et al., 2001).

The intrinsic motivation of this type of KSB is high because the sharers perceive knowledge sharing as self-determined (Kaser & Miles, 2001). These informal social interactions, like chatting at lunchtime, are hard for the organization to notice and evaluate. Therefore, as Bartol & Srivastava (2002) noticed, the rewarding of this type of KSB will be less effective than the rewarding of the first two types. They further suggested procedural and distributive fairness of rewards by managers could convey a signal to employees that the organization values them and is trustworthy, which in turn may have a positive impact on employees' KSBs.

Communities of practice
This dimension of KSB includes behaviors of sharing knowledge within CP, which are voluntary groups of employees communicating around a topic with common interests in a non-routine and personal way, as previously described. Knowledge is shared through informal social


interactions of a person-to-group channel. The personalization strategy of knowledge management and sharing of tacit knowledge through informal conversation are emphasized for this type of knowledge sharing.

This subset of KSB is composed of primarily voluntary and natural behaviors based on the general expectation of reciprocity. Kaser & Miles (2001) called this behavior social exchange relationship-based behavior (a social exchange relationship consists of reciprocal acts in which individuals offer help to one another). Social exchange relationship-based behavior happens where there are common areas of interest, shared passion, or specific shared problems. It is used in establishing group identity and shared perception of value (e.g., both parties know this knowledge has real potential value). An individual shares his or her knowledge expecting reciprocity, which is based on the trust that others will also share their knowledge.

The intrinsic motivation of this type of KSB is high. Although this type of KSB may be supported by an organization, it does not need to be specified and valued by the organization (Kaser & Miles, 2001). Therefore, as Bartol & Srivastava (2002) hypothesized, 'intrinsic rewards that build expertise and feelings of competence are more appropriate for influencing knowledge sharing within organizational communities of practice' (p. 72). For example, the World Bank encourages knowledge sharing by increasing intrinsic motivation of community members such as the opportunity to work on interesting ideas and building relationships with colleagues (Bartol & Srivastava, 2002).

A summary and comparison of the above four subsets of KSB is shown in Table 1.

Table 1 Comparison of four KSB subsets

Written contributions:
  Channel: person-to-document (e.g., post ideas to online database)
  Motivation: extrinsic high, intrinsic moderate
  Rewarding strategy: individual rewards
  Affecting factors: knowledge sharing rewards
  Knowledge shared: more explicit
  Knowledge management strategy: codification strategy

Organizational communications:
  Channel: person-to-group, social, formal (e.g., brainstorming meetings)
  Motivation: extrinsic high, intrinsic moderate
  Rewarding strategy: rewards at both individual and team levels
  Affecting factors: commitment to the organization
  Knowledge shared: more tacit
  Knowledge management strategy: personalization strategy

Personal interactions:
  Channel: person-to-person, social, informal (e.g., face-to-face talk over lunch)
  Motivation: extrinsic low, intrinsic high
  Rewarding strategy: procedural and distributive fairness of rewards
  Affecting factors: personal networks and relationships
  Knowledge shared: more tacit
  Knowledge management strategy: personalization strategy

Communities of practice:
  Channel: person-to-group, social, informal (e.g., meet with community members to discuss problems)
  Motivation: extrinsic moderate, intrinsic high
  Rewarding strategy: intrinsic rewards such as building relationships with colleagues
  Affecting factors: trust between two parties
  Knowledge shared: more tacit
  Knowledge management strategy: personalization strategy

Generation of initial item pool

Study 1: Item generation
There are two basic approaches for item generation: deductive ('logical partitioning,' 'classification from above') or inductive ('grouping,' 'classification from below') approaches (Hinkin, 1995). The deductive scale development method requires a thorough review of literature and a clear understanding of constructs, while the inductive scale development method is used when little theory could be used to identify constructs, requiring items to be generated by asking a sample of respondents to provide descriptions relevant to the studied constructs. In this study both approaches are used for item generation.

First, on the basis of the thorough literature review of existing theoretical and empirical research, an initial item pool was generated. Since the concept of KSB is multidimensional, four subsets of items were tentatively developed to tap each of the four dimensions of the construct: WC, OC, PI, and CP.

In addition, an inductive scale development approach, specifically a focus group interview, was conducted to confirm the re-conceptualization of the definition and categories of KSB, to uncover dimensions of behavior not yet recognized in the literature, and to help generate additional items of KSBs. A 2-h semi-structured focus group interview was conducted with five practitioners from different industries whose work is relevant to the knowledge management area. Responses were then classified into a number of categories by content analysis based on key words or themes.

The KSB items were then written to correspond to categories proposed according to the analysis of literature review and focus group interview results. The product was the knowledge sharing behavior scale (KSBS) initial item pool comprising 32 items to capture the domain of KSB. The initially created KSBS is a 4-dimensional, 32-item, 5-response choice frequency scale.

Study 2: Evaluation of items
Once the item pool of KSBS is generated, what is needed next is to examine how well the items of the construct


tap into its conceptual domain (Podsakoff, 2003). The purpose of item evaluation in this stage is to analyze both the face validity of the items and the content validity of the measure. Five knowledge management experts in the United States, who are familiar with the knowledge sharing context, were invited to review the initial 32 items and evaluate both the face validity and the content validity of the KSBS.

Face validity refers to whether an item, on its face, appears to measure the construct (Podsakoff, 2003). Overall, the five experts reported that the items have face validity because they looked good, appropriate, and clear.

Content validity refers to whether the items used adequately tap into the construct's domain of interest (Hinkin, 1995), that is, the extent to which a specific set of items reflects a content domain. The measure should be neither deficient (having too few items) nor contaminated (having too many items). A matrix method (Podsakoff, 2003) reporting the relationship between items and dimensions of the construct was adopted to evaluate the content validity of the KSBS. The knowledge management experts were asked to classify the randomly ordered items into one of several categories (the KSB dimensions plus an 'other' dimension). Items assigned to the proper category a high percentage of the time (e.g., 80%, or four out of five experts) were retained for further analysis.

The experts evaluated whether each item tapped into each dimension or component through the matrix method. Among the items, 27 were assigned to the proper category by 80% or more of the experts. The remaining five items were either deleted or reassigned to another category, to make sure the measure was neither deficient nor contaminated. Therefore, the revised KSBS is a 4-dimensional, 28-item, 5-response choice frequency scale (see Appendix).

Scale purification (study 3)

The third step in developing a measure of KSB was to conduct a pilot study to refine the KSBS. The pilot study provides information about deficiencies and suggestions for improving the measure through examination of the measurement model type, internal consistency reliability, and factor structure (scale dimensionality) of the KSBS.

The KSBS was pre-tested using a convenience sample of 212 subjects: about 120 distance MBA students and 92 employees working in companies. The distance MBA students were from two large mid-west universities and were readily available. They can be regarded as a sample of the ultimate population for which the KSBS is intended, because they were all full-time employees while studying in part-time MBA programs. The 92 employees were working in relatively large companies where they needed to share knowledge and information in their day-to-day work. The data were collected by online survey. There was no missing-value problem because, in the online survey used to collect the data, every question had to be answered in order to successfully submit the completed survey. All analyses in this pilot study were conducted with SAS 9.13, distributed by SAS Institute.

Formative vs reflective indicators

Classical test theory (e.g., Nunnally, 1978; Churchill, 1979) assumes that the underlying latent construct or variable causes the observed indicators or items in the measure, so the assumption of the measurement model is that the direction of causality runs from the latent variable to its indicators. However, researchers (Bollen & Lennox, 1991; MacCallum & Browne, 1993; Diamantopoulos, 1999; Diamantopoulos & Winklhofer, 2001; Jarvis et al., 2003) found that this is not always true. For some constructs, it makes more sense conceptually to view the direction of causality as running from the indicators to the latent variable.

An example is a social interaction construct measured by indicators of time spent with spouse, with co-workers, with friends, and with others (Bollen & Lennox, 1991): if time spent with spouse increases, social interaction increases even if time spent with co-workers, with friends, and with others does not change; conversely, an increase in social interaction does not require a simultaneous increase in all four indicators.

Therefore, observed indicators can be either reflective (effect) or formative (cause). Jarvis et al. (2003) developed a set of conceptual criteria that can be used to determine whether a construct should be modeled as having formative or reflective indicators. The major differences between formative and reflective measurement models are shown in Table 2.

While reflective indicators are typical of classical test theory and factor analysis models, formative indicator models need not conform to the conventional guidelines, such as the claim that construct indicators should be internally consistent and that items with low item-to-total correlations should be dropped (Bollen & Lennox, 1991). Jarvis et al. (2003) pointed out that the choice between formative and reflective models can substantially influence estimation procedures, and that misspecifying the direction of causality between a construct and its indicators can lead to inaccurate conclusions about the structural relationships between constructs.

To decide whether a construct is formative or reflective in nature, it is important to have a clear conceptual definition of the construct, to generate indicators that fully represent the domain of the construct, and to consider carefully the causal relationship between the construct and its indicators. The judgment can be made based on the differences (Table 2) proposed by Jarvis et al. (2003).

In this study, the subscales 'Written Contributions' and 'Personal Interactions' were identified as formative models because the direction of causality can be viewed as running from indicators to construct. Indicators are not expected to be correlated, and a change in the construct


Table 2 Summary of differences between types of measurement models (Jarvis et al., 2003)

Formative model
- Direction of causality is from indicators to construct
- A change in the construct does not implicate changes in all indicators
- No reason to expect the indicators are correlated (internal consistency reliability is not implied)
- Indicators are defining characteristics of the construct
- Dropping an indicator from the measurement model may alter the meaning of the construct
- Takes measurement error into account at the construct level

Reflective model
- Direction of causality is from construct to indicators
- A change in the construct causes changes in all indicators
- Indicators expected to be correlated (should possess internal consistency reliability)
- Indicators are manifestations of the construct
- Dropping an indicator from the measurement model does not alter the meaning of the construct
- Takes measurement error into account at the item level

does not implicate changes in all indicators. For instance, the WC subscale is measured by indicators of document submission, paper publication, sharing of personal files, online database contribution, etc. All these items are different activities contributing to written contribution behaviors (heterogeneity of the items of the same construct); and if personal file sharing behavior increases, WC increases even if the other items do not change.

On the contrary, the subscales 'Organizational Communications' and 'Communities of Practice' were identified as traditional reflective models, since the direction of causality can be viewed as running from construct to indicators. Indicators are expected to be correlated with each other, and a change in the construct causes changes in all indicators. For example, the CP subscale is measured by indicators that all describe activities within a community of practice. The items are different activities, but all are relevant and most of them usually happen simultaneously (homogeneity of the items of the same construct). In other words, an increase in CP usually causes a simultaneous increase in all items.

It is obvious that the KSBS is a formative scale, since the four subscales are quite different dimensions contributing to the KSBS. In other words, the KSBS is a formative/reflective first-order, formative second-order scale. Figure 2 shows the KSBS model specifications.

Figure 2 KSBS model specifications. [Path diagram: WC (X1-X5) and PI (X1-X8) specified as formative; OC (Y1-Y8) and CP (Y1-Y7) specified as reflective; disturbance terms ζ1-ζ3; the four first-order factors form the second-order KSBS.]

Internal consistency reliability

Internal consistency reliability refers to the extent to which items intercorrelate with one another. Internal consistency implies that multiple items measure the same construct and intercorrelate with one another. In contrast, low inter-item correlations indicate that some items are not drawn from the appropriate domain and produce unreliability (Churchill, 1979).

The commonly accepted measure of internal consistency reliability is Cronbach's coefficient alpha. Nunnally (1978) suggested an alpha of 0.70 as the minimum acceptable standard for demonstrating internal consistency. As shown in Table 3, the reliability values of OC and CP are 0.915 and 0.939, respectively, which indicates the items perform very well in terms of OC and CP reliability. Furthermore, subsequent item-to-total correlations were also calculated to suggest elimination of individual items. Generally, an item-total correlation over 0.40 is acceptable. For all items in OC and CP, the item-total correlations achieved were greater than 0.58, which means that no item needs to be eliminated.

However, as discussed previously, if indicators are formative rather than reflective, high internal consistency reliability is not necessarily expected, because of the heterogeneity of the indicators of the same construct. The relationships between the indicators can be positive, negative, or zero (Bollen & Lennox, 1991). And it is necessary to be very cautious when removing an item of a formative scale, because omitting an item is omitting a part of the construct. As shown in Table 3, the reliability values of WC and PI were 0.506 and 0.710, respectively. The overall KSBS reliability was calculated as 0.73. Additionally, almost all item-total correlations of the WC and PI items were lower than 0.50.

Factor structure (scale dimensionality)

Factor analysis can be used to suggest and confirm scale dimensionality. Both exploratory and confirmatory factor analysis (CFA) in scale construction aim to examine the stability of the factor structure and provide information that will facilitate the refinement of a new measure


Table 3 Pilot-study KSBS internal consistency reliability and intercorrelations

1 2 3 4

1. Written contributions (WC) (0.506) 0.443* 0.417* 0.433*


2. Organizational communications (OC) (0.915) 0.516* 0.423*
3. Personal interactions (PI) (0.710) 0.427*
4. Communities of practice (CP) (0.939)

Note: N = 212, coefficient alpha on the diagonal, *P<0.0001, one-tailed.
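The alpha and item-total computations behind Table 3 were run in SAS; the same quantities can be sketched in Python. The data below are synthetic stand-ins (not the pilot sample), so only the pattern, not the exact values, is meaningful.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's coefficient alpha for an (n_respondents, k_items) matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])

# Synthetic 5-point frequency responses driven by one latent factor,
# mimicking a reflective subscale such as OC (212 pilot-sized respondents).
rng = np.random.default_rng(0)
latent = rng.normal(size=(212, 1))
oc = np.clip(np.round(3 + latent + 0.8 * rng.normal(size=(212, 8))), 1, 5)

print(round(cronbach_alpha(oc), 2))       # well above the 0.70 benchmark
print(corrected_item_total(oc).round(2))  # all above the 0.40 rule of thumb
```

An item whose corrected item-total correlation fell below 0.40 would be a candidate for elimination on a reflective subscale (but, as the text stresses, not on a formative one).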

Table 4 Pilot-study model-fit indices

Chi-square d.f. P (close) RMSEA CFI NFI NNFI GFI RMR

Two-factor model (OC and CP) 270.90 89 0.00 0.098 0.930 0.901 0.918 0.857 0.050
Two-factor model (WC and PI) 200.70 64 0.00 0.100 0.763 0.693 0.711 0.866 0.103
Four-factor model 770.89 344 0.00 0.076 0.874 0.796 0.862 0.784 0.088
Null model 3788.69 350 0.00 0.215 0.008 0.000 0.088 0.255 0.274

Note: N = 212, RMSEA = root mean square error of approximation, CFI = comparative fit index (Bentler), NFI = normed fit index (Bentler and Bonett), NNFI = non-normed fit index (Bentler and Bonett), GFI = goodness-of-fit index, RMR = root mean square residual.
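The RMSEA and CFI entries in Table 4 follow from the chi-square statistics by standard formulas; a minimal sketch (the function names are mine, with N = 212 as in the pilot):

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2: float, df: int, chi2_null: float, df_null: int) -> float:
    """Comparative fit index, computed against the null model."""
    return 1 - max(chi2 - df, 0.0) / max(chi2_null - df_null, chi2 - df, 0.0)

# Four-factor model vs null model, chi-square values taken from Table 4
r = rmsea(770.89, 344, 212)
c = cfi(770.89, 344, 3788.69, 350)
print(round(r, 3))  # 0.077, within rounding of the tabled 0.076
print(round(c, 3))  # 0.876, within rounding of the tabled 0.874
```

The small discrepancies come from the chi-square values themselves being reported to only two decimals.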

(Hinkin, 1995). He argued that while exploratory factor analysis allows the elimination of obviously poorly loading items (used to determine the number of components), the advantage of CFA is that it allows more precision in evaluating the measurement model (used to test a hypothesized structure). Therefore, in this study CFA is an appropriate method to validate that the items empirically form the intended subscales (factors), because the factors and indicators have been hypothesized on a sufficient theoretical basis.

For the reflective indicator models, CFA was conducted with the 212 observations of the sample to determine the fit of the measurement model to the data. Several fit indices are used to separately evaluate and compare across the CFA models. For formative indicator models, the situation is more complicated. There is little research providing guidance on how to verify formative indicator models in structural equation modeling (SEM). As Bollen & Lennox (1991) noticed, because the latent construct is a linear combination of its indicators and a disturbance, whether the model is good or not cannot be judged from its item covariances. The causal indicator model in isolation is statistically underidentified. Only when consequences of the latent construct are included can the formative indicator model be estimated. More will be discussed in study 4.

Specifically, the PROC CALIS procedure distributed by the SAS Institute was used, and maximum likelihood estimation was employed to conduct CFA to assess the quality of the factor structure (the fit between model and data). The process of determining whether the model fits the data begins by reviewing overall goodness-of-fit indices and proceeds to more detailed assessment of fit indices (Hatcher, 1994), as illustrated in the following three steps.

Step 1: Reviewing the chi-square test.
The chi-square test provides a statistical test of the null hypothesis that the model fits the data. If the model provides a good fit, the chi-square value will be relatively small, and the corresponding P value will be relatively large (above 0.05 and preferably closer to 1.00). However, with large samples and real-world data, the chi-square statistic is very frequently significant even if the model provides a good fit.

The chi-square test results are presented in Table 4. Several models were examined by CFA: a two-factor reflective model for the OC and CP subscales, a two-factor formative model for the WC and PI subscales, a four-factor model with all four factors of the KSBS, and a null model where all items load on separate factors. All four models had large chi-square values and very small P values. As stated above, the chi-square test is frequently not valid, and further goodness-of-fit indices are necessary to examine model fit.

Step 2: Reviewing goodness-of-fit indices.
Major overall model-fit indices considered include the comparative fit index (CFI), the normed fit index (NFI), the non-normed fit index (NNFI), and the goodness-of-fit index (GFI). Values over 0.90 on these indices indicate an acceptable fit. The fit indices of the four models are compared in Table 4. Obviously the best fit was from the first model, the two-factor reflective model for the OC and CP subscales: all values were over 0.90 (except the GFI, which was close to 0.90), which indicates good model fit. The values of the second model, the two-factor formative model for the WC and PI subscales (all lower than 0.90), support our model-type assumptions and indicate that a formative model might be more appropriate. The four-factor model and the null model demonstrate poor fit of the data to the model.

In addition, the root mean square error of approximation (RMSEA) and root mean square residual (RMR), measuring


discrepancies between the implied and observed covariance matrices are provided to assess the overall fit. Values of 0.10 or less indicate an acceptable fit. For the two-factor reflective model for the OC and CP subscales, the RMSEA and RMR values were 0.098 and 0.050, respectively, which indicate an adequate fit. The RMSEA and RMR values of the two-factor formative model and the null model all show poor fit.

Step 3: Reviewing significance tests for factor loadings.
The factor loading is equivalent to a path coefficient from a latent factor to an indicator variable. A non-significant factor loading means that the involved indicator variable is not doing a good job of measuring the underlying factor and should possibly be reassigned or dropped. The values of factor loading, standard error, and t value are presented in Table 5. All t values for the OC and CP factors were over 3.291, which means a significant factor loading at P<0.001, and all factor loading coefficients were over or close to 0.60. Thus, all items of the OC and CP factors are doing a good job of measuring the underlying factor, and there is no need to eliminate any items. Nearly all t values for WC and PI were over 1.96, but most factor loading coefficients were lower than 0.60, which also suggested that a reflective model is not appropriate for the WC and PI factors.

Table 5 Pilot-study factor loading tests

Item  Factor loading  Designated factor  SE     t Value
1     0.298           WC                 0.077   3.83
2     0.321           WC                 0.076   4.20
3     0.557           WC                 0.103   5.37
4     0.736           WC                 0.106   6.88
5     0.243           WC                 0.083   2.92
6     0.627           OC                 0.046  13.52
7     0.818           OC                 0.055  14.85
8     0.769           OC                 0.045  16.80
9     0.593           OC                 0.044  13.20
10    0.601           OC                 0.045  13.15
11    0.668           OC                 0.057  11.65
12    0.679           OC                 0.060  11.24
13    0.550           OC                 0.058   9.37
14    0.311           PI                 0.066   4.71
15    0.483           PI                 0.078   6.20
16    0.403           PI                 0.044   9.08
17    0.496           PI                 0.045  10.91
18    0.613           PI                 0.045  13.58
19    0.568           PI                 0.042  13.41
20    0.170           PI                 0.097   1.75
21    0.191           PI                 0.066   2.88
22    0.885           CP                 0.050  17.50
23    0.906           CP                 0.049  18.18
24    0.911           CP                 0.050  18.09
25    0.882           CP                 0.050  17.48
26    0.773           CP                 0.059  13.03
27    0.659           CP                 0.063  10.33
28    0.669           CP                 0.061  10.87

Scale reliability and validity (study 4)

The fourth and last step in developing a measure of KSB was to collect new empirical data to examine the internal consistency reliability, factor structure, and construct (convergent and discriminant) validity of the KSBS.

The sample consisted of 196 employees working in a large U.S. high technology company. High technology firms are knowledge intensive companies: the company's success depends on developing and applying employees' knowledge and expertise to solve problems, generate ideas, or create new products and services. The sampling method used was random sampling: the online survey Web address was distributed to several units of the company located in the United States to invite employees who need knowledge sharing in their daily work to participate. It took about 2 months to receive 196 responses, and the response rate was about 20%. The data were collected through an online survey. There were no missing data, because every question of the survey had to be answered in order to submit the completed survey. All analyses below for study 4 were conducted with SAS 8.0, distributed by SAS Institute.

Reliability assessment

Using the same reliability-testing method described in study 3, internal consistency reliability for the traditional reflective scales was measured again, with the newly collected data, through Cronbach's coefficient alpha to see whether the scale is internally consistent. As shown in Table 6, the reliability values of OC and CP were 0.905 and 0.934, respectively, which indicated that the items perform very well in terms of OC and CP reliability. For the formative scales, the reliability values of WC and PI were 0.458 and 0.723, respectively. But as stated above, high reliability values are not expected for formative scales.

Table 6 KSBS internal consistency reliability and intercorrelations with new data

1 2 3 4

1. Written contributions (WC) (0.458) 0.176* 0.177* 0.287**


2. Organizational communications (OC) (0.905) 0.547** 0.332**
3. Personal interactions (PI) (0.723) 0.419**
4. Communities of practice (CP) (0.934)

Note: N = 196, coefficient alpha on the diagonal, *P<0.01, **P<0.0001, one-tailed.
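The contrast in Table 6 (alphas of 0.905/0.934 for the reflective OC and CP vs 0.458/0.723 for the formative WC and PI) is exactly what the measurement theory predicts. A small simulation (all distributions and sizes invented for illustration) makes the point:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 196  # same size as the study-4 sample, purely for flavor

# Reflective: one latent construct drives every indicator, so items intercorrelate.
latent = rng.normal(size=(n, 1))
reflective = latent + 0.7 * rng.normal(size=(n, 7))

# Formative: independent activities jointly *define* the construct,
# so nothing forces the items to correlate.
formative = rng.normal(size=(n, 5))

def alpha(x: np.ndarray) -> float:
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

print(round(alpha(reflective), 2))  # high, like OC and CP
print(round(alpha(formative), 2))   # near zero: internal consistency is not implied
```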


Factor structure (scale dimensionality)

As was done in the previous pilot study, CFA was used to validate the traditional reflective indicator models with the newly collected data. For the reflective indicator models of the OC and CP subscales, CFA was conducted in the same way as for study 3. All fit index values of the OC and CP model were over or close to 0.90, indicating good model fit, and the RMSEA and RMR values were 0.100 and 0.054, respectively, which also indicated an adequate fit.

However, for the more complicated formative indicator models, the causal indicator model in isolation is statistically underidentified. In this study, for formative model testing, the Multiple Indicators Multiple Causes (MIMIC) model method was used (MacCallum & Browne, 1993; Jarvis et al., 2003), which is a particular case of the general SEM. The MIMIC model, as shown in Figure 3, refers to a single latent variable with multiple indicators and multiple causes (observed x variables and y variables).

From the SEM point of view, the left equation can be regarded as a structural model, while the right equation can be regarded as a measurement model (Hatcher, 1994):

η1 = γ11X1 + γ12X2 + ... + γ1qXq + ζ1,    Yi = λi1η1 + εi

[The Figure 3 path diagram shows formative indicators X1-X4 with weights γ11-γ14 and disturbance ζ1 forming η1, which is measured by reflective indicators Y1 and Y2 with loadings λ11 and λ12 and errors ε1 and ε2.]

In this study, two MIMIC models were established for the WC and PI factors of the KSBS. Both models are a single construct with several formative and several reflective measures. The MIMIC-WC model has five formative indicators and two reflective indicators, while MIMIC-PI has eight formative indicators and two reflective indicators. In Figure 3, all X are measures of the key construct, like the five formative indicators of the WC factor. Y1 and Y2, being reflective in nature, need to be content-valid measures tapping the overall level of the construct. Examples of potentially appropriate reflective items could be 'Overall, I frequently share my knowledge through written communications,' etc. Therefore, with the addition of these two reflective indicators, the WC and PI constructs each have two paths emanating from them and are identified.

For testing the formative indicator models of the WC and PI subscales, SEM was conducted with the 196 new observations to determine the fit of the MIMIC models to the data. First, as in the examination of the reflective factor structure, reviews of the chi-square test, fit indices, and factor loading tests are required to examine model fit. The SEM results, as shown in Table 7, indicated that both the WC and PI models now fit the data better than in the previous CFA results. CFI, NFI, NNFI, and GFI values over 0.90 indicate an acceptable fit, and RMSEA and RMR values of 0.10 or less indicate an acceptable fit. All these values showed adequate model fit; the PI model results in particular indicated a very good model fit.

Validity assessment

Although reliability assures that a scale can consistently measure something, it does not assure that it measures what it is designed to measure. The property of measuring what a scale intends to measure is validity. There are several types of validity, such as content validity (adequacy), construct validity (accuracy), and criterion-related validity (prediction).

In the previous sections, face and content validity have been discussed and examined. Criterion-related validity, also called statistical validity or empirical validity, refers to the extent to which a scale performs as expected in relation to some external variables (or criteria). Since examining criterion-related validity requires more time and effort than is proposed in this study, it was not explored here.

In the following section, we focus on the examination of construct validity (convergent and discriminant validity), which is the essential property for a scale. It refers to the extent to which a measure can be said to accurately measure the theoretical construct. It can be understood as the correlation coefficient between the construct and the measure, though this is not literally operational. To establish the construct validity of a measure, we need to determine the extent to which the measure correlates with other measures designed to measure the same thing (convergent validity) and the extent to which the measure differs from other measures designed to measure different things (discriminant validity). Neither is easy to achieve.
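As background for the MIMIC models evaluated in Table 7 below, data can be generated to follow the two MIMIC equations given earlier. Every weight and error variance here is illustrative, not an estimate from the study:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 196

# Structural part: eta1 = gamma'x + zeta (formative indicators cause the construct)
x = rng.normal(size=(n, 5))                  # e.g., five WC activity indicators
gamma = np.array([0.3, 0.2, 0.4, 0.5, 0.1])  # illustrative formative weights
eta1 = x @ gamma + 0.3 * rng.normal(size=n)  # zeta = structural disturbance

# Measurement part: Y_i = lambda_i * eta1 + epsilon_i (two reflective indicators,
# e.g., overall 'I frequently share my knowledge in writing' ratings)
lam = np.array([0.8, 0.7])
y = eta1[:, None] * lam + 0.4 * rng.normal(size=(n, 2))

# The two reflective y's give the construct two emanating paths, which is what
# makes the otherwise underidentified formative model estimable.
r12 = np.corrcoef(y[:, 0], y[:, 1])[0, 1]
print(round(r12, 2))  # substantial correlation induced by the shared eta1
```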


Convergent validity refers to the extent to which
Figure 3 Path diagram for MIMIC model. the measure correlates with other measures that were

Table 7 MIMIC models testing

Chi-square d.f. P RMSEA CFI NFI NNFI GFI RMR

WC model 3.49 3 0.3209 0.029 0.995 0.971 0.965 0.994 0.018


PI model 17.82 6 0.0067 0.100 0.977 0.968 0.829 0.982 0.025

Note: N = 196.


designed to measure the same thing. A measure with convergent validity should correlate highly with other measures of the same construct (Spector, 1992). The measures used for comparison could be either different scales or different methods (e.g., observation, interview). Since the KSBS designed in this study is the first scale that tries to measure the construct of KSB reliably and validly, there is no other available valid scale or method with which to compare in order to calculate convergent validity of the KSBS.

One mathematical way to examine convergent validity (only for reflective models) is through the loadings of specific items. Anderson & Gerbing (1988) suggested that convergent validity can be indicated when each item's loading on the construct is greater than twice its standard error. The factor loading coefficients of the OC and CP factors, as shown in Table 8, were all greater than twice their standard errors, which suggests that the OC and CP subscales have convergent validity.

Table 8 Factor loading tests with new data

Item  Factor loading  Designated factor  SE     t Value
1     0.090           WC                 0.066   1.35
2     0.214           WC                 0.089   2.40
3     0.289           WC                 0.112   2.58
4     0.933           WC                 0.209   4.45
5     0.405           WC                 0.112   3.59
6     0.625           OC                 0.050  12.50
7     0.721           OC                 0.057  12.65
8     0.660           OC                 0.044  15.01
9     0.564           OC                 0.049  11.52
10    0.585           OC                 0.047  12.45
11    0.645           OC                 0.058  11.07
12    0.565           OC                 0.056   9.93
13    0.571           OC                 0.059   9.64
14    0.260           PI                 0.059   4.39
15    0.395           PI                 0.073   5.41
16    0.457           PI                 0.049   9.17
17    0.461           PI                 0.053   8.68
18    0.601           PI                 0.051  11.77
19    0.571           PI                 0.043  13.23
20    0.182           PI                 0.097   1.86
21    0.328           PI                 0.062   5.26
22    0.886           CP                 0.054  16.21
23    0.860           CP                 0.049  17.31
24    0.857           CP                 0.048  17.85
25    0.852           CP                 0.053  16.01
26    0.723           CP                 0.063  11.45
27    0.751           CP                 0.067  11.14
28    0.590           CP                 0.065   9.01

In addition, if a factor loading of a reflective scale is below 0.707, there is more error or unique variance associated with that item than common variance. In this case, the average variance extracted (AVE) is computed, an index that assesses the amount of variance captured by an underlying factor in relation to the amount of variance due to measurement error. As calculated and presented in Table 9, both AVE values for OC and CP exceeded 0.50 (0.553 and 0.678, respectively), which means more than 50% of the variance is captured by the factors, and therefore the OC and CP subscales have convergent validity.

Another method that can be used to estimate the subscales' convergent validity, in some measure, is to compare overall rating scores of the KSB subscales with the corresponding overall rating questions obtained through the survey. The Pearson correlation coefficient results are shown in Tables 10-13. For the OC and CP subscales, all correlation values indicated moderate correlations (0.4-0.7) with P<0.0001. For the WC and PI subscales, only weak positive relationships were found. All these results are reasonable: the homogeneity of reflective items means that an overall rating of a scale should correlate with all items of the scale, while the heterogeneity of formative items makes it hard for an overall rating to reflect the different contributions of each item to the scale.

Discriminant validity

Discriminant validity refers to the extent to which the measure differs from other similar measures designed to measure different things. It shows that the new measure is indeed novel, and not just a reflection of other variables. Discriminant validity of the KSBS was assessed in terms of the correlation between the KSBS and the scale of OCB (Organ, 1988), and the correlations between the different dimensions of the KSBS. Our hypotheses are that KSB and OCB are expected to be different constructs (refer to the section 'A definition of KSB'), and that the different dimensions of KSB represent different facets of the scale. Discriminant validity can be indicated by relatively low correlations. OCB measurement items were adopted from Smith et al. (1983).

Results in the last column of Table 14 showed that the correlations between the KSBS and the OCB scale, and among the four KSB subscales and the OCB scale, are all relatively low or non-significant. All values were below 0.37. In addition, the correlations between the different KSB subscales or dimensions were also only moderate or low (0.17-0.55). These results indicated that KSB and OCB are different constructs and that the different dimensions of KSB represent different facets of the scale, and therefore discriminant validity of the KSBS was demonstrated.

Furthermore, discriminant validity can be examined mathematically through evaluation of the AVE (Fornell & Larcker, 1981). They suggested that the AVE for each construct should be greater than the squared correlation between that construct and any other. Otherwise, the variance accounted for by a construct explains less variance than that explained by the construct's correlation with another, thus indicating a lack of discriminant validity. As shown in Table 15, the AVE values are those on the diagonal. They can be compared to the other three correlation numbers either in the same row or in the same column. All AVE values were larger than the squared


Table 9 Standardized factor loading and average variance extracted values

Item Standardized factor loading Designated factor Indicator reliability Error variance Average variance extracted

1 0.114 WC 0.012 0.987 WC: 0.202


2 0.216 WC 0.046 0.953
3 0.235 WC 0.055 0.944
4 0.858 WC 0.736 0.263
5 0.399 WC 0.159 0.840
6 0.773 OC 0.597 0.402 OC: 0.553
7 0.779 OC 0.606 0.393
8 0.871 OC 0.758 0.241
9 0.730 OC 0.532 0.467
10 0.771 OC 0.594 0.405
11 0.709 OC 0.502 0.497
12 0.653 OC 0.426 0.573
13 0.638 OC 0.407 0.592
14 0.331 PI 0.109 0.890 PI: 0.314
15 0.402 PI 0.161 0.838
16 0.634 PI 0.401 0.598
17 0.607 PI 0.368 0.631
18 0.770 PI 0.592 0.407
19 0.840 PI 0.705 0.294
20 0.144 PI 0.020 0.979
21 0.392 PI 0.153 0.846
22 0.901 CP 0.811 0.188 CP: 0.678
23 0.935 CP 0.874 0.125
24 0.951 CP 0.904 0.095
25 0.894 CP 0.799 0.200
26 0.716 CP 0.512 0.487
27 0.701 CP 0.491 0.508
28 0.594 CP 0.352 0.647

Note: N = 196.
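The AVE column of Table 9 is simply the mean of the squared standardized loadings within each factor (a squared loading is the indicator reliability; its complement is the error variance). Recomputing for the two reflective subscales:

```python
# Standardized loadings copied from Table 9
oc_loadings = [0.773, 0.779, 0.871, 0.730, 0.771, 0.709, 0.653, 0.638]
cp_loadings = [0.901, 0.935, 0.951, 0.894, 0.716, 0.701, 0.594]

def ave(loadings):
    """Average variance extracted: mean squared standardized loading."""
    return sum(l * l for l in loadings) / len(loadings)

print(round(ave(oc_loadings), 3))  # 0.553, matching Table 9
print(round(ave(cp_loadings), 3))  # 0.678, matching Table 9
```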

Table 10 Pearson correlation coefficients for WC factor

1 2 3

1. Written contributions (WC)  0.180*  0.179*
2. Overall WC frequency  0.465**
3. If WC is a usual way agreement

Note: N = 196, *P<0.01, **P<0.0001, one-tailed.

Table 11 Pearson correlation coefficients for OC factor

1 2 3

1. Organizational communications (OC)  0.407**  0.429**
2. Overall OC frequency  0.479**
3. If OC is a usual way agreement

Note: N = 196, *P<0.01, **P<0.0001, one-tailed.

Table 12 Pearson correlation coefficients for PI factor

1 2 3

1. Personal interactions (PI)  0.437**  0.230*
2. Overall PI frequency  0.218*
3. If PI is a usual way agreement

Note: N = 196, *P<0.01, **P<0.0001, one-tailed.

Table 13 Pearson correlation coefficients for CP factor

1 2 3

1. Communities of practice (CP)  0.537**  0.665**
2. Overall CP frequency  0.545**
3. If CP is a usual way agreement

Note: N = 196, *P<0.01, **P<0.0001, one-tailed.

correlations between the factor and the other three factors, which demonstrated discriminant validity of the KSBS.

External validity

Furthermore, a simple test relevant to external validity, as Diamantopoulos & Winklhofer (2001) suggested, was conducted for the formative subscales WC and PI. This external validity test is a necessary step for formative construct development. As discussed previously, we need to be cautious when removing an item of a formative scale, because omitting an item is omitting a part of the construct. However, 'an excessive number of indicators is undesirable because of both the data collection demands

Knowledge Management Research & Practice


78 A measure of knowledge sharing behavior Jialin Yi

Table 14 Discriminant validity tests

1 2 3 4 5 6

1. Knowledge sharing behavior (KSB) 0.364**
2. Written contributions (WC) 0.176* 0.177* 0.287** 0.021
3. Organizational communications (OC) 0.547** 0.332** 0.338**
4. Personal interactions (PI) 0.419** 0.394**
5. Communities of practice (CP) 0.224*
6. Organizational citizenship behavior (OCB)

Note: N = 196, *P<0.01, **P<0.0001, one-tailed.

Table 15 Average variance extracted and squared correlation values

1 2 3 4

1. Written contributions (WC) (0.202) 0.030 0.031 0.082
2. Organizational communications (OC) 0.030 (0.553) 0.299 0.110
3. Personal interactions (PI) 0.031 0.299 (0.314) 0.175
4. Communities of practice (CP) 0.082 0.110 0.175 (0.678)

Note: N = 196; AVE values (in parentheses) on the diagonal, squared inter-construct correlations off the diagonal.
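Table 15 applies the Fornell & Larcker (1981) criterion: discriminant validity holds when each construct's AVE exceeds its squared correlations with every other construct. A minimal sketch over the values transcribed from the table:

```python
# Fornell-Larcker discriminant validity check: each construct's AVE
# (diagonal of Table 15) should exceed its squared correlation with
# every other construct (off-diagonal values).
names = ["WC", "OC", "PI", "CP"]
ave = {"WC": 0.202, "OC": 0.553, "PI": 0.314, "CP": 0.678}
sq_corr = {
    ("WC", "OC"): 0.030, ("WC", "PI"): 0.031, ("WC", "CP"): 0.082,
    ("OC", "PI"): 0.299, ("OC", "CP"): 0.110, ("PI", "CP"): 0.175,
}

def discriminant_ok(a, b):
    """True when both AVEs exceed the squared correlation between a and b."""
    r2 = sq_corr.get((a, b), sq_corr.get((b, a)))
    return ave[a] > r2 and ave[b] > r2

for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, b, discriminant_ok(a, b))  # every pair passes for these values
```

The tightest comparison is OC–PI (0.314 vs 0.299), which still passes, consistent with the discriminant validity claim in the text.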

Table 16 Pearson correlation coefficients for WC items

WC 1 2 3 4 5

KSBS 0.287* 0.326** 0.242* 0.369** 0.265*

Note: N = 196, *P<0.01, **P<0.0001, one-tailed.

Table 17 Pearson correlation coefficients for PI items

PI 14 15 16 17 18 19 20 21

KSBS 0.440** 0.464** 0.364** 0.449** 0.439** 0.474** 0.495** 0.569**

Note: N = 196, *P<0.01, **P<0.0001, one-tailed.
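The retention rule reported in Tables 16 and 17 — correlate each formative item with a global item summarizing the construct and keep only items whose correlation is significant — can be sketched as follows. The data here are synthetic and purely illustrative (the paper's actual responses are not reproduced), and SciPy is used for the Pearson test:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Illustrative data: 196 respondents, 5 formative items (e.g., the WC items)
# on a 1-5 frequency scale, plus one global summary rating standing in for
# the overall KSBS item.
items = rng.integers(1, 6, size=(196, 5)).astype(float)
global_item = items.mean(axis=1) + rng.normal(0, 0.2, 196)

# Retain an item only if its correlation with the global item is significant.
retained = []
for j in range(items.shape[1]):
    r, p = pearsonr(items[:, j], global_item)
    if p < 0.01:  # significance threshold used in the paper
        retained.append(j)
print(retained)
```

With real data, any item left out of `retained` would be a candidate for removal, subject to the caution (discussed above) that dropping a formative item drops part of the construct.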

it imposes and the increase in the number of parameters when the construct is embedded within a broader structural model' (Diamantopoulos & Winklhofer, 2001, p. 272).
To improve the quality of the formative scales, Diamantopoulos and Winklhofer suggested a simple way of correlating each item to another external variable and eliminating those items with non-significant correlations. A global item summarizing the essence of the construct (like the KSBS) can be used as a good external variable. The Pearson correlation coefficients are shown in Tables 16 and 17. All correlation values indicated that each item significantly correlated with the KSBS at P<0.01. Therefore, all formative items should be retained for the KSBS.

Conclusions and discussion
The primary purpose of this study was to develop and validate a new scale of KSBs with desirable psychometric properties. Overall, this study contributes to the existing research on KSB measures by identifying the construct of KSB, proposing a new taxonomy to categorize different behaviors into four dimensions of KSB, developing items for each of the four categories, and empirically testing its validity and reliability. The new KSBS is a 4-dimensional, 28-item, 5-response choice frequency scale. The results provided evidence of the dimensionality, reliability, and validity of the scale.
This study has some limitations. First, there are some weaknesses in the sampling methods used in this study, due to the difficulties of large-scale survey data collection. For the pilot study, the KSBS was pre-tested using a convenience sample of 212 subjects. Because there was a lack of randomization in the convenience sample, we cannot be very sure about the characteristics of the sample pool, and there might be some unobserved variables that influence the results. For the validation study, the sample is limited to one large company in one particular industry – high technology.


Thus, the findings and implications drawn from this study might not be readily generalized to other industries. However, the results of the pilot study and the validation study were very similar, which argues strongly for the reliability and validity of the KSBS.
The second limitation is related to the difficulties of formative scale validation, including model identification, scale dimensionality, reliability, and validity examination. To date, there is no standard method for assessing the reliability of formative scales. In this study, the author used the MIMIC model to test the scale dimensionality to some degree.
The third limitation is that the typical approach to examining convergent validity was not used in this study, because there is currently no other available valid scale or method against which to calculate the convergent validity of the KSBS. Instead, I tested the convergent validity of the traditional reflective subscales to some degree through a mathematical method and correlations with overall rating items.
To the extent that this study was limited, more extensive studies might overcome the limitations of the present study. Furthermore, this study suggests several possibilities for future research, such as examining criterion validity and conducting further correlation studies with the established scale.
The initial validation process in this study provided support for a promising new measure of KSB. The next step could be to examine criterion-related validity, which indicates whether a scale performs as expected in relation to some external variables. As stated previously, such an examination focuses on specific relationships that were theoretically justified in the literature review. For example, the relationships between KSB and other variables (e.g., rewards, trust, or commitment) may be hypothesized and tested to see whether criterion-related validity can be established for the KSBS.
Another research possibility is to use the reliable and valid KSBS developed in this study as an instrument in future field studies of hypothesized relationships related to KSB. This is, in fact, the major theoretical contribution and one of the ultimate goals of this study.

References
ANDERSON JC and GERBING DW (1988) Structural equation modeling in practice: a review and recommended two-step approach. Psychological Bulletin 103(3), 411–423.
BARTOL KM and SRIVASTAVA A (2002) Encouraging knowledge sharing: the role of organizational reward systems. Journal of Leadership and Organization Studies 9(1), 64–76.
BOCK GW and KIM Y (2002) Breaking the myths of rewards: an exploratory study of attitudes about knowledge sharing. Information Resources Management Journal 15(2), 14–21.
BOLLEN K and LENNOX R (1991) Conventional wisdom on measurement: a structural equation perspective. Psychological Bulletin 110(2), 305–314.
CABRERA A and CABRERA E (2002) Knowledge-sharing dilemmas. Organization Studies 23(5), 687–710.
CHOW CW, DENG JF and HO JL (2000) The openness of knowledge sharing within organizations: a comparative study of the United States and the People's Republic of China. Journal of Management Accounting Research 12, 65–95.
CHURCHILL G (1979) A paradigm for developing better measures of marketing constructs. Journal of Marketing Research 16(1), 64–73.
CUMMINGS JN (2001) Work group and knowledge sharing in a global organization. Academy of Management Proceedings OB, D1–D6.
CURRIE G and KERRIN M (2003) Human resource management and knowledge management: enhancing knowledge sharing in a pharmaceutical company. International Journal of Human Resource Management 14(6), 1027–1045.
DIAMANTOPOULOS A (1999) Export performance measurement: reflective vs formative indicators. International Marketing Review 16(6), 444–457.
DIAMANTOPOULOS A and WINKLHOFER HM (2001) Index construction with formative indicators: an alternative to scale development. Journal of Marketing Research 38(2), 269–277.
DOUGHERTY V (1999) Knowledge is about people, not databases. Industrial and Commercial Training 31(7), 262–266.
ERHARDT N (2003) Enablers and Barriers for Individuals' Willingness and Ability to Share Knowledge: An Exploratory Study. Rutgers University, Piscataway, NJ.
FORNELL C and LARCKER DF (1981) Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research 18(1), 39–50.
GURTEEN D (1999) Creating a knowledge sharing culture. Retrieved 15 May 2004, from http://www.gurteen.com/gurteen/gurteen.nsf/0/FD35AF9606901C42802567C70068CBF5/.
HANSEN MT, NOHRIA N and TIERNEY T (1999) What's your strategy for managing knowledge? Harvard Business Review 77(2), 106–116.
HARVARD (1997) A Note on Knowledge Management. Harvard Business School.
HATCHER L (1994) A Step-by-Step Approach to Using SAS for Factor Analysis and Structural Equation Modeling. SAS Institute Inc, Cary, NC.
HINKIN TR (1995) A review of scale development practices in the study of organizations. Journal of Management 21(5), 967–988.
HISLOP D (2003) Linking human resource management and knowledge management via commitment: a review and research agenda. Employee Relations 25(2), 182–202.
HUYSMAN M and DE WIT D (2002) Knowledge Sharing in Practice. Kluwer Academic Publishers, Dordrecht, the Netherlands.
IPE M (2003) The Praxis of Knowledge Sharing in Organizations: A Case Study. University of Minnesota.
JARVIS CB, MACKENZIE SB and PODSAKOFF PM (2003) A critical review of construct indicators and measurement model misspecification in marketing and consumer research. Journal of Consumer Research 30(2), 199–218.
KAMDAR D, NOSWORTHY GJ, CHIA HB and CHAY YW (2004) Giving up the 'secret of fire': The Impact of Incentives and Self-monitoring on Knowledge Sharing. Indian School of Business Working Paper.
KASER PA and MILES RE (2001) Knowledge activists: the cultivation of motivation and trust properties of knowledge sharing relationships. Academy of Management Proceedings ODC, D1–D6.
KUBO I, SAKA A and PAM SL (2001) Behind the scenes of knowledge sharing in a Japanese bank. Human Resource Development International 4(4), 465–485.
LEE J (2001) The impact of knowledge sharing, organizational capability and partnership quality on IS outsourcing success. Information & Management 38(5), 323–335.
LIEBOWITZ J (2002) Facilitating innovation through knowledge sharing: a look at the U.S. Naval surface warfare center-carderock division. Journal of Computer Information Systems 42(5), 1–6.
LIN H and LEE G (2004) Perceptions of senior managers toward knowledge-sharing behavior. Management Decision 42(1), 108–125.


MACCALLUM RC and BROWNE MW (1993) The use of causal indicators in covariance structure models: some practical issues. Psychological Bulletin 114(3), 533–541.
MACNEIL CM (2003) Line managers: facilitators of knowledge sharing in teams. Employee Relations 25(3), 294–307.
MICHAILOVA S and HUSTED K (2003) Knowledge-sharing hostility in Russian firms. California Management Review 45(3), 59–77.
MOON HK and PARK MS (2002) Effective reward systems for knowledge sharing. Knowledge Management Review 4(6), 22–25.
MOTOWIDLO SJ (2000) Some basic issues related to contextual performance and organizational citizenship behavior in human resource management. Human Resource Management Review 10(1), 115–126.
NUNNALLY JC (1978) Psychometric Theory 2nd edn, McGraw-Hill, New York.
OH H (2000) Corporate knowledge management and new challenges for HRD. Paper presented at the Academy of Human Resource Development Conference, Raleigh-Durham, NC.
ORGAN DW (1988) Organizational Citizenship Behavior: The Good Soldier Syndrome. Lexington Books, Lexington, MA.
PODSAKOFF PM (2003) How to 'break down' a Theoretical Construct and its Measures. Indiana University, Bloomington.
QUINN JB (1992) Intelligent Enterprise: A Knowledge and Service Based Paradigm for Industry. The Free Press, New York.
RYU S, HO SH and HAN I (2003) Knowledge sharing behavior of physicians in hospitals. Expert Systems with Applications 25(1), 113–122.
SCHWAB DP (1980) Construct validity in organizational behavior. In Research in Organizational Behavior (STAW BM and CUMMINGS LL, Eds), Vol. 2, pp 3–43, JAI Press, Greenwich, CT.
SMITH CA, ORGAN DW and NEAR JP (1983) Organizational citizenship behavior: its nature and antecedents. Journal of Applied Psychology 68(4), 653–663.
SMITH HA and MCKEEN JD (2003) Instilling a knowledge-sharing culture. Retrieved 20 April 2004, from http://www.business.queensu.ca/kbe.
SPECTOR PE (1992) Summated Rating Scale Construction. Sage Publications, Newbury Park, CA.
SZULANSKI G (1996) Exploring internal stickiness: impediments to the transfer of best practice within the firm. Strategic Management Journal 17(Winter Special Issue), 27–43.
ZARRAGA C and BONACHE J (2003) Assessing the team environment for knowledge sharing: an empirical analysis. International Journal of Human Resource Management 14(7), 1227–1245.

Appendix

Knowledge sharing behavior scale items

Written contributions

1. Submit documents and reports.
2. Publish papers in company journals, magazines, or newsletters.
3. Share documentation from personal files related to current work.
4. Contribute ideas and thoughts to company online databases.
5. Keep others updated with important organizational information through online discussion boards.

Organizational communications

1. Express ideas and thoughts in organizational meetings.
2. Participate fully in brainstorming sessions.
3. Propose problem-solving suggestions in team meetings.
4. Answer questions of others in team meetings.
5. Ask good questions that can elicit others' thinking and discussion in team meetings.
6. Share success stories that may benefit the company in organizational meetings.
7. Reveal past personal work-related failures or mistakes in organizational meetings to help others avoid repeating these.
8. Make presentations in organizational meetings.

Personal interactions

1. Support less-experienced colleagues with time from personal schedule.
2. Engage in long-term coaching relationships with junior employees.
3. Spend time in personal conversation (e.g., discussion in hallway, over lunch, through telephone) with others to help them with their work-related problems.
4. Keep others updated with important organizational information through personal conversation.
5. Share passion and excitement on some specific subjects with others through personal conversation.
6. Share experiences that may help others avoid risks and trouble through personal conversation.
7. Have online chats with others to help them with their work-related problems.
8. Spend time in e-mail communication with others to help them with their work-related problems.

Communities of practice

1. Meet with community* members to create innovative solutions for problems that occur in work.
2. Meet with community members to share own experience and practice on specific topics with common interests.
3. Meet with community members to share success and failure stories on specific topics with common interests.
4. Meet with community members to work to encourage excellence in community's practice.
5. Support personal development of new community members.
6. Send related information to members through community e-mail list.
7. Share ideas and thoughts on specific topics through company supported online community-of-practice system.

*Community: an informal network of people within or across organizations who voluntarily share common practice, expertise, and interests on specific topics. It is neither an organizational unit nor a team.

The five-point frequency responses are: 1 = never, 2 = rarely, 3 = sometimes, 4 = often, 5 = always.
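Since the KSBS is a summated rating scale, a respondent's profile can be scored per dimension from the 28 five-point frequency responses. A minimal sketch; scoring by the mean of each dimension's items is an illustrative choice, not a rule prescribed by the paper:

```python
# Sketch of KSBS subscale scoring: 28 items on a 1-5 frequency scale,
# grouped into the four dimensions listed above (5 + 8 + 8 + 7 items).
# Averaging within each dimension is illustrative, not the paper's rule.
DIMENSIONS = {
    "written_contributions": 5,
    "organizational_communications": 8,
    "personal_interactions": 8,
    "communities_of_practice": 7,
}

def score(responses):
    """responses: list of 28 ints in 1..5, ordered as in the appendix."""
    assert len(responses) == 28 and all(1 <= r <= 5 for r in responses)
    scores, i = {}, 0
    for name, n in DIMENSIONS.items():
        scores[name] = sum(responses[i:i + n]) / n
        i += n
    return scores

example = score([3] * 28)
print(example["written_contributions"])  # 3.0 for an all-"sometimes" respondent
```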


About the author


Jialin Yi, Ph.D. Senior E-learning Instructional Designer. She received her doctoral degree from the Instructional Systems Technology department at Indiana University Bloomington, where she also taught computer skills courses for 3 years. Her research interests are knowledge management, performance evaluation and improvement, training techniques, instructional design, and distance learning. Jialin earned a master of science in Instructional Technology from Indiana University Bloomington, a master of management in Industrial Engineering and a bachelor of engineering in International Trade from Xi'an Jiaotong University in China. She also worked at Agilent Technologies as an E-Learning Specialist for 15 months, and has been working at Northwestern Memorial Hospital as a Senior E-Learning Instructional Designer since March 2007. She has academic and practical experience in the areas of economics, management, industrial engineering, and education.
