
CertainTrust: A Trust Model For Users And Agents


Sebastian Ries
Department of Computer Science
Darmstadt University of Technology
Hochschulstrasse 10
64289 Darmstadt, Germany
[email protected]

ABSTRACT

One of the challenges for ubiquitous computing and P2P systems is to find reliable partners for interactions. We believe that this problem can be solved by assigning trust values to entities and allowing them to state opinions about the trustworthiness of others. In this paper, we develop a new trust model, called CertainTrust, which can easily be interpreted and adjusted by users and software agents. A key feature of CertainTrust is that it is capable of expressing the certainty of a trust opinion, depending on the context of use. We show how the trust values can be expressed using different representations (one for users and one for software agents) and present an automatic mapping to change between the representations.

Categories and Subject Descriptors
H.1.2 [Information Systems]: Models—Human-centered computing; I.2.11 [Computing Methodologies]: Distributed Artificial Intelligence

General Terms
Design, Security, Theory

Keywords
trust model, evidence, recommendations, user interface

∗The author's work was supported by the German National Science Foundation (DFG) as part of the PhD program "Enabling Technologies for Electronic Commerce" at Darmstadt University of Technology.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.
SAC'07 March 11-15, 2007, Seoul, Korea
Copyright 2007 ACM 1-59593-480-4/07/0003 ...$5.00.

1. INTRODUCTION

In [1], Bhargava et al. point out that "trust [...] is pervasive in social systems" and that "socially based paradigms will play a big role in pervasive-computing environments". Pervasive or ubiquitous computing is characterized by a very large number of smart devices, e.g., PDAs, mobile phones, intelligent clothes, etc., which come with different capabilities regarding communication channels, storage, or battery power. Both the basic idea of ubiquitous computing and the heterogeneity of these devices enforce interaction with, and delegation to, other devices in order to unfold the complete potential of a ubiquitous computing infrastructure. Ubiquitous computing environments are unstructured, and many service providers are only locally or spontaneously available. Therefore, we cannot expect to know for sure whether those devices will behave as expected and cooperate.

On the one hand, interactions with devices that are not possessed or controlled by ourselves involve uncertainty and risk, since a safe prediction of the behavior of those devices is not possible. On the other hand, interactions with reliable partners are the basis for the services ubiquitous computing environments can provide. But how can we select reliable interaction partners and delegatees who behave as expected? Selecting only tamper-proof devices that belong to the same manufacturer requires the manufacturer to be trusted, and unnecessarily reduces the potential of ubiquitous computing.

Due to the great number of interactions with many different partners – some might be well-known, others not – and the claim of ubiquitous computing to become a calm technology, we need a non-intrusive way to cope with this challenge.

We believe that the concept of trust, which has proved to work well in real life, is a promising solution that allows well-founded decisions even in the context of risk and uncertainty. Assuming recognition of entities, e.g. [14], trust allows expressing an expectation about the future behavior of an entity based on evidence collected in past engagements. Since ubiquitous computing enforces a human-centric design, trust needs representations that are meaningful not only to the software agents, to enable automatic trust evaluation, but also to the end user, who needs to be able to reflect on the state of the trust model and to take part in the decision-making process, if necessary.

In this paper, we provide a decentralized trust model, named CertainTrust, which allows agents to choose trustworthy partners for risky engagements. For our trust model, we propose two representations. The first one serves as the basis for a human trust interface. It allows representing trust using two independent parameters: an estimate of the probability of trustworthy behavior in a future engagement, and a parameter expressing the certainty of this estimate. Since we believe that trust is context-dependent,
we also enforce the context-dependency of the certainty parameter of a trust value. The second representation is based on the Bayesian approach, using beta probability density functions. This approach is well-established for expressing trust, and serves as a basis for the trust computation and as an interface for evidence-based feedback integration. Finally, we provide a mapping between both representations, and operators for 'consensus' and 'discounting'.

The remainder of this paper is structured as follows. In Section 2, we summarize our notion of trust. Section 3 presents the trust model and the operators for trust propagation. Section 4 presents the related work, and Section 5 summarizes our contribution and outlines aspects of our future work.

2. OUR NOTION OF TRUST

From our point of view, trust is the well-founded willingness to enter a potentially risky engagement. Trust can be based on personal experience, recommendations, and the reputation assigned to the partner involved in an engagement. We model trust as the subjective probability that an entity behaves as expected, based on the Bayesian approach as introduced in [7].

2.1 Properties of Trust

The following properties, which are regularly assigned to trust, are part of our model. Trust is subjective: the trust of an agent A in an agent C does not need to be the same as the trust of any other agent B in C, since the behavior of C towards A is not necessarily equal to C's behavior towards B. Furthermore, we cannot expect the behavior of A towards C to be the same as the behavior of C towards A; therefore, trust needs to be modeled as asymmetric. Trust is context-dependent: obviously, there is a difference between trusting another agent as a provider of mp3 files and trusting it as a provider of an online banking service. There is also a difference between trusting someone as a service provider and as a recommendation provider. If A trusts B in the context of providing recommendations about a good service provider, e.g., for file storing, this does not necessarily mean that A trusts B as a good peer to store files at, and vice versa. Trust is non-monotonic, i.e., experience can increase as well as decrease trust; therefore, we need to model positive and negative evidence. Trust is not transitive in a mathematical sense, but the concept of recommendations is very important, since recommendations are necessary to establish trust in agents with which no or only little direct experience is available. Moreover, we do not think of trust as a finite resource, as done in flow-based approaches like EigenTrust [8]; it should be possible to increase trust in some agents without decreasing trust in others.

2.2 Arguments for a Certainty Value

Similar to [5, 10, 12], we believe that it is necessary to express the (un-)certainty or reliability of an opinion. We also believe that the certainty of an opinion increases with the number of pieces of evidence on which the opinion is based. Modeling the certainty of an opinion makes it possible to state how much evidence an opinion is based on, and especially to state that no evidence is available at all. Furthermore, it is possible to express that one opinion might be supported by more evidence than another one.

Why does certainty need to be context-dependent (perhaps even subjective)?

• In ubiquitous computing environments, trust models can be used to automate decision making in many different applications. In some applications there might be a great number of interactions; other applications might be related to high risk, considering legal or financial implications. In these contexts, it seems reasonable that users want to collect a great number of pieces of evidence before they consider an opinion to be certain. If forced to make a decision about an engagement involving high risk, one might choose to reject the engagement, although there is positive but too little evidence.

• In contexts in which the number of interactions is lower, or the associated risk is lower, users may be satisfied with a smaller number of pieces of evidence to come to a well-founded decision.

To model the context-dependency of the certainty of an opinion, we assume there is a maximal number of expected evidence per context, which corresponds to the maximal level of certainty. For example, the maximal number of expected evidence can be defined as 5, 10, 100, or 1000.

2.3 Trust vs. Reputation

Similar to [7], we see reputation as an opinion of the community about a single agent, whereas trust is the opinion of a single agent about another one. This goes along with the subjectivity of trust. This way, trust is not as dependent on the constitution of an agent society as reputation. That is, if in an agent society each agent has one vote, and more than 50% of the agents in this society provide opinions stating that all other agents are bad, then all agents get a bad reputation. In a trust-based system, by contrast, each agent is allowed to choose which opinions it wants to integrate into its calculated trust and how these opinions should be weighted. This way, each agent can judge the behavior of other agents by its own opinion alone. It is not necessary for a trust metric to reflect on globally desirable behavior to produce reasonable trust scores. Since reputation can be a basis for trust [13], we allow the inclusion of reputation in our trust model. If reputation information is available, an agent can integrate the reputation into its calculation of trust in the same way as recommendations.

2.4 Scenario

As a scenario to show how trust can improve service provision in networks, we introduce a file-sharing scenario. Peers share files with others. If a peer or an agent has found another peer who provides a file it wants to download, the peer has to decide whether to download it or not. The risk of downloading a corrupted file depends on the assumption about the possible damage. If we assume that a corrupted file does not lead to further damage, the risk is reduced to wasted bandwidth and CPU time. If we assume the file could contain viruses, the risk increases, since it can potentially damage our machine. If we use this machine for online banking, or store personal data on it, the risk increases further.

The trust in a peer is based on direct experience, e.g., previous downloads, and on recommendations from other trusted peers. Those recommendations are weighted by the trust in
the recommenders' ability to provide recommendations. After having calculated the trust value, and having estimated the potential risk, we can use this information for decision making. In this paper, we focus on calculating trust, but the involvement of risk and decision making is necessary to motivate the topic.

3. TRUST MODEL - CERTAINTRUST

Introducing the trust model, we start by presenting the general notation. Let contexts be denoted by con_i, i ∈ {1, 2, ...}, e.g., con_1 = file sharing or con_2 = online banking. For providing recommendations for a context con_i, we define a special context, which is denoted as rec_i. Let agents be denoted by capital letters A, B, .... Let propositions be denoted by non-capital letters x, y, .... The opinion of agent A about the truth of a proposition x, e.g., x = "Agent C behaves trustworthy in context con_i" (see Fig. 1), will be denoted as o^A_x(con_i). The opinion of agent A about B's trustworthiness for providing recommendations for a context con_i will be denoted as o^A_B(rec_i). If the context is non-ambiguous or non-relevant, we use o^A_x and o^A_B. The maximal number of expected evidence (see Section 2.2) is denoted as e(con_i) or e. Since the evidence model is partly derived from ideas presented in [5], and to achieve better comparability, we use the same terminology where possible.

For the explanation of the trust model, we do not focus on trust management aspects such as the collecting and storing of evidence, risk assignment, and decision making. We assume that the evidence is collected and locally stored, and that recommendations are provided on request by the communication partners within range.

The propagation of recommendations is done based on chains of recommendations (see Fig. 1). We propose special operators for consensus (aggregation of opinions) and discounting (weighting of recommendations). For simplicity, opinions are assumed to be independent.

Figure 1: Trust chains. [Schematic: a direct chain in which A's opinion o^A_B(rec_i) about B as a recommender is combined with B's opinion o^B_x(con_i) about C, and three parallel chains from A to C via the recommenders B1, B2, and B3.]

Our model provides two representations for opinions to express trust. The first representation is a pair of trust value and certainty value, which serves as the basis for a human trust interface. The second representation is based on the number of collected pieces of evidence; it allows us to easily integrate feedback and forms the basis of the computational model.

3.1 Human Trust Interface

The human trust interface (HTI) represents trust as opinions. In the HTI, an opinion o is a 2-dimensional expression, represented by a 2-tuple o = (t, c)^HTI ∈ [0, 1] × [0, 1], where the superscript refers to the representational model. The opinion o^A_B(rec_i) = (t^A_B(rec_i), c^A_B(rec_i))^HTI expresses the opinion of A about the trustworthiness of B in the context rec_i. The value of t^A_B(rec_i) represents the probability with which A would consider the proposition "I believe that B is trustworthy with respect to providing recommendations for the context con_i" to be true. This value is referred to as the trust value. The value c^A_B(rec_i) is referred to as certainty or certainty value. It expresses the certainty that the provider of an opinion assigns to the trust value. A low certainty value expresses that the trust value can easily change; conversely, a high certainty expresses that the trust value is rather fixed. The values for trust and certainty can be assigned independently of each other. For example, an opinion o^A_B(rec_i) = (1, 0.1)^HTI expresses that A expects B to be trustworthy in the context of providing recommendations for con_i, but this opinion can easily change, since A assigns a low certainty value. The interpretation of an opinion o^A_x is analogous.

For the moment, we express both the values for trust and certainty as continuous values in [0, 1]. Since humans are better at assigning discrete (verbal) values than continuous ones, as stated in [4, 7], we want to point out that both values can easily be mapped to a discrete set of values, e.g., to the natural numbers in [0, 10], or to a set of labels such as very untrusted (vu), untrusted (u), undecided (ud), trusted (t), and very trusted (vt) for the trust value, and uninformed, rookie, average, and expert for the certainty value.

Figure 2: Example for a graphical HTI. [The figure plots the trust value (vu, u, ud, t, vt) against the certainty value (uninformed, rookie, average, expert) and marks two example opinions, (undecided, rookie) and (very trusted, expert).]

To be able to introduce the semantics for the trust value and the certainty with labels, it is necessary that both parameters are independent of each other. Otherwise, the interpretation of the labels of the trust value would change with the certainty value, and vice versa. This would be counter-intuitive. If both values are independent, the human trust interface allows the users to easily express and interpret an opinion (see Fig. 2). This is important, since it allows users to set up opinions. Furthermore, it allows the users to check the current state of the trust model, and to manually adjust an opinion if they believe it is necessary.

3.2 Evidence Model

We now present the second representation, the evidence model, which is based on beta probability density functions (pdf). The beta distribution Beta(α, β) can be used to model the posterior probabilities of binary events. The corresponding pdf is defined by:

    f(p | α, β) = Γ(α + β) / (Γ(α)Γ(β)) · p^(α−1) · (1 − p)^(β−1),    (1)
    where 0 ≤ p ≤ 1, α > 0, β > 0.

Furthermore, we use r = α − 1 and s = β − 1, where r ≥ 0 and s ≥ 0 represent the number of positive and negative pieces of evidence, respectively. The number of collected pieces of evidence is represented by r + s.
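Eq. 1 and the evidence notation above can be sketched directly in Python; `math.gamma` implements the Γ function. The function and variable names below are ours, not the paper's:

```python
import math

def beta_pdf(p: float, alpha: float, beta: float) -> float:
    """Density of Beta(alpha, beta) at p, 0 <= p <= 1 (Eq. 1)."""
    norm = math.gamma(alpha + beta) / (math.gamma(alpha) * math.gamma(beta))
    return norm * p ** (alpha - 1) * (1 - p) ** (beta - 1)

def evidence_to_params(r: float, s: float) -> tuple[float, float]:
    """r positive and s negative pieces of evidence give alpha = r + 1, beta = s + 1."""
    return r + 1, s + 1

# With no evidence at all (r = s = 0), the pdf is Beta(1, 1), i.e., the
# uniform distribution: every value of p is equally likely.
alpha, beta = evidence_to_params(0, 0)
print(beta_pdf(0.3, alpha, beta))  # 1.0
```

That the empty opinion yields the uniform distribution is exactly what makes "no evidence available" expressible in this representation.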
In the evidence model, an opinion o can be modeled using the parameters α and β. We refer to this representation with o = (α, β)^αβ. If the opinion is represented by the parameters r and s, we use the notation o = (r, s)^rs.

The mode t = mode(α, β) of the distribution Beta(α, β) is given as:

    t = mode(α, β) = (α − 1) / (α + β − 2) = r / (r + s)    (2)

For any c ∈ R \ {0} it holds that

    mode((r, s)^rs) = mode((c · r, c · s)^rs).    (3)

This representation allows for easily integrating feedback into the trust model. Assuming that feedback fb can be expressed as a real number in [−1, 1], where '−1' expresses a negative experience and '1' a positive one, the update of an opinion can be done by recalculating the parameters r_new = r_old + 0.5 · (1 + fb) and s_new = s_old + 0.5 · (1 − fb) (cf. [6]). This representation enables updating the trust model without user interaction, if the feedback can be generated automatically. Furthermore, the software agents can use all statistical information that can be derived from the beta distribution, such as the mean value and the variance, as a basis for decision making.

3.3 Mapping Between Both Representations

The trust value t of an opinion o = (α, β)^αβ is defined as the mode of the corresponding beta distribution. The certainty value c of an opinion o = (α, β)^αβ in a context con_i is defined as follows. The maximal number of expected evidence can be denoted by e(con_i) = α_max + β_max − 2, where α_max and β_max fulfill:

    mean_coll := α / (α + β) = α_max / (α_max + β_max) =: mean_max    (4)

Then the certainty c is calculated as:

    c = (f(mean_coll | α, β) − 1) / (f(mean_max | α_max, β_max) − 1)    (5)

Definition 3.1 (Mapping). It holds that (α, β)^αβ = (t, c)^HTI iff t = mode(α, β) and the certainty c fulfills Eq. 5.

Justification. The mapping provides the translation between both representations. Therefore, the interpretation of an opinion in the HTI by users has to be as close as possible to the interpretation of the same opinion in the evidence model by a software agent, and vice versa. This way, the user is able to correctly interpret and adjust opinions which are based on the feedback collected by the software agent.

Assuming that a user has to set up the parameters for the trust value and certainty based on a countable number of experiments, it seems intuitive that the user sets the trust value, which estimates the probability of the expected event, close to the observed relative frequency. Since the mode of the pdf is equal to the relative frequency of the observed event, the trust value t is close to the intuitive interpretation of the user.

Similar to [5, 10, 12], the certainty value is intuitively linked to the number of collected pieces of evidence. That is, a greater number of collected evidence leads to higher confidence in the trust value, and therefore to a higher certainty value. The maximal number of expected evidence e(con_i), as introduced in Section 2.2, corresponds to the maximal certainty value. Similar to [12], we want the certainty to increase adaptively with the collected number of evidence. Therefore, we enforce that the first pieces of evidence increase the certainty value more than later ones. As shown in Fig. 3, the certainty value as defined in 3.1 fulfills these properties. In the absence of information (r + s = 0), the certainty value is c = 0, and c = 1 if the collected number of evidence is equal to the expected number of evidence. Between the two extremes, the certainty value increases adaptively. If the number of collected evidence is greater than the number of expected evidence, there is a normalization, which preserves the trust value and scales the certainty to c = 1 (see Eq. 6).

Furthermore, we can see from Fig. 3 that there is a slight dependency between the trust value t and the certainty value c. Since the final user interface is not continuous, but based on a small set of discrete values, this dependency can be neglected. Therefore, we consider the trust value and the certainty to be independent in the HTI, as demanded in 3.1.

Figure 3: Iso-certainty lines for different maximal numbers of expected evidence: e = 10 (left), e = 100 (right). [Each panel plots the number of collected evidence r + s over the trust value t for the certainty levels c = 0.2, 0.4, 0.6, 0.8, 1.]

We now provide a mathematical interpretation of the certainty parameter (see Fig. 4). Let p denote the probability of the experiment. The area over an arbitrary interval under the curve of the pdf can be interpreted as the probability that p is in this interval. Consider an interval with width w centered on the mean value of a pdf. The area in this interval represents the probability of the mean value. For opinions with the same mean value, this probability increases with an increasing amount of evidence. The certainty is the result of comparing this specific area based on the maximal number of expected evidence to the same area based on the number of collected evidence. If we approximate those two areas by rectangles with width w and height h = f(α/(α + β) | α, β) − 1, the area can be denoted as A = w · h. If we set A_max as the area corresponding to the maximal number of expected evidence, and A_coll as the area corresponding to the collected number of evidence, then c = A_coll / A_max = (w · h_coll) / (w · h_max) = h_coll / h_max = (f(mean_coll | α_coll, β_coll) − 1) / (f(mean_max | α_max, β_max) − 1). To justify the approximation of the area with a rectangle, consider w to be small enough.

Figure 4: Visualization of the mathematical interpretation. [The figure shows two pdfs f_coll and f_max over p, with rectangles of width w and heights h_coll and h_max approximating the areas A_coll and A_max.]

Normalization. If the opinion o = (r, s)^rs of an agent is based on a greater number of evidence than the maximal number of expected evidence, the collected number of evidence will be scaled to the allowed maximum (see Eq. 6). The normalization preserves the mode of the pdf (see Eq. 3), and therefore does not change the trust value. The normalized opinion norm(o) will be used as input for the discounting described in Section 3.4.

    norm((r, s)^rs) = (r, s)^rs                            if r + s ≤ e,
                      (r/(r + s) · e, s/(r + s) · e)^rs    otherwise.    (6)

3.4 Trust Propagation in the Evidence Model

For trust propagation we define two operators, similar to the ones defined by Jøsang in [5]. We also call our operators 'consensus', for the aggregation of opinions, and 'discounting', for the recommendation of an opinion. The consensus operator is identical to the one presented in [5]. Since the discounting operator presented by Jøsang is motivated based on the belief model, we decided to define a new operator, which is motivated based on the evidence model.

Definition 3.2 (Consensus). Let o^A_x = (r^A_x, s^A_x)^rs and o^B_x = (r^B_x, s^B_x)^rs be the opinions of A and B about the truth of the proposition x. The opinion o^{A,B}_x = (r^{A,B}_x, s^{A,B}_x)^rs is modeled as the opinion of an imaginary agent which made the experiences of A and B, and is defined as:

    o^{A,B}_x = o^A_x ⊕ o^B_x = (r^A_x + r^B_x, s^A_x + s^B_x)^rs    (7)

The '⊕' symbol denotes the consensus operator. The operator can easily be extended to the consensus between multiple opinions.

Justification. The result of the consensus is the opinion of an agent who has made the observations done by A and the observations done by B.

Definition 3.3 (Discounting). Let o^A_B = (r^A_B, s^A_B)^rs and o^B_x = (r^B_x, s^B_x)^rs. We denote the opinion of A about x based on the recommendation of B as o^{AB}_x and define it as:

    o^{AB}_x = o^A_B ⊗ o^B_x = (d^A_B · r^B_x, d^A_B · s^B_x)^rs,  where d^A_B = t^A_B · c^A_B.    (8)

The '⊗' symbol denotes the discounting operator. In a chain of recommendations, we start at the end of the chain, e.g., o^{ABC}_x = o^A_B ⊗ (o^B_C ⊗ o^C_x).

Justification. The discounting reduces the number of evidence taken into account, since d^A_B ∈ [0, 1]. The discounting factor d^A_B increases with the number of positive pieces of evidence. That is, if A and C have the same amount of total evidence with B, but A has more positive evidence, then A gives a stronger weight (i.e., a larger discounting factor) to the recommendation of B than C does.

Furthermore, the discounting factor increases with the number of collected evidence. That is, if A and C have the same ratio of positive and negative evidence with B, but A has more evidence in total, then A has more evidence to believe that B will behave as in the past. Therefore, A gives the opinion of B a stronger weight than C does.

3.5 Trust Propagation in the HTI

For the propagation of trust, we transfer the representation of a recommendation to the evidence-based representation and use the operators for consensus and discounting as defined above. The consensus operator increases the certainty of the resulting opinion if multiple recommendations are provided. The discounting operator only decreases the certainty of a recommendation, but preserves the trust value (mode) of a recommendation. Assume that o^{AB}_x = o^A_B ⊗ o^B_x; then it holds that c^{AB}_x ≤ c^B_x and t^{AB}_x = t^B_x.

4. RELATED WORK

Trust is addressed by a growing group of researchers. Focusing on trust modeling, there is a multitude of different approaches, considering the representational models of trust and the algorithms for handling recommendations [11].

The seminal work in the field has been done by Marsh [9]. His trust model is based on the social aspects of trust. It includes the importance and utility of a situation in the computational model. The decision making is threshold-based and considers trust as well as risk. The main drawbacks of this model are that it models trust one-dimensionally, and that it focuses on trust based on direct experience, but does not deal with recommendations.

There are several other approaches which model trust one-dimensionally, e.g., TidalTrust [4] and EigenTrust [8]. In those models, trust is represented by a single trust value, which does not allow expressing the certainty or the reliability of this trust value. Therefore, those models cannot express whether an opinion is based on a single piece or multiple pieces of evidence. This leads to a loss of information when recommendations are aggregated to a single value. Furthermore, problems may arise when interpreting recommendations. For example, if, in TidalTrust, an agent receives only recommendations from lowly trusted recommenders, the aggregated trust value does not reflect that it is based on lowly trusted recommenders. If the aggregated trust value were decreased, to express the low trust in the recommenders, it would be hardly possible to distinguish this opinion from one provided by a highly trusted recommender who has recommended a low trust value.

Other approaches model trust with two or three dimensions. Two-dimensional trust models are often based on the Bayesian approach, e.g., [2, 3, 5]. Those models do not have an explicit parameter for certainty or uncertainty; it has to be derived from the beta probability density function. As stated in [7], those models are often too complicated to be well understood by average users.

The trust models presented in [2, 3, 5] also allow for representations as a belief model. The belief model approaches use the triple of belief b, disbelief d, and uncertainty u to represent trust. The drawback of belief models is that the three parameters cannot be assigned independently; e.g., in [5] they are interrelated by b + d + u = 1. Thus, the presence of uncertainty influences both belief and disbelief. It is non-trivial for users to express, e.g., a medium belief with different levels of uncertainty. In our model, it is possible to independently choose the values for trust and certainty. The relation between the belief b and disbelief d as defined in [5] and our trust value can be denoted as t = b/(b + d).

Approaches like 'Subjective Logic' [5] are not capable of expressing uncertainty context-dependently. In [5], the uncertainty u is defined as u = 2/(r + s + 2). Therefore, uncertainty depends only on the number of collected evidence, but not on the context. If we choose the maximal number of expected evidence as e = 10 or e = 20, then the uncertainty in 'Subjective Logic' behaves similarly to the parameter (1 − c), which can be used to express uncertainty in our model. If we think of contexts related to a high risk or a high frequency of interaction, and assign, e.g., the maximal number of expected evidence e = 100 to this context, the uncertainty in 'Subjective Logic' no longer behaves as expected, since opinions based on 18 or more collected pieces of evidence (r + s ≥ 18) have only very little uncertainty (u ≤ 0.1).

The trust models presented in [10, 12] introduce reliability as a concept similar to our concept of certainty. They also define a context-dependent value similar to the maximal number of expected evidence.

In [10], the trust model is based on the Bayesian approach, as is ours. The maximal number of expected evidence e corresponds to m, which is described as the "minimal number of encounters necessary to achieve a given level of confidence [...]". The reliability w is defined to increase linearly with the number of collected evidence, from 0 (if no evidence is available) to 1 (if the number of collected evidence is greater than or equal to m). But this linear approach is stated to be a first-order approximation [10].

In the trust model presented in [12], the intimate level of interactions is close to the concept of the maximal number of expected evidence. It is also context-dependent. The number-of-outcomes factor No ∈ [0, 1] increases adaptively with the number of collected evidence. To achieve the adaptive behavior as described in 3.3, Sabater et al. put the ratio between the collected and the expected number of evidence into a sine function, which seems to be done ad hoc.

5. CONCLUSION & FUTURE WORK

In this paper we developed a trust model which allows representing trust in a way that can be interpreted and updated by software agents as well as by users. We provided a new way to express the certainty of an opinion in contexts which are associated with different levels of risk or frequency of interaction. In the HTI, the values for trust and certainty can be interpreted independently, which allows introducing the semantics of an opinion based on labels. Therefore, the user is able to easily control the state of the trust model and to adjust opinions, if necessary. The evidence model enables software agents to update an opinion when new evidence is available, and to reason about the trustworthiness of an interaction partner. Furthermore, we showed that our mapping between the HTI and the evidence model has an intuitive interpretation and is mathematically founded. We provided two operators for trust propagation, which are motivated based on the evidence model and have an intuitive interpretation in the HTI representation.

Our future work will include the development of trust management and decision-making strategies. Those are necessary to be able to evaluate the trust model in a simulation, and to enhance the attack-resistance of the model. Furthermore, we will refine the discrete representation for the human trust interface based on user studies.

6. REFERENCES

[1] B. Bhargava, L. Lilien, A. Rosenthal, and M. Winslett. Pervasive trust. IEEE Intelligent Systems, 19(5):74-88, 2004.
[2] V. Cahill et al. Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing, 2(3):52-61, July 2003.
[3] M. Carbone, M. Nielsen, and V. Sassone. A formal model for trust in dynamic networks. In Proc. of the IEEE International Conference on Software Engineering and Formal Methods, September 2003. IEEE Computer Society.
[4] J. Golbeck. Computing and Applying Trust in Web-Based Social Networks. PhD thesis, University of Maryland, College Park, 2005.
[5] A. Jøsang. A logic for uncertain probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 9(3):279-311, 2001.
[6] A. Jøsang and R. Ismail. The beta reputation system. In Proceedings of the 15th Bled Conference on Electronic Commerce, June 2002.
[7] A. Jøsang, R. Ismail, and C. Boyd. A survey of trust and reputation systems for online service provision. In Decision Support Systems, 2005.
[8] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina. The EigenTrust algorithm for reputation management in P2P networks. In Proc. of the 12th International Conference on World Wide Web, pages 640-651, New York, NY, USA, 2003. ACM Press.
[9] S. Marsh. Formalising Trust as a Computational Concept. PhD thesis, University of Stirling, 1994.
[10] L. Mui, M. Mohtashemi, and A. Halberstadt. A computational model of trust and reputation for e-businesses. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS'02), Volume 7, page 188, Washington, DC, USA, 2002. IEEE Computer Society.
[11] S. Ries, J. Kangasharju, and M. Mühlhäuser. A classification of trust systems. In Proceedings of the International Workshop on MObile and NEtworking Technologies for social applications, 2006.
[12] J. Sabater and C. Sierra. Reputation and social network analysis in multi-agent systems. In Proceedings of the 1st International Joint Conference on Autonomous Agents and Multiagent Systems, pages 475-482, New York, NY, USA, 2002. ACM Press.
[13] J. Sabater and C. Sierra. Review on computational trust and reputation models. Artificial Intelligence Review, 24(1):33-60, September 2005.
[14] J.-M. Seigneur, S. Farrell, C. D. Jensen, E. Gray, and Y. Chen. End-to-end trust starts with recognition. In First International Conference on Security in Pervasive Computing, Mar. 2003.
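As a compact summary of the computational core of Sections 3.2-3.4 (Eqs. 2 and 4-8), the following Python sketch implements the trust value, the certainty mapping, feedback integration, normalization, consensus, and discounting. All names are ours, not the paper's, and the neutral default of 0.5 for the trust value of an empty opinion is our own choice, since the mode is undefined for r + s = 0:

```python
import math

def beta_pdf(p, a, b):
    """Density of Beta(a, b) at p (Eq. 1)."""
    return math.gamma(a + b) / (math.gamma(a) * math.gamma(b)) \
        * p ** (a - 1) * (1 - p) ** (b - 1)

def trust_value(r, s):
    """Eq. 2: mode of Beta(r + 1, s + 1); 0.5 for r + s = 0 is our own
    neutral placeholder, since the mode is undefined there."""
    return 0.5 if r + s == 0 else r / (r + s)

def certainty(r, s, e):
    """Eqs. 4-5: ratio of pdf heights (minus 1) at the common mean, for the
    collected evidence vs. the maximal expected evidence e."""
    if r + s == 0:
        return 0.0
    a, b = r + 1.0, s + 1.0
    mean = a / (a + b)            # mean_coll = mean_max (Eq. 4)
    a_max = mean * (e + 2)        # alpha_max + beta_max - 2 = e
    b_max = (e + 2) - a_max
    return (beta_pdf(mean, a, b) - 1) / (beta_pdf(mean, a_max, b_max) - 1)

def update(r, s, fb):
    """Feedback fb in [-1, 1] (cf. [6]): r += 0.5*(1 + fb), s += 0.5*(1 - fb)."""
    return r + 0.5 * (1 + fb), s + 0.5 * (1 - fb)

def normalize(r, s, e):
    """Eq. 6: scale down to at most e pieces of evidence; preserves r/(r + s)."""
    if r + s <= e:
        return r, s
    return r / (r + s) * e, s / (r + s) * e

def consensus(o1, o2):
    """Eq. 7: pool the (r, s) evidence of two independent observers."""
    return o1[0] + o2[0], o1[1] + o2[1]

def discount(t_AB, c_AB, o_Bx):
    """Eq. 8: A weights B's evidence about x by d = t * c of A's opinion
    about B as a recommender."""
    d = t_AB * c_AB
    return d * o_Bx[0], d * o_Bx[1]

# Chain A -> B -> x: B reports (8, 2); A's opinion about B as a recommender
# is (t, c) = (0.9, 0.5), so d = 0.45 and the discounted evidence is (3.6, 0.9).
o = discount(0.9, 0.5, (8, 2))
print(trust_value(*o))  # 0.8: less evidence, but the trust value is preserved
```

In this sketch, certainty(0, 0, e) is 0 and certainty with r + s = e yields 1, matching the boundary behavior described in Section 3.3, and the final line illustrates the Section 3.5 property that discounting lowers certainty while preserving the trust value (mode).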

