CertainTrust: A Trust Model for Users and Agents
Sebastian Ries
Department of Computer Science
Darmstadt University of Technology
Hochschulstrasse 10
64289 Darmstadt, Germany
[email protected]
we also enforce the context-dependency of the certainty parameter of a trust value. The second representation is based on the Bayesian approach, using beta probability density functions. This approach is well-established for expressing trust, and serves as a basis for the trust computation and as an interface for evidence-based feedback integration. Finally, we provide a mapping between both representations, and operators for 'consensus' and 'discounting'.

The remainder of this paper is structured as follows. In Section 2, we summarize our notion of trust. Section 3 presents the trust model and the operators for trust propagation. Section 4 presents the related work, and Section 5 summarizes our contribution and outlines aspects of our future work.

Why does certainty need to be context-dependent (perhaps even subjective)?

• In ubiquitous computing environments, trust models can be used to automate decision making in many different applications. In the context of some applications, there might be a great number of interactions; other applications might be related to high risk, considering legal or financial implications. In these contexts, it seems reasonable that users want to collect a great number of pieces of evidence before they would consider an opinion to be certain. If forced to make a decision about an engagement involving high risk, one might choose to reject the engagement, although there is positive but too little evidence.
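This trade-off can be illustrated with a small, hypothetical decision rule (a sketch for illustration only; the function name and thresholds are ours, and the decision-making strategy itself is outside the scope of this paper): an engagement is accepted only if both the trust value and the certainty reach context-dependent thresholds, so that positive but scarce evidence is rejected in a high-risk context.

```python
def accept(trust: float, certainty: float,
           min_trust: float, min_certainty: float) -> bool:
    """Accept an engagement only if the opinion (trust, certainty),
    both in [0, 1], meets the context-dependent thresholds."""
    return trust >= min_trust and certainty >= min_certainty

# Positive but scarce evidence: high trust value, low certainty.
# A low-risk context with a lenient certainty threshold accepts it ...
print(accept(0.9, 0.2, min_trust=0.5, min_certainty=0.1))   # True
# ... while a high-risk context demanding high certainty rejects it.
print(accept(0.9, 0.2, min_trust=0.5, min_certainty=0.8))   # False
```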
the recommenders' ability to provide recommendations. After having calculated the trust value, and having estimated the potential risk, we can use this information for decision making. In this paper, we focus on calculating trust, but the involvement of risk and decision making is necessary to motivate the topic.

3. TRUST MODEL - CERTAINTRUST

Introducing the trust model, we start by presenting the general notation. Let contexts be denoted by con_i, i ∈ {1, 2, ...}, e.g., con_1 = file sharing or con_2 = online banking. For providing recommendations for a context con_i, we define a special context, which is denoted as rec_i. Let agents be denoted by capital letters A, B, .... Let propositions be denoted by non-capital letters x, y, .... The opinion of agent A about the truth of a proposition x, e.g., x = "Agent C behaves trustworthy in context con_i" (see Fig. 1), will be denoted as o^A_x(con_i). The opinion of agent A about B's trustworthiness for providing recommendations for a context con_i will be denoted as o^A_B(rec_i). If the context is non-ambiguous or non-relevant, we use o^A_x and o^A_B. The maximal number of expected evidence (see Section 2.2) is denoted as e(con_i) or e. Since the evidence model is partly derived from ideas presented in [5], and to achieve better comparability, we use the same terminology where possible.

For the explanation of the trust model, we do not focus on trust management aspects such as the collecting and storing of evidence, risk assignment, and decision making. We assume that the evidence is collected and locally stored, and that recommendations are provided on request by the communication partners within range.

The propagation of recommendations is done based on chains of recommendations (see Fig. 1). We propose special operators for consensus (aggregation of opinions) and discounting (weighting of recommendations). For simplicity, opinions are assumed to be independent.

[Figure 1 shows trust chains: A's opinions o^A_B(rec_i) about the recommenders B, B_1, B_2, B_3, and their opinions o^B_x(con_i) about C.]

Figure 1: Trust chains

Our model provides two representations for opinions to express trust. The first representation is a pair of trust value and certainty value, which serves as a base for a human trust interface (HTI). The second representation is based on the number of collected evidence; it allows us to easily integrate feedback and forms the base of the computational model.

3.1 Human Trust Interface

The value t^A_B(rec_i) expresses the probability with which A would consider the proposition "I believe that B is trustworthy with respect to providing recommendations for the context con_i" to be true. This value is referred to as the trust value. The value c^A_B(rec_i) is referred to as the certainty or certainty value. This value expresses the certainty which the provider of an opinion assigns to the trust value. A low certainty value expresses that the trust value can easily change. Conversely, a high certainty value expresses that the trust value is rather fixed. The values for trust and certainty can be assigned independently of each other. For example, an opinion o^A_B(rec_i) = (1, 0.1)_HTI expresses that A expects B to be trustworthy in the context of providing recommendations for con_i, but this opinion can easily change, since A assigns a low certainty value. The interpretation of an opinion o^A_x is analogous.

For the moment, we express both the values for trust and certainty as continuous values in [0, 1]. Since humans are better at assigning discrete (verbal) values than continuous ones, as stated in [4, 7], we want to point out that both values can easily be mapped to a discrete set of values, e.g., to the natural numbers in [0, 10], or to a set of labels such as very untrusted (vu), untrusted (u), undecided (ud), trusted (t), and very trusted (vt) for the trust value, and uninformed, rookie, average, and expert for the certainty value.

[Figure 2 shows a graphical HTI: the certainty value (uninformed, rookie, average, expert) is plotted over the trust value (vu, u, ud, t, vt), with the example opinions (undecided, rookie) and (very trusted, expert) marked.]

Figure 2: Example for a graphical HTI

To be able to introduce the semantics for the trust value and the certainty with labels, it is necessary that both parameters are independent of each other. Otherwise, the interpretation of the labels of the trust value would change with the certainty value, and vice versa. This would be counter-intuitive. If both values are independent, the human trust interface allows the users to easily express and to interpret an opinion (see Fig. 2). This is important since it allows users to set up opinions. Furthermore, it allows the users to check the current state of the trust model, and to manually adjust an opinion, if they believe it is necessary.

3.2 Evidence Model

We now present the second representation, the evidence model, which is based on beta probability density functions (pdf). The beta distribution Beta(α, β) can be used to model the posteriori probabilities of binary events. The corresponding pdf is defined by:

  f(p | α, β) = (Γ(α + β)/(Γ(α)Γ(β))) · p^(α−1) · (1 − p)^(β−1),  0 ≤ p ≤ 1    (1)
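For concreteness, the beta pdf introduced above can be computed with the Python standard library alone. This is a generic implementation of the beta density (the function name and the spot checks are ours, not part of the paper):

```python
import math

def beta_pdf(p: float, alpha: float, beta: float) -> float:
    """Density of Beta(alpha, beta) at p, computed via the
    log-gamma function for numerical stability."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie in the open interval (0, 1)")
    log_norm = (math.lgamma(alpha + beta)
                - math.lgamma(alpha) - math.lgamma(beta))
    return math.exp(log_norm
                    + (alpha - 1.0) * math.log(p)
                    + (beta - 1.0) * math.log(1.0 - p))

# Beta(1, 1) is the uniform distribution, so the density is 1 everywhere.
print(beta_pdf(0.3, 1.0, 1.0))   # 1.0
# Beta(2, 2) peaks at its mode p = 0.5 with density 6 * 0.5 * 0.5 = 1.5.
print(beta_pdf(0.5, 2.0, 2.0))
```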
In the evidence model, an opinion o can be modeled using the parameters α and β. We refer to this representation with o = (α, β)_αβ. If the opinion is represented by the parameters r and s, we use the notation o = (r, s)_rs.

The mode t = mode(α, β) of the distribution Beta(α, β) is given as:

  t = mode(α, β) = (α − 1)/(α + β − 2) = r/(r + s)    (2)

For any c ∈ R \ {0}, it holds that:

  mode((r, s)_rs) = mode((c · r, c · s)_rs)    (3)

This representation allows for easily integrating feedback into the trust model. Assuming that feedback fb can be expressed as a real number in [−1, 1], where '−1' expresses a negative experience and '1' a positive one, the update of an opinion can be done by recalculating the parameters r_new = r_old + 0.5 · (1 + fb) and s_new = s_old + 0.5 · (1 − fb) (cf. [6]). This representation enables updating the trust model without user interaction, if the feedback can be generated automatically. Furthermore, the software agents can use all statistical information which can be derived from the beta distribution, such as the mean value and the variance, as a basis for decision making.

3.3 Mapping Between Both Representations

The trust value t of an opinion o = (α, β)_αβ is defined as the mode of the corresponding beta distribution. The certainty value c of an opinion o = (α, β)_αβ in a context con_i is defined as follows: The maximal number of expected evidence can be denoted by e(con_i) = α_max + β_max − 2, where α_max and β_max fulfill:

  mean_coll := α/(α + β) = α_max/(α_max + β_max) =: mean_max    (4)

Then the certainty c is calculated as:

  c = (f(mean_coll | α, β) − 1) / (f(mean_max | α_max, β_max) − 1)    (5)

Definition 3.1 (Mapping). It holds that (α, β)_αβ = (t, c)_HTI, iff t = mode(α, β) and the certainty c fulfills Eq. 5.

Justification. The mapping provides the translation between both representations. Therefore, the interpretation of an opinion in the HTI by users has to be as close as possible to the interpretation of the same opinion in the evidence model by a software agent, and vice versa. This way, the user is able to correctly interpret and adjust opinions which are based on the feedback collected by the software agent.

Assuming that a user has to set up the parameters for the trust value and the certainty based on a countable number of experiments, it seems intuitive that the user sets the trust value, which estimates the probability of the expected event, close to the observed relative frequency. Since the mode of a pdf is equal to the relative frequency of the observed event, the trust value t is close to the intuitive interpretation of the user.

Similar to [5, 10, 12], the certainty value is intuitively linked to the number of collected evidence. That is, a greater number of collected evidence leads to higher confidence in the trust value, and therefore, to a higher certainty value. The maximal number of expected evidence e(con_i), as introduced in Section 2.2, corresponds to the maximal certainty value. Similar to [12], we want the certainty to increase adaptively with the collected number of evidence. Therefore, we enforce that the first pieces of evidence increase the certainty value more than later ones. As shown in Fig. 3, the certainty value as defined in Definition 3.1 fulfills these properties. In the absence of information (r + s = 0), the certainty value is c = 0, and c = 1 if the collected number of evidence is equal to the expected number of evidence. Between the two extremes, the certainty value increases adaptively. If the number of collected evidence is greater than the number of expected evidence, there is a normalization, which preserves the trust value and scales the certainty to c = 1 (see Eq. 6).

Furthermore, we can see from Fig. 3 that there is a slight dependency between the trust value t and the certainty value c. Since the final user interface is not continuous, but based on a small set of discrete values, this dependency can be neglected. Therefore, we consider the trust value and the certainty to be independent in the HTI, as demanded in Section 3.1.

[Figure 3 plots the number of collected evidence r + s over the trust value t, with iso-certainty lines for c = 0.2, 0.4, 0.6, 0.8, and 1.]

Figure 3: Iso-certainty lines for different maximal numbers of expected evidence: e = 10 (left), e = 100 (right)

We now provide a mathematical interpretation of the certainty parameter (see Fig. 4). Let p denote the probability of the experiment. The area over an arbitrary interval under the curve of the pdf can be interpreted as the probability that p is in this interval. Consider an interval with width w centered on the mean value of a pdf. The area in this interval represents the probability of the mean value. For opinions with the same mean value, this probability increases with an increasing amount of evidence. The certainty is the result of the comparison of this specific area based on the maximal number of expected evidence to the same area based on the number of collected evidence. If we approximate those two areas by rectangles with width w and height h = f(α/(α + β) | α, β) − 1, the area can be denoted as A = w · h. If we set A_max as the area which corresponds to the maximal number of expected evidence, and A_coll as the area which corresponds to the collected number of evidence, then c = A_coll/A_max = (w · h_coll)/(w · h_max) = h_coll/h_max = (f(mean_coll | α_coll, β_coll) − 1)/(f(mean_max | α_max, β_max) − 1). To justify the approximation of the areas with rectangles, consider w to be small enough.

Normalization. If the opinion o = (r, s)_rs of an agent is based on a greater number of evidence than the maximal number of expected evidence, the collected number of evidence will be scaled to the allowed maximum (see Eq. 6). The normalization preserves the mode of the pdf (see Eq. 3), and therefore, does not change the trust value. The normalized opinion norm(o) will be used as input for the discounting described below.
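Putting Eqs. 2, 4, 5 and the normalization of Eq. 6 together, the mapping from an evidence-model opinion (r, s)_rs to an HTI pair (t, c) can be sketched as follows. This is our reading of the equations, not the authors' code: the parameter relation α = r + 1, β = s + 1 is an assumption implied by Eq. 2, and the default trust value 0.5 for r + s = 0 is likewise assumed.

```python
import math

def beta_pdf(p, alpha, beta):
    """Density of Beta(alpha, beta) at p, via log-gamma."""
    log_norm = (math.lgamma(alpha + beta)
                - math.lgamma(alpha) - math.lgamma(beta))
    return math.exp(log_norm + (alpha - 1.0) * math.log(p)
                    + (beta - 1.0) * math.log(1.0 - p))

def update(r, s, fb):
    """Integrate feedback fb in [-1, 1] into the evidence counts (cf. [6])."""
    return r + 0.5 * (1.0 + fb), s + 0.5 * (1.0 - fb)

def to_hti(r, s, e):
    """Map the opinion (r, s)_rs with expected-evidence bound e to the
    HTI pair (t, c), following Eqs. 2, 4, 5 and 6."""
    if r + s > e:                        # normalization (Eq. 6)
        r, s = r / (r + s) * e, s / (r + s) * e
    if r + s == 0:                       # no evidence: certainty c = 0
        return 0.5, 0.0                  # t = 0.5 is an assumed default
    t = r / (r + s)                      # trust value (Eq. 2)
    alpha, beta = r + 1.0, s + 1.0       # assumed Bayesian mapping
    mean = alpha / (alpha + beta)        # mean_coll = mean_max (Eq. 4)
    alpha_max = (e + 2.0) * mean         # since alpha_max + beta_max = e + 2
    beta_max = (e + 2.0) - alpha_max
    c = ((beta_pdf(mean, alpha, beta) - 1.0)
         / (beta_pdf(mean, alpha_max, beta_max) - 1.0))   # Eq. 5
    return t, c

# Full expected evidence (8 positive, 2 negative, e = 10): certainty 1.
print(to_hti(8, 2, 10))      # (0.8, 1.0)
# Same ratio, less evidence: same trust value, lower certainty.
print(to_hti(4, 1, 10))
# More evidence than expected is scaled down (Eq. 6); t is preserved.
print(to_hti(16, 4, 10))     # (0.8, 1.0)
```

Consistent with the properties stated above, c = 0 when no evidence is available and c = 1 exactly when the collected evidence reaches e.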
[Figure 4 shows the pdfs f_coll and f_max over p, with the rectangle approximations A_coll = w · h_coll and A_max = w · h_max over an interval of width w around the common mean.]

Figure 4: Visualization for the mathematical interpretation

  norm((r, s)_rs) = (r, s)_rs                              if r + s ≤ e,
                    ((r/(r + s)) · e, (s/(r + s)) · e)_rs  otherwise    (6)

3.4 Trust Propagation in the Evidence Model

For trust propagation we define two operators, similar to the ones defined by Jøsang in [5]. We also call our operators 'consensus', for the aggregation of opinions, and 'discounting', for the recommendation of an opinion. The consensus operator is identical with the one presented in [5]. Since the discounting operator presented by Jøsang is motivated based on the belief model, we decided to define a new operator, which is motivated based on the evidence model.

Definition 3.2 (Consensus). Let o^A_x = (r^A_x, s^A_x)_rs and o^B_x = (r^B_x, s^B_x)_rs be the opinions of A and B about the truth of the proposition x. The opinion o^{A,B}_x = (r^{A,B}_x, s^{A,B}_x)_rs is modeled as the opinion of an imaginary agent which made the experiences of A and B, and is defined as:

  o^{A,B}_x = o^A_x ⊕ o^B_x = (r^A_x + r^B_x, s^A_x + s^B_x)_rs    (7)

The '⊕' symbol denotes the consensus operator. The operator can easily be extended for the consensus between multiple opinions.

Justification. The result of the consensus is the opinion of an agent who has made the observations done by A and the observations done by B.

Definition 3.3 (Discounting). Let o^A_B = (r^A_B, s^A_B)_rs and o^B_x = (r^B_x, s^B_x)_rs. We denote the opinion of A about x based on the recommendation of B as o^{AB}_x and define it as:

  o^{AB}_x = o^A_B ⊗ o^B_x = (d^A_B · r^B_x, d^A_B · s^B_x)_rs, where d^A_B = t^A_B · c^A_B    (8)

The '⊗' symbol denotes the discounting operator. In a chain of recommendations, we start at the end of the chain, e.g., o^{ABC}_x = o^A_B ⊗ (o^B_C ⊗ o^C_x).

Justification. The discounting reduces the number of evidence taken into account, since d^A_B ∈ [0, 1]. The discounting factor d^A_B increases with the number of positive evidence. That is, if A and C have the same amount of total evidence with B, but A has more positive evidence, then A gives a stronger weight (i.e., a larger discounting factor) to the recommendation of B than C does.

Furthermore, the discounting factor increases with the number of collected evidence. That is, if A and C have the same ratio of positive and negative evidence with B, but A has more evidence in total, then A has more evidence to believe that B will behave as in the past. Therefore, A gives the opinion of B a stronger weight than C does.

3.5 Trust Propagation in HTI

For the propagation of trust, we transfer the representation of a recommendation to the evidence-based representation and use the operators for consensus and discounting as defined above. The consensus operator increases the certainty of the resulting opinion, if multiple recommendations are provided. The discounting operator only decreases the certainty of a recommendation, but preserves the trust value (mode) of a recommendation. Assume that o^{AB}_x = o^A_B ⊗ o^B_x; then it holds that c^{AB}_x ≤ c^B_x and t^{AB}_x = t^B_x.

4. RELATED WORK

Trust is addressed by a growing group of researchers. Focusing on trust modeling, there is a multitude of different approaches, considering the representational models of trust, and the algorithms for handling recommendations [11].

The seminal work in the field has been done by Marsh [9]. His trust model is based on the social aspects of trust. It includes the importance and utility of a situation in the computational model. The decision making is threshold based and considers trust as well as risk. The main drawbacks of this model are that it models trust one-dimensionally, and that it focuses on trust based on direct experience, but does not deal with recommendations.

There are several other approaches which model trust one-dimensionally, e.g., TidalTrust [4] and EigenTrust [8]. In those models, trust is represented by a single trust value, which does not allow expressing the certainty or the reliability of this trust value. Therefore, those models cannot express whether an opinion is based on a single piece or multiple pieces of evidence. This leads to a loss of information when recommendations are aggregated to a single value. Furthermore, problems may arise when interpreting recommendations. For example, if, in TidalTrust, an agent receives only recommendations from lowly trusted recommenders, the aggregated trust value does not reflect that it is based on lowly trusted recommenders.

If the aggregated trust value were decreased to express the low trust in the recommenders, it would be hardly possible to distinguish this opinion from one which is provided by a highly trusted recommender who has recommended a low trust value.

Other approaches model trust with two or three dimensions. Two-dimensional trust models are often based on the Bayesian approach, e.g., [2, 3, 5]. Those models do not have an explicit parameter for certainty or uncertainty; it has to be derived from the beta probability density function. As stated in [7], those models are often too complicated to be well-understood by average users.

The trust models presented in [2, 3, 5] also allow for representations as belief models. The belief model approaches use the triple of belief b, disbelief d, and uncertainty u to represent trust. The drawback of belief models is that the three parameters cannot be assigned independently; e.g., in [5]
they are interrelated by b + d + u = 1. Thus, the presence of uncertainty influences both belief and disbelief. It is non-trivial for users to express, e.g., a medium belief with different levels of uncertainty. In our model, it is possible to independently choose the values for trust and certainty. The relation between the belief b and disbelief d as defined in [5] and our trust value can be denoted as t = b/(b + d).

Approaches like the 'Subjective Logic' [5] are not capable of expressing uncertainty context-dependently. In [5], the uncertainty u is defined as u = 2/(r + s + 2). Therefore, uncertainty depends only on the number of collected evidence, but not on the context. If we choose the maximal number of expected evidence as e = 10 or e = 20, the uncertainty in 'Subjective Logic' behaves similarly to the parameter (1 − c), which can be used to express uncertainty in our model. If we think of contexts related to a high risk or a high frequency of interaction, and assign, e.g., the maximal number of expected evidence e = 100 to this context, the uncertainty in 'Subjective Logic' no longer behaves as expected, since opinions based on 18 or more collected pieces of evidence (r + s ≥ 18) have only very little uncertainty (u ≤ 0.1).

The trust models presented in [10, 12] introduce reliability as a concept which is similar to our concept of certainty. They also define a context-dependent value similar to the maximal number of expected evidence.

In [10], the trust model is based on the Bayesian approach, as is ours. The maximal number of expected evidence e corresponds to m, which is described as the "minimal number of encounters necessary to achieve a given level of confidence [...]". The reliability w is defined to increase linearly with the number of collected evidence from 0 (if no evidence is available) to 1 (if the number of collected evidence is greater than or equal to m). But this linear approach is stated to be a first-order approximation [10].

In the trust model presented in [12], the intimate level of interactions is close to the concept of the maximal number of expected evidence. It is also context-dependent. The number-of-outcomes factor No ∈ [0, 1] increases adaptively with the number of collected evidence. To achieve the adaptive behavior as described in Section 3.3, Sabater et al. put the ratio between the collected and the expected number of evidence into a sine function, which seems to be done ad hoc.

5. CONCLUSION & FUTURE WORK

In this paper we developed a trust model which allows representing trust in a way that can be interpreted and updated by software agents as well as by users. We provided a new way to express the certainty of an opinion in contexts which are associated with different levels of risk or frequency of interaction. In the HTI, the values for trust and certainty can be interpreted independently, which allows introducing the semantics of an opinion based on labels. Therefore, the user is able to easily control the state of the trust model and to adjust opinions, if necessary. The evidence model enables software agents to update an opinion when new evidence is available, and to reason about the trustworthiness of an interaction partner. Furthermore, we showed that our mapping between the HTI and the evidence model has an intuitive interpretation and is mathematically founded. We provided two operators for trust propagation, which are motivated based on the evidence model and have an intuitive interpretation in the HTI representation.

Our future work will include the development of trust management and decision making strategies. Those are necessary to be able to evaluate the trust model in a simulation, and to enhance the attack-resistance of the model. Furthermore, we will refine the discrete representation for the human trust interface based on user studies.

6. REFERENCES

[1] B. Bhargava, L. Lilien, A. Rosenthal, and M. Winslett. Pervasive trust. IEEE Intelligent Systems, 19(5):74–88, 2004.
[2] V. Cahill et al. Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing, 2(3):52–61, July 2003.
[3] M. Carbone, M. Nielsen, and V. Sassone. A formal model for trust in dynamic networks. In Proc. of the IEEE International Conference on Software Engineering and Formal Methods, September 2003. IEEE Computer Society.
[4] J. Golbeck. Computing and Applying Trust in Web-Based Social Networks. PhD thesis, University of Maryland, College Park, 2005.
[5] A. Jøsang. A logic for uncertain probabilities. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 9(3):279–311, 2001.
[6] A. Jøsang and R. Ismail. The beta reputation system. In Proceedings of the 15th Bled Conference on Electronic Commerce, June 2002.
[7] A. Jøsang, R. Ismail, and C. Boyd. A survey of trust and reputation systems for online service provision. Decision Support Systems, 2005.
[8] S. D. Kamvar, M. T. Schlosser, and H. Garcia-Molina. The EigenTrust algorithm for reputation management in P2P networks. In Proc. of the 12th International Conference on World Wide Web, pages 640–651, New York, NY, USA, 2003. ACM Press.
[9] S. Marsh. Formalising Trust as a Computational Concept. PhD thesis, University of Stirling, 1994.
[10] L. Mui, M. Mohtashemi, and A. Halberstadt. A computational model of trust and reputation for e-businesses. In Proceedings of the 35th Annual Hawaii International Conference on System Sciences (HICSS'02), Volume 7, page 188, Washington, DC, USA, 2002. IEEE Computer Society.
[11] S. Ries, J. Kangasharju, and M. Mühlhäuser. A classification of trust systems. In Proceedings of the International Workshop on MObile and NEtworking Technologies for social applications, 2006.
[12] J. Sabater and C. Sierra. Reputation and social network analysis in multi-agent systems. In Proceedings of the 1st International Joint Conference on Autonomous Agents and Multiagent Systems, pages 475–482, New York, NY, USA, 2002. ACM Press.
[13] J. Sabater and C. Sierra. Review on computational trust and reputation models. Artificial Intelligence Review, 24(1):33–60, September 2005.
[14] J.-M. Seigneur, S. Farrell, C. D. Jensen, E. Gray, and Y. Chen. End-to-end trust starts with recognition. In First International Conference on Security in Pervasive Computing, Mar. 2003.