Motivation (in a nutshell): A Model of Trust
In the present world of information exchange, numerous heterogeneous but cooperative agents are involved in a globally connected network. The locational and operational diversity of these agents makes confidentiality, integrity and availability of systems and information resources increasingly critical in our everyday life. To protect such resources and to ensure that they behave according to stated requirements, it is important that we are able to determine appropriate security policies. The notion of trust plays a crucial role in the proper formulation of security policies, since we expect agents and systems to work according to our sociological expectation of trust in terms of confidentiality, integrity and availability.
Almost all existing models of trust that allow reasoning about trust relationships take a binary view of trust: complete trust or no trust at all. This prevents one from rationally evaluating the trust in systems that are composed of different sub-systems, each of which is either trusted or not trusted.
Consider, for example, the operational information base in a large corporation. Typically, this is generated by accumulating information from several sources. Some of these sources are under the direct administrative control of the corporation and thus are considered trustworthy. Other sources are "friendly" sources, and information originating directly from them is also considered trustworthy. However, these "friendly" sources may have derived information from their own sources, about which the corporation has no first-hand knowledge. If such third-hand information is made available to the corporation, then the corporation has no real basis for determining the quality (in terms of trustworthiness) of that information. It would be rather naive for the corporation to trust this information to the same extent that it trusts information from sources under its direct control. Similarly, not trusting this information at all would be too simplistic. The existing binary models of trust, in which trust has only two values, "no trust" and "complete trust", will nonetheless force the trust value into one of these two levels. Hence the following questions cannot be answered satisfactorily.
Can the composite information be trusted at all?
If it can be trusted, should there be any constraint on the trust, or should it be complete, unconstrained trust?
Therefore, there is a need for a formal model of trust which
is more inclusive than the current binary model.
has a notion of degrees of trust.
has procedures to compare information at different degrees of trust.
has procedures for trust composability, that is, methods that allow one to combine information belonging to different degrees of trust and determine the degree of trust of the resulting information.
has processes and procedures to establish and manage trust.
A New Model of Trust: Motivated by the above observations, a new model of trust is proposed. In this model,
Trust is defined as a vector of numeric values.
Each element of the vector is a parameter in determining the value of trust. Three such parameters are identified, though there is no claim that trust depends only on these parameters.
Methods are proposed to determine the values corresponding to the above parameters.
Substituting values for each of these parameters in the trust vector provides a value for trust. This value now represents trust of a certain degree.
The value lies in the range [-1, 1]: the positive region expresses trust, the negative region expresses distrust, and zero expresses neutrality about trust.
Operators are defined to map a trust vector to a trust value within this range and also from a trust value back to a trust vector.
The dynamic nature of trust, i.e. how trust (or distrust) changes over time, is incorporated in the model.
The notion of "trust depending on trust itself", i.e. that a trust relationship established at some point of time in the past influences the computation of trust at the current time, is formalized.
Comparison operators for trust vectors are defined. These allow us to make decisions about the relative trustworthiness of two or more entities.
A mechanism to combine trusts of different degrees into a single trust relationship is proposed.
Group trust is defined for both truster groups and trustee groups.
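The model sketched above can be illustrated with a short code example. The parameter names (`experience`, `knowledge`, `recommendation`), the weighted-average operator, and the comparison rule below are assumptions made for illustration only; the papers cited below define the actual parameters and operators.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrustVector:
    """Illustrative trust vector with three numeric parameters,
    each assumed to lie in [-1, 1]. The parameter names are
    hypothetical, not taken from the model's definitions."""
    experience: float
    knowledge: float
    recommendation: float

    def value(self, weights=(1.0, 1.0, 1.0)) -> float:
        """Map the vector to a single trust value in [-1, 1]
        using a normalized weighted average -- one plausible
        instance of the vector-to-value operator."""
        comps = (self.experience, self.knowledge, self.recommendation)
        return sum(w * c for w, c in zip(weights, comps)) / sum(weights)

def more_trustworthy(a: TrustVector, b: TrustVector) -> bool:
    """A simple comparison operator: compare scalar trust values."""
    return a.value() > b.value()

# A source under direct control vs. a third-hand source: the model
# assigns them different (non-binary) degrees of trust.
direct = TrustVector(experience=0.9, knowledge=0.8, recommendation=0.7)
third_hand = TrustVector(experience=0.0, knowledge=0.1, recommendation=0.4)
assert more_trustworthy(direct, third_hand)
```

The point of the sketch is only that trust becomes a continuum: the third-hand source need not be fully trusted or fully distrusted, and two entities can be compared by their derived trust values.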
Indrajit Ray and Sudip Chakraborty. A Vector Model of Trust for Developing Trustworthy Systems. In Proceedings of the 9th European Symposium on Research in Computer Security (ESORICS'04), Sophia Antipolis, France, September 13-15, 2004.
Indrajit Ray, Sudip Chakraborty, and Indrakshi Ray. VTrust: A Trust Management System Based on a Vector Model of Trust. In Proceedings of the 1st International Conference on Information Systems Security (ICISS'05), Kolkata, India, December 19-21, 2005.
Anna C. Squicciarini, Elisa Bertino, Elena Ferrari, and Indrakshi Ray. Achieving Privacy with an Ontology-Based Approach in Trust Negotiations. IEEE Transactions on Dependable and Secure Computing, 3(1), January-March 2006.
Siv Hilde Houmb, Indrakshi Ray, and Indrajit Ray. Estimating the Relative Trustworthiness of Information Sources in Security Solution Evaluation. In Proceedings of the 4th International Conference on Trust Management, Pisa, Italy, May 2006.
Sudip Chakraborty and Indrajit Ray. TrustBAC - Integrating Trust Relationships into the RBAC Model for Access Control in Open Systems. In Proceedings of the 11th ACM Symposium on Access Control Models and Technologies (SACMAT'06), Lake Tahoe, CA, USA, June 7-9, 2006.
Sudip Chakraborty and Indrajit Ray. Allowing Finer Control Over Privacy Using Trust as a Benchmark. In Proceedings of the 7th Annual IEEE Information Assurance Workshop (IAW'06), United States Military Academy, West Point, NY, June 21-23, 2006.
Indrajit Ray and Sudip Chakraborty. A Framework for Flexible Access Control in Digital Library Systems. In Proceedings of the 20th Annual IFIP WG 11.3 Working Conference on Data and Applications Security (DBSec'06), SAP Labs, Sophia Antipolis, France, July 31-August 2, 2006.