Interoperability in Complex Distributed Systems: January 2011
inria-00629057, version 1 - 4 Oct 2011

Abstract. Distributed systems are becoming more complex in terms of both the
level of heterogeneity encountered and the high level of dynamism of such
systems. Taken together, this makes it very difficult to achieve the crucial
property of interoperability, that is, enabling two arbitrary systems to work
together relying only on their declared service specifications. This chapter
examines this issue of interoperability in considerable detail, looking initially at
the problem space, and in particular the key barriers to interoperability, and
then moving on to the solution space, focusing on research in the middleware
and semantic interoperability communities. We argue that existing approaches
are simply unable to meet the demands of the complex distributed systems of
today and that the lack of integration between the work on middleware and
semantic interoperability is a clear impediment to progress in this area. We
outline a roadmap towards meeting the challenges of interoperability including
the need for integration across these two communities, resulting in middleware
solutions that are intrinsically based on semantic meaning. We also advocate a
dynamic approach to interoperability based on the concept of emergent
middleware.
Keywords: Interoperability, complex distributed systems, heterogeneity,
adaptive distributed systems, middleware, semantic interoperability
1 Introduction
Hence, we advocate that the two fields embrace each other's results, and that,
from this, fundamentally different solutions will emerge to remove the
interoperability barrier.
2.1 Data Heterogeneity
Different systems choose to represent data in different ways, and such data
representation heterogeneity is typically manifested at two levels. The simplest form
of data interoperability is at the syntactic level where two different systems may use
two very different formats to express the same information. Consider a vendor
application for the sale of goods; one vendor may price an item using XML, while
another may serialize its data using a Java-like syntax. So the simple information that
the item costs £1 may result in the two different representations shown in Fig. 1(a).
   <price>
     <value> 1 </value>                      price(1,GBP)
     <currency> GBP </currency>
   </price>

   a) Representing price in XML and as tuple data

   <price>                                   <cost>
     <value> 1 </value>                        <amount> 1 </amount>
     <currency> GBP </currency>                <denomination> £ </denomination>
   </price>                                  </cost>

   b) Heterogeneous currency data

Fig. 1. Examples of Data Heterogeneity
Aside from syntactic-level interoperability, there is a greater problem with the
"meaning" of the tokens in the messages. Even if the two components use the same
syntax, say XML, there is no guarantee that the two systems recognize all the
nodes in the parse trees, or that the two systems interpret all these nodes in a
consistent way. Consider the two XML structures in the example in Fig. 1(b). Both structures
are in XML and they (intuitively) carry the same meaning. Any system that
recognizes the first structure will also be able to parse the second one, but will fail to
recognize the similarity between them unless the system realizes that price≡cost, that
value≡amount, that currency≡denomination and of course that GBP≡£ (where ≡
means equivalent). The net result of using XML is that both systems will be in the
awkward situation of parsing each other’s message, but not knowing what to do with
the information that they just received.
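Making such equivalences explicit is a precondition for any automated treatment of the problem. As a minimal illustration (the class and the equivalence table below are hypothetical, hard-wired for this example only), a Java sketch of such a vocabulary mapping might look as follows:

```java
import java.util.Map;

// Sketch: normalise the tag vocabulary of the second message in Fig. 1(b)
// into that of the first, using a hand-written equivalence table.
public class VocabularyMapper {
    // price≡cost, value≡amount, currency≡denomination, GBP≡£
    private static final Map<String, String> EQUIV = Map.of(
            "cost", "price",
            "amount", "value",
            "denomination", "currency",
            "£", "GBP");

    // Replace a known token with its canonical equivalent; leave others alone.
    public static String normalise(String token) {
        return EQUIV.getOrDefault(token, token);
    }

    public static void main(String[] args) {
        System.out.println(normalise("cost"));   // price
        System.out.println(normalise("£"));      // GBP
        System.out.println(normalise("price"));  // already canonical
    }
}
```

A real system would, of course, have to obtain such a table from somewhere; that is precisely the semantic problem discussed next.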
The deeper problem of data heterogeneity is the semantic interoperability
problem: ensuring that all systems share the same interpretation of data. The
examples provided above show one aspect of data interoperability, namely
recognizing that two different labels represent the same object. This is, in the
general case, an extremely difficult problem that is under active research [1],
though in many cases it can
Developers can choose to implement their distributed applications and services
upon the wide range of middleware solutions currently available. In particular,
these include heterogeneous discovery protocols, which are used to locate or
advertise the services in use, and heterogeneous interaction protocols, which
perform the direct communication between the services. Fig. 2 illustrates a collection of applications
implemented upon these different technologies. Application 1 is a mobile sport news
application, whereby news stories of interest are presented to the user based on their
current location. Application 2 is a jukebox application that allows users to select and
play music on an audio output device at that location. Finally, application 3 is a chat
application that allows two mobile users to interact with one another. In two locations
(a coffee bar and a public house) the same application services are available to the
user, but their middleware implementations differ. For example, the Sport News
service is implemented as a publish-subscribe channel at the coffee bar and as a
SOAP service in the public house. Similarly, the chat applications and jukebox
services are implemented using different middleware types. The service discovery
protocols are also heterogeneous, i.e., the services available at the public house are
discoverable using SLP and the services at the coffee bar can be found using both
UPnP and SLP. For example, at the coffee bar the jukebox application must first find
its corresponding service using UPnP and then use SOAP to control functionality.
When it moves to the public house, SLP and CORBA must be used.
Fig. 2. Legacy services implemented using heterogeneous middleware
Interoperability challenges at the application level might arise due to the different
ways the application developers might choose to implement the program
functionality, including different use of the underlying middleware. As a specific
example, a merchant application could be implemented using one of two approaches
for the consumer to obtain information about its wares:
  - A single GetInfo() remote method, which returns the information about
    the product price and quantity available needed by the consumer.
  - Two separate remote methods, GetPrice() and GetQuantity(),
    together returning the same information.
A client developer can then code the consumer against either one of the
approaches described above, and this leads to different sequences of messages
between the consumer and merchant. Application-level heterogeneity can also be
caused by differences between the underlying middlewares. For example, when
using a tuple space, the programmer can exploit the rich search semantics it
provides, which are not available in other types of middleware; for RPC
middleware, a naming service or discovery protocol must instead be used for
equivalent capabilities.
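The two merchant designs can be sketched as Java interfaces (all names here are hypothetical and the remote-invocation machinery is elided), making the difference in message sequences concrete:

```java
// Sketch of the two alternative merchant interface designs
// (hypothetical names; RMI/remote machinery elided for brevity).
interface MerchantV1 {
    // One round trip: price and quantity returned together.
    ProductInfo getInfo(String product);
}

interface MerchantV2 {
    // Two round trips: one message exchange per item of information.
    double getPrice(String product);
    int getQuantity(String product);
}

record ProductInfo(double price, int quantity) {}

// A merchant exposing both styles over the same stock data.
class Merchant implements MerchantV1, MerchantV2 {
    public ProductInfo getInfo(String p) {
        return new ProductInfo(getPrice(p), getQuantity(p));
    }
    public double getPrice(String p) { return 1.0; } // £1, as in Fig. 1
    public int getQuantity(String p) { return 42; }  // hypothetical stock level
}

public class Demo {
    public static void main(String[] args) {
        Merchant m = new Merchant();
        // Consumer A sends one message; consumer B sends two for the same data.
        System.out.println(m.getInfo("widget"));
        System.out.println(m.getPrice("widget") + " / " + m.getQuantity("widget"));
    }
}
```

A consumer coded against MerchantV1 cannot interoperate with a merchant exposing only MerchantV2 without some behavioural mapping between the two message sequences.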
3.1 Introduction
1 http://www.oracle.com/technetwork/java/jms/index.html
would be required to be implemented upon the same middleware). In the more
general sense of achieving universal and dynamic interoperability between
spontaneously communicating systems, they have failed. Within the field of
distributed software systems, any approach that assumes a common middleware or
standard is destined to fail for the following reasons:
1. A one-size-fits-all standard/middleware cannot cope with the extreme
heterogeneity of distributed systems, ranging from small-scale sensor
applications through to large-scale Internet applications. CORBA and Web
Services [12] each present a single common communication abstraction, i.e.,
distributed objects and service orientation respectively. However, the list of
diverse middleware types already illustrates the need for heterogeneous
abstractions.
2. New distributed systems and applications emerge quickly, while standards
development is a slow, incremental process. Hence, it is likely that new
technologies will appear that make a pre-existing interoperability standard
obsolete, cf. CORBA versus Web Services (neither can talk to the other).
The Universally Interoperable Core (UIC) [13] was an early solution to the
middleware interoperability problem; in particular, it was an adaptive
middleware whose goal was to support interactions from a mobile client system to
one or more types of distributed object solution at the server side, e.g.,
CORBA, SOAP and Java RMI. The UIC implementation was based upon the
architectural strategy pattern of the dynamicTAO system [14]; namely, a skeleton
of abstract components forming the base architecture is specialised to the
specific properties of a particular middleware
effort required to build direct bridges between all of the different middleware
protocols.
Enterprise Service Buses (ESBs) can be seen as a special type of software
bridge; they specify a service-oriented middleware with a message-oriented
abstraction layer atop different messaging protocols (e.g., SOAP, JMS, SMTP).
Rather than providing a direct one-to-one mapping between two messaging
protocols, a service bus offers an intermediary message bus. Each service (e.g.,
a legacy database, JMS queue, Web Service, etc.) maps its own messages onto the
bus using a piece of connector code deployed on the peer device. The bus then
transmits the intermediary messages to the corresponding endpoints, which
reverse the translation from the intermediary to the local message type. Hence,
where traditional bridges offer a 1-1 mapping, ESBs offer an N-1-M mapping.
Example ESBs are Artix3 and IBM Websphere Message Broker4.
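The N-1-M idea can be sketched as follows, with each endpoint contributing a lifting or lowering adapter around one canonical message form, so that N*M direct bridges are replaced by N+M adapters (the message formats and all names below are hypothetical, echoing Fig. 1):

```java
import java.util.function.Function;

// Sketch of the ESB N-1-M pattern: N source formats are lifted into one
// intermediary (canonical) message, which is lowered into M target formats.
public class MiniBus {
    // Intermediary message: just a tagged amount here.
    public record Canonical(String currency, double amount) {}

    // Lifting adapter for a tuple-style endpoint, e.g. "price(1,GBP)".
    static final Function<String, Canonical> fromTuple = s -> {
        String[] parts = s.substring(6, s.length() - 1).split(",");
        return new Canonical(parts[1], Double.parseDouble(parts[0]));
    };

    // Lowering adapter for an XML-style endpoint.
    static final Function<Canonical, String> toXml = c ->
            "<price><value>" + c.amount() + "</value><currency>"
                    + c.currency() + "</currency></price>";

    // The bus simply composes a lifting adapter with a lowering adapter.
    public static String route(String msg) {
        return fromTuple.andThen(toXml).apply(msg);
    }

    public static void main(String[] args) {
        System.out.println(route("price(1,GBP)"));
    }
}
```

Adding a new protocol then means writing one pair of adapters against the canonical form, not one bridge per existing protocol.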
Bridging solutions have shown techniques whereby two protocols can be mapped
onto one another. These can use either a one-to-one mapping or an intermediary
bridge; the latter allows a range of protocols to bridge to one another more
easily. This is one of the fundamental techniques for achieving
interoperability. However, the bridge is usually a known element that each of
the end systems must be aware of and connect to in advance; again, this limits
the potential for two legacy-based applications to interoperate.
2 http://soap2corba.sourceforge.net
3 http://web.progress.com/en/sonic/artix-index.html
4 http://www-01.ibm.com/software/integration/wbimessagebroker/
3.6 Transparent Interoperability
the translation process are deployed. They could be deployed separately or together
on one or more of the peers (but in separate processes transparent to the application);
however, they are commonly deployed across one or more infrastructure servers.
applications that will interoperate with all discovery protocols cf. ReMMoC), or
it can be utilised as a transparent interoperability solution, i.e., it can be deployed
in the infrastructure, or any available device in the network and it will translate
discovery functions between the protocols in the environment. SeDiM provides a
skeleton abstraction for implementing discovery protocols which can then be
specialised with concrete middleware. These configurations can then be
‘substituted’ in an interoperability platform or utilised as one side of a bridge.
Logical mobility is characterised by mobile code being transferred from one device
and executed on another. The approach to resolve interoperability is therefore
straightforward; a service advertises its behaviour and also the code to interact with it.
When a client discovers the service it will download this software and then use it.
Note that such an approach relies on the code being useful somewhere, i.e., it could fit
into a middleware as in the substitution approach, provide a library API for the
application to call, or it could provide a complete application with GUI to be used by
the user. The overall pattern is shown in Fig. 7. The use of logical mobility provides
an elegant solution to the problem of heterogeneity; applications do not need to know
in advance the implementation details of the services they will interoperate with,
rather they simply use code that is dynamically available to them at run-time.
However, there are few examples of systems that employ logical mobility to
resolve interoperability, because logical mobility is the weakest of the
interoperability approaches: it relies on all applications conforming to a
common platform on which the executable software can be deployed. We now discuss
two of these examples.
host uses SATIN to lookup the required application services; the interaction
capabilities are then downloaded to allow the client to talk to the service.
Jini [23] is a Java based service discovery platform that provides an infrastructure
for delivering services and creating spontaneous interactions between clients and
services regardless of their hardware or software implementation. New services can
be added to the network, old services removed and clients can discover available
services all without external network administration. When an application discovers
the required service, the service proxy is downloaded to their virtual machine so that
it can then use this service. A proxy may take a number of forms: i) the proxy object
may encapsulate the entire service (this strategy is useful for software services
requiring no external resources); ii) the downloaded object is a Java RMI stub, for
invoking methods on the remote service; and iii) the proxy uses a private
communication protocol to interact with the service's functionality. Therefore, the Jini
architecture allows applications to use services in the network without knowing
anything about the wire protocol that the service uses or how the service is
implemented.
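Setting aside Jini's discovery and code-download machinery, the proxy idea can be illustrated with a standard java.lang.reflect.Proxy: the client programs against an interface and never sees the wire protocol behind it (the Jukebox interface and the fabricated reply below are hypothetical):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Illustration of the proxy pattern behind Jini (discovery and code
// download elided): the client sees only the Jukebox interface, while the
// handler hides whatever protocol reaches the real service.
public class ProxyDemo {
    public interface Jukebox {
        String play(String track);
    }

    // Stand-in for the proxy a Jini client would download: in Jini this
    // object comes from a lookup service and may wrap RMI or a private
    // protocol; here it just fabricates a reply.
    public static Jukebox makeJukebox() {
        InvocationHandler handler = (proxy, method, args) ->
                "playing " + args[0] + " (via some wire protocol)";
        return (Jukebox) Proxy.newProxyInstance(
                Jukebox.class.getClassLoader(),
                new Class<?>[] { Jukebox.class },
                handler);
    }

    public static void main(String[] args) {
        // The client codes against the interface only; it is oblivious to
        // how the call actually reaches the service.
        Jukebox jb = makeJukebox();
        System.out.println(jb.play("track-1"));
    }
}
```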
4.1 Introduction
Historically, the problem has been well known in the database community, where
there is often the need to access information in different databases that do not
share the same data schema. More recently, with the advent of open
architectures such as Web Services, the problem is to guarantee interoperability
at all levels. We now look at semantics-based solutions to achieving
interoperability: first the Semantic Web Services efforts, second their
application to middleware solutions, and third the database approaches.
composition since, for two services to work together, they need to share a consistent
interpretation of the data that they exchange. To this end, a number of efforts,
generically known as Semantic Web Services, attempt to enrich the Web Services
description languages with a description of the semantics of the data exchanged
in the input and output messages of the operations performed by services. The
result of these efforts is a set of languages that describe both the
orchestration of the services' operations, in the sense of the possible
sequences of messages that the services can exchange, and the meanings of these
messages with respect to some reference ontology.
OWL-S [26] and its predecessor DAML-S [25] were the first efforts to exploit
Semantic Web ontologies to enrich descriptions of services. The scope of OWL-S
is quite broad, with the intention of supporting both service discovery, through
a representation of the capabilities of services, and service composition and
invocation, through a representation of the semantics of the operations and the
messages of the service. As shown in Fig. 8, services in OWL-S are described at
three different levels. The Profile describes the capabilities of the service in
terms of the information transformation produced by the service, as well as the
state transformation that the service produces; the Process (Model) describes
the workflow of the operations performed by the service, as well as the
semantics of these operations; and the Grounding maps the abstract process
descriptions to the concrete operation descriptions in WSDL.
In more detail, the information transformation described in the Profile is
represented by the set of inputs that the service expects and outputs that it is expected
to produce, while the state transformation is represented by a set of conditions
(preconditions) that need to hold for the service to execute correctly and the results
that follow the execution of the service. For example, a credit card registration
service may produce an information transformation that takes personal information as
input, and returns the issued credit card number as output; while the state
transformation may list a number of (pre)conditions that the requester needs to satisfy,
and produce the effect that the requester is issued the credit card corresponding to the
number reported in output.
The Process Model and Grounding relate more closely to the invocation of the
service and therefore address more directly the problem of data interoperability. The
description of processes in OWL-S is quite complicated, but in a nutshell a
process represents a transformation very similar to that described by the
Profile, in the sense that it has inputs, outputs, preconditions and results
that describe the information transformation, as well as the state
transformation that results from the execution of the process. Furthermore,
processes are divided into two categories: atomic processes, which describe
atomic actions that the service can perform, and composite processes, which
describe the workflow control structure.
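Schematically, an atomic process for a GetPrice operation might be declared as follows (an illustrative fragment only: namespace declarations are elided and all identifiers are hypothetical):

```xml
<process:AtomicProcess rdf:ID="GetPrice">
  <process:hasInput>
    <process:Input rdf:ID="ItemName"/>
  </process:hasInput>
  <process:hasOutput>
    <process:Output rdf:ID="ItemPrice"/>
  </process:hasOutput>
</process:AtomicProcess>
```

The Grounding would then map this abstract declaration onto a concrete WSDL operation and its message formats.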
Of course, the next data interoperability question is: what if there is no such
shared ontology?
SA-WSDL. Semantic Web Services reached the standardization level with SA-WSDL
[27], which defines a minimal semantic extension of WSDL. SA-WSDL builds on the
WSDL distinction between the abstract description of the service, which includes
the WSDL 2.0 Element Declaration, Type Definition and Interface attributes, and
the concrete description, which includes the Binding and Service attributes that
directly link to the protocol and the port of the service. The objective of
SA-WSDL is to provide an annotation mechanism for abstract WSDL. To this end, it
extends WSDL with new attributes:
1. modelReference, to specify the association between a WSDL or XML
Schema component and a concept in some semantic model;
2. liftingSchemaMapping and loweringSchemaMapping, that are added to
XML Schema element declarations and type definitions for specifying
mappings between semantic data and XML.
The modelReference attribute has the goal of defining the semantic type of the
WSDL attribute to which it applies; the lifting and lowering schema mappings
have a role similar to the mappings in OWL-S, since their goal is to map the
abstract semantics to the concrete WSDL specification. For example, when applied
to an input message, the model reference provides the semantic type of the
message, while the loweringSchemaMapping describes how the ontological type is
transformed into the input message.
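For example, an element declaration in the abstract WSDL might carry these annotations as follows (the ontology URI and the mapping documents referenced here are hypothetical):

```xml
<xs:element name="price"
    sawsdl:modelReference="http://example.org/purchase#Price"
    sawsdl:liftingSchemaMapping="http://example.org/price-lifting.xslt"
    sawsdl:loweringSchemaMapping="http://example.org/price-lowering.xslt"/>
```

The modelReference gives the element's semantic type, while the two mappings convert between instances of that type and the XML actually sent on the wire.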
A number of important design decisions were made with SA-WSDL to increase its
applicability. First, rather than defining a language that spans the different
levels of the WS stack, the authors of SA-WSDL limited their scope to augmenting
WSDL, which considerably simplifies the task of providing a semantic
representation of services (but also limits expressiveness). Specifically, there
is no intention in SA-WSDL to support the orchestration of operations. Second,
there is a deliberate lack of commitment to the use of OWL [28] as an ontology
language, or to any other particular semantic representation technology.
Instead, SA-WSDL provides a very general annotation mechanism that can be used
to refer to any form of semantic markup. The annotation referents could be
expressed in OWL, in UML, or in any other suitable language. Third, an attempt
has been made to maximize the use of available XML technology, from XML Schema
to XPath, in order to lower the entry barrier for early adopters.
Analysis of SA-WSDL. Despite these design decisions, which seem to suggest a
sharp distinction from OWL-S, SA-WSDL shares features with OWL-S's WSDL
grounding. In particular, both approaches provide semantic annotation attributes
for WSDL, which are meant to be used in similar ways. It is therefore natural to
expect that SA-WSDL may facilitate the specification of the Grounding of OWL-S
Web Services; a proposal in this direction has been put forward in [29]. The
apparent simplicity of the approach is somewhat deceiving. First, SA-WSDL
requires a solution to the two main problems of the semantic representation of
Web Services: namely, the generation and exploitation of ontologies, and the
mapping between the ontology and the XML data that is transmitted over the wire.
Both processes are very time consuming. Second, there is no obligation
whatsoever to define a modelReference or a schemaMapping for any of the
attributes of the abstract WSDL, with the awkward result that it is possible to
define the modelReference of a message but not how that model maps to the
message; it is then impossible to map the abstract input description to the
message to send to the service or, given a message from the service, to derive
its semantic representation. Conversely, when a schemaMapping is given but not
the modelReference, the mapping is known but not the expected semantics of the
message, with the result that it is very difficult to reason about the type of
data to send to, or expect from, a service.
The Web Service Modelling Ontology (WSMO) aims at providing a comprehensive
framework for the representation and execution of services based on semantic
information. Indeed, WSMO has been defined in conjunction with WSML (Web Service
Modelling Language) [30], which provides the formal language for service
representation, and WSMX (Web Service Modelling eXecution environment) [31],
which provides a reference implementation for WSMO. WSMO adopts a very different
approach to the modelling of Web Services from OWL-S and, in general, from the
rest of the WS community. Whereas those efforts concentrate on supporting the
different operations that can be performed with Web Services (namely discovery,
with the Service Profile as well as UDDI [32]; composition, with the Process
Model as well as BPEL4WS [33] and WS-CDL [34]; and invocation, with the Service
Grounding, WSDL or SA-WSDL), WSMO provides a general framework for the
representation of services that can be used to support the operations listed
above but, more generally, to reason about services and interoperability. To
this end it identifies four core elements:
  - Web Services: the computational entities that provide access to the
    services. In turn, their description needs to specify their capabilities,
    interfaces and internal mechanisms.
  - Goals: these model the user view in the Web Service usage process.
  - Ontologies: these provide the terminology used to describe Web Services and
    Goals in a machine-processable way that allows other components and
    applications to take actual meaning into account.
  - Mediators: these handle interoperability problems between different WSMO
    elements; mediators are envisioned as the core concept for resolving
    incompatibilities at the data, process and protocol levels.
What is striking about WSMO with respect to the rest of the WS efforts (semantic
and not) is the representation of goals and mediators as "first-class citizens".
Both goals and mediators are treated as by-products by the rest of the WS
community. Specifically, in other efforts users' goals are never specified;
rather, they are manifested through the requests that are provided to a service
registry such as UDDI or to a service composition engine. On the other side,
mediators are either a type of service, and therefore indistinguishable from
other services, or generated on the fly through service composition to deal with
interoperability problems. Ontologies are also an interesting concept in WSMO,
because WSMO neither limits itself to using existing ontology languages, as in
the case of OWL-S, which is closely tied to OWL, nor is it completely agnostic,
as in the case of SA-WSDL. Rather, WSMO relies on WSML, which defines a family
of ontological languages distinguished by their logical assumptions and
expressiveness constraints. The result is that some WSML sub-languages are
consistent (to some degree) with OWL, while others are inconsistent
heterogeneous devices that populate pervasive environments. The idea behind this
framework is that information about the pervasive environments (i.e., context
information) is stored in knowledge bases on the Web. This allows different pervasive
environments to be semantically connected and to seamlessly pass user information
(e.g., files/contact information), which allows users to receive relevant services.
Based on these knowledge bases, the middleware supports the dynamic composition
of pervasive services modelled as Web Services. These composite services are then
shared across various pervasive environments via the Web.
The Ebiquity group describes a semantic service discovery and composition
protocol for pervasive computing. The service discovery protocol, called GSD
(Group-based Service Discovery) [38], groups service advertisements using an
ontology of service functionalities. In this protocol, service advertisements are
broadcast to the network and cached by the networked nodes. Then, service
discovery requests are selectively forwarded to some nodes of the network using the
group information propagated with service advertisements. Based on the GSD service
discovery protocol, the authors define a service composition functionality for
infrastructure-less mobile environments [39]. Composition requests are sent to one of
the composition managers of the environment, which performs a distributed discovery
of the required component services.
The combined work in [40] and [41] introduces an efficient, semantic, QoS-aware
service-oriented middleware for pervasive computing. The authors propose a
semantic service model to support interoperability between existing semantic but also
plain syntactic service description languages. The model further supports formal
specification of service conversations as finite state automata, which enables
automated reasoning about service behaviour independently of the underlying
conversation specification language. Moreover, the model supports the specification
of service non-functional properties to meet the specific requirements of pervasive
applications. The authors further propose an efficient semantic service registry. This
registry supports a set of conformance relations for matching both syntactic and rich
semantic service descriptions, including non-functional properties. Conformance
relations evaluate the semantic distance between service descriptions and rate services
with respect to their suitability for a specific client request, so that selection can be
made among them. Additionally, the registry supports efficient reasoning on semantic
service descriptions by semantically organizing such descriptions and minimizing
recourse to ontology-based reasoning, which makes it applicable to highly interactive
pervasive environments. Lastly, the authors propose flexible QoS-aware service
composition towards the realization of user-centric tasks abstractly described on the
user's handheld. Flexibility is enabled by a set of composition algorithms that may be
run alternatively according to the current resource constraints of the user's device.
These algorithms support integration of services with complex behaviours into tasks
also specified with a complex behaviour; this is done efficiently by relying on
formal techniques. The algorithms further support the fulfilment of the QoS
requirements of user tasks by aggregating the QoS provided by the composed
networked services.
The above surveyed solutions are indicative of how ontologies have been
integrated into middleware for describing semantics of services in pervasive
environments. Semantics of services, users and the environment are put into semantic
descriptions, matched for service discovery, and composed for achieving service
compositions. The focus is mainly on functional properties, while non-functional
ones have been less investigated. Efficiency is also a key issue in
resource-constrained pervasive environments, as reasoning based on ontologies is
costly in terms of computation.
The discussion about ontologies above immediately raises the question of whether
and to what extent ontologies just push the interoperability problem somewhere else.
Ultimately, what guarantees that the interoperability problems that we observe at the
data structure level do not appear again at the ontology level? Suppose that different
middlewares refer to different ontologies: how can they interoperate?
The ideal way to address this problem is to construct an alignment ontology,
such as SUMO5, which provides a way to relate concepts in the different
ontologies. Essentially, the alignment ontology provides a mapping that
translates one ontology into the other. Of course, the creation of alignment
ontologies not only requires effort but, more importantly, requires a commitment
that the alignment ontology be consistent with all the ontologies to be aligned.
Such alignment ontologies, when possible at all, are very difficult and
expensive to build. To address this problem, in the context of the Semantic Web
there is a very active subfield that goes under the label of ontology matching
[48][49], which develops algorithms and heuristics to infer the relation between
concepts in different ontologies. The result of an ontology matcher is a set of
relations between concepts in different ontologies, together with a level of
confidence that these relations hold. For example, an ontology matcher may infer
that the concept Price in one ontology is equivalent to Cost in another ontology
with a confidence of 0.95. In a sense, the confidence value assigned by the
ontology matcher is a measure of the quality of the relations specified.
Ontology matching provides a way to address the problem of using different
ontologies without pushing the data interoperability problem somewhere else. But
this solution comes at the cost of reduced confidence in the interoperability
solution adopted and, ultimately, in the overall system.
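In practice, a consumer of a matcher's output must decide which correspondences to trust. A minimal Java sketch (the threshold, class names and confidence figures below are hypothetical):

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: turn an ontology matcher's weighted correspondences into a
// translation table, keeping only those above a confidence threshold.
public class AlignmentFilter {
    public record Correspondence(String from, String to, double confidence) {}

    public static Map<String, String> usable(List<Correspondence> cs,
                                             double threshold) {
        return cs.stream()
                .filter(c -> c.confidence() >= threshold)
                .collect(Collectors.toMap(Correspondence::from,
                                          Correspondence::to));
    }

    public static void main(String[] args) {
        List<Correspondence> matcherOutput = List.of(
                new Correspondence("Price", "Cost", 0.95),      // as in the text
                new Correspondence("Vendor", "Artist", 0.30));  // likely spurious
        // Only Price -> Cost survives a 0.8 threshold.
        System.out.println(usable(matcherOutput, 0.8));
    }
}
```

Whatever threshold is chosen, any residual uncertainty in the retained mappings propagates to the interoperability solution built on top of them.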
5 Analysis
The results of the state-of-the-art investigation in Sections 3 and 4 show two
important things: first, there is a clear disconnect between the mainstream
middleware work and the work on application, data, and semantic
interoperability; second, none of the current solutions addresses all of the
requirements of dynamic pervasive systems, as highlighted in the
interoperability barriers in Section 2.
With respect to the first problem, it is clear that two different communities
have evolved independently. The first, addressing the problems of middleware,
has made a great deal of progress towards middleware that supports sophisticated
discovery of, and interaction between, services and components. The second has
addressed the problem of semantic interoperability between services, however
inflexibly assuming Web Services as the underlying middleware, as well as the
problem of semantic interoperability between data-intensive components such as
databases. The section on semantic middleware shows that the two communities are
ultimately coming together, but a great deal of work is still required to merge
the richness of the work performed on both sides.
Table 1. Summary of interoperability solutions (SD = Discovery; I = Interaction;
D = Data; A = Application; N = Non-functional).

Solution        SD   I    D    A    N    Transparency
CORBA                X                   CORBA for all
Web Services         X                   WSDL & SOAP for all
ReMMoC          X    X                   Client-side middleware
UIC                  X                   Client-side middleware
WSIF                 X                   Client-side middleware
With respect to the second problem, namely addressing the interoperability barriers
from Section 2, we pointed out that in such systems endpoints are required to
spontaneously discover and interact with one another; the following five
fundamental dimensions are therefore used to evaluate the different solutions:
1. Does the approach resolve (or attempt to resolve) differences between discovery
protocols employed to advertise the heterogeneous systems? [Discovery
column]
2. Does the approach resolve (or attempt to resolve) differences between
interaction protocols employed to allow communication with a system?
[Interaction column]
3. Does the approach resolve (or attempt to resolve) data differences between the
heterogeneous systems? [Data column]
4. Does the approach resolve (or attempt to resolve) the differences in terms of
application behaviour and operations? [Application column]
5. Does the approach resolve (or attempt to resolve) the differences in terms of
non-functional properties of the heterogeneous system? [Non-functional
column]
The summary of this evaluation is given in Table 1 (an X indicates: resolves or
attempts to resolve). It shows that no solution attempts to resolve all five dimensions
of interoperability. Those that concentrate on application and data, e.g. Semantic Web
Services, rely upon a common standard (WSDL) and conformance by all parties in
using it together with semantic technologies. Hence, transparent interoperability
between dynamically communicating parties cannot be guaranteed. Semantic Web
Services have a very broad scope, including discovery, interaction and data
interoperability, but they provide only primitive support and languages to express the
data dimension in the context of middleware solutions.
The transparency column shows that only the transparent interoperability solutions
achieve interoperability transparency between all parties (however, only for a subset
of the dimensions). The other entries show the extent to which the application
endpoint (client, server, peer, etc.) sees the interoperability solution. ReMMoC, UIC
and WSIF rely on clients building their applications on top of the interoperability
middleware; the remainder rely on all parties in the distributed system committing to
a particular middleware or approach.
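The central observation of this analysis can be sketched by encoding Table 1 as
data and checking coverage programmatically. The placement of ticks below is our
interpretation of the table and should be treated as illustrative:

```python
# Sketch: Table 1 as data -- which interoperability dimensions each solution
# resolves (or attempts to resolve). Tick placement is an interpretation of the
# table, not authoritative data from the chapter.
DIMENSIONS = {"SD", "I", "D", "A", "N"}  # Discovery, Interaction, Data,
                                         # Application, Non-functional

coverage = {
    "CORBA":        {"I"},
    "Web Services": {"I"},
    "ReMMoC":       {"SD", "I"},
    "UIC":          {"I"},
    "WSIF":         {"I"},
}

def covers_all(solution):
    """True if the solution addresses every interoperability dimension."""
    return coverage[solution] == DIMENSIONS

# The analysis above: no listed solution spans all five dimensions.
incomplete = [s for s in coverage if not covers_all(s)]
print(incomplete)
```

Running the check confirms the point made in the text: every solution in the table
falls short of at least one dimension.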
References
1. Bouquet, P., Stoermer, H., Niederee, C., Mana, A.: Entity Name System: The Backbone of
an Open and Scalable Web of Data. In: Proceedings of the IEEE International Conference
on Semantic Computing (ICSC 2008), pp. 554-561 (2008).
2. Van Steen, M., Tanenbaum, A.: Distributed Systems: Principles and Paradigms. Prentice-
Hall (2001)
3. Object Management Group.: The common object request broker: Architecture and
specification Version 2.0. OMG Technical Report (1995)
4. Microsoft Corporation.: Distributed Component Object Model (DCOM) Remote Protocol
Specification.
http://msdn.microsoft.com/en-gb/library/cc201989%28PROT.10%29.aspx
5. Srinivasan. R.: RPC: Remote Procedure Call Protocol Specification Version 2. Network
Working Group RFC1831, http://tools.ietf.org/html/rfc1831 (1995)
6. Microsoft Corporation.: Microsoft Message Queuing.
http://www.microsoft.com/windowsserver2003/technologies/msmq/
7. Carzaniga, A., Rosenblum, D., Wolf, A.: Design and Evaluation of a Wide-Area Event
Notification Service. ACM Transactions on Computer Systems, 19:3, 332-383 (2001)
8. Gelernter, D.: Generative communication in Linda. ACM Transactions on Programming
Languages and Systems, 7:1, 80-112 (1985)
9. Wyckoff, P., McLaughry, S., Lehman, T., Ford, D: Tspaces. IBM Systems Journal, 37:3,
454-474 (1998)
10. Davies, N., Friday, A., Wade, S., Blair, G.: L2imbo: A Distributed Systems Platform for
Mobile Computing. ACM Mobile Networks and Applications (MONET), 3:2, 143-156
(1998)
11. Murphy, A., Picco, G., Roman, G.: LIME: A Middleware for logical and Physical
Mobility. In: 21st International Conference on Distributed Computing Systems (ICDCS-
21), pp. 524-533 (2001)
12. Booth D., Haas, H., McCabe, F., Newcomer, E., Champion, M., Ferris, C., Orchard, D.:
Web Services Architecture. W3C Working Group Note, http://www.w3.org/TR/ws-arch/
(2004)
13. Roman, M., Kon, F., Campbell, R.: Reflective Middleware: From Your Desk to Your
Hand. IEEE Distributed Systems Online, 2:5 (2001)
14. Kon F., Román, M., Liu, P., Mao, J., Yamane, T., Magalhães, L., Campbell, R.:
Monitoring, Security, and Dynamic Configuration with the dynamicTAO Reflective ORB.
In: Proceedings of the 2nd International ACM/IFIP Middleware Conference, pp. 121-143
(2000).
15. Grace, P., Blair, G., Samuel, S.: A Reflective Framework for Discovery and Interaction in
Heterogeneous Mobile Environments. ACM SIGMOBILE Mobile Computing and
Communications Review, 9:1, 2-14 (2005)
16. Duftler, M., Mukhi, N., Slominski, S., Weerawarana, S.: Web Services Invocation
Framework (WSIF). In: Proceedings of OOPSLA 2001 Workshop on Object Oriented
Web Services, Tampa, Florida (2001).
17. Object Management Group.: COM/CORBA Interworking Specification Part A & B. OMG
Technical Report orbos/97-09-07 (1997)
18. Bromberg, Y., Issarny, V.: INDISS: Interoperable Discovery System for Networked
Services. In: Proceedings of the IFIP/ACM/Usenix International Middleware Conference,
Grenoble, France, pp. 164-183 (2005)
19. Nakazawa, J., Tokuda, H., Edwards, W., Ramachandran, U.: A Bridging Framework for
Universal Interoperability in Pervasive Systems. In: Proceedings of 26th IEEE
Through Semantic Web. Information Systems and e-Business Management Journal, 4:4,
421-439 (2005)
38. Chakraborty, D., Joshi, A., Finin, T.: Toward Distributed Service Discovery in Pervasive
Computing Environments. IEEE Transactions on Mobile Computing, 5:2, 97–112 (2006)
39. Chakraborty, D., Joshi, A., Finin, T., Yesha, Y.: Service Composition for Mobile
Environments. Journal on Mobile Networking and Applications, Special Issue on Mobile
Services, 10: 4, 435-451 (2005)
40. Ben Mokhtar, S., Georgantas, N., Issarny, V.: COCOA: COnversation-based Service
Composition in PervAsive Computing Environments with QoS Support. Journal of
Systems and Software, Special Issue on ICPS’06, 80:12, 1941–1955 (2007).
41. Ben Mokhtar, S., Preuveneers, D., Georgantas, N., Issarny, V., Berbers, Y.: EASY:
Efficient SemAntic Service DiscoverY in Pervasive Computing Environments with QoS
and Context Support. Journal of Systems and Software, Special Issue on Web Services
Modelling and Testing, 81:5, 785-808 (2008)
42. Haas, M., Lin, E., Roth, M.: Data integration through database federation. IBM Systems
Journal, 41:4, 578-596 (2002)
43. Jung, J.: Taxonomy alignment for interoperability between heterogeneous virtual
organizations. Expert Systems with Applications, 34:4, 2721–2731 (2008)
44. Berlin, J., Motro, A.: Database schema matching using machine learning with feature
selection. In: Proceedings of the 14th International Conference on Advanced Information
Systems Engineering (CAiSE '02), pp. 452-466 Springer-Verlag, London, UK (2002)
45. Widom, J.: Trio: A System for Integrated Management of Data, Accuracy, and Lineage.
In: Second Biennial Conference on Innovative Data Systems Research (CIDR '05), Pacific
Grove, California (2005)
46. Vetere, G., Lenzerini, M.: Models for semantic interoperability in service-oriented
architectures. IBM Systems Journal, 44:4, 887-904 (2005)
47. Fagin, R., Kolaitis, P., Popa, L.: Data Exchange: Getting to the Core. In: Symposium on
Principles of Database Systems, pp. 90-101, ACM, New York (2003)
48. Euzenat, J., Shvaiko, P.: Ontology Matching. Springer-Verlag (2007)
49. Shvaiko, P., Euzenat, J., Giunchiglia, F., Stuckenschmidt, H., Mao, M., Cruz, I.:
Proceedings of the 5th International Workshop on Ontology Matching (OM-2010). CEUR
(2010)