IFIP
Working Group 2.2
Meeting
13 - 15 June, Shelter Island, USA.
Abstracts of the talks given
M. Broy
On the Meaning of Message Sequence Charts
Message Sequence Charts (MSCs) are a diagrammatic technique for
describing patterns of interaction between the components of an
interactive system. They have become rather popular in the design of software
architectures and distributed software systems. They are used
frequently to describe scenarios of interaction illustrating use cases
in software and systems development. Nevertheless, both the semantics
of MSCs as a means of specification and their methodological and
technical role in the development process have so far not been
precisely clarified. In this paper, we suggest a semantic model of MSCs in terms of
stream processing functions that allows us to use them as an
intuitively clear specification technique for the components of a
system with a precisely defined meaning. In contrast to other
semantic models for MSCs suggested in the literature, where the
meaning of message sequence charts is explained by state transition
machines and traces for the composed system, we define MSCs as a
technique to specify the behavior of the components of a
system. Furthermore, we discuss the systematic use of MSCs in the
software development process.
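The stream-processing view can be sketched concretely. Below is an illustrative Python analogue (the component name and message vocabulary are hypothetical, not from the paper): a component is a function from the stream of incoming messages to the stream of outgoing messages, and an MSC scenario constrains which output streams may answer which input streams.

```python
from typing import Iterable, Iterator

def acknowledger(inputs: Iterable[str]) -> Iterator[str]:
    """A component as a stream-processing function: it maps the
    (possibly infinite) input message stream to an output message
    stream. Here: acknowledge every 'req' message."""
    for msg in inputs:
        if msg == "req":
            yield "ack"

# An MSC scenario "req -> ack, req -> ack" then corresponds to a
# constraint relating this function's input and output streams.
print(list(acknowledger(["req", "idle", "req"])))  # ['ack', 'ack']
```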
U. Montanari (joint work with Carolyn Talcott)
Can Actors and Pi-agents Live together?
The syntax and semantics of actors and pi-agents are first defined
separately, using a uniform, "unbiased" approach. New coordination
primitives are then added to the union of the two calculi which allow
actors and pi-agents to cooperate.
Since we aim at modeling concurrent distributed systems, and are thus
interested in asynchronous behavior, we choose for comparison with actors
an asynchronous version of the pi-calculus.
The behaviors of both actors and pi-agents are defined by certain logic
sequents called tiles. A tile is a rewrite rule which describes a possible
evolution of a part s of the system which is matched by it. In addition, a
tile also describes the evolution of the interfaces of s with the rest of
the system. Thus two parts s and s' sharing an interface can be rewritten
only by tiles which agree on the evolution of the common interface. This
restriction introduces a powerful notion of synchronization among tiles,
and also makes it possible to see the synchronization of two tiles as a
(larger) tile. Eventually, all the possible evolutions are obtained by the
repeated composition (synchronized, or in sequence, or in parallel) of
certain small basic tiles called rewrite rules.
In the case of actors and pi-agents, it is convenient to take a
coordination point of view and to distinguish between the behavior of
agents in isolation and the behavior of coordinators, i.e. of system
components whose role is to connect agents and to control their behavior.
This approach allows us to abstract from the behavior "in the small" of
agents, which is presented in a syntax-independent, state-transition form,
and to focus on the behavior of coordinators, which are the most
characteristic feature of distributed systems. Correspondingly, we
distinguish between two kinds of rewrite rules: activity rules and
coordination rules. An activity rule describes an evolution of a single
sequential agent, and may produce some action at its interface with the
rest of the system. A coordination rule describes an evolution of a
coordinator, and may both require certain actions from the agents it
controls, and produce actions for a coordinator operating at a higher
level.
R. Cleaveland (joint work with G. Luettgen, V. Natarajan)
Observational Equivalences for Process Algebras with Priorities
Process algebras provide users with high-level, modular notations for
describing and reasoning about the behavior of communicating systems. These
languages typically include operators for assembling systems from collections
of components that execute by performing (uninterpreted) actions. Proving a
system correct in these formalisms relies on the use of a behavioral
equivalence; one formulates the specification of a system in the algebra and
then verifies the system description by showing that it is equivalent to the
specification. These equivalences typically relate systems on the basis of
their external, or "observable", behavior and are congruences: they permit the
substitution of "equals for equals" inside larger systems.
Traditional process algebras focus on modeling the concurrency and
nondeterminism that systems may exhibit during execution; in particular, all
enabled actions are viewed as equally likely to execute. However, in modeling
system features like interrupts or time-outs, some actions should be given
priority over others. To cater for such behavior, researchers have
investigated equipping actions with priority and pre-emption and have studied
the resulting semantic theories. However, although interesting language
features have been defined, the state of semantic equivalences has remained
somewhat unsatisfactory, as the congruences that have been proposed remain too
sensitive to internal system behavior.
In this talk I will describe work conducted with two colleagues on defining
semantic congruences for process algebras in which some actions take
precedence over others. I will define a behavioral equivalence based on the
notion of observational equivalence from traditional process algebra and
characterize the largest congruence contained in this relation. I will also
show how this relation can be given a complete equational axiomatization for
finite processes. Simple examples will be used to illustrate the utility of
this equivalence.
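As background for such equivalences, plain strong bisimilarity on a finite labelled transition system can be computed by naive partition refinement. The sketch below is a generic Python illustration of that standard building block; it ignores priorities and internal actions, which the talk's congruence treats more carefully.

```python
def bisimulation_classes(states, actions, trans):
    """Naive partition refinement computing strong bisimilarity
    classes of a finite LTS. trans maps (state, action) to the set
    of successor states. States are split apart until all members
    of a block can reach exactly the same blocks via each action."""
    partition = [frozenset(states)]
    changed = True
    while changed:
        changed = False
        def signature(s):
            # Which blocks of the current partition can s reach, and how?
            return frozenset(
                (a, i)
                for a in actions
                for i, block in enumerate(partition)
                if trans.get((s, a), set()) & block
            )
        new_partition = []
        for block in partition:
            groups = {}
            for s in block:
                groups.setdefault(signature(s), set()).add(s)
            if len(groups) > 1:
                changed = True
            new_partition.extend(frozenset(g) for g in groups.values())
        partition = new_partition
    return partition

# p and r both do one 'x' step to a deadlocked state: bisimilar.
classes = bisimulation_classes(
    ['p', 'q', 'r', 's', 't'], ['x'],
    {('p', 'x'): {'q'}, ('r', 'x'): {'s'}})
print(sorted(sorted(c) for c in classes))
```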
J. de Bakker (joint work with M.M. Bonsangue, F. Arbab,
J.J.M.M. Rutten, A. Scutella, G. Zavattaro)
A transition system semantics for a control-driven coordination language
Coordination languages are a new class of parallel programming
languages which manage the interactions among concurrent programs.
Basically, coordination is achieved either by manipulating data values
shared among all active processes or by dynamically evolving the
interconnections among the processes as a consequence of observations
of their state changes. The latter, also called control-driven
coordination, is supported by the MANIFOLD language. We present the
formal semantics of MANIFOLD, based on a two-level transition system
model: the first level is used to specify the ideal behavior of each
single component of a MANIFOLD application, whereas the second level
captures their interactions. Although we apply our two-level model in
this paper to define the semantics of a control-oriented coordination
language, this approach is useful for the formal study of other
coordination models and languages as well.
P. Panangaden
Stochastic Relations
The notion of binary relation is fundamental in logic. What is the correct
analogue of this concept in the probabilistic case? I will argue that the
notion of conditional probability distribution (stochastic kernel) is the
correct generalization. One can define a category based on stochastic
kernels which has many of the formal properties of the ordinary category of
relations. I will use the concept of stochastic relation to introduce some
of the ongoing joint work with Edalat and Desharnais on Labeled Markov
Processes.
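The analogy can be made concrete: where relations compose by "there exists an intermediate point", stochastic kernels compose by summing over intermediate points. A minimal Python sketch for finite state spaces (the kernel values here are made up for illustration):

```python
def compose_kernels(k1, k2):
    """Compose stochastic kernels k1 : X ~> Y and k2 : Y ~> Z, given
    as nested dicts k[x][y] = P(y | x). The composite is
    (k2 . k1)(z | x) = sum_y k1(y | x) * k2(z | y):
    the probabilistic analogue of relational composition, with
    'sum of products' in place of 'exists an intermediate point'."""
    out = {}
    for x, row in k1.items():
        acc = {}
        for y, p in row.items():
            for z, q in k2.get(y, {}).items():
                acc[z] = acc.get(z, 0.0) + p * q
        out[x] = acc
    return out

# Each row of a kernel sums to 1, and composition preserves this.
k1 = {"x": {"a": 0.5, "b": 0.5}}
k2 = {"a": {"z": 1.0}, "b": {"z": 0.5, "w": 0.5}}
print(compose_kernels(k1, k2))  # {'x': {'z': 0.75, 'w': 0.25}}
```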
J S. Moore
Write-Invalidate Cache Coherence: A Pedagogical Example of ACL2
As a pedagogical exercise in ACL2, we formalize and prove the correctness of
a write invalidate cache scheme. In our formalization, an arbitrary number of
processors, each with its own local cache, interact with a global memory
via a bus which is snooped by the caches.
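A toy version of such a scheme, with coherence checked by executable assertions, might look as follows. This is an illustrative Python sketch, not the ACL2 formalization; write-through updates and a one-word memory are simplifying assumptions.

```python
class System:
    """Toy write-invalidate coherence: n caches over one memory word.
    On a write, the bus snoop invalidates every other cache's copy,
    so any value a processor can read equals the most recent write."""
    def __init__(self, n):
        self.memory = 0
        self.caches = [None] * n          # None = invalid cache line

    def read(self, i):
        if self.caches[i] is None:        # miss: fetch from memory
            self.caches[i] = self.memory
        return self.caches[i]

    def write(self, i, value):
        # Snooping: all other caches invalidate their copies.
        self.caches = [None] * len(self.caches)
        self.caches[i] = value
        self.memory = value               # write-through for simplicity

system = System(3)
system.write(0, 7)
assert system.read(1) == 7 and system.read(2) == 7  # coherent view
```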
T. Ida (joint work with A. Middeldorp, T. Suzuki)
Lazy Narrowing Calculi
M. Sato (joint work with Rod Burstall and Takafumi Sakurai)
Explicit Environments
We present a simply typed lambda calculus which has environments as
first-class values. This calculus generalizes both the calculus of
explicit substitutions and record calculi. It enjoys desirable
properties such as subject reduction, confluence, conservativity over
the typed lambda-beta calculus, and strong normalizability.
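The flavor of environments as first-class values can be suggested in Python, where an environment is an ordinary value that can be extended and projected like a record. This is an informal analogy only, not the typed calculus itself.

```python
def close(term, env):
    """Pair a term (here: a Python function abstracted over its
    environment) with a first-class environment, yielding a closure.
    Environments behave like records: extension is record update,
    variable lookup is field projection."""
    return lambda: term(env)

env = {"x": 3, "y": 4}                 # an environment as a value
term = lambda e: e["x"] + e["y"]       # a term abstracted over its env
closure = close(term, env)
print(closure())                       # 7
bigger = {**env, "x": 10}              # environment update, like a record
print(close(term, bigger)())           # 14
```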
J. Esparza
Verification with Unfoldings
The automatic verification of finite state systems suffers from
the explosion of states caused by the many possible permutations
of concurrent events. Unfoldings are a verification technique
that avoids this explosion by disregarding the order of
concurrent events. They belong to the group of so-called
partial-order methods for model checking, which
also contains Valmari's stubborn sets
(implemented in the PROD tool), Godefroid's sleep sets
(implemented in Holzmann's SPIN), and others.
Petri nets are a natural model for the unfolding approach, since
they make concurrent events explicit, but the technique can be
equally well applied to communicating
automata. The behaviour of the Petri net is captured by unfolding the
net into an infinite acyclic occurrence net. The unfolding
operation is similar to the unwinding of a finite automaton into an
infinite acyclic automaton, but retains the concurrency aspects of the
net.
The use of unfoldings for automatic verification was first
proposed by K.L. McMillan in his Ph.D. thesis, where he showed that the
infinite unwinding can be terminated when it contains full information
about the reachable states of the original net, even though the states
are not represented explicitly. In this sense, this prefix of the
infinite unwinding can be seen as a compact encoding of the state space.
Since the permutations of concurrent events are not enumerated, the
prefix can be much smaller than the state
graph of the system. A weakness of McMillan's original proposal was
that the prefix could also be
larger (even exponentially larger) than the state graph.
This problem was solved by Esparza, Roemer, and Vogler by means
of an improved criterion for termination.
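The gap between the two representations is easy to quantify under the simplifying assumption of n pairwise independent events: the interleaved state graph has 2^n reachable states and n! maximal firing sequences, while the unfolding records just the n events and their (empty) ordering constraints.

```python
from math import factorial

# n pairwise independent (concurrent) events:
#   2^n  reachable states in the interleaved state graph,
#   n!   maximal firing sequences (orderings),
#   n    events in the unfolding.
for n in (4, 8, 16):
    print(f"n={n}: states={2**n}, interleavings={factorial(n)}, events={n}")
```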
Different authors have defined, implemented and tested
algorithms which use unfoldings to efficiently solve a number of
verification problems: deadlock detection, reachability,
concurrency of events, model-checking for both branching and linear
time logics, and conformance between trace structures. Most
of these algorithms are part of PEP, a modelling and
verification tool developed at the University of Oldenburg and
the Technical University of Munich.
Unfolding techniques have been applied to problems in different
areas, such as design of asynchronous circuits, verification
of protocols and manufacturing systems,
management of telecommunication networks, and others.
The talk presents the basic ideas of the unfolding technique,
compares it with others, and provides performance statistics
on some case studies. It finishes with pointers to further
information and available software.
F. Parisi-Presicce (joint work with M. Grosse-Rhode, M. Simeoni)
Refinement Modules for Graph Transformation Systems
S. Merz
Quantifier Rules for (Propositional) Temporal Logic
Allowing quantification over flexible variables strictly increases the
expressiveness of propositional temporal logic of linear time, which is then
equivalent to S1S. It is useful for the description of systems with internal
(hidden) state components.
I will discuss proof rules for flexible quantification, including rules for
auxiliary variables such as history and prophecy variables. Following Kesten
and Pnueli's LICS'95 paper, one obtains a completeness result by associating
a Büchi automaton with every QPTL formula. I will then show that the rule
for prophecy variables can be derived, whereas a derivation of the rule for
history variables requires a language with past-time operators. Finally, I
will consider quantification for stuttering-invariant logics and add
stuttering variables to obtain a completeness result for quantified
propositional TLA.
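A standard illustration of the added expressive power (a classic example, not taken from the abstract): "p holds at every even position" is not definable in propositional temporal logic, yet in QPTL one can quantify a flexible proposition t that alternates, starting true:

```latex
\exists t.\; \bigl(\, t \;\wedge\; \Box\,(t \leftrightarrow \lnot \bigcirc t)
               \;\wedge\; \Box\,(t \rightarrow p) \,\bigr)
```

The quantified t marks exactly the even positions, playing the role of the internal (hidden) state component mentioned above.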
G. Holzmann
Requirement Analysis and Software Design
If enough people are involved in the design of a
piece of software, communicating a clear message
on the design constraints and the design objectives
can become a problem. The design requirements must
be carefully worded and documented, so that they can
be used to resolve possible ambiguities for designers,
coders, and testers later.
A significant design effort therefore often starts
with an exploration of the design requirements by
systems engineers. This phase is followed by design
proper, coding, and testing.
It is commonly believed that a large part of the
faults found in testing and in the field can be traced
back to minor or major misjudgements in the first
phases of design: most of the bugs appear to be
introduced before any part of the software is even written.
In the talk I will try to show why this is good news.
I'll demonstrate some simple tools that can be used to
intercept common design flaws in the earliest phases of
software design.
D. Sangiorgi (joint work with B. Pierce)
Behavioural Equivalence in the Polymorphic pi-calculus
I will discuss parametric polymorphism in message-based concurrent
programs, focusing on behavioral equivalences in a typed process
calculus analogous to the polymorphic lambda-calculus of Girard and
Reynolds.
Polymorphism constrains the power of observers by preventing them from
directly manipulating data values whose types are abstract, leading to
notions of equivalence much coarser than the standard untyped ones.
I will explain the nature of these constraints through simple examples of
concurrent abstract data types and propose basic theoretical machinery
for establishing bisimilarity of polymorphic processes.
I will also show some surprising interactions between polymorphism and
aliasing, drawing examples from both the polymorphic pi-calculus and
ML.
WG 2.2
(darondeau@irisa.fr)