Abstracts of the talks
Willem-Paul de Roever
The Verification of Concurrency: from Noncompositional to
Compositional Proof Methods.
An outline is given of a forthcoming book for classroom teaching of
the verification of concurrent (and distributed) systems leading up to
a level enabling research in this field. Parts of the method have been
tested during courses given at the universities of Eindhoven, Utrecht
and Kiel. Its authors, besides the speaker, are: Frank de Boer, Ulrich
Hannemann, Jozef Hooman, Yassine Lakhnech, Paritosh Pandya, Mannes
Poel, Henk Schepers, Qiwen Xu, and Job Zwiers.
In the book a clear separation is made between the mathematical
principles underlying concurrent program verification, as based on
the inductive assertion method, and their syntactic formulation in
Hoare logic. One of the reasons is that soundness and completeness
proofs for Hoare logic are regarded as unteachable; for the
inductive assertion method, however, they are (almost) obvious. In the talk an
algebraic technique is presented for lifting these proofs from the
level of inductive assertions to Hoare Logic. At first sight this
concerns noncompositional proof principles for shared variable
concurrency and distributed communication (based on work by Owicki &
Gries, Levin & Gries, and Apt, Francez & de Roever). Recent advances
also enable the formulation of a compositional proof rule for
concurrency (and distributed communication) within the inductive
assertion framework (a result due to Frank de Boer). It is shown that
this algebraic technique for lifting proofs also extends to such
compositional rules, and, therefore, also applies to compositional
proof methods.
Leslie Lamport
How to Write a Proof?
A method of writing proofs is proposed that makes it much harder to
prove things that are not true. The method is based on hierarchical
structuring. Formally, it is just natural deduction reformatted. In
practice, it turns such proofs from an object of study by logicians
into a practical method of writing large, believable proofs.
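As a hedged illustration of the hierarchical numbering style (the statement and all step labels below are our own sketch, not material from the talk):

```
THEOREM  For every natural number n, 1 + 3 + ... + (2n-1) = n^2.
<1>1. The claim holds for n = 0.
  PROOF: The empty sum equals 0 = 0^2.
<1>2. If the claim holds for n, it holds for n+1.
  <2>1. 1 + 3 + ... + (2n-1) + (2(n+1)-1) = n^2 + 2n + 1.
    PROOF: By the induction hypothesis of <1>2.
  <2>2. n^2 + 2n + 1 = (n+1)^2.
    PROOF: By algebra.
  <2>3. Q.E.D.
    PROOF: By <2>1 and <2>2.
<1>3. Q.E.D.
  PROOF: By <1>1, <1>2, and mathematical induction.
```

Each numbered step can itself be expanded into a lower-level proof, so an error cannot hide in an appeal to "obviously".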
Peter Mosses
Theory and Practice of Action Semantics.
Action Semantics is a framework for the formal description of
programming languages. Its main advantage over other frameworks is
pragmatic: action-semantic descriptions (ASDs) scale up smoothly to
realistic programming languages. This is due to the inherent
extensibility and modifiability of ASDs, ensuring that extensions and
changes to the described language require only proportionate changes
in its description. (In denotational or operational semantics, adding
an unforeseen construct to a language may require a reformulation of
the entire description.)
After sketching the background for the development of action
semantics, we summarize the main ideas of the framework, and provide a
simple illustrative example of an ASD. We identify which features of
ASDs are crucial for good pragmatics. Then we explain the foundations
of action semantics, and survey recent advances in its theory and
practical applications. Finally, we assess the prospects for further
development and use of action semantics.
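As a rough flavour of what an ASD fragment looks like (our own sketch in approximate action notation; the phrasing is invented for illustration, not an excerpt from any actual description):

```
evaluate [[ E1 "+" E2 ]] =
    evaluate E1 and then evaluate E2
    then give the sum of (the given integer#1, the given integer#2)
```

Because each construct's description is a self-contained combination of such action primitives, adding a new construct to the language adds a new equation rather than forcing a reformulation of existing ones.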
The action semantics framework was initially developed at the
University of Aarhus by the lecturer, in collaboration with
David Watt (University of Glasgow). Groups and individuals scattered
around five continents have since contributed to its theory and
practice.
J Strother Moore
Recent Commercial Applications of the ACL2 Mechanical Theorem Prover
ACL2 is a reimplemented, extended version of Boyer and Moore's Nqthm
(including Kaufmann's optional proof-checker interface), intended for
large scale verification projects. We describe several recent
applications of this new theorem prover. In one application, ACL2 was
used to prove the correctness of the floating-point division algorithm
for the AMD5K86, the first Pentium-class microprocessor produced by
Advanced Micro Devices, Inc. This required formalizing and deriving
much of the "folklore" of floating-point arithmetic. ACL2 has also
been used to prove the correctness of the floating-point square root
algorithm on the AMD5K86. We also discuss the use of ACL2 to
formalize a bit-accurate model of the complex arithmetic processor
(CAP) being developed by Motorola, Inc. The CAP is a single-chip,
digital signal processor (DSP) optimized for communications signal
processing. The ACL2 model executes several times faster than the
standard SPW engineering model of the CAP. With ACL2 we proved the
correctness of a predicate which recognizes CAP programs that avoid
pipeline hazards. In addition, we proved the correctness of several
CAP microcode programs for DSP applications.
Hans Langmaack
A Contribution to Goodenough's and Gerhart's Theory
of Software Testing and Verification:
Relation between Strong Compiler Test and Compiler Implementation Correctness
The lecture treats the problem of how to reach the highest degree of
quality in IT safety. The main problem is how to create initial, fully
correct compiler programs such that not only their translation
functions but, above all, their implementations down to the binary code of
real processors are correct with mathematical certainty. The literature
has neglected the latter problem. L.M. Chirica and
D.F. Martin (1986) and J S. Moore (1988) were the first to formulate explicitly the
necessity not only of translation specification verification, but also
of compiler implementation verification.
Assume that an inductively specified translation relation or function is
verified, and that the target and host machine languages are the same
language TL. N. Wirth (1977) recommends first implementing the source
language SL by a compiler pi_c1 written in SL
itself. That can be done easily and correctly if SL has rich
recursion and data structures, e.g. SL = ComLisp, a sublanguage of
the available ANSI-CommonLisp, which, however, is implemented only by
unverified compilers pi_c0.
Refinement of pi_c1, written in SL, down to TL
can be done mechanically by twofold bootstrapping. Additional, but
insufficient, certainty about the correctness of the wanted, generated SL
--> TL-compiler pi_c, implemented in TL, can be
achieved by a third bootstrapping and a subsequent so-called strong
compiler test. But this test might succeed even if the builder of the
pi_c0-compiler, in a hacker's mentality, has arranged for the
intrusion of a virus into pi_c via pi_c0. This fact is
a main reason why certification institutions are presently
right not to trust any compiler, and to certify high-level
safety-relevant application programs only together with their
generated binary machine code. On the other hand, we can detect any
virus in pi_c if we perform proper mathematical control (double
checking) of the result pi_c for the input pi_c1 against the
verified translation specification. (Such mathematical control or
double checking must be done for any test computation of any algorithm
if the algorithm is not verified.)
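The bootstrap-and-double-check argument can be sketched in a toy model (entirely our own construction: the compilers, the "VIRUS" marker, and the upper-casing "translation specification" are hypothetical stand-ins, not Langmaack's actual setting):

```python
# Toy model: compilers are functions String -> String, and the verified
# translation specification says compiled code is the source upper-cased.

def spec_translate(src):
    """The verified translation specification SL -> TL (a toy stand-in)."""
    return src.upper()

def make_compiler(binary):
    """Interpret a TL 'binary' as a compiler.  A binary containing the
    marker VIRUS re-inserts the marker into every compiler it builds
    (Thompson's 'trusting trust' effect)."""
    def compile_(src):
        out = src.upper()
        if "VIRUS" in binary and "COMPILER" in out:
            out += "VIRUS"
        return out
    return compile_

pi_c1 = "the sl->tl compiler, written in sl itself"  # mentions 'compiler'
pi_c0 = make_compiler("COMPILERVIRUS")               # untrusted host compiler

b1 = pi_c0(pi_c1)                  # first bootstrap: pi_c1 compiled by pi_c0
pi_c = make_compiler(b1)(pi_c1)    # second bootstrap
b3 = make_compiler(pi_c)(pi_c1)    # third bootstrap

# The strong compiler test succeeds although pi_c carries the virus:
assert pi_c == b3
assert "VIRUS" in pi_c

# Double checking against the verified specification detects it:
assert pi_c != spec_translate(pi_c1)
```

The point of the toy run: self-reproducing sabotage survives any number of bootstraps, but it cannot survive a comparison of the generated code against the verified translation specification.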
Since double checking a whole compiler's binary code pi_c is
an error-prone task, we may ask, as in J.B. Goodenough's and
S.L. Gerhart's theory of test data selection (1975): under which
assumptions about pi_c, as weak as possible, does a successful strong
compiler test imply pi_c's correctness? Answer:
translate SL to TL via appropriately defined
intermediate languages. Careful argumentation saves "half" of the
control work, especially the inspection of low-level code. In a way,
compiler pi_c0 is used as a proof assistant which generates
compiler program printouts that, juxtaposed, represent a proof
(derivation) tree. The proofs have ordinary program parts as constituents,
understandable by ordinary programmers, who need not learn special
formal logics as users of theorem provers must. This is in
accordance with ideas of A. Robinson (1996), who recommends that
computer scientists should try to mechanize the informal proving of
mathematicians.
Masahiko Sato
A typed lambda calculus with environments
We introduce a typed lambda calculus that has environments as
first-class objects. This calculus is a natural extension of the calculi of
explicit substitutions: as in those calculi, substitutions are manipulated
explicitly as internal operations, whereas they are performed at
the meta level in the ordinary typed and untyped lambda calculi.
Unlike the calculi of explicit substitutions, however, environments are
first-class objects, so environments can be passed as arguments
and returned as the results of computations.
We also argue that in this setting it is possible to regard
assignments not as side effects but as synonyms for certain let
constructs.
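A minimal sketch of first-class environments (our own toy interpreter, not Sato's calculus; the expression tags and the reify/with constructs are invented for illustration):

```python
# Toy interpreter in which environments are ordinary values that
# programs can build, pass around, and install for evaluation.

def eval_(expr, env):
    tag = expr[0]
    if tag == "num":                      # ("num", n): a literal
        return expr[1]
    if tag == "var":                      # ("var", x): variable lookup
        return env[expr[1]]
    if tag == "lam":                      # ("lam", x, body): a closure
        return ("closure", expr[1], expr[2], env)
    if tag == "app":                      # ("app", f, a): application
        _, x, body, cenv = eval_(expr[1], env)
        arg = eval_(expr[2], env)
        return eval_(body, {**cenv, x: arg})
    if tag == "reify":                    # ("reify",): current env as a value
        return dict(env)
    if tag == "with":                     # ("with", e, body): evaluate body
        return eval_(expr[2], eval_(expr[1], env))  # in a computed env
    raise ValueError(tag)

# An environment captured in one place and installed in another:
assert eval_(("with", ("reify",), ("var", "x")), {"x": 42}) == 42

# An environment returned as the result of a computation:
assert eval_(("reify",), {"y": 1}) == {"y": 1}
```

The `with` form is what ordinary lambda calculi cannot express: its first argument is an arbitrary computation whose result is an environment.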
Prakash Panangaden
Semantic Foundations of Concurrent Constraint Programming
Recently constraint programming has emerged as an independent programming
paradigm evolving from developments in both functional and logic
programming. Conceptually, however, one can describe constraint
programming as a more primitive notion than what is traditionally called
logic programming. The key operational idea is to think of processes
interacting via a collection of shared data. The two basic communication
operations that can be performed by a process are ASK and TELL, which pose
entailment queries and make assertions, respectively. In this talk the focus
will be on a semantical theory based on Dana Scott's notions of
information. The idea is to model processes as closure operators on
lattices. Within this semantical framework parallel composition, hiding
and communication can be modelled very simply. There is also a very
pleasing correspondence between operations on closure operators, modelling
process combinators, and logical connectives. This connection is made by
showing that both first-order logic and ccp form hyperdoctrines. This talk
touches on joint work with several people, including Vijay Saraswat, Martin
Rinard, Radhakrishnan Jagadeesan, Keshav Pingali, Phil Scott and Robert
Seely.
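The ask/tell-as-closure-operators idea can be sketched over a finite store (our own toy model; entailment is simplified to plain set membership, an assumption not made in the talk's general theory):

```python
# A store is a frozenset of primitive constraints; a process is a
# closure operator on stores: extensive, monotone, and idempotent.

def tell(c):
    """TELL: add the constraint c to the store."""
    return lambda s: s | {c}

def ask(c, p):
    """ASK: behave as p once the store entails c; otherwise do nothing."""
    return lambda s: p(s) if c in s else s

def par(p, q):
    """Parallel composition: iterate both operators to a common fixpoint."""
    def run(s):
        while True:
            t = q(p(s))
            if t == s:
                return s
            s = t
    return run

p = par(tell("a"), ask("a", tell("b")))
s = p(frozenset())
assert s == {"a", "b"}        # the ask is triggered by the tell

# Closure-operator laws, checked on this example:
assert p(s) == s              # idempotent
assert frozenset() <= s       # extensive
```

Parallel composition needs no extra machinery: the common fixpoint of two closure operators is again a closure operator, which is what makes the semantics so simple.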
T. H. Tse
Department of Computer Science,
The University of Hong Kong
Modelling and Visualizing Object-Oriented Systems:
a NOODLE Approach *
Various problems have been identified in popular object-oriented analysis
methodologies because of the lack of a supporting theoretical foundation.
For example: object-oriented concepts are introduced only loosely and may be
interpreted differently by different individuals. Only syntax and static consistency
checks are supported by CASE tools. Potential concurrency problems in target
systems must be identified by human intuition from the graphical display.
We propose an object-oriented model based on predicate-transition nets, with
equivalent algebraic and graphical representations. We illustrate by means
of a 3-dimensional graphical prototype how the static and dynamic properties
of classes, attributes, methods and their relationships are modelled by
predicates, transitions, arcs carrying algebraic terms, and net refinements.
We also illustrate through the prototype how inheritance and overloading can
be unified with message passing in the 3-dimensional model, but visualized
as independent concepts by projecting vertically and horizontally onto
2-dimensional planes.
Moreover, we can map the formal model to popular object-oriented analysis
notations, so as to help to validate and verify formally the correctness of
specifications in existing methodologies, including reachability, structural
and behavioural consistencies, and inheritance and interaction couplings.
* This research is supported in part by a grant of the Research Grants
Council.
Kees Middleburg
United Nations University,
International Institute for Software Technology,
Macau
Semantic Models for SDL that Support its Timing Mechanism.
A new semantics of an interesting subset of the specification language
SDL is presented. The strength of the chosen subset, called φ-SDL,
is its close connection with full SDL, despite its dramatically
reduced size. Thus, we are able to concentrate on solving the basic
semantic issues without being in danger of having to turn the results
inside out in order to deal with full SDL. The presented semantics is
given by a translation to a discrete-time extension of process algebra
in the form of ACP. Novel to this semantics is that it relates the
time used with timer setting to the time involved in waiting for
signals and delay of signals. Additionally, it is sketched how an
existing process algebra model for asynchronous dataflow networks can
be adapted to match the concepts around which SDL has been set up.
The resulting model is well suited as an underlying model for a more
abstract semantics of SDL. It is expected to facilitate the quest for
rules of reasoning about φ-SDL specifications.
Jaco de Bakker (joint work with Jerry den Hartog and Erik de Vink)
Metric Semantics for Action Refinement.
We present a case study of semantic design based on the methodology as
developed in our book [BV]. We focus on a language with action refinement in
the presence of parallel composition and synchronization (and
recursion, inducing infinite behaviour and interesting limit
phenomena). Compared with Chapter 14 of [BV], the approach is
- simplified, since no atomization of the refining statement is
imposed any more. This is achieved by strengthening the notion of semantic
equivalence: two statements are equivalent if their meanings coincide
under all refinements;
- extended, in that synchronization - in the sense of CCS - is now also
catered for.
For our example language we develop a linear operational and a
branching - or bisimulation - denotational model, and we prove the
relationship which holds between these two.
[BV] Jaco de Bakker and Erik de Vink, Control Flow Semantics,
The MIT Press, 1996.
SPIC Mathematical Institute, Madras, India
Linear-Time Temporal Logics over Mazurkiewicz Traces
Linear-time temporal logic is a well-established tool for
specifying and reasoning about the computations performed by
distributed systems. Traditionally, computations of these
systems are represented as infinite sequences of global states.
The atomic assertions of the logic describe properties of these
global states.
However, it is often the case that the distributed system to be
analyzed is presented as a parallel composition of finite-state
sequential systems. In such a framework, global properties of
the system are just combinations of local properties of each
component. From the point of view of improving the practical
efficiency of automated verification methods based on temporal
logic, it seems useful to be able to directly define logics which
reflect the distributed structure of the system.
In this talk, we survey some logics which have arisen from this
approach. Formally, the logics are interpreted over Mazurkiewicz
traces generated by distributed alphabets. Both atomic
propositions and modalities are interpreted in a "local"
fashion. We describe automata theoretic methods for solving the
satisfiability and model-checking problems for these logics, as
well as some issues which remain to be resolved before these
logics become "usable".
This talk describes joint work with Milind Sohoni, as well as
work by R. Ramanujam and P. S. Thiagarajan.
He Jifeng
Linking Theories in the Design of a Multi-Target Hardware Compiler.
We perform such transformations using a set of protocol converters.
Then we use an automatic tool to check that our lower-level
implementations meet their specifications. In order to fill the gap
between various notations used at different levels, we establish
formal mathematical links among CSP (used for the high-level
specifications of hardware), DI (delay-insensitive algebra used for
the descriptions of asynchronous circuits) and Clocked CSP (for the
specifications of synchronous circuits).
Zhou Chaochen (joint work with Michael R. Hansen)
The paper uses left and right
neighbourhoods as primitive interval modalities
to define other unary and binary modalities of
intervals in a first order logic with interval
length. A complete first order logic for
the neighbourhood modalities is presented. The
paper demonstrates how the logic can support
formal specification and verification of liveness
and fairness properties, and also of properties of
hybrid systems by defining in the logic
various notions of real analysis.
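As a hedged sketch of the primitives (our own rendering of the standard neighbourhood-modality semantics over intervals [a,b], with ℓ denoting interval length; the derived modality shown is one illustrative possibility):

```latex
% Right and left neighbourhood modalities:
[a,b] \models \Diamond_r\,\varphi \iff \exists c \ge b.\; [b,c] \models \varphi
[a,b] \models \Diamond_l\,\varphi \iff \exists c \le a.\; [c,a] \models \varphi
% A derived modality: \varphi holds on some interval beginning at the
% current left end point, obtained by contracting to a point first:
\Diamond_r^{0}\,\varphi \;\triangleq\; \Diamond_l\,(\ell = 0 \wedge \Diamond_r\,\varphi)
```

Other unary and binary interval modalities arise similarly, by nesting the two neighbourhood modalities and constraining ℓ.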
Paritosh K. Pandya
Recursion in Duration Calculus.
Ernst-Ruediger Olderog
(joint work with Cheryl Dietz and Henning Dierks)
Design of Real-Time Systems: From CDs to PLCs via DC.
This work is motivated by the UniForM Workbench project where
the Universities of Bremen and Oldenburg collaborate together
with an industrial partner from Berlin engaged in the construction
of railway signalling systems. In this project the Oldenburg group
is conducting an industrially relevant case study concerned
with the safety of a one-way rail track. One of the characteristics
of the case study is the use of programmable logic controllers (PLCs)
for implementing the control software.
In this talk we outline an approach to the design of real-time systems
that will be pursued in the project. Requirements are formulated in a
graphical formalism called Timed Constraint Diagrams (CDs).
A CD consists of a number of waveforms that describe the timewise
behaviour of observables and a number of arrows that describe the
timed interdependencies between these waveforms. The formal semantics of
CDs is given in a subset of Duration Calculus (DC) [1].
Design specifications are formulated as so-called PLC-automata [2].
These can be understood as a special class of timed automata
that model in an abstract way the cyclic behaviour of PLCs.
PLC-automata also have a formal semantics expressed in DC.
This allows one to prove formally that PLC-automata satisfy the
requirements expressed as CDs. Moreover, PLC-automata can be
easily compiled into PLC code.
We demonstrate the approach using the case study of the Generalized
Railroad Crossing proposed by Heitmeyer and Lynch as a benchmark for
comparing formal methods in real-time computing [3].
References
[1] C. Dietz. Graphical formalization of real-time requirements.
In B. Jonsson and J. Parrow, editors, Formal Techniques in Real-Time
and Fault-Tolerant Systems, LNCS 1135, pages 366-385. Springer-Verlag, 1996.
[2] H. Dierks. PLC-automata: a new class of implementable real-time
automata. Manuscript, Univ. Oldenburg, Nov. 1996.
[3] E.-R. Olderog, A.P. Ravn, and J.U. Skakkebaek. Refining system
requirements to program specifications. In C. Heitmeyer and D. Mandrioli,
editors, Formal Methods for Real-Time Computing, vol. 5 of Trends in
Software, pages 107-134. Wiley, 1996.
Egon Boerger
Parnas Tables and Abstract State Machines
We show that Parnas' approach of using function tables for precise
program documentation can be generalized in a natural
way by using Gurevich's Abstract State Machines for well-documented
program development, as proposed by us in [1].
Parnas' function table approach and the ASM approach to
program documentation share a large common ground, pragmatically,
conceptually and methodologically. Pragmatically, they both aim at
supporting the understanding of programs by humans - for purposes like
review, maintenance, use by application domain experts. Conceptually,
they both are based on the use of functions, of the notion of states
and their dynamics as a function of time, and of the distinction of
environmental (monitored) and controlled quantities. Methodologically,
they are both concerned a) to use only standard mathematical language
to provide unambiguous, complete descriptions using simple notation,
b) to reveal the structure of programs by identifying building blocks
and by keeping traces of their development, and c) to "show" the
correctness of programs or program parts by mathematical reasoning.
ASMs offer a finer-grained use of functions (which, by
the way, are allowed to come with an arbitrary finite number of
arguments and are also classified into dynamic and static
functions). ASMs provide methods for the semantic
structuring of programs (based on the systematic use of stepwise
refinements). In addition, through a simple yet precise semantics they yield
a safe mathematical foundation for the use of function tables. (This
semantics is based only on first-order logic and directly supports the
intuitive understanding of programs by programmers.)
In this talk we illustrate our claims by translating some typical
Parnas tables, taken from the literature, into ASMs.
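As a toy illustration of reading a function table as ASM guarded updates (our own invented example, not one of the tables from the literature discussed in the talk):

```python
# A Parnas-style function table for a controlled quantity 'alarm',
# given as rows of (condition, value).  The ASM reading: each row is a
# guarded update rule, and all applicable rules fire in one step.

def asm_step(state):
    """One ASM step.  Monitored quantity: temp; controlled: alarm."""
    table = [
        (lambda s: s["temp"] > 100, "on"),    # row 1 of the table
        (lambda s: s["temp"] <= 100, "off"),  # row 2 (rows disjoint and
    ]                                          # total, as Parnas requires)
    nxt = dict(state)
    for guard, value in table:
        if guard(state):
            nxt["alarm"] = value
    return nxt

assert asm_step({"temp": 120, "alarm": "off"})["alarm"] == "on"
assert asm_step({"temp": 80, "alarm": "on"})["alarm"] == "off"
```

Disjointness and totality of the table rows correspond exactly to the consistency of the ASM rule set: in every state exactly one update of each controlled quantity fires.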
[1] C. Beierle, E. Boerger, I. Durdanovic, U. Glaesser, and
E. Riccobene. Refining abstract machine specifications of the
steam boiler control to well documented executable code.
In J.-R. Abrial, E. Boerger, and H. Langmaack, editors, Formal Methods
for Industrial Applications: Specifying and Programming the
Steam-Boiler Control, LNCS State-of-the-Art Survey, vol. 1165,
pages 52-78. Springer, 1996.
Tang Zhisong
Institute of Software,
Academia Sinica,
Beijing
Some Features in XYZ System.
The XYZ System consists of a temporal logic language, XYZ/E, and a toolkit,
with the goal of unifying various programming paradigms within a
simple formal framework and of serving as a universal basis for the
entire software development process. Carrying out this plan confronts us
with many difficult problems. These include, for
example, how to guarantee the decomposability of the semantics of
parallel statements of the communicating processes, how to extend the
temporal operators with real time bound on the basis of the temporal
operators without time bound, how to combine the dynamic semantics of
message passing with the static semantics of the data modules in OOP,
how to formalize and verify programs with pointers in order to
represent dynamic binding of communicating processes, etc. The ways
to overcome these difficulties in XYZ will be explained.
The toolkit consists of 5 groups of tools:
- A Hoare logic verifier and a real time program generator.
- An integrated system to support stepwise refinement and rapid
prototyping.
- Two kinds of graphic tools to support distributed system design
and information system design.
- A language transformation system which consists of an attribute
grammar evaluator to do static type and scope rule checking and to
replace the context-sensitive items for the source language and an
automatic transformation system from source language to XYZ/E based
on replacement rules.
- A module management system for these tools.
Shigeru Igarashi
Amir Pnueli
Marrying Statecharts with Multi-Clock Formalisms
In the talk we considered a common semantic base for mixing together
specifications written in the Statecharts visual formalism and
specifications written in a multi-clock synchronous language such as
Signal. We distinguished between "volatile" variables which represent
events of very short duration (spikes) and "persistent" variables
which maintain their values until modified by an explicit action. The
proposed common semantics, based on the notion of a "synchronous
transition system" (STS), uses persistent variables to represent
both types of variables, and introduces explicit "shutdown" operations
into the transition relation, ensuring that the "event" variables are
automatically turned off, unless generated again in the next step.
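The shutdown idea can be sketched in a toy step function (our own model; the variable names are invented for illustration):

```python
# Every variable is stored persistently; a step first applies the
# transition's assignments, then a 'shutdown' pass turns off every
# event variable not emitted in that step, so events act as spikes.

EVENT_VARS = {"request", "ack"}          # volatile (spike-like) variables

def sts_step(state, assignments):
    nxt = dict(state)
    nxt.update(assignments)
    for v in EVENT_VARS - set(assignments):   # the shutdown operations
        nxt[v] = False
    return nxt

s0 = {"request": False, "ack": False, "count": 0}
s1 = sts_step(s0, {"request": True, "count": 1})
s2 = sts_step(s1, {"count": 2})          # request not re-emitted

assert s1["request"] is True
assert s2["request"] is False            # automatically turned off
assert s2["count"] == 2                  # persistent variable keeps its value
```

Representing both kinds of variables persistently and recovering the volatile behaviour through shutdown is what lets Statecharts events and Signal-style clocked flows share one transition relation.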
The need for such a common semantic base arises in the Esprit project
Sacres, which provides a formal specification methodology combined with
automatic code generation for the development of safety-critical systems.
Egidio Astesiano (joint work with G. Reggio)
Rewriting Logic and Labelled Transition Logic: Comparison and Cross
Fertilization
Rewriting Logic RL (by Meseguer) and Labelled Transition Logic LTL
(by Astesiano and Reggio) are two specification formalisms which:
- have similar application fields (concurrent, parallel,
distributed and object-oriented systems);
- have common features:
  - they use / are based on algebraic specifications;
  - the systems are typed;
  - the activity of systems is operationally modelled by transitions;
  - the sources and targets of the transitions (intermediate states in
the life of the systems) are represented by terms in (a model of) an
algebraic specification;
- have strongly distinguishing features, e.g.:
  - in LTL the transitions are labelled, and the specifications may
have different semantics, such as initial semantics and variants of
observational semantics (among them the various bisimulation semantics:
strong, weak, ...);
  - in RL the transitions are decorated with "proofs" describing such
activity in some sense, by naming some subactivities and making explicit
the role of some components;
- have different ways of formally presenting "things" which may be
essentially equivalent, e.g. in RL the models are described using the
language of category theory, while in LTL they are described using the
language of first-order logic (universal algebras).
We study the nature of the relationships between LTL and RL,
in order to find out:
- which are the really different points;
- which parts are more or less the same, differing mainly in presentation;
- whether any "cross fertilization" is possible between the two formalisms,
i.e. whether features peculiar to one formalism may be added to the other.
Notice that our aim is not simply to find a translation/simulation
between the two formalisms.
Since syntactically the specifications in the two formalisms are very
similar, as a result of this work, we also make clear that
their intuitive meanings/interpretations are very different.
Andrzej Tarlecki (joint work with
D. Sannella)
Mind the gap! Abstract versus concrete models of specifications.
In the theory of algebraic specifications, many-sorted algebras are
used to model programs: the representation of data is arbitrary and
operations are modelled as ordinary functions. The theory that
underlies the formal development of programs from specifications takes
advantage of the many useful properties that these models enjoy.
The models that underlie the semantics of programming languages are
different. For example, the semantics of Standard ML uses rather
concrete models, where data values are represented as closed
constructor terms and functions are represented as "closures". The
properties of these models are quite different from those of
many-sorted algebras.
This discrepancy brings into question the applicability of the theory
of specification and formal program development in the context of a
concrete programming language, as has been attempted in the
Extended ML framework for the formal development of Standard ML
programs. This paper is a preliminary study of the difference between
abstract and concrete models of specifications, inspired by the kind
of concrete models used in the semantics of Standard ML, in an attempt
to determine the consequences of the discrepancy.
WG 2.2
(darondeau@irisa.fr)