Logic and Computation Seminar at Penn
Penn Logic and Computation Seminar 1999-2000
The logic and computation group is composed of faculty and
graduate students from the
computer and information science,
mathematics, and
philosophy
departments, and participates in the
Institute for Research in the Cognitive Sciences.
The logic and computation group runs a weekly seminar. The
seminar meets regularly during the school year on Mondays at 4:30 p.m.
in room 554 of the Moore Building at the University of
Pennsylvania. Any changes to this schedule will be specifically
noted. The seminar is open to the public and all are welcome.
Aug 28: James Leifer, Cambridge University, Deriving operational congruences for reactive systems
Aug 14: Jen Davoren, Australian National University, Canberra, Logic-based design and synthesis of controllers for hybrid systems
Aug 10: Randy Pollack, University of Durham, Dependently Typed Records for Representing Mathematical Structure
May 1: Max Kanovitch, University of Pennsylvania, What logic is good for CS? or "The Drinking Philosophers" in the mirror of Horn linear logic
April 27: Esfandiar Haghverdi, University of Ottawa, Geometry of Proofs and Models of Computation
April 24: Kevin X. He, Notre Dame, Supervisory Control of Distributed Systems using Petri Nets and Partial-Order Methods
Feb. 28: Radu Grosu, University of Pennsylvania, TITLE TBA
Feb. 21: Kousha Etessami, Bell Labs, Lucent Technologies, Reconciling message sequence charts and state machines
Jan 31: Hee Hwan Kwak, University of Pennsylvania, Process Algebraic Approach to the Parametric Analysis of Real-time Scheduling Problems
Jan 17: Naoki Kobayashi, University of Tokyo, An Implicitly-Typed Deadlock-Free Process Calculus
Dec. 6: Michael McDougall, University of Pennsylvania, Analysis of Software Accidents, Attacks, and Disagreements, Part II: Software Engineering Standards
Nov. 29: Carl A. Gunter, University of Pennsylvania, Analysis of Software Accidents, Attacks, and Disagreements, Part I: Fundamentals and Models
Nov. 22: Catherine Meadows, Naval Research Laboratory, A Formal Framework and Evaluation Method for Network Denial of Service
Nov. 17: Alfred Maneki, National Security Agency, Strand Spaces
Nov. 17: Sylvan Pinsky, National Security Agency, A Decision Procedure for Noninterference
Nov. 15: George Pappas, Finite Bisimulations of Linear Differential Equations
Nov. 8: Pankaj Kakkar, Reasoning About Secrecy for Active Networks
Nov. 1: Davor Obradovic, Formal Verification of Distance Vector Routing Protocols
Oct. 25: Atsushi Igarashi, Featherweight Java: a minimal core calculus for Java and GJ
Oct. 15: Peter Sewell, Secure Composition of Untrusted Code: Wrappers and Causality Types
Sep. 27: C. Barry Jay, Shape Theory: FISh implementation
Sep. 20: Insup Lee, University of Pennsylvania, Run-time Assurance Based On Formal Specifications
Sep. 9: P. Y. A. Ryan, UK Defence Evaluation Research Agency, Malvern, Process Algebra and Non-interference
Deriving operational congruences for reactive systems
James Leifer
Cambridge University
Monday, Aug 28, at 4:30 in Moore 556
[Jamey will be visiting the CIS department for a few days,
beginning Aug. 26. Please let me know if you'd like to talk with him
while he's here. --B]
Assigning mathematical interpretations (semantics) to computer
programs allows us to prove that code is correct with respect to a
specification, to make precise how a compiler works and what
optimisations are valid, to compare language features, etc. Many
interpretations exist for sequential computation and typically model a
(deterministic) program as a function from memories to memories that
maps each state to the one that holds after the program executes.
The situation is more complex when considering parallel computation,
and, in particular, languages presented in terms of reaction rules
(characterising the primitive interactions) that cater for
communication, distribution, mobility, and security. It is a major
challenge to find a compositional semantic interpretation (one that
respects the constructions of the language) and, in practice, is
either impossible or accomplished in an ad hoc manner with a difficult
hand-crafted proof of compositionality bolted on.
I describe new work that begins to address this challenge. I first
show how to derive uniformly a labelled transition system (LTS) from a
set of reaction rules, taking the labels, or ``observations'', to be
those contexts which are just large enough to enable a reaction: a
program undergoes a labelled transition exactly when it requires the
entire label to react. The key contribution is the precise definition
of ``just large enough'' in terms of the categorical notion of
relative pushout. I prove using some lightweight reasoning that
operational equivalences built from the LTS such as strong and weak
bisimilarity and trace equivalence are congruences. Thus the semantic
interpretation of programs in terms of these equivalences is
guaranteed to be compositional. I conclude by touching on some future
work.
Joint work with Robin Milner.
Dependently Typed Records for Representing Mathematical Structure
Randy Pollack
University of Durham
Thursday, Aug 10, at 3:00 in Moore 556
Note special day and time
[Randy will be visiting the CIS department on Thursday and Friday.
Please contact bcpierce@cis.upenn.edu if you'd like to meet with him.
His LCS seminar will continue informally during the PL Club lunch on
Friday (12:30-2, in the Theory Lab).]
It is folklore that dependently typed tuples suffice to formalize
abstract mathematical structures such as semi-group, monoid, group,
ring, field, ... We want natural properties of informal mathematical
language to be preserved; e.g. inheritance of properties and sharing
of substructures. There is a big literature on the related topic of
programming language modules.
This talk develops a notion of dependently typed records by adding
field labels to Sigma types. Some obscure features from the
literature are simplified and explained. E.g. field variables (local)
vs field labels (global) are explained by ordinary lambda-binding.
Subtyping is orthogonal to records, and is introduced using Luo's
coercive subtyping. (Hence I do not explain the subtyping system in
detail, but only show some examples.)
I discuss the two standard approaches to sharing substructures:
"Pebble style" and "manifest types" (ML style). By example, I argue
that we really do need manifest types to formalize mathematics. I
extend dependently typed records to include manifest types and the
"with" notation for adding manifest information to an existing
signature.
Most recently, I have been able to directly program records with
first-class labels, dependent, manifest signatures and "with" in type
theory, using Dybjer's "inductive-recursive" definition. All of the
rules (projection, subtyping, "value rules", definition of "with") are
derivable from the inductive-recursive definition, which has only
three constructors and three corresponding computation rules.
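The folklore claim that dependently typed tuples can package a structure together with its operations and laws can be mimicked, very loosely, in ordinary code. The sketch below is purely illustrative Python of our own devising (it captures none of the type-theoretic content, labels, or subtyping discussed in the talk): a monoid is treated as a labelled record whose laws are checked pointwise on a finite carrier.

```python
# Illustrative only: a "structure as labelled record" sketch. A monoid is a
# record (carrier, op, unit); the monoid laws are verified by brute force,
# which is only possible because the carrier here is finite.

def monoid(carrier, op, unit):
    """Build a labelled record and check closure, unit, and associativity."""
    assert unit in carrier
    for a in carrier:
        assert op(a, unit) == a and op(unit, a) == a      # unit laws
        for b in carrier:
            assert op(a, b) in carrier                    # closure
            for c in carrier:
                assert op(op(a, b), c) == op(a, op(b, c)) # associativity
    return {"carrier": carrier, "op": op, "unit": unit}

# Z mod 3 under addition forms a monoid. Sharing this carrier with a second
# structure is the kind of constraint "manifest types" express statically.
z3 = monoid({0, 1, 2}, lambda a, b: (a + b) % 3, 0)
```

A group record would extend this one with an inverse field, inheriting the three fields above, which is the inheritance-of-structure question the talk addresses at the level of types.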
Logic-based design and synthesis of
controllers for hybrid systems
Jen Davoren
Australian National University, Canberra
Monday, Aug 14, at 4:30 in Moore 556
Hybrid systems are heterogeneous dynamical systems characterized
by a non-trivial interaction of continuous and discrete dynamics,
and typically arise in the embedded software control of physical
processes. The widely accepted hybrid automaton model consists of
a finite collection of continuous systems defined by differential
equations, over a common state space, together with a mechanism
which decides when and how the system can switch between these
continuous sub-systems. We consider the following type of control
problem. Given a finite collection of continuous systems, design
and build a hybrid automaton model for that collection so that every
hybrid trajectory of the resulting system is guaranteed to satisfy
a given list of performance specifications. One novelty of our work
is that in addition to the well-studied classes of safety and liveness
properties of hybrid trajectories, we also address a quite general
class of event sequence properties, by which one can enforce
positive behavioural requirements on hybrid trajectories. We develop
an abstract algorithm to solve this control problem for which we can
guarantee finite termination. A central idea is that in tackling the
problem, one needs to reason about sets of continuous states, and
to build up complex sets of states using operators on sets induced by
the given continuous flows and data from the specifications. We
demonstrate how such reasoning and construction of sets can be cleanly
and naturally formalized in the language of modal logic. The benefits
of using the technical tool of modal logic are two-fold. First, it
gives us a formal language in which to formulate our solution
algorithm. Second, a proof of correctness can transparently be derived
from a theory of modal formulas that are true of any hybrid automaton
solution purely in virtue of the statement of the algorithm.
The abstract algorithm will be illustrated via a simple example,
with output generated by our prototype software implementation.
(Joint work with Dr. Thomas Moor, RSISE, ANU)
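The abstract's central idea of building up complex sets of states with operators induced by the dynamics can be illustrated on a finite abstraction. The sketch below is our own illustrative Python, not the authors' algorithm; the talk works with continuous flows and modal logic proper, whereas here the "diamond" operator is just a pre-image on a finite transition relation.

```python
# Illustrative sketch: modal reasoning about sets of states on a finite
# abstraction of a dynamical system.

def pre(trans, target):
    """Diamond operator: states with some transition into `target`."""
    return {s for (s, t) in trans if t in target}

def reach_backwards(trans, goal):
    """Least fixpoint of `pre`: all states from which `goal` is reachable."""
    result = set(goal)
    while True:
        step = pre(trans, result) | result
        if step == result:
            return result
        result = step

# A toy transition relation: 1 -> 2 -> 3, and 4 loops on itself.
trans = {(1, 2), (2, 3), (4, 4)}
safe = reach_backwards(trans, {3})   # states that can eventually reach 3
```

A controller synthesis procedure of the kind described would iterate such operators, guided by the performance specifications, to carve out the switching regions of the hybrid automaton.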
Short bio:
Jen Davoren is a research fellow in the Computer Sciences Laboratory
at the Research School of Information Sciences and Engineering at the
Australian National University. She has a PhD in mathematics and an MSc in
computer science from Cornell University, after undergraduate studies
in mathematics and philosophy at the University of Melbourne,
Australia. Prior to starting at the ANU last September, she had a
postdoc position at Cornell with extended visits to the Dept. EECS
at U.C. Berkeley.
Web page: http://arp.anu.edu.au/~davoren
What logic is good for CS?
or
"The Drinking Philosophers"
in the mirror of Horn linear logic
Max Kanovitch
University of Pennsylvania
Monday, May 1, 2000, 4:30 PM
University of Pennsylvania, Moore 554
The aim of this research is
to develop adequate and comprehensive logical systems
capable of handling important properties of real-time systems.
The basic step in understanding a given timed system S is to
show that its `reachability' predicate:
``There exists a trajectory that leads to a given state s'',
is partial recursive.
After that, for any reasonably expressive logical system L,
one can immediately conclude that
the above `reachability problem' is equivalent to the problem
whether the corresponding sequent is provable in L or not.
The real challenge is to establish that
a proposed logical system L is `fully adequate':
that there is a direct correspondence
between trajectories in S and proofs in L.
We will consider real-time systems with quantitative time constraints,
within which all things may be changed during the course of actions
and events: the number of actors, the ``communication topology'',
even the numerical bounds in the time constraints.
A number of obstructions to a fully adequate logical analysis
can be identified:
- A global continuous time implies an over-complicated set
of trajectories.
- A potentially unbounded number of actors
and a dynamically configured topology of actors push us
beyond the finite-automata paradigm.
- Global quantitative time constraints push us beyond
the Markov property (that is,
the present determines the future),
whereas proofs are always ``Markovian''.
- Besides, a common user is accustomed to describing actions and events
in his own terms by means of a Horn-like format: if ... then ... .
The aim of the talk is to show that the Horn fragment of linear
logic provides us with a fully adequate logical formalism
for the real-time systems under consideration.
The ``non-Markovian'' problem is circumvented by introducing hidden
variables recorded by ``time-guards''.
For the underlying trajectories with time-guards we introduce
an exact bisimulation equivalence
(together with a high-school proof of its correctness,
based on the combination:
a coarse hour-glass + a precise one-hand watch).
Geometry of Proofs and Models of Computation
Esfandiar Haghverdi
University of Ottawa
Thursday, April 27, at 3:00 in DRL 4E9
Note special day, time, and location
Girard introduced his Geometry of Interaction Programme (GoI)
in a series of influential papers. The goal of this programme
was to provide a mathematical analysis of
the cut elimination process in linear logic proofs. This new
interpretation replaces graph-rewriting by information flow in
proof-nets. A specific model considered by Girard was based on the
C*-algebra of bounded linear operators on the space l2. From a computational
point of view this yields an analysis of lambda-calculus beta-reduction
and has been applied in such areas as the analysis of
optimal reduction strategies.
GoI has also brought forward a new perspective for the semantics of computation
which places it in a kind of ``middle ground'' between
imperative/procedural, denotational/operational approaches in
the semantics of programming languages. This new view helps
to model the computational dynamics which is absent in
denotational semantics and manages to offer a mathematical analysis
which is lacking in operational semantics.
In this talk we give a general categorical framework for the Girard
programme inspired by recent work of Samson Abramsky. We show how to
construct linear combinatory algebras and
models of untyped combinatory logic in this framework.
In particular, we will focus on combinatory algebras constructed using
a special class of traced symmetric monoidal categories
called traced unique decomposition categories.
These categories provide an algebraic setting for an important class
of examples and axiomatize the ``particle-style'' GoI along with
providing a computational calculus.
Supervisory Control of Distributed Systems using Petri Nets and
Partial-Order Methods
Kevin X. He
Notre Dame
Monday, April 24, 2000, 4:30 PM
University of Pennsylvania, Moore 554
A distributed system is a system that consists of a group of distributed
and communicating processes. Examples of such systems arise in robotic
manufacturing systems, computer networks, telecommunication systems and
systems with networked embedded microprocessors. This talk introduces a
systematic way to achieve automatic generation of maximally permissive
supervisors or supervisory software plug-ins that ensure desired system
behavior. The approach is based on a discrete event model called
Petri net and a partial order method called network
unfolding. A Petri net is a bipartite graph consisting of places,
transitions and directed arcs connecting places and transitions. An
unfolding is a net homomorphism that maps the original net to an acyclic
occurrence net. Petri nets provide compact and convenient models
for distributed systems, while unfoldings identify concurrent, cyclic, and
causally related fundamental executions within the system. The
maximally permissive supervisor is efficiently constructed by restricting
undesirable interactions among fundamental executions. A distributed
cache system example is also provided to illustrate this approach.
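The Petri net model described above rests on a simple token-game semantics, which can be sketched in a few lines. The code below is our own illustrative Python (the talk's unfolding-based supervisor construction is far richer): a net is given per transition by its input and output places, and a marking maps each place to its token count.

```python
# Minimal sketch of the Petri net firing rule: a transition is enabled when
# its input places hold enough tokens; firing consumes those tokens and
# produces tokens on the output places.

def enabled(marking, pre):
    """True when every input place holds the required number of tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition, returning the successor marking."""
    assert enabled(marking, pre)
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Two processes sharing one resource: transition t1 lets process A take the
# lock; afterwards the lock place is empty, so B is blocked from taking it.
marking = {"A_idle": 1, "B_idle": 1, "lock": 1}
t1_pre, t1_post = {"A_idle": 1, "lock": 1}, {"A_busy": 1}
m2 = fire(marking, t1_pre, t1_post)
```

A supervisor in this setting restricts which enabled transitions may actually fire, which is exactly the "restricting undesirable interactions" step of the abstract.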
TITLE TBA
Radu Grosu
University of Pennsylvania
Monday, Feb. 28, 2000, 4:30 PM
University of Pennsylvania, Moore 554
Reconciling message sequence charts and state machines
Kousha Etessami
Bell Labs, Lucent Technologies
Monday, Feb. 21, 2000, 4:30 PM
University of Pennsylvania, Moore 554
I will describe message sequence charts (MSCs) and give examples of
how they are being used in early modeling of software systems. I will
then describe a framework for reconciling MSCs with the more familiar
state machine model of concurrent software. In this framework we give
an algorithm for inferring unspecified MSCs from a set of MSCs which
supply a partial description of the behaviors of the system, and an
algorithm for synthesizing concurrent state machines giving the "most
conservative" realization of the behaviors specified by a given set of
MSCs.
Based on a joint paper with Rajeev Alur and Mihalis Yannakakis,
to appear in 22nd Intl Conf on Software Engineering (ICSE 2000).
Process Algebraic Approach to the Parametric Analysis of Real-time Scheduling Problems
Hee Hwan Kwak
University of Pennsylvania
Monday, Jan. 31, 2000, 4:30 PM
University of Pennsylvania, Moore 554
This talk describes a general framework for parametric analysis of
real-time systems based on process algebra and illustrates the
usefulness of a process-algebraic framework by applying this method to
real-time scheduling problems.
The Algebra of Communicating Shared Resources (ACSR) has been extended
to ACSR with Value-passing (ACSR-VP) in order to model systems
that pass values between processes and change the priorities of events
and timed actions dynamically. In order to capture the semantics of
ACSR-VP process terms, we propose a Symbolic Graph with Assignment
(SGA). We further present an algorithm for computing symbolic weak
bisimulation for ACSR-VP processes.
With our new framework, the scheduling problem in real-time systems is
reduced to the problem of solving predicate equations, a boolean
expression, or a set of sequential boolean equations. A solution
yields the values of the parameters that make the system
schedulable. To solve predicate equations with unknown parameters, we
demonstrate the use of constraint logic programming techniques. A
boolean expression with unknown parameters can be solved by integer
programming tools, and we present new algorithms to solve a set of
boolean equations with unknown parameters.
An Implicitly-Typed Deadlock-Free Process Calculus
Naoki Kobayashi
University of Tokyo
Monday, Jan. 17, 2000, 4:30 PM
University of Pennsylvania, Moore 554
We have extended Kobayashi and Sumii's type system [1, 2] for the
deadlock-free Pi-calculus and developed a type reconstruction algorithm.
Kobayashi and Sumii's type system is a generalization of Kobayashi,
Pierce, and Turner's linear pi-calculus, and it helps high-level
reasoning about programs by guaranteeing that communication on certain
channels will eventually succeed. It can ensure, for example, that a
process implementing a function really behaves like a function.
However, because it lacked a type reconstruction algorithm and
required rather complex type annotations, it was impractical to apply
it to real concurrent languages. We have therefore developed a type
reconstruction algorithm for an extension of the type system. The key
novelties that made this possible are a generalization of usages
(first introduced in [2]) and a subusage relation.
In this talk, I will first summarize the ideas of Kobayashi and
Sumii's original deadlock-free process calculus, and then explain
how we extended it and obtained its type reconstruction algorithm.
(Joint work with Shin Saito and Eijiro Sumii)
[1] Kobayashi, "A Partially Deadlock-Free Typed Process Calculus,"
ACM TOPLAS, 20(2), pp.436--482, 1998
[2] Sumii and Kobayashi, "A Generalized Deadlock-Free Process Calculus,"
Proceedings of HLCL'98, ENTCS, 16(3), 1998.
Analysis of Software Accidents, Attacks, and Disagreements
Part II: Software Engineering Standards
Michael McDougall
University of Pennsylvania
Monday, Dec. 6, 1999, 4:30 PM
University of Pennsylvania, Moore 554
In 1992 the London Ambulance Service (LAS) switched from a manual
dispatch system to a software controlled system. The system promptly failed,
resulting in the loss of emergency calls and slow ambulance response
times. Software engineering standards are designed to prevent such
software disasters, but these standards force a developer to
spend time and money meeting the requirements of the standard.
Standards do not guarantee that a given project will be successful,
and developers often decide that their time and money could be better
spent on other things. This talk will cover an analysis of the LAS
failure with respect to the ISO 12207 software engineering standard.
I will try to answer the following questions: Would adherence to ISO 12207
have prevented the failure? Would ISO 12207 have made any difference at all?
Analysis of Software Accidents, Attacks, and Disagreements
Part I: Fundamentals and Models
Carl A. Gunter
University of Pennsylvania
Monday, Nov. 29, 1999, 4:30 PM
University of Pennsylvania, Moore 554
In some areas of engineering, like commercial aviation, diligent
analysis of accidents has led to steady improvements in safety.
Analysis of software failures is a topic of common interest among
computer professionals too, as evidenced by sources like the ACM Risks
column. This talk will describe a software engineering
perspective on fundamental concepts of analysis and three case
studies. Fundamentals include the concepts of causality, liability, and
evidence. We consider two "classic" case studies: Leveson and Turner's
study of the Therac 25 accidents, and Spafford's study of the Morris
Internet worm attack. I will provide an additional study based on
disagreements (lawsuits) about breach of contract and fraud. The focus
in these studies concerns the obligations implied by commercial software
specifications in terms of the software reference model developed by
myself, Elsa L. Gunter, Michael Jackson, and Pamela Zave.
A Formal Framework and Evaluation Method for Network Denial of Service
Catherine Meadows
Naval Research Laboratory
Monday, Nov. 22, 1999, 4:30 PM
University of Pennsylvania, Moore 554
Denial of service is becoming a growing concern. As our systems communicate
more and more with others that we know less and less, they become increasingly
vulnerable to hostile intruders who may take advantage of the very protocols
intended for the establishment and authentication of communication to tie up
our resources and disable our servers. Since these attacks occur before
parties are authenticated to each other, we cannot rely upon enforcement of
the appropriate access control policy to protect us. Instead we must build
our defenses, as much as possible, into the protocols themselves.
This paper shows how some principles that have already been used
to make protocols more resistant to denial of service can be formalized,
and indicates the ways in which existing cryptographic protocol analysis
tools could be modified to operate within this formal framework.
Strand Spaces
Alfred Maneki
National Security Agency
Wednesday, Nov. 17, 1999, 3:00 PM
University of Pennsylvania, Moore 212
Note special day and time
In a communications network, cryptographic protocols are used either to
verify the authenticity of communicating parties or to establish cryptographic
keys shared by these parties for the purpose of secret communications.
Typically, each party in a network holds long term "secret" keys. Parties
wishing to verify the identity of other parties (authentication) or to
establish session keys (secrecy) will employ a specific cryptographic protocol
(a prescribed sequence of exchanged messages using one's long term secret key
and sometimes the services of a trusted network server) for these purposes.
With the rapid development and deployment of communications networks,
many cryptographic protocols have been designed and implemented. Given their
widespread use, and our dependence on these for providing critical security
functions, the evaluation of these protocols has assumed primary importance.
A cryptographic protocol evaluation would consist of a search for
vulnerabilities; i.e., the development of attacks by other parties in the
network who may wish to interfere with the communications between legitimate
parties or to recover the information being shared between legitimate parties.
If the evaluation does not reveal such vulnerabilities, then
the evaluator is faced with the daunting task of "proving" the correctness of
the security features of the protocol.
Since cryptography is used in these protocols, their evaluation depends
heavily on an understanding of how the underlying cryptography is used.
Beyond that, the evaluator must keep an open mind to the misuses of the
protocol and must be willing to employ a wide variety of techniques from the
fields of mathematics, logic, and computer science. As a measure of the
difficulty of the task of evaluating a protocol, many widely used protocols,
thought to be secure for many years, were ultimately shown to be flawed.
A number of researchers have proposed methodologies for protocol
evaluation. In this lecture we will describe the theory of strand spaces
proposed by Joshua Guttman and his colleagues at MITRE. As a methodology for
evaluating cryptographic protocols, strand spaces have the advantage of
incorporating techniques developed by Millen, Meadows, Roscoe, Lowe, and
others. Further investigations have shown that strand space theory can be
extended and generalized in many ways. In this lecture, we present the basic
definitions, motivate the fundamental theorems, and give an outline of their
proofs. We will conclude the lecture by applying strand space theory to the
TMN protocol.
A Decision Procedure for Noninterference
Sylvan Pinsky
National Security Agency
Wednesday, Nov. 17, 1999, 3:00 PM
University of Pennsylvania, Moore 212
Note special day and time
Noninterference was introduced by Goguen and Meseguer to provide a foundation
for the specification and analysis of security policies. The intuitive
notion that a security domain U is noninterfering with a security domain V is
conveyed by stating that no action performed by U can influence subsequent
outputs seen by V. This lecture gives an introductory formulation of this
concept and provides necessary and sufficient conditions for a system to
satisfy noninterference. Security is defined in terms of allowable flows of
information among security domains. We examine properties of special sets
called basis elements which are generated from the interference relationship
defined from the allowable flows of information. We present a decision
procedure for noninterference when the system is deterministic.
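Goguen and Meseguer's intuition has a well-known "purge" formulation for deterministic systems: U is noninterfering with V when deleting U's actions from any input sequence never changes the output V observes. The brute-force check below is our own illustrative Python sketch of that formulation, not Pinsky's basis-element procedure; the machine and all names are hypothetical.

```python
# Illustrative purge-based noninterference check for a tiny deterministic
# machine, exhaustive up to a bounded input length.
from itertools import product

def run(state, actions, step):
    """Run the deterministic machine over an input sequence."""
    for a in actions:
        state = step(state, a)
    return state

def noninterfering(acts, step, out, dom, U, V, s0, depth=4):
    """Compare V's output on every sequence with U's actions purged."""
    for n in range(depth + 1):
        for seq in product(acts, repeat=n):
            purged = tuple(a for a in seq if dom[a] != U)
            if out(run(s0, seq, step), V) != out(run(s0, purged, step), V):
                return False
    return True

# Hypothetical system: each domain increments and observes only its own
# counter, so domain U cannot influence what V sees.
step = lambda s, a: (s[0] + 1, s[1]) if a == "u" else (s[0], s[1] + 1)
out = lambda s, d: s[0] if d == "U" else s[1]
dom = {"u": "U", "v": "V"}
ok = noninterfering(["u", "v"], step, out, dom, "U", "V", (0, 0))
```

A decision procedure, as in the talk, replaces this bounded enumeration with an analysis of the system's structure that is sound for all input sequences.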
Finite Bisimulations of Linear Differential Equations
George Pappas
Monday, Nov. 15, 1999, 4:30 PM
University of Pennsylvania, Moore 554
In this talk, I will focus on algorithmic methods for exactly computing
the reachable states of hybrid systems. Current state of the art methods
perform reachability computation for untimed, timed, multirate, and
rectangular hybrid automata before reaching undecidability barriers. Using the
very recent notion of o-minimality from geometric model theory, we obtain the
first class of hybrid systems with linear differential equations having a
decidable reachability problem. The importance of this result is immediate
given the wide applicability of linear differential equations in systems
theory and applications.
Joint work with Gerardo Lafferriere, Shankar Sastry, and Sergio Yovine
Reasoning About Secrecy for Active Networks
Pankaj Kakkar
Monday, Nov. 8, 1999, 4:30 PM
University of Pennsylvania, Moore 554
In this talk, we present a language of mobile agents called uPLAN for
describing capabilities of active (programmable) networks. We
demonstrate, using a formal semantics for uPLAN, how capabilities
provided for programming the network can impact the security of the
network and the potential flows of information between users. We
formalize a concept of security against attacks on secrecy by an
`outsider' and show how basic protections of this kind are preserved in
the presence of programmable network functions like user-customized
labeled routing. Finally, we show how this relates to some well-known
concepts of information flow in mobile calculi.
Joint work with Carl A. Gunter and Martin Abadi
Formal Verification of Distance Vector Routing Protocols
Davor Obradovic
Monday, Nov. 1, 1999, 4:30 PM
University of Pennsylvania, Moore 554
The purpose of routing protocols is to devise routes (i.e. paths)
between hosts of a network. It is important that this procedure
converge to a stable set of routes to desired destinations,
and do so within real-time bounds. The Routing Information Protocol
(RIP) is a widely-used routing protocol on small internets.
The Ad Hoc On-Demand Distance Vector (AODV) Protocol is a proposed
standard for Wireless Networks. Both are instances of Distance Vector
Routing. We aim to provide a framework for formal reasoning about
these protocols.
We show how to use an interactive theorem prover, HOL, together with
a model checker, SPIN, to prove key properties of distance vector
routing protocols. We do three case studies: correctness of the RIP
standard, a sharp realtime bound on RIP stability, and preservation
of loop-freedom in AODV. We develop verification techniques suited
to routing protocols generally. These case studies show significant
benefits from automated support in reduced verification workload and
assistance in finding new insights and gaps for standard specifications.
Joint work with Karthikeyan Bhargavan, Carl Gunter, U of Penn.
[1] Formal Verification of Standards for Distance Vector Routing Protocols,
Karthikeyan Bhargavan, Carl Gunter and Davor Obradovic, August 1999,
Under Review.
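At the core of both RIP and AODV is the distance-vector update rule: each node repeatedly adopts the cheapest offer "neighbor's distance plus link cost". The sketch below is our own illustrative Python of that rule alone; the protocols verified in the talk add timers, split horizon, sequence numbers, and the other machinery the case studies reason about.

```python
# Illustrative distance-vector convergence on a static network.
INF = float("inf")

def dv_converge(links, dest):
    """links: node -> {neighbor: cost}. Returns each node's distance to dest."""
    dist = {n: (0 if n == dest else INF) for n in links}
    changed = True
    while changed:                       # stabilizes: estimates only decrease
        changed = False
        for n, nbrs in links.items():
            best = min([dist[m] + c for m, c in nbrs.items()] + [dist[n]])
            if best < dist[n]:
                dist[n], changed = best, True
    return dist

# A - B - C line topology; A reaches C through B at cost 1 + 2 = 3.
links = {"A": {"B": 1}, "B": {"A": 1, "C": 2}, "C": {"B": 2}}
dist = dv_converge(links, "C")
```

The convergence and loop-freedom properties proved in the case studies are exactly statements about when iterations like this one stabilize, and how fast.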
Featherweight Java: a minimal core calculus for Java and GJ
Atsushi Igarashi
Monday, October 25, 1999, 4:30 PM
University of Pennsylvania, Moore 554
Several recent studies have introduced lightweight versions of Java:
reduced languages in which complex features like threads and
reflection are dropped to enable rigorous arguments about key
properties such as type safety. We carry this process a step further,
omitting almost all features of the full language (including
interfaces and even assignment) to obtain a small calculus,
Featherweight Java, for which rigorous proofs are not only possible
but easy.
Featherweight Java bears a similar relation to full Java as the
lambda-calculus does to languages such as ML and Haskell. It offers a
similar computational ``feel,'' providing classes, methods, fields,
inheritance, and dynamic typecasts, with a semantics closely following
Java's. A proof of type safety for Featherweight Java thus
illustrates many of the interesting features of a safety proof for the
full language, while remaining pleasingly compact. The syntax, type
rules, and operational semantics of Featherweight Java fit on one
page, making it easier to understand the consequences of extensions
and variations.
As an illustration of its utility in this regard, we extend
Featherweight Java with generic classes in the style of GJ
(Bracha, Odersky, Stoutamire, and Wadler) and sketch a proof of type
safety. The extended system formalizes for the first time some of the
key features of GJ.
Secure Composition of Untrusted Code: Wrappers and Causality Types
Peter Sewell
Friday, October 15, 1999, 3:00 PM
University of Pennsylvania, Moore 554
Note special day and time
The use of software components from untrusted or partially-trusted
sources is becoming common. A user would like to know, for example,
that they will not leak personal data to the net, but it is often
infeasible to verify that such components are well-behaved. Instead,
they must be executed in a secure environment, or wrapper, that
provides fine-grain control of the allowable interactions. In [1] we
introduced a process calculus with constrained interaction to express
wrappers and discussed their security properties. In this work we
study information flow properties of wrappers. We present a causal type
system that statically captures the allowed flows between wrapped
potentially badly-typed components; we use the type system to prove
that a non-trivial wrapper has the desired property.
Joint work with Jan Vitek, Purdue University
[1] Secure Composition of Insecure Components, Peter Sewell and Jan
Vitek. Computer Security Foundations Workshop, CSFW-12, June 1999.
Shape Theory: FISh implementation
C. Barry Jay
Monday, September 27, 1999, 4:30 PM
University of Pennsylvania, Moore 554
Shape theory supports abstraction over the structures used to hold
data, based on new denotational concepts. This results in increased
expressive power, improved error detection, and faster execution than
is usually possible with higher-order, polymorphic, strongly-typed
languages. It also has significant potential for supporting parallel
execution.
The programming language FISh2 will support uniform, or generic
operations, such as mapping and folding over a large class of data
structures, including lists, trees, matrices, quad-trees, etc.
Experience with FISh1 and the current prototype suggest that this can
be done without loss of performance compared to other popular
languages, such as C.
Run-time Assurance Based On Formal Specifications
Insup Lee
University of Pennsylvania
Monday, September 20, 1999, 4:30 PM
University of Pennsylvania, Moore 554
We describe the Monitoring and Checking (MaC) framework, which assures the
correctness of the current execution at run-time. Monitoring is
performed based on a formal specification of system requirements. Our
framework bridges the gap between formal verification and testing:
the former ensures the correctness of a design specification
rather than an implementation, whereas the latter is used to
validate an implementation. An important aspect of the
framework is clear separation between implementation-dependent
description of monitored objects and high-level requirements
specification. Another salient feature is automatic instrumentation
of executable code. This talk presents an overview of the framework
and two languages for specifying events and conditions and their
three-valued logic semantics. These two languages are used to specify
what to observe in the program and the requirements that the program
must satisfy, respectively. This talk also briefly describes our
current prototype implementation in Java as well as future work such
as steering and applying the framework to simulation and security.
Joint work with Sampath Kannan, Moonjoo Kim, Oleg Sokolsky, and Mahesh
Viswanathan.
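The three-valued semantics mentioned above reflects a basic fact of run-time monitoring: a condition over an event is neither true nor false until that event has been observed. The sketch below is our own illustrative Python, in the spirit of the framework; the names and structure are hypothetical and are not the actual MaC languages.

```python
# Illustrative run-time monitor with a three-valued condition check:
# a condition is True, False, or Unknown until its event has been observed.
UNKNOWN = "unknown"

class Monitor:
    def __init__(self):
        self.raised = {}        # event name -> last observed value

    def observe(self, event, value):
        """Called from the instrumented program whenever an event occurs."""
        self.raised[event] = value

    def check(self, event, predicate):
        """Three-valued: Unknown before the event is first observed."""
        if event not in self.raised:
            return UNKNOWN
        return predicate(self.raised[event])

m = Monitor()
before = m.check("temperature", lambda v: v < 100)   # nothing observed yet
m.observe("temperature", 120)
after = m.check("temperature", lambda v: v < 100)    # requirement violated
```

Automatic instrumentation, in this picture, is what plants the `observe` calls in the executable code so that the requirement checker sees every relevant event.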
Process Algebra and Non-interference
P. Y. A. Ryan
UK Defence Evaluation Research Agency, Malvern
Thursday, September 9, 1999, 3:00 PM
University of Pennsylvania, Moore 554
Note special time and place
In this joint work with Steve Schneider, RHBNC University of London,
we outline the key concept in computer security of non-interference. This
concept was proposed by Goguen and Meseguer as a way to formalise the absence
of information flow between objects or across interfaces. It has however been
a difficult and controversial concept, especially when applied to systems
displaying non-determinism.
We argue that casting this concept in process-algebraic terms helps
clarify many of the issues. In particular characterising
non-interference reduces to characterising the equivalence of
processes, itself a controversial but well-studied issue in the
process algebra community. The various notions of process equivalence,
such as testing equivalence, observational, bisimulation etc appear to
have useful roles in information security. In particular a
reformulation of an "unwinding" result in terms of power-bisimulation
is given.
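Reducing non-interference to equivalence of processes makes deciding bisimilarity the computational workhorse. For finite labelled transition systems this can be done by partition refinement; the naive version below is our own illustrative Python sketch (real tools use the far more efficient Paige-Tarjan-style algorithms).

```python
# Illustrative strong-bisimilarity check on a finite labelled transition
# system, by refining a partition of states until it is stable.

def bisimilar(states, trans, p, q):
    """trans: set of (state, label, state) triples."""
    labels = {l for _, l, _ in trans}
    def succ(s, l, block):
        return any((s, l, t) in trans and t in block for t in states)
    blocks = [set(states)]
    changed = True
    while changed:
        changed = False
        for l in labels:
            for b in list(blocks):            # candidate splitter
                for old in list(blocks):
                    yes = {s for s in old if succ(s, l, b)}
                    no = old - yes
                    if yes and no:            # split old by splitter (l, b)
                        blocks.remove(old)
                        blocks += [yes, no]
                        changed = True
    return any(p in b and q in b for b in blocks)

# p and q each offer an `a` and stop, so they are bisimilar; r can do nothing.
states = {"p", "q", "r", "0"}
trans = {("p", "a", "0"), ("q", "a", "0")}
same = bisimilar(states, trans, "p", "q")
diff = bisimilar(states, trans, "p", "r")
```

Which equivalence one refines towards (strong, weak, testing) is precisely the modelling choice the talk argues matters for information security.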
Comments about this page can be sent to bcpierce@saul.cis.upenn.edu.