The logic and computation group is composed of faculty and graduate students from the computer and information science, mathematics, and philosophy departments, and participates in the Institute for Research in the Cognitive Sciences.
The group runs a weekly seminar, which meets regularly during the school year on Mondays at 4:30 p.m. in room 4C8 of the David Rittenhouse Laboratory at the University of Pennsylvania. Any changes to this schedule will be specifically noted. The seminar is open to the public and all are welcome.
Abstraction-based methods extend the applicability of model checking to programs written in general-purpose programming languages by extracting abstract (finite-state) models from the source program. Such methods are generally unsound, since abstraction usually introduces unrealistic behaviors.
This talk develops methods to perform automatic abstraction in such a way that it yields verification results whose soundness can be guaranteed. Our framework is based on modal transition systems (MTSs) to allow the sound transfer of properties and counter-examples (for mu-calculus properties) from the abstract program to the concrete program. MTSs arise naturally in three-valued program analysis and have robust semantic foundations based on a mixed powerdomain of states.
We describe algorithms to automatically generate an abstract MTS by adapting existing predicate and Cartesian abstraction techniques, and algorithms for model checking abstract MTSs. These algorithms can be implemented in combination with existing abstraction techniques without incurring significant computational overhead. The accuracy of verification results is improved by the use of generalized model checking, a technique to improve the precision of reasoning about partial state spaces of concurrent reactive systems.
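For readers unfamiliar with modal transition systems, the following is a minimal sketch of the may/must idea (written in Haskell; it is illustrative only and not the speakers' implementation, and all names are made up): must-transitions under-approximate the concrete program, may-transitions over-approximate it, so a property checked on the abstraction comes back true, false, or unknown.

    import           Data.Map (Map)
    import qualified Data.Map as Map
    import           Data.Maybe (fromMaybe)

    data Three = TT | FF | Unknown deriving (Eq, Show)

    type State = String

    data MTS = MTS
      { must :: Map State [State]   -- transitions present in every concrete refinement
      , may  :: Map State [State]   -- transitions present in some refinement
      , lbl  :: Map State Three     -- three-valued labelling of one proposition p
      }

    succs :: Map State [State] -> State -> [State]
    succs m s = fromMaybe [] (Map.lookup s m)

    -- Three-valued check of "some successor satisfies p" (EX p):
    --   TT if a must-successor is definitely p, FF if no may-successor can be p,
    --   Unknown otherwise.
    exP :: MTS -> State -> Three
    exP m s
      | any (\t -> look t == TT) (succs (must m) s) = TT
      | all (\t -> look t == FF) (succs (may m) s)  = FF
      | otherwise                                   = Unknown
      where look t = fromMaybe Unknown (Map.lookup t (lbl m))

    example :: MTS
    example = MTS
      { must = Map.fromList [("s0", ["s1"])]
      , may  = Map.fromList [("s0", ["s1", "s2"])]
      , lbl  = Map.fromList [("s1", TT), ("s2", FF)]
      }

    main :: IO ()
    main = print (exP example "s0")   -- TT: verified on the abstraction, hence on the program

Generalized model checking, mentioned above, sharpens the "unknown" verdicts by asking whether some, or every, concrete completion of the partial model satisfies the property.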
The talk is based on joint work with Godefroid, Huth and Schmidt.
The novelty of our approach to the combinatorics lies in the use of rewriting techniques to establish explicit bijections between combinatorial objects of two different types.
We present initial results of an ongoing analysis (joint work with F. Butler, I. Cervesato, and A. Scedrov) of Kerberos 5 using the MultiSet Rewriting (MSR) framework.
Distributed contract signing over a network involves many challenges in mimicking the features of paper contract signing. For instance, a paper contract is usually signed by both parties at the same time and at the same place, but distributed electronic transactions over a network are inherently asymmetric in that someone has to send the first message.
Inter-domain routing in the Internet is currently implemented with the Border Gateway Protocol (BGP). This protocol allows every autonomous administrative domain to define local routing policies that are consistent with its own interests. Such locally defined policies can interact in ways that cause various kinds of global routing anomalies --- nondeterministic routing and protocol divergence ("live lock") --- and these anomalies can span multiple autonomous domains, making them very difficult to debug and correct. Analysis of BGP is complicated by the fact that route messages carry a number of semantically rich attributes that play a role in the BGP route selection algorithm and in the implementation of local routing policies. The talk will begin with a short tutorial on BGP, so no previous knowledge will be assumed (don't panic!).
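As a toy illustration of the divergence phenomenon (this is the well-known three-node "bad gadget" style of example, not real BGP, and every name in the sketch is made up): each node permits only two paths to the destination and prefers the path through its neighbour over its direct path, so synchronous best-path updates oscillate forever.

    type Node = Int
    type Path = [Node]          -- first element is the node itself, last is the destination 0

    -- ranked permitted paths, best first
    permitted :: Node -> [Path]
    permitted 1 = [[1,3,0], [1,0]]
    permitted 2 = [[2,1,0], [2,0]]
    permitted 3 = [[3,2,0], [3,0]]
    permitted _ = []

    neighbour :: Node -> Node
    neighbour 1 = 3
    neighbour 2 = 1
    neighbour 3 = 2
    neighbour _ = 0

    -- one synchronous round: each node keeps the best permitted path that is
    -- either its direct path or extends what its neighbour currently advertises
    step :: [(Node, Path)] -> [(Node, Path)]
    step current = [ (i, choose i) | (i, _) <- current ]
      where
        advertised i = maybe [] id (lookup i current)
        available  i = [ p | p <- permitted i
                           , p == [i,0] || tail p == advertised (neighbour i) ]
        choose i     = head (available i)

    main :: IO ()
    main = mapM_ print (take 4 (iterate step start))   -- the system flips between two states
      where start = [(1,[1,0]), (2,[2,0]), (3,[3,0])]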
The recent rush to adopt XML can be attributed in part to the hope that the static typing provided by DTDs (or more sophisticated mechanisms such as XML-Schema) will improve the robustness of data exchange and processing. However, although XML _documents_ can be checked for conformance with DTDs, current XML processing languages offer no way of verifying that _programs_ operating on XML structures will always produce conforming outputs.
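To make the gap concrete, here is a small Haskell analogy (not the system discussed in the talk; the DTD and all names are invented for illustration): if the output type of a transformation mirrors the DTD, conformance of every output is checked statically by the compiler rather than by validating documents at run time. Haskell data types are, of course, much coarser than regular-expression types over XML, so this only gestures at the idea.

    -- <!ELEMENT addrbook (person*)>
    -- <!ELEMENT person   (name, email*)>

    newtype Name  = Name String
    newtype Email = Email String

    data    Person   = Person Name [Email]   -- person   ::= name, email*
    newtype Addrbook = Addrbook [Person]     -- addrbook ::= person*

    -- Any transformation with this type necessarily yields a conforming address
    -- book; a version producing untyped trees or strings would not be checked.
    dropEmails :: Addrbook -> Addrbook
    dropEmails (Addrbook ps) = Addrbook [ Person n [] | Person n _ <- ps ]

    main :: IO ()
    main = print (length ps)
      where Addrbook ps = dropEmails (Addrbook [Person (Name "Ada") [Email "ada@example.org"]])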
We describe properties of a process calculus that has been developed for the purpose of analyzing security protocols. The process calculus is a restricted form of pi-calculus, with bounded replication and probabilistic polynomial-time expressions allowed in messages and boolean tests. In order to avoid problems expressing security in the presence of nondeterminism, messages are scheduled probabilistically instead of nondeterministically. We prove that evaluation may be completed in probabilistic polynomial time and develop properties of a form of asymptotic protocol equivalence that allows security to be specified using observational equivalence, a standard relation from programming language theory that involves quantifying over possible environments that might interact with the protocol. We also relate process equivalence to cryptographic concepts such as pseudo-random number generators and polynomial-time statistical tests. The work has been carried out in collaboration with P. Lincoln, J. Mitchell, M. Mitchell, A. Ramanathan, and V. Teague.
We will show how the classical theorem on the continuity of the roots of algebraic functions in one variable can be reduced, via some ``easy'' model theory, to an ``obvious fact''. This also leads to a vast generalisation of the result under discussion. Some questions about higher-dimensional variants will also be asked/discussed.
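For orientation, one standard formulation of the classical theorem, stated here for monic polynomials (the talk's formulation in terms of algebraic functions may differ): let $p(z)=z^{n}+a_{n-1}z^{n-1}+\dots+a_{0}$ have roots $\lambda_{1},\dots,\lambda_{n}$, listed with multiplicity. Then
\[
\forall\varepsilon>0\ \exists\delta>0:\quad
\max_{i}|b_{i}-a_{i}|<\delta
\;\Longrightarrow\;
\text{the roots of } z^{n}+b_{n-1}z^{n-1}+\dots+b_{0}
\text{ can be listed as } \mu_{1},\dots,\mu_{n}
\text{ with } \max_{i}|\mu_{i}-\lambda_{i}|<\varepsilon .
\]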
The pioneering work of Necula and Lee on Proof-Carrying Code motivated recent developments in typed intermediate languages and typed assembly languages that enforce various security policies during code execution.
Workshop on the best known method for "infinite precision" real arithmetic and its connection with the construction of the reals as a co-inductive type.
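A minimal sketch, assuming the familiar signed-digit-stream presentation (the workshop may well use a different construction, and the names below are illustrative): a real in [-1,1] is an infinite stream of digits in {-1,0,1}, a coinductive object, and operations must be productive, i.e. emit output digits after inspecting only finitely many input digits.

    data Digit = M | Z | P              -- the digits -1, 0, +1

    type Real01 = [Digit]               -- infinite stream: x = sum of d_i * 2^(-i)

    digitVal :: Digit -> Rational
    digitVal M = -1
    digitVal Z = 0
    digitVal P = 1

    -- negation is digit-wise and halving just prefixes a zero digit,
    -- so both are trivially productive
    neg :: Real01 -> Real01
    neg = map flipD where flipD M = P; flipD P = M; flipD Z = Z

    half :: Real01 -> Real01
    half x = Z : x

    -- a rational approximation from the first n digits (error at most 2^(-n))
    approx :: Int -> Real01 -> Rational
    approx n x = sum [ digitVal d / 2^i | (i, d) <- zip [1..n] x ]

    third :: Real01                     -- 1/3 = 1/2 - 1/4 + 1/8 - ... in this representation
    third = P : M : third

    main :: IO ()
    main = print (approx 20 third, approx 20 (neg (half third)))   -- approximations of 1/3 and -1/6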
The second-order polymorphic lambda calculus, F2, was independently discovered by Girard and Reynolds. Girard additionally proved a representation theorem: every function on natural numbers that can be proved total in second-order intuitionistic propositional logic, P2, can be represented in F2. Reynolds additionally proved an abstraction theorem: for a suitable notion of logical relation, every term in F2 takes related arguments into related results. We observe that the essence of Girard's result is a projection from P2 into F2, that the essence of Reynolds's result is an embedding of F2 into P2, and that the Reynolds embedding followed by the Girard projection is the identity. The Girard projection discards all first-order quantifiers, so it seems unreasonable to expect that the Girard projection followed by the Reynolds embedding should also be the identity. However, we show that, in the presence of Reynolds's parametricity property, this is indeed the case for propositions corresponding to inductive definitions of naturals, products, sums, and fixpoint types.
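The canonical instance, for orientation (the notation is the usual one and may differ from the talk's): the P2 induction principle defining the natural numbers projects, once first-order quantifiers and term arguments are erased, to the F2 type of Church numerals,
\[
N(n) \;\equiv\; \forall X.\ \bigl(\forall y.\ X\,y \supset X\,(s\,y)\bigr) \supset X\,0 \supset X\,n
\qquad\longmapsto\qquad
N \;\equiv\; \forall X.\ (X \to X) \to X \to X .
\]
The Reynolds embedding sends the type on the right back to a second-order proposition, and parametricity is what allows the induction principle on the left to be recovered from it.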
One could claim that the undecidability of the problem of optimistic protocol completion in the absence of an intruder is not so interesting, since standard security protocol settings include some model of the intruder, who, for instance, can control the network and can duplicate and delete messages (but not the state of protocol participants). However, many of the approaches for analyzing and reasoning about protocols are based on certain formal specification languages, and, prior to analyzing protocol properties related to unreliable broadcast, we have to show that our formal specification of a given protocol meets the protocol rules and conventions at least under ideal conditions, when no one interferes with network transmission.
This is joint work in progress with Dugald Macpherson (Leeds). The point is to develop a model theory for classes of finite structures that bears a similarity to contemporary model theory for classes of infinite structures, in which some notion of dimension typically plays a central role. We thus refer to the classes we study as "classes of finite structures with dimension and measure." Our definition of dimension and measure is directly inspired by the paper "Definable sets over finite fields" by Chatzidakis, van den Dries, and Macintyre (Crelle 1992). Only the most basic familiarity with model theory will be assumed.
Physicists know how to integrate over all possible paths, computer-vision experts want to assign probabilities to arbitrary scenes, and numerical analysts act as if some continuous functions are more typical than others. In these three disparate cases, a more flexible notion of integration is being invoked than is possible in the traditional foundations for mathematics. If we allow ourselves a highly speculative mode, at the intersection of category theory and computer science, we may bump into some solutions to the problem.
Combining transition systems can be easily achieved with limits, subobjects and fibrations. However, applying this dogma to stochastic systems leads to several hurdles. In fact, the suitable notion of morphism between probability spaces fails to be closed under composition, and hence, we obtain a non-category!
This talk will present my dissertation work, which develops programming languages and associated techniques for sound and efficient implementations of algorithms for program generation. First, we develop a framework for practical two-level languages. In this framework, we demonstrate that two-level languages are not only a good tool for describing program-generation algorithms, but a good tool for reasoning about them and implementing them as well. We pinpoint several general properties of two-level languages that capture common proof obligations of program-generation algorithms:
With the term `proof mining' we denote the activity of transforming a prima facie non-constructive proof into a new one from which certain computational information, not visible beforehand, can be read off. Already in the 1950s Georg Kreisel realized that logical techniques from proof theory -- originally developed for foundational purposes -- can be put to use here. In recent years, a more systematic proof-theoretic approach to proof mining in numerical analysis has emerged, yielding new quantitative (and even qualitative) numerical results in approximation theory and fixed point theory and providing a bridge between mathematical numerical analysis and the area of computability (and complexity) in analysis, which has mainly been developed by logicians (and complexity theorists). Although proof mining has also been applied to, e.g., number theory and combinatorics, the area of numerical (functional) analysis is of particular interest since here non-effectivity is at the core of many principles (like compactness arguments) which are used to ensure convergence. In mathematical terms this non-computability is often an obstacle to obtaining a quantitative stability analysis and rates of convergence.