The Logic and Computation Group is composed of faculty and graduate students from the Computer and Information Science, Mathematics, and Philosophy departments, and participates in the Institute for Research in the Cognitive Sciences. The group runs a weekly seminar, which is open to the public; all are welcome.
The seminar meets regularly during the school year on Mondays at 3:30 p.m. in room DRL 4C8 on the fourth (top) floor of the David Rittenhouse Laboratory (DRL), at the southeast corner of 33rd and Walnut Streets at the University of Pennsylvania. Directions may be found here. Any changes to this venue or schedule will be specifically noted.
In this talk we give a survey of recent applications of the ‘proof mining’ program. This program is concerned with the extraction of effective uniform bounds from ineffective proofs in analysis, using techniques from proof theory. We will focus on problems in ergodic theory. More specifically, we discuss new explicit uniform rates of metastability (in the sense of Tao) for the von Neumann Mean Ergodic Theorem (for uniformly convex Banach spaces), as well as for several nonlinear ergodic theorems (in the Hilbert space case) due to Baillon and Wittmann.
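For readers new to the terminology: a rate of metastability for a sequence (a_n) in a metric space is a bound M(ε, F) depending only on ε and a "counterexample function" F, in the sense of (one common formulation; the notation here is ours, not necessarily the speaker's)

    \[
      \forall \varepsilon > 0 \;\; \forall F : \mathbb{N} \to \mathbb{N} \;\; \exists N \le M(\varepsilon, F) \;\; \forall n, m \in [N, F(N)] \quad d(a_n, a_m) \le \varepsilon .
    \]

For the Mean Ergodic Theorem the sequence in question consists of the ergodic averages A_n x = (1/n)(x + Tx + ... + T^{n-1}x). The point is that such bounds can be explicit and highly uniform even in situations where no computable rate of convergence exists.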
Gurevich and Neeman introduced the Distributed Knowledge Authorization Language (DKAL). The world of DKAL consists of communicating principals computing their own knowledge in their own states. DKAL is based on a new logic of information, the so-called infon logic, and its efficient subsystem called primal logic. In this paper we simplify the Kripkean semantics of primal logic and study various extensions of it in search of a balance between expressivity and efficiency. On the proof-theoretic side, we develop cut-free Gentzen-style sequent calculi for the original primal logic and its extensions. This is joint work with Y. Gurevich.
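To give a flavor of what makes primal logic efficient (our schematic rendering, not the exact calculus of the paper): the introduction rule for implication is weakened so that the antecedent need not be discharged as a used hypothesis, while modus ponens is retained:

    \[
      \frac{\Gamma \vdash y}{\Gamma \vdash x \to y}\;(\to\mathrm{I})
      \qquad\qquad
      \frac{\Gamma \vdash x \qquad \Gamma \vdash x \to y}{\Gamma \vdash y}\;(\to\mathrm{E})
    \]

This "primal" implication is strictly weaker than intuitionistic implication, and it is this weakening that makes derivability efficiently decidable.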
The concept of direct computation used by Statman (1977) was instrumental in the development of a notion of normal form for proofs of equality. On the other hand, the so-called Brouwer-Heyting-Kolmogorov interpretation of logical constants via the notion of proof-construction gave rise to a whole tradition of looking at proofs as terms in a functional calculus. Here we take a closer look at proof procedures for first-order sentences with equality, drawing attention to the need for introducing (function) symbols for rewrites. This leads us to a proposal to the effect that the framework of labelled natural deduction gives the right tools to formulate a proof theory for the “logical connective” of propositional equality in the style of the so-called Curry–Howard interpretation. The basic idea is that when analysing an equality sentence into (i) proof conditions (introduction) and (ii) immediate consequences (elimination), it becomes clear that one needs to bring in identifiers (i.e., function symbols) for sequences of rewrites, and this is what we claim is the missing entity in Per Martin-Löf’s equality types (both intensional and extensional). What we end up with is a formulation of what appears to be a middle-ground solution to the ‘intensional’ vs. ‘extensional’ dichotomy which permeates most of the work on characterising propositional equality in natural deduction style.
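For orientation, recall that in Martin-Löf's intensional theory the identity type has a single canonical proof, reflexivity:

    \[
      \frac{a : A}{\mathsf{refl}(a) : \mathsf{Id}_A(a, a)}\;(\mathsf{Id}\text{-intro})
    \]

On the proposal sketched above (our schematic reading of the abstract), the introduction rule instead records which rewrites witness the equality, yielding proof terms of the shape s(a, b) : Id_A(a, b), where the label s names a sequence of rewrite steps; different rewrite sequences can then give genuinely different proofs of the same equation.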
Sophisticated theorems about finite structures often end up needing long strings of bounds with elaborate dependencies that can obscure the underlying elegance of the proof. Correspondence principles make it possible to prove an equivalent theorem in an infinite setting, usually a topological or measure-theoretic one, where the need for precise bounds disappears and where powerful structure theorems can be brought to bear on the problem. We will illustrate this method by taking a powerful but difficult result from graph theory --- the Szemerédi regularity lemma --- and outlining a short proof that avoids most of the need for exacting calculations. (No knowledge of the Szemerédi regularity lemma will be assumed.)
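For reference, here is one standard formulation of the lemma (not necessarily the version used in the talk). Writing d(X, Y) for the edge density between vertex sets X and Y, call a pair (A, B) ε-regular if |d(A', B') − d(A, B)| ≤ ε for all A' ⊆ A and B' ⊆ B with |A'| ≥ ε|A| and |B'| ≥ ε|B|. The lemma then states:

    \[
      \forall \varepsilon > 0 \;\exists M(\varepsilon)\ \text{such that every graph has a vertex partition}\ V = V_1 \cup \cdots \cup V_k,\ k \le M(\varepsilon),\ \text{in which all but at most}\ \varepsilon k^2\ \text{pairs}\ (V_i, V_j)\ \text{are}\ \varepsilon\text{-regular}.
    \]

The bound M(ε) produced by the finitary proof is notoriously enormous (a tower of exponentials, and necessarily so by a result of Gowers), which is exactly the kind of quantitative bookkeeping the infinitary proof sidesteps.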
The formal verification of concurrent software often amounts to analyzing some combinatorial state space, such as a directed graph. As time scales shrink and the number of processes increases, the sizes of these directed graphs explode. Despite existing state-space reduction techniques, this combinatorial explosion renders industrial validation of concurrent software incomplete in practice; system designers must often accept either a certain level of risk or reduced performance from a simplified design. A “directed space,” a topological space equipped with some structure of time, represents the limit of ever more intricate directed graphs representing ever finer observations in time. Classical algebraic topology gives tractable methods for classifying reasonable spaces, irrespective of size, typically up to continuous deformation. Some forms of critical system behavior remain unchanged under continuous deformations that respect the flow of time. An appropriate “directed algebraic topology” thus provides a naturally geometric program semantics and a convenient setting for circumventing combinatorial explosion in validating concurrent programs.
We introduce basic ideas from directed algebraic topology (assuming no prior familiarity with classical algebraic topology) and show how calculations of suitable invariants on directed spaces, some of which were jointly developed with Eric Goubault and Emmanuel Haucourt of the French Atomic Energy Commission, can reveal coordination problems in concurrent software. We also compare our directed-topological approach with traditional model checking and sketch applications of directed algebraic topology to rewriting theory (time permitting).
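To make the kind of coordination problem concrete, here is a minimal sketch (in Python; the names and structure are ours, purely for illustration) of the classic two-lock deadlock. In the directed-space picture, the two lock acquisitions carve a forbidden rectangle out of the square of joint execution states, and any directed path that enters the concave region below and to the left of it can never be extended to a complete execution:

    import threading
    import time

    a = threading.Lock()
    b = threading.Lock()

    def p1():
        with a:                           # p1 holds a ...
            time.sleep(0.1)               # widen the race window
            if b.acquire(timeout=1):      # ... and then wants b
                b.release()
            else:
                print("p1: stuck waiting for b -- deadlock witnessed")

    def p2():
        with b:                           # p2 holds b ...
            time.sleep(0.1)
            if a.acquire(timeout=1):      # ... and then wants a
                a.release()
            else:
                print("p2: stuck waiting for a -- deadlock witnessed")

    t1 = threading.Thread(target=p1)
    t2 = threading.Thread(target=p2)
    t1.start(); t2.start()
    t1.join(); t2.join()

A model checker finds this bug by exploring interleavings one by one; the directed-topological approach instead classifies the execution space up to deformations respecting time, where the deadlock appears as a geometric feature that is independent of how finely time is discretized.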
In this talk I will describe the state of the art in higher-order quantum computation. To place things in context, I will briefly present the two main paradigms in which research is being done: quantum control and classical control. We will then focus on the latter, which is the better understood of the two. In this setting, we have to deal with quantum (non-duplicable) and classical (duplicable) data, higher-order structures, and a probabilistic side effect. We will analyze an abstract machine and explore the denotational tools that are useful for gaining insight into this computational paradigm.
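The duplicable/non-duplicable distinction is standardly enforced with linear types: quantum data receives a linear type that cannot be copied (respecting the no-cloning theorem), while classical data is marked duplicable, e.g. with the linear-logic modality !, and only such data admits contraction. Schematically (a standard linear-logic rule, chosen by us for illustration rather than taken from the speaker's formalism):

    \[
      \frac{\Gamma,\; x : {!}A,\; y : {!}A \vdash t : B}{\Gamma,\; z : {!}A \vdash t[z/x,\, z/y] : B}\;(\text{contraction})
    \]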