The Logic and Computation Group is composed of faculty and graduate students from the Computer and Information Science, Mathematics, and Philosophy departments, and participates in the Institute for Research in the Cognitive Sciences. The group runs a weekly seminar, which is open to the public; all are welcome.
The seminar meets regularly during the school year on Mondays at 4:30 p.m. in room DRL 4C8 on the fourth (top) floor of the David Rittenhouse Laboratory (DRL), at the southeast corner of 33rd and Walnut Streets at the University of Pennsylvania. Directions may be found here. Any changes to this venue or schedule will be specifically noted.
To subscribe to the mailing list, send mail to majordomo@cis.upenn.edu with "subscribe lcs" in the message body. (Don't include the quotes! The command goes in the message body, NOT in the subject line.)
To unsubscribe from the list, send mail to the same address (majordomo@cis.upenn.edu) with "unsubscribe lcs" in the message body. Sending "help" in the message body returns a detailed list of commands for various other services.
Is a ruler-and-compass algorithm executable by a Turing machine? Yes, but at a level of abstraction far removed from the intended one. These are, for all practical purposes, two different algorithms, though linked by a notion of simulation (supported by sediments of mathematics). The 'behavioral theory of algorithms', developed in this century by Y. Gurevich (and others, foremost A. Blass), insists on the distinction. What, then, is an algorithm? By intended abstraction, algorithms come in several species: simple (isolated, sequential, small-step), interactive (between steps, and, since one algorithm's complex protocol is another algorithm's step, also within a step), parallel (wide-step, with interacting components), distributed ... The purpose of this talk is to present the basic ideas and results of the theory under development, emphasizing several new twists in which the author is involved.
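To make the "small-step" species concrete, here is a minimal sketch in Python (purely illustrative, not the talk's AsmL notation) of a sequential algorithm as a guarded-update rule over abstract states, using Euclid's algorithm as the running example:

    # A state is a structure (here, a dict); one small step fires a
    # bounded set of guarded updates, applied atomically.

    def step(state):
        """One small step of Euclid's algorithm as guarded updates."""
        updates = {}
        if state["a"] > state["b"]:
            updates["a"] = state["a"] - state["b"]
        elif state["b"] > state["a"]:
            updates["b"] = state["b"] - state["a"]
        return {**state, **updates}       # apply the update set atomically

    def run(state):
        while state["a"] != state["b"]:   # no guard fires: halting state
            state = step(state)
        return state["a"]                 # the gcd

    print(run({"a": 12, "b": 18}))        # -> 6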
In particular, what is abstract cryptography about? How do we abstract from the interactive algorithms of computational cryptography? It turns out that the basic PPT computational notions of indistinguishability and unforgeability have sharp abstract counterparts, allowing typical proofs to be ported between the two abstraction levels, both ways. How can this be useful, beyond proofs-by-hand? Several species of algorithms are provably encodable in AsmL/Spec# (by intention they should all be, extending the programming languages if need be), and, time permitting, there will be a demo of SpecExplorer, a general-purpose software-testing tool developed at MSR, finding attacks on abstract cryptographic protocols given a straightforward protocol model.
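For readers unfamiliar with the computational notion being abstracted: indistinguishability of two distributions is measured by a distinguisher's advantage |Pr[D(X)=1] - Pr[D(Y)=1]|. The following toy sketch (the samplers and distinguisher here are invented for illustration, not taken from the talk) estimates such an advantage by sampling:

    import random

    def sample_x():                 # "real" distribution: a fair bit
        return random.randint(0, 1)

    def sample_y():                 # "ideal" distribution: a biased bit
        return 1 if random.random() < 0.6 else 0

    def distinguisher(bit):         # a simple distinguisher: output the bit
        return bit

    def advantage(trials=100_000):
        hits_x = sum(distinguisher(sample_x()) for _ in range(trials))
        hits_y = sum(distinguisher(sample_y()) for _ in range(trials))
        return abs(hits_x - hits_y) / trials

    print(advantage())              # roughly 0.1 for this toy pair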
Randomness is crucial for cryptography: without randomness there can be no secrecy. However, it is not clear how to obtain the necessary random bits in a secure manner. In particular, one problem is that we cannot, in general, rely on data sampled from nature to be distributed according to a "nice" distribution (such as the uniform distribution). In this talk, I will survey some of the approaches to solving this problem in theory and practice (with an emphasis on the former).
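One classical illustration of the problem and a partial fix is von Neumann's extractor, which turns independent flips of a coin with unknown fixed bias into perfectly uniform bits. The Python sketch below is illustrative only; real "weak sources" need not satisfy the independence assumption, which is exactly why stronger extractors are needed:

    import random

    def biased_bit(p=0.7):
        """An imperfect source: 1 with probability p, independently."""
        return 1 if random.random() < p else 0

    def von_neumann_extract(n_bits, p=0.7):
        """Pairs 01 -> 0 and 10 -> 1 are equally likely, p(1-p) each;
        the pairs 00 and 11 are discarded."""
        out = []
        while len(out) < n_bits:
            a, b = biased_bit(p), biased_bit(p)
            if a != b:
                out.append(a)
        return out

    print(von_neumann_extract(10))  # uniform bits, at the cost of discards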
I will mention results from a CHES 2003 paper, which is joint work with Ronen Shaltiel and Eran Tromer; a FOCS 2004 paper, which is joint work with Russell Impagliazzo and Avi Wigderson; and as-yet-unpublished joint work with Shai Halevi.
Some ideas from theoretical computer science lead to a theory for "interval algebras". Its intended model is the closed real interval. A major theorem is that every algebra of this finitely presented equational theory has a simple quotient and that any simple algebra appears uniquely as a subalgebra of the intended model. This acts as a powerful completeness theorem and allows one to recover -- using algebraic techniques -- what appears to be a good part of real analysis. I'll report on a number of recent results in this project.
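The abstract does not spell out the signature; purely by way of illustration (this choice of operation is my assumption, not the speaker's), one natural primitive on the closed interval is the midpoint, which already satisfies a small equational theory: idempotence, commutativity, and the medial law. A quick numerical check in Python:

    import random

    def mid(x, y):
        """Midpoint operation on the closed interval [-1, 1]."""
        return (x + y) / 2

    # Idempotent:  mid(x, x) == x
    # Commutative: mid(x, y) == mid(y, x)
    # Medial:      mid(mid(a,b), mid(c,d)) == mid(mid(a,c), mid(b,d))
    for _ in range(1000):
        a, b, c, d = (random.uniform(-1, 1) for _ in range(4))
        assert abs(mid(mid(a, b), mid(c, d)) - mid(mid(a, c), mid(b, d))) < 1e-12
    print("medial law holds on random samples")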
We outline a new approach to showing the decidability of type checking for type theories with beta-eta-equality, relevant to foundations for module systems and type theory-based proof systems. The key to the approach is a syntactic translation mapping terms in the beta-eta presentation into their full eta-expansions in the beta presentation. Decidability of type checking is lifted from the target beta presentation to the beta-eta presentation. The approach extends to other inductive types with a single constructor, and is demonstrated for singleton types and sigma-types.
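To fix intuitions about the translation's target: the full eta-expansion of a term M at a function type A -> B is lam x. M x, applied recursively at the domain and codomain types. A small sketch over simple types (the term representation below is ad hoc, invented for this note):

    # Types: "b" (base) or ("->", A, B).
    # Terms: ("var", x), ("lam", x, M), ("app", M, N).
    fresh = ("x%d" % i for i in range(10**6))

    def eta_expand(term, ty):
        """Return the eta-long form of `term` at type `ty`."""
        if ty == "b":
            return term
        _, a, b = ty
        x = next(fresh)
        arg = eta_expand(("var", x), a)          # bound variable, eta-long
        return ("lam", x, eta_expand(("app", term, arg), b))

    print(eta_expand(("var", "f"), ("->", "b", "b")))
    # -> ('lam', 'x0', ('app', ('var', 'f'), ('var', 'x0')))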
The basic tenet of intuitionism is that the objects of mathematics are mental creations of human beings. In 1907 Brouwer put forward the ur-intuition of mathematics as the common origin of discrete and continuous mathematics. After his first topological period Brouwer introduced a refined set of notions for intuitionistic (constructive) mathematics. The key notion of choice sequence provided a basis for the traditional parts of mathematics (algebra, analysis, topology, measure theory, etc.). Brouwer introduced certain principles that exploit the character of choice objects, e.g. the continuity principle and transfinite induction. We will look into the justification of these principles. Somewhat later he searched for means to describe the differences between the algorithmic universe and the choice universe (the 'reduced continuum' versus the 'full continuum'). The tool he found was the 'creating subject', conveniently formulated as Kripke's Schema. We will show that the principle is not just a curiosity, but also has its role in analyzing the continuum. Brouwer's view of logic was never made explicit, but from his dissertation it appears that he had an early 'proof interpretation' in mind. It is likely that his requirements at the time were a bit too strong. The combination of intuitionistic logic and intuitionistic objects yields a mathematical universe that differs considerably from the traditional (Cantorian) universe.
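For reference, one standard formulation of the two principles named above (writing \bar{\alpha}m for the initial segment of \alpha of length m; exact formulations vary in the literature):

    Weak continuity for numbers (WC-N):
      \forall\alpha\,\exists n\,A(\alpha,n) \;\rightarrow\;
      \forall\alpha\,\exists m\,\exists n\,\forall\beta\,
        (\bar{\beta}m = \bar{\alpha}m \rightarrow A(\beta,n))

    Kripke's Schema (KS), one instance per proposition A:
      \exists\alpha\,(A \leftrightarrow \exists n\,\alpha(n) = 1)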
In 1975, Baker, Gill, and Solovay showed that there are oracles A, B such that P^A \neq NP^A and P^B = NP^B. This shows that relativizing techniques won't settle the question of whether P = NP. However, some have interpreted this result to show that diagonalization won't settle the P = NP question, since the diagonalization techniques that we are familiar with seem to relativize. However, in 1980, Kozen proved a result which he interpreted as saying that "if P \neq NP is provable at all, then it is provable by diagonalization." One difficulty with this interpretation is that there is no generally accepted definition of "separation by diagonalization." Regarding Kozen's result, Fortnow wrote in a 2000 paper that "I believe (Kozen's) result says more about the difficulty of exactly formalizing the notion of a 'diagonalization proof' than of actually arguing the diagonalization technique is the only technique we have for class separation."
In this talk, we will discuss some recent work of Impagliazzo, Nash and Remmel on the power of diagonalization. In particular, we will define two notions of diagonalization, strong diagonalization and weak diagonalization (which is implicit in Kozen's result). In contrast to Kozen's result that virtually every separation can be recast as weak diagonalization, we show that there are classes of languages which cannot be separated by strong diagonalization and provide evidence that strong diagonalization does not relativize. We also define two kinds of indirect diagonalization and discuss their power.
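As a reminder of the common core that both notions refine, here is the bare diagonal construction in Python (the "strong" and "weak" variants in the talk constrain how the enumeration may be used; this sketch is only the classical argument):

    def diagonalize(enumeration):
        """Return a function that differs from enumeration[i] on input i."""
        def diag(i):
            return enumeration[i](i) + 1   # change the i-th diagonal entry
        return diag

    fs = [lambda n: 0, lambda n: n, lambda n: n * n]
    g = diagonalize(fs)
    for i, f in enumerate(fs):
        assert g(i) != f(i)                # g escapes the enumeration
    print([g(i) for i in range(3)])        # [1, 2, 5]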