Your counterexample: Why Reason About Continuations?
Yes, I think your analysis is right, and that the pitfalls we note are pretty
obvious when you think about them. Also, comments are reaching us that the
fact that continuations create ``black holes'' is indeed ``well-known''.
(* begin flame *)
But there's known and there's known, and my impression remains that the
majority of the language design and implementation experts, who may feel that
our observations are well-known, could not DEFINE the notion of program
equivalence which Riecke and I point out changes upon introduction of
continuations. I think we're making a worthwhile contribution by giving a
precise formulation of what the ``well-known'' problem is, not to mention
that we've made some partial progress on it.
(* end flame *)
Now please indulge me as I repeat stuff you know in the following commentary
included for the sake of TYPES and LOGIC subscribers.
My motivation for the questions about continuations in our initial 1985 LNCS
v.193 paper and in my LISP & FP '88 paper with Riecke was neither to perform
post-continuation-transform ``peephole'' optimization of functional
expressions, nor to reason about SCHEME congruences in call-cc contexts.
My main purpose has been to understand verification logic for code with
structured break statements if not full-fledged goto's. It's easy to find
congruences between, for example, Algol-like programs which hold wrt
break-free contexts, but not in the presence of nondivergent procedures which
can fail to return to the calling routine (cf. Example 2 of my POPL'88 paper
with Sieber). One usually can give a direct semantics for languages without
breaks--as Sieber and I did for our Algol fragment--but with breaks one needs
to move to standard continuation-style semantics.
So continuation semantics gives a technically adequate, apparently clear
semantics for languages extended with breaks. But what does the theory of a
language look like after extension, and how does it compare to the theory of
the unextended language? For example, are nested declarations still
equivalent to simultaneous declarations? Are identities like
WHILE p DO a OD = IF p THEN a; WHILE p DO a OD FI ,
LETREC F(X) BE F(X) IN F(3) END = LETREC G(X,Y) BE G(X,Y) IN G(1,2) END
still true? Etc.
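As a toy check of the first identity (again my own sketch, not from the paper), one can model the break as an exception whose handler encloses the entire command on both sides. Under that particular reading, the loop-unrolling identity survives on this instance; whether it holds in general, and under which placement of the break's target, is precisely the sort of question a continuation semantics has to settle.

```python
class Break(Exception):
    """A structured break: escapes to the nearest enclosing handler."""
    pass

def loop(p, a, s):
    # WHILE p DO a OD, with the handler for Break enclosing the whole loop
    try:
        while p(s):
            a(s)
    except Break:
        pass
    return s

def unrolled(p, a, s):
    # IF p THEN a; WHILE p DO a OD FI, under the same enclosing handler
    try:
        if p(s):
            a(s)
            while p(s):
                a(s)
    except Break:
        pass
    return s

def p(s):
    return s["i"] < 10

def a(s):                     # a body that breaks out partway through
    s["i"] += 1
    if s["i"] == 3:
        raise Break()

assert loop(p, a, {"i": 0}) == unrolled(p, a, {"i": 0}) == {"i": 3}
```

Note the sketch quietly assumes the break in the unrolled first copy of `a` targets the same point as the break inside the loop; an Algol-style break bound to "the enclosing loop" would make the unrolling itself ill-formed, which is where the subtlety lives.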
As you nicely put it, ``Alas, this appears to be a problem of the world, not
a problem in the model....'' That is, my concern is not with
continuations---they seem just right for modeling what needs to be
modeled---but rather with what they model and how much they help one to
understand languages whose semantics mandates their use.
For example, I would expect that the ``goto considered harmful'' view might
find theoretical support here if we could identify nice logical principles
which held wrt structured-break contexts, but not wrt ones with arbitrary
goto's. So far, I'm not aware of any work along this potentially fruitful
line.
To summarize, continuations had played a role in semantics for more than a
decade before they became prominent in compiler methodology or in the call-cc
extension of SCHEME. With so many uses, it seems appropriate to develop
principles for reasoning about them. The paper with Riecke
(1) highlights the fact that the nice work of Felleisen et al. provides only a
theory of conversion---essentially symbolic execution---and so is sure to be
limited,
(2) points out a few ``obvious'' logical pitfalls such as the
failure of some familiar functional identities in the continuation setting,
(3) pursues a little further the enterprise of theoretically relating
direct and continuation semantics and logic started by Reynolds, Fischer, and
Plotkin over 15 years ago, and taken up again in the past few years by you
and me, and by Felleisen et al.
Regards, A.