Slides from the two talks. [pdf]

*Abstract*: Call-by-push-value is a calculus for computational
effects that provides fine-grain primitives into which both
call-by-value and call-by-name calculi can be decomposed. This
decomposition appears in a huge variety of semantics: operational,
domains, state, continuations, games, possible worlds, etc.

In this tutorial, we first look at semantics of call-by-value and call-by-name, using algebras for a monad, and see how that leads us to call-by-push-value.

We then look at two examples of call-by-push-value semantics that do not use algebras: state (global ground storage cells) and continuations.
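To fix ideas, here is a minimal Haskell sketch (ours, not the tutorial's) of how the two decompositions differ: a thunk plays the role of call-by-push-value's U, an arbitrary monad m stands in for F, and a counter effect makes evaluation order observable.

```haskell
import Control.Monad.Trans.State (State, evalState, get, put)

-- A thunk (CBPV's U B) suspends a computation; force runs it.
newtype Thunk m a = Thunk { force :: m a }

-- Call-by-value function type A -> B decomposes as U (A -> F B):
type CBV m a b = Thunk m (a -> m b)

-- Call-by-name function type A -> B decomposes as U A -> B:
type CBN m a b = Thunk m a -> m b

-- An observable effect: a counter, to make evaluation order visible.
tick :: State Int Int
tick = do { n <- get; put (n + 1); return n }

-- CBV application runs the argument exactly once, before the call:
appV :: Monad m => CBV m a b -> m a -> m b
appV f arg = do { x <- arg; g <- force f; g x }

-- CBN application passes the argument as an unevaluated thunk:
appN :: Monad m => CBN m a b -> Thunk m a -> m b
appN f arg = f arg

-- A CBN function that forces its argument twice:
dup :: CBN (State Int) Int (Int, Int)
dup t = do { a <- force t; b <- force t; return (a, b) }

-- A CBV function that duplicates an already-evaluated value:
dupV :: CBV (State Int) Int (Int, Int)
dupV = Thunk (return (\x -> return (x, x)))
```

Starting the counter at 0, `appN dup (Thunk tick)` replays the effect at each use and yields (0, 1), while `appV dupV tick` runs it once and yields (0, 0).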

Slides from the talk. [ppt]

*Abstract*: In this talk, I will describe Hoare Type Theory
(HTT), which integrates a dependently typed higher-order language
with Hoare-like specifications for reasoning about mutable state and
pointer aliasing. The lack of this integration in the past has
arguably been one of the most significant obstacles in the application
of Hoare-style verification methodology, since it prevented effective
reasoning about constructs for data abstraction, information hiding,
and code reuse.

From a technical standpoint, it is interesting that the design of HTT relates in an essential way some of the most venerable ideas from programming language theory, such as Dijkstra's predicate transformers, the Curry-Howard isomorphism, and monads, as well as the more recent idea of Separation Logic.

We have formally proved that HTT is modular, in the sense that the verification of individual program components suffices to establish the correctness of the whole program. HTT is also (almost) conservative over the programming practice in modern functional languages; if the verification features are not used, HTT very much looks like core Haskell.

I will discuss the implementation of HTT (called Y-not) which is currently under way, as well as the possibilities for scaling HTT to support further programming features.

Slides from the talk. [pdf]

*Abstract*: How to integrate effects and dependent types? I
will discuss a proposal which is based on the following key
ingredients:

- 1. Instead of extending Type Theory, we specify the behaviour of effects in a purely functional way. This executable specification is replaced at run-time by the actual effects in the real world.

- 2. Partiality and non-determinism are modalities of effects which are also specified functionally. In particular effects may or may not introduce partiality or non-determinism.

- 3. We exploit the rich structure of dependent types to express important aspects of effects such as the allocation of resources.

This talk is based on joint work with Wouter Swierstra on specifying effects functionally and joint work with Tarmo Uustalu and Venanzio Capretta on the partiality monad.
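As a concrete illustration of ingredient 2, here is a sketch of the partiality monad (our Haskell rendering, assuming the monad meant is Capretta's coinductive delay monad; laziness stands in for coinduction):

```haskell
import Control.Monad (ap)

-- A possibly non-terminating computation: either done now, or one
-- observable step of delay followed by the rest.
data Delay a = Now a | Later (Delay a)

instance Functor Delay where
  fmap f (Now a)   = Now (f a)
  fmap f (Later d) = Later (fmap f d)

instance Applicative Delay where
  pure  = Now
  (<*>) = ap

instance Monad Delay where
  Now a   >>= f = f a
  Later d >>= f = Later (d >>= f)

-- The genuinely partial element: an infinite tower of delays.
never :: Delay a
never = Later never

-- Observe a computation for at most n steps; Nothing means "not yet".
runFor :: Int -> Delay a -> Maybe a
runFor _ (Now a) = Just a
runFor n (Later d)
  | n > 0     = runFor (n - 1) d
  | otherwise = Nothing
```

Note that partiality is introduced only by the `Later` constructor, so an effect specified without it cannot introduce non-termination.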

Slides from the talk. [pdf]

*Abstract*: We present relational interpretations of
type-and-effect systems for tracking store operations (allocation,
reading, writing). Effects are interpreted denotationally as the
preservation of binary store relations which are indexed by partial
bijections on locations. The type system may be used to justify a
number of program transformations and may thus serve as a basis for
effect-based compiler optimisations. (Joint work with N. Benton,
M. Hofmann, and A. Kennedy.)

Slides from the talk. [pdf]

*Abstract*: Reasoning on imperative programs requires the
ability to track aliasing and ownership properties. We present a type
system that provides this ability, by using regions, capabilities and
singleton types. It is designed for a high-level programming language
with higher-order functions, algebraic data structures, and references
(mutable memory cells).

We then exhibit a type-directed translation of imperative programs into a purely functional language. Like the monadic translation, this is a store-passing translation. Here, however, the store is partitioned into multiple fragments, which are threaded through a computation only if they are relevant to it. Furthermore, the partition of the store into fragments can evolve dynamically to reflect changes in the ownership hierarchy.

The translation offers deep intuition about the inner workings and soundness of the type system. Furthermore, it provides a foundation for our long-term objective of designing a system for specifying and certifying imperative programs with dynamic memory allocation. (Joint work with François Pottier.)
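A toy illustration (ours, not the talk's actual translation) of the difference between whole-store passing and fragment passing:

```haskell
-- The whole store: two independent cells.
type Store = (Int, String)

-- The monadic (whole-store-passing) translation threads everything,
-- even though this computation only touches the Int cell:
incrWhole :: Store -> ((), Store)
incrWhole (n, s) = ((), (n + 1, s))

-- The fragment-passing translation threads only the relevant cell;
-- the String cell is simply not mentioned in the type:
incrFrag :: Int -> ((), Int)
incrFrag n = ((), n + 1)

-- A computation needing both cells takes both fragments, reflecting
-- a different (larger) region of ownership:
touchBoth :: (Int, String) -> ((), (Int, String))
touchBoth (n, s) = ((), (n + 1, s ++ "!"))
```

In the real system the partition into fragments is governed by the region and capability discipline, and may change over a computation's lifetime.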

Slides from the talk. [pdf]

*Abstract*: Dynamic join points-and-advice aspects can be seen
as specializing continuation frames. In this way, aspects compose
additional effects into the program flow. This description, dual to
objects, reinforces the need for lightweight effect annotations for
aspects. I present four high-level aspects and examine potential
annotations for them.

Slides from the talk. [pdf]

*Abstract*: Monads are widely used in the writing of effectful
computations, as a powerful tool allowing for more concise and
intuitive expressions. One might expect reasoning about them to be
just as straightforward, but the evidence in support of this claim is
meagre so far. We try to address the issue by studying a plain
example: the labelling of a tree with different labels. We take a
simple algorithm using a modifiable state, and prove it correct using
elementary techniques in a first go, and monads/applicative functors
the second time around. We thus obtain a factorisation of the
labelling routine, the proof of which suggests taking some liberty on
the type system (in fact justified by notational abuse), so as to
describe some quite intriguing properties of the state applicative
functor in interaction with function composition.
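The labelling example can be sketched as follows (our reconstruction in Haskell, not the talk's development):

```haskell
import Control.Monad.Trans.State (State, evalState, get, put)

data Tree a = Leaf a | Node (Tree a) (Tree a)
  deriving (Eq, Show)

-- The state effect: a supply of fresh labels.
fresh :: State Int Int
fresh = do { n <- get; put (n + 1); return n }

-- Label leaves left to right, threading the counter through the state
-- applicative functor.
label :: Tree a -> State Int (Tree Int)
label (Leaf _)   = Leaf <$> fresh
label (Node l r) = Node <$> label l <*> label r

relabel :: Tree a -> Tree Int
relabel t = evalState (label t) 0
```

The applicative formulation makes the factorisation visible: `label` is the shape-preserving traversal, and `fresh` carries the entire stateful content.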

Slides from the talk. [pdf]

*Abstract*: A venerable foundation for imperative programming
is the theory of monotone predicate transformers, inspired by Hoare,
advocated by Dijkstra, and elaborated in the refinement calculus of
Back and von Wright. A pivotal theorem of the refinement calculus
states that monotone predicate transformers can be written in a
sum-of-products form. This form can be captured
precisely by a structure introduced in type theory about 20 years ago,
and now known under various names, such as "interaction structure", or
"indexed container". Furthermore, data-refinement in the refinement
calculus corresponds to a certain notion of morphism between
interaction structures.

It is perhaps a scandal that after some 50 years of advocacy for type-theory as a wide-spectrum programming language, there is still no credible account of how to specify interaction with the world. Sooner or later, we will be found out. I suggest that the link with refinement calculus is not just a technical curiosity, but a fundamental part of any resolution of this scandal.
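A finite, executable approximation of this correspondence (our sketch; lists stand in for the type families of an interaction structure):

```haskell
-- An interaction structure over a state set s: commands available in
-- each state, possible responses to each command, and a next state.
data IS s c r = IS
  { cmds  :: s -> [c]
  , resps :: s -> c -> [r]
  , next  :: s -> c -> r -> s
  }

-- The monotone predicate transformer an interaction structure denotes,
-- in sum-of-products form: SOME command (angelic choice) such that
-- EVERY response (demonic choice) leads to a state satisfying phi.
wp :: IS s c r -> (s -> Bool) -> (s -> Bool)
wp is phi s =
  any (\c -> all (\r -> phi (next is s c r)) (resps is s c)) (cmds is s)

-- A toy example: one command; the world answers True or False, moving
-- a counter up or down.
step :: IS Int () Bool
step = IS { cmds  = \_ -> [()]
          , resps = \_ _ -> [True, False]
          , next  = \s _ b -> if b then s + 1 else s - 1
          }
```

Here `wp step (> 0)` holds at 5 (both outcomes stay positive) but fails at 1 (the world may answer False and reach 0).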

Slides from the talk. [pdf]

*Abstract*: Let us consider Parigot's lambda-mu-calculus
extended with a single dynamically bound continuation variable. We
show that this calculus is a calculus of delimited control:

- its call-by-value version is equationally equivalent to Danvy and Filinski's calculus with shift and reset, a calculus known to represent all monads expressible in lambda-calculus (exceptions, references, ...);

- its call-by-name version is (surprisingly) equivalent to de Groote's variant of Parigot's lambda-mu-calculus, a calculus shown by Saurin to satisfy Böhm's theorem, while Parigot's calculus does not.
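For readers unfamiliar with shift and reset, here is a minimal Haskell sketch (ours; the shift below is simplified in that it does not re-delimit the captured continuation, which suffices for this example):

```haskell
-- A minimal continuation monad, for illustration.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

instance Functor (Cont r) where
  fmap f m = Cont (\k -> runCont m (k . f))

instance Applicative (Cont r) where
  pure a    = Cont ($ a)
  mf <*> ma = Cont (\k -> runCont mf (\f -> runCont ma (k . f)))

instance Monad (Cont r) where
  m >>= f = Cont (\k -> runCont m (\a -> runCont (f a) k))

-- reset delimits a context; shift captures the continuation up to
-- the nearest enclosing reset, as an ordinary function k.
reset :: Cont r r -> r
reset m = runCont m id

shift :: ((a -> r) -> r) -> Cont r a
shift = Cont

-- The captured continuation here is (\x -> x * 10), invoked twice:
example :: Int
example = reset (do { x <- shift (\k -> k 1 + k 2); return (x * 10) })
```

`example` evaluates to 30: the delimited context `(* 10)` is applied to 1 and to 2, and the results are summed, a control effect no pure expression can exhibit.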

Slides from the talk. [pdf]

*Abstract*: We present an extension of System F+eta
(introduced by John C. Mitchell) with a mechanism of typed
exceptions. The reduction of exceptions in this extension is somewhat
non-standard, as it follows a call-by-name discipline (to be compared
with exception handling in lazy languages). This peculiarity leads us
to introduce an original notion of "corrupted" type, meaning that a
term may raise an exception in an arbitrary evaluation context, not
necessarily at top level. Terms of a corrupted type may be seen as
terms of the corresponding safe type where arbitrary subterms are
replaced by the construction 'raise E', which raises an exception. This
corruption allows exceptions and arrow types to mix nicely,
and in particular allows a certain modularity in the type system: a
function whose type does not mention an exception is still allowed, by
subtyping, to take an argument corrupted by this exception. Hence, this
typing of exceptions relies deeply on a subtyping relation.

We also show a realisability model of this calculus in which the type arrow is interpreted in a non-standard way. We deduce from this model type safety and (weak) normalisation results.

Slides from the talk. [pdf]

*Abstract*: I'll talk about some ideas I haven't finished having yet,
about the syntax of functional programs with effects.
I'll examine the legacy of Haskell's design choices from
the luxurious position of being able to start all over
again, and I'll remind you of what one might prefer about
ML. I contend that:

- it is important to separate notions of value and notions of computation in types and terms, to reduce the notational burden of lifting computations through multiple layers of effects;

- we should spend more notation on doing nothing (thunking) in order to spend less on doing something (applicative programming);

- we should think about the syntax of effectful programs, not just expressions.

Or, more briefly, I resent writing "return", especially just after "=", and I plan to do something about it.

Slides from the talk. [pdf]

*Abstract*: We investigate recursion and recursive types in
models of two-level type theories for effects, similar to ones used in
earlier work with Alex Simpson on parametric polymorphism and
effects. As in Paul Levy's call-by-push-value calculus, these type
theories have two collections of types: value types (whose elements
are static values) and computation types (whose elements are dynamic
effect-producing computations). Earlier work has shown that inductive
and coinductive computation types make sense; in this talk we
investigate when general recursive computation types can be modeled.

Slides from the talk. [pdf]

*Abstract*: I would like to present a total, executable
specification of mutable state, discuss some of the technical issues
involved, and speculate about the limits of our approach. If time
permits, I'd like to explore how more advanced logics, such as Hoare
Type Theory or separation logic, might be built on top of our
specification.
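One shape such a specification might take (our sketch, not necessarily the talk's code): state operations as a datatype, with a total, pure interpreter that could be swapped for real effects at run time.

```haskell
import Control.Monad (ap)

-- Stateful computations as data: a total, executable specification.
data IOSpec s a
  = Return a
  | Get (s -> IOSpec s a)   -- read the state, continue with it
  | Put s (IOSpec s a)      -- overwrite the state, then continue

instance Functor (IOSpec s) where
  fmap f (Return a) = Return (f a)
  fmap f (Get k)    = Get (fmap f . k)
  fmap f (Put s k)  = Put s (fmap f k)

instance Applicative (IOSpec s) where
  pure  = Return
  (<*>) = ap

instance Monad (IOSpec s) where
  Return a >>= f = f a
  Get k    >>= f = Get ((>>= f) . k)
  Put s k  >>= f = Put s (k >>= f)

getS :: IOSpec s s
getS = Get Return

putS :: s -> IOSpec s ()
putS s = Put s (Return ())

-- The specification is executable: a total interpreter into pure
-- state passing. Structural recursion makes totality evident.
run :: IOSpec s a -> s -> (a, s)
run (Return a) s = (a, s)
run (Get k)    s = run (k s) s
run (Put s' k) _ = run k s'
```

Programs written against `getS` and `putS` can be reasoned about via `run`, while an alternative interpreter performs the real effects.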

Slides from the talk. [pdf]

*Abstract*: We have previously shown that comonads are a good
tool for analyzing various notions of context-dependent computation,
primarily dataflow computation and tree transformations. For some of
these notions, the comonads one first formulates suffer from a
deficiency: instead of being strong symmetric monoidal, which is the
perfect situation, they are only lax symmetric semimonoidal, for poor
reasons. As a result the Cartesian closed structure of the base
category of pure computations does not properly carry over to the
coKleisli category. In this talk we show how this can be remedied by a
finer approach: switching to a comonad on the category of functors
from a small category of indices. The Haskell implementation
redemonstrates that faking dependent types can be painful. (Joint work
with Varmo Vene.)

Last update 23 December 2007