This post is largely based on some conversations I've had about Polarized Logic/Focusing/Call-By-Push-Value recently, especially with Neel Krishnaswami and Wes Filardo, though it was originally prompted by Manuel Simoni some time ago.
I think that one of the key observations of focusing/CBPV is that programs are dealing with two different things - data and computation - and that we tend to get the most tripped up when we confuse the two.
- Data is classified by data types (a.k.a. positive types). Data is defined by how it is constructed, and the way you use data is by pattern-matching against it.
- Computation is classified by computation types (a.k.a. negative types). Computations are defined by their eliminations - that is, by how they respond to signals/messages/pokes/arguments.
There are two things I want to talk about, both having to do with recursive types: call-by-push-value has positive recursive types (which have the feel of inductive types and/or algebras and/or what we're used to as datatypes in functional languages) and negative recursive types (which have the feel of recursive, lazy records and/or "codata" (whatever that is) and/or coalgebras and/or what William Cook calls objects). Both positive and negative recursive types are treated by Paul Blain Levy in his thesis (section 5.3.2) and in the Call-By-Push-Value book (section 4.2.2).
In particular, I want to claim that Call-By-Push-Value and focusing suggest two fundamental features that should be, but generally aren't (at least not simultaneously), present in modern programming languages:
- Support for structured data with rich case analysis facilities (up to and beyond what are called views)
- Support for recursive records and negative recursive types.
Minor addition to "core Levy"
I'll be presenting an imaginary extension to Bauer's Levy language in this post.1 I've mucked around with Levy before, of course, and it would probably help to review that previous post before reading this one. I want to make one addition to Levy before we begin making big, interesting ones. The derived form that I want to add - e1 orelse e2 - is computation code with type F bool whenever e1 and e2 both have type F bool. (I'll also use the analogous form e1 andalso e2, which desugars the same way but with the two branches of the if swapped.) The orelse form is definable as syntactic sugar, where x is selected to not be free in e2:
e1 to x in
if x then return true
else e2
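As a sanity check on the sugar, here is the same behavior sketched in OCaml, under the (lossy) convention that a computation of type F bool is modeled as a unit -> bool thunk: e1 always runs, and e2 runs only when e1 returns false.

let orelse (e1 : unit -> bool) (e2 : unit -> bool) : bool =
  if e1 () then true else e2 ()

(* Only "left" prints: the second computation is never run. *)
let _ = orelse (fun () -> print_string "left"; true)
               (fun () -> print_string "right"; false)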
One other aside - do remember that, at the cost of potential confusion, I modified Bauer's original Levy implementation to make force a prefix operator that binds more tightly than application - force g z y can be written with parentheses as (((force g) z) y).
Positive types, positive recursive types
The functional languages have the construction of positive types just about right: they're made up of sums and products. I used a special LLF-ish construction for datatypes in my exploration of Levy#, but the more traditional way of introducing datatypes is to say that they are recursive types μt.T(t). The recursive types we're used to thinking about are naturally interpreted as positive types, because they are data and are inspected by pattern matching.
There's a tendency in programming language design to shove positive recursive types together with labeled sum types to get a familiar datatype mechanism.2 I will go along with this tendency and merge labeled sums and recursive types, writing them as μt.[L1: T1(t), L2: T2(t), ..., Ln: Tn(t)].
Here are datatype definitions for trees of ints and lists of ints:
type+ tree = μtree. [ Leaf: int, Node: tree * tree ]
type+ list = μlist. [ Nil, Cons: int * list ]
Note from the definition of lists that we also allow labels to carry no arguments: it's possible to treat the declaration of Nil as syntactic sugar for Nil: unit. The associated value is Nil, which is syntactic sugar for Nil ().
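For comparison, here are the same two declarations as ordinary OCaml datatypes - OCaml's constructors build in the roll and pattern matching builds in the unroll, and I rename list to ilist to avoid the built-in type of the same name:

type tree = Leaf of int | Node of tree * tree
type ilist = Nil | Cons of int * ilist

(* Positive data is used by pattern matching against its constructors. *)
let rec sum_tree (t : tree) : int =
  match t with
  | Leaf n -> n
  | Node (l, r) -> sum_tree l + sum_tree r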
There are infinitely many trees and lists as we've defined them. In fact, it is merely a convention of how we write programs that we present positive types as finite sums. Even though we can't write it down as a datatype declaration, it's perfectly reasonable to think of a built-in type of arbitrary-precision integers as being defined as follows:
type+ int =
μint.
[ 0,
1,
~1,
2,
~2,
3,
... ]
The same goes for built-in strings and the like - think of built-in types as positive enumerations provided specially by the language, since the language doesn't give us the ability to write down the full declaration containing all possible constants.
Powerful views
A point that has not, in my humble opinion, been made simply enough is that the perspective of focusing says that we should think about integrating much more powerful case analysis mechanisms into our programming languages. I learned about this point from Licata, Zeilberger, and Harper's 2008 Focusing on Binding and Computation, but their setting had enough other interesting stuff going on that it probably obscured this simple point.
Standard ML (and, to the best of my knowledge, every other functional language circa 1997) only provides a limited form of case analysis - patterns that make an arbitrary but finite inspection of the outermost structure of the datatype:
case x of
| Leaf 9 => ...
| Leaf y => ...
| Node (Node (Leaf 5, y), z) => ...
| Node (y, z) => ...
This limitation comes with a nice tradeoff, in that we can pretty effectively estimate how much work compiled code needs to do to handle a pattern match. However, the structure of focusing suggests that any mathematical function from a value's structure to a computation is fair game. One well-studied modernization of pattern matching is views, which allow us to group computations in other ways. One use case would be allowing us to take an integer variable x and branch on whether it is even or odd:
case x of
| 0 => return true
| x is even => return false
| ~1 => return false
| x is odd => return true
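OCaml has no view mechanism as a primitive, but the spirit of the example can be sketched by routing the case analysis through an effect-free, total function into an auxiliary datatype of views (parity_view, view, and classify are all names I'm inventing here):

type parity_view = Zero | NegOne | Even | Odd

(* An effect-free, total function from the value to its view. *)
let view (x : int) : parity_view =
  if x = 0 then Zero
  else if x = -1 then NegOne
  else if x mod 2 = 0 then Even
  else Odd

(* The case analysis from above, replayed over the view. *)
let classify (x : int) : bool =
  match view x with
  | Zero -> true
  | Even -> false
  | NegOne -> false
  | Odd -> true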
The focusing-aware view of pattern matching suggests that what a pattern match is actually doing is defining a case individually for each value structure - if we could write a countably-infinite-sized program, then we could expand the view-based program above to the no-longer-view-based, but infinitely long, program below:
case x of
| 0 => return true
| 1 => return true
| ~1 => return false
| 2 => return false
| ~2 => return false
| 3 => return true
| ...
So: the focusing perspective says that any effect-free (and effects include nontermination!) mathematical function we can write from value structures to computations is fair game for case analysis; views are just one known, interesting way of doing this. In principle, however, we can consider much richer case-analysis machinery to be fair game. For instance, there is a mathematical function \(f\) from integer values int to computation code of type F bool with the variable coll free:
- If the integer is 1, the result is return true
- If the integer is less than 1, the result is return false
- Otherwise, if the integer is divisible by 2 and the result of dividing the integer by 2 is i, then the result is force coll i
- Otherwise, let j be 3 times the integer, plus one. Then the result is force coll j
Tabulating f at every value structure would give us the following no-longer-writable, infinitely long program (a finite rendering of f follows it):
rec coll: int -> F bool is
fn x: int =>
case x of
| 0 => return false
| 1 => return true
| ~1 => return false
| 2 => force coll 1
| ~2 => return false
| 3 => force coll 10
| ~3 => return false
| 4 => force coll 2
| ~4 => return false
| 5 => force coll 16
| ...
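For the record, f is recognizable as the Collatz predicate. Under the assumption that pattern guards count as an acceptable case-analysis mechanism, here is a finite OCaml rendering of the infinite program above - note that each branching decision is effect-free even though the overall recursion is not known to terminate:

let rec coll (x : int) : bool =
  match x with
  | 1 -> true
  | x when x < 1 -> false
  | x when x mod 2 = 0 -> coll (x / 2)
  | x -> coll ((3 * x) + 1)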
The important question here is: what is the right way to think about how we write down total functions from value structures? Are views about right, or do we need something different - clearer, more powerful, or less powerful? I don't have a sense for the answers, but I'm pretty confident that's the right question.
Views are only over the structure of values
One critical caveat: I used the phrase value structure in the previous discussion repeatedly on purpose. Because computations can always contain effects, we cannot poke at the computations at all when we think about views - the value structure is just the part of the value that we can inspect without traversing a thunk. Therefore, if we have this definition...
type+ somefn =
μsomefn.
[ BoolFn: U (bool -> F bool),
IntFn: U (int -> F int) ]
...then a case analysis on a value of type somefn can have at most two branches - one for the BoolFn case and one for the IntFn case. We can't pattern-match into computations, so even though it would be reasonable to say that there are many, many values of type somefn, there are only two value structures that can be inspected by case analysis.
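The OCaml analogue makes the caveat concrete: a match can distinguish the two constructors, but the closures inside are opaque to it (the type and function names here are mine):

type somefn = BoolFn of (bool -> bool) | IntFn of (int -> int)

let describe (v : somefn) : string =
  match v with
  | BoolFn _ -> "some function on booleans"  (* cannot match inside the closure *)
  | IntFn _ -> "some function on integers"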
Negative types, negative recursive types
In call-by-push-value, the computation types (a.k.a. the negative types of polarized logic/focusing) are defined by the way we use them. Computation code of computation type does stuff in response to being used - a function does stuff in response to getting an argument, and so the most well-understood negative type is implication.
Conjunction (the product or pair type) is a bit slippery, says the CBPV/focusing methodology. We've already used conjunction as a positive type (t1 * t2), but we can also think of conjunction as being a negative type. This is because it's equally valid to think of a pair as being positively defined by its construction (give me two values, that's a pair) and negatively as a suspended computation that awaits you asking it for its first or second component.
CBPV/focusing makes us choose: we can have either kind of conjunction, or we can have both of them, but they are different. It will be convenient if we keep the normal tuples as positive types, and associate negative types with records - that is, with named products. Records are defined by how you project from them, and just as we tie labeled sums in with positive recursive types, we'll tie labeled products in with negative recursive types.
Negative recursive types are naturally codata in the same way that positive recursive types are naturally data: the canonical example is the infinite stream.
type- stream =
μstream.
{ head: F int,
tail: stream }
It's natural to define particular streams with fixed points:
val Ones = thunk rec this: stream is
  { head = return 1,
    tail = force this }
val CountUp = thunk rec this: int -> stream is
  fn x: int =>
    { head = return x,
      tail = force plus x 1 to y in force this y }
Say I wanted to get the fourth element of an infinite stream str of type U stream. The ML-ish way of projecting from records would write this as #head (#tail (#tail (#tail (force str)))), but I will pointedly not use that notation in favor of a different record notation: (force str).tail.tail.tail.head. It nests better, and works better in the case that record elements are functions. (Because functions are negative types, like records, it is very natural for the components of records to be functions.)
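As a rough OCaml approximation - not the CBPV semantics, since Lazy.t memoizes and I've collapsed head's F int down to a plain int - Lazy.t plays the role of U/thunk and the record's fields are its projections:

type stream = { head : int; tail : stream Lazy.t }

(* The two streams above: ones ties the knot directly, count_up recurs. *)
let rec ones : stream = { head = 1; tail = lazy ones }
let rec count_up (x : int) : stream = { head = x; tail = lazy (count_up (x + 1)) }

(* The fourth element: project tail three times, then head. *)
let fourth (str : stream) : int =
  (Lazy.force (Lazy.force (Lazy.force str.tail).tail).tail).head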
Negative recursive types as Cook's objects
Here's a bit of a case study to conclude. The entities that William Cook names "Objects" in his paper On Understanding Data Abstraction, Revisited are recognizably negative recursive types in the sense above.3 Cook's examples can be coded in Standard ML (see here), but the language gets in the way of this encoding in a number of places.4 To see what this would look like in a language with better support, I'll encode the examples from Cook's paper in my imaginary CBPV implementation. The recursive negative type looks much like a stream, and encodes the interface for these set objects.
type- iset =
μiset.
{ isEmpty: F bool,
contains: int -> F bool,
insert: int -> iset,
union: U iset -> iset }
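As a rough OCaml sketch - flattening F bool to a thunk, dropping the U's, and lowercasing the field names to satisfy OCaml - the interface is an ordinary recursive record of methods:

type iset = {
  is_empty : unit -> bool;
  contains : int -> bool;
  insert : int -> iset;
  union : iset -> iset;
}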
One difficulty: the Insert and Union operations that Cook uses involve mutual recursion. I don't have mutual recursion, so rather than pretending to have it (I've abused my imaginary language extension enough already) I'll code it up using records. The record type insertunion I define isn't actually recursive - it's just a record. This is analogous to using ML's datatype mechanism (which, as we've discussed, makes positive recursive types) to define a simple enumeration.
type- insertunion =
μinsertunion.
{ Insert: U iset * int -> iset,
Union: U iset * U iset -> iset }
val InsertUnion = thunk
rec x: insertunion is
{ Insert = fn (s, n) =>
(force s).contains n to b in
if b then force s else
rec self: iset is
{ isEmpty = return false,
contains = fn i => (force equal i n) orelse (force s).contains i,
insert = fn i => (force x).Insert (self, i),
union = fn s' => (force x).Union (self, s') },
Union = fn (s1, s2) =>
rec self: iset is
{ isEmpty = (force s1).isEmpty andalso (force s2).isEmpty,
contains = fn i => (force s1).contains i orelse (force s2).contains i,
insert = fn i => (force x).Insert (self, i),
union = fn s' => (force x).Union (self, s') } }
val Insert: U (U iset * int -> iset) = thunk ((force InsertUnion).Insert)
val Union: U (U iset * U iset -> iset) = thunk ((force InsertUnion).Union)
We've defined union of sets and insertion into a set, but we haven't actually defined any sets yet! Once we give the implementation of an empty set, however, we can manipulate these sets with a Java-esque method invocation style:
val Empty = thunk rec self: iset is
{ isEmpty = return true,
contains = fn x => return false,
insert = fn i => force Insert (self, i),
union = fn s' => force s' }
val JustOne = thunk (force Empty).insert 1
(force Empty).insert(3).union(JustOne).insert(5).contains(4)
As Cook noted, it's also very natural to talk about infinite sets in this style, such as the set of all even numbers and the set of all integers:
val Evens = thunk rec self: iset is
{ isEmpty = return false,
contains = fn i =>
force mod i 2 to x in
(case x of
| 0 => return true
| _ => return false),
insert = fn i => force Insert (self, i),
union = fn s' => force Union (self, s') }
val Full = thunk rec self: iset is
{ isEmpty = return false,
contains = fn i => return true,
insert = fn i => force self,
union = fn s' => force self }
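To close the loop: reusing the iset record sketched earlier, Full transcribes directly into OCaml, with OCaml's recursive value definitions playing the role of rec self - every method answers without ever building or inspecting data:

let rec full : iset = {
  is_empty = (fun () -> false);
  contains = (fun _ -> true);
  insert = (fun _ -> full);
  union = (fun _ -> full);
}

(* Every membership test succeeds; no data is ever built. *)
let _ = (full.insert 3).contains 4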
1 If anyone is interested in helping me implement a toy language along these lines, I'm all ears. Also, the disclaimer that since I don't have an implementation there are surely bugs bugs bugs everywhere is strongly in effect.
2 As a practical matter, this makes it easier for the language to know where to put implicit roll/unroll annotations, so that programmers don't have to write these explicitly, which would be a pain.
3 Cook does give the disclaimer that this definition is "recognized as valid by experts in the field, although there certainly are other valid models." I carefully call these things "Cook objects" rather than "objects" - my claim is that negative recursive types correspond to what William Cook names objects, not that they correspond to what you call objects. Note, furthermore, that I could be wrong about what William Cook calls objects! I only have his paper to go on and I have been known to be a few cookies short of a haystack.
4 There are two issues with the encoding in SML. The most basic is that we have to encode the recursive types using SML's recursive type mechanism, which is biased towards the positive encoding of recursive types as finite sums. The other issue is that the way SML ties together (mutual) recursion with closures gets in the way - the standard trick of writing a "thunk" as a function from unit is all over the place, and as a result it wasn't possible to define EmptySet so that it was mutually recursive with the definitions of Insert and Union. So, I'm certainly not arguing that SML facilitates programming in (what Cook calls) an object-oriented style in a strong sense - it's unnecessarily painful to code in that style - but the minimal semantic building blocks of what Cook presented and named Object are present and accounted for. [Update:] As gasche discusses in the comments, OCaml's records, and the way it deals with laziness, make it quite a bit better at encoding the example, but it's still not ideal.