Thursday, August 11, 2011

Embracing and extending the Levy language

I started writing the post about the thing I really wanted to write about, but then I realized that the background was already a long post on its own. So, that's this post! It's about two things:
  1. Levy, an implementation of call-by-push-value that Andrej Bauer and Matija Pretnar wrote in OCaml for the PL Zoo. I made a fork of Levy that, among other small changes, reduces the need for parentheses by making operator precedence more like it is in OCaml.
  2. A more divergent fork of Levy that adds datatypes in a familiar-but-as-yet-unmotivated way. This diverges from the goals of Levy - as a PL Zoo project, Levy aims to be a clean and uncluttered language implementation, and that's not the case for this more divergent fork. I'll call this new version of Levy... let's see... Levy#. Sure, that works.
Other things have been written about Levy at Andrej's blog and LtU.

Basic concepts

The big idea of call-by-push-value, and, in turn, Levy, is that it is a programming language that is incredibly picky about the difference between code that builds data (I'll call this value code as shorthand) and code that does stuff (I'll call this computation code). Value code tends to be really boring: 3 builds an integer value and true builds a boolean value. Levy also builds in a couple of total binary operations on integers, so 3 + x and y < 12 are both viewed by Levy as value code. Variables are always value code.

Computation code is more interesting. If-statements are computation code: if v then c1 else c2 examines the value built by v and then either does the stuff described by c1 or else does the stuff described by c2. Functions are computation code: fun x: ty -> c receives a value and then does the stuff described by c, possibly using the value it received.

The computation code return v does a very simple thing: it takes the value built by v and says "look, I made you a value." If the value code v had value type ty, then the computation code return v has computation type F ty. So yeah: I hadn't talked about types yet, but pieces of value code are given value types and pieces of computation code are given computation types.1

Which brings us to an interesting kind of value code that I didn't mention before: thunk c wraps up (and suspends) the computation code c in a value; if the computation code c had computation type ty, then the value code thunk c has value type U ty.

The typical type of a function that would be written as int -> int in ML is U (int -> F int). It has to be a value type (ML functions are values), hence the "U", and the computation code in the function wants to take a value, do some stuff, and eventually return an integer. Here's an implementation of the absolute value function:
   val abs = thunk fun n: int ->
     if n < 0
     then return 0 - n
     else return n
   ;;
In order to use the absolute value function, we have to "un-thunk" the computation code inside of the value: the computation code force v does this. Where we'd write (abs 4) in ML, we write force abs 4 in Levy (force binds more tightly than anything, including application).
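If it helps to see this in a mainstream language, here's a rough Python analogue of thunk/force (my own illustration, not how Levy is implemented; the names `thunk`, `force`, and `abs_thunk` are just mnemonics), modeling a thunk as a zero-argument closure:

```python
def thunk(computation):
    """Wrap computation code in a value without running it.
    Here a suspended computation is just a zero-argument closure."""
    return computation

def force(v):
    """Un-thunk: run the suspended computation wrapped in the value v."""
    return v()

# Levy: val abs = thunk fun n: int -> if n < 0 then return 0 - n else return n
abs_thunk = thunk(lambda: (lambda n: -n if n < 0 else n))

# Levy: force abs 4 -- force un-thunks, then the function is applied
result = force(abs_thunk)(4)
```

The analogy is loose (Python doesn't distinguish value types from computation types), but it captures the operational reading: `thunk` suspends, `force` resumes.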

Evaluation order

A consequence of being persnickety about the difference between value code and computation code is that Levy is very explicit about sequencing. The SML/Haskell/OCaml code "foo(abs 4,abs 5)" means "call foo with the absolute values of 4 and 5." One occasionally hard-learned lesson for people who have programmed in both SML and OCaml is that, if abs contains effects, this code might do different things: SML has a specified left-to-right evaluation order, but OCaml's evaluation order is unspecified, and in practice it's right-to-left.

Such underspecification is impossible in Levy, since calling a function is a computation, and computations can't be included directly in values! Instead, if we have computation code c of type F ty, we expect it to do some stuff and then say "look, I made you a value (of type ty)." The computation code c1 to x in c2 does the stuff described by c1, and when that stuff involves saying "look, I made you a value", that value is bound to x and used to do the stuff described by c2. Therefore, when we translate our SML/Haskell/OCaml code above to Levy, we have to explicitly write either
   force abs 4 to x1 in
   force abs 5 to x2 in
   force foo x1 x2
or
   force abs 5 to x2 in
   force abs 4 to x1 in
   force foo x1 x2
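The point of the explicit to-bindings is that sequencing is nailed down by the syntax. As a hedged illustration, here's the first translation rendered in Python, where statement order plays the role of "to x in" and a `log` list (my own device, not anything in Levy) makes the effects observable:

```python
log = []

def abs_effectful(n):
    """An 'abs' with an observable effect, to make evaluation order visible."""
    log.append(n)
    return -n if n < 0 else n

def foo(a, b):
    return a + b

# Levy: force abs 4 to x1 in force abs 5 to x2 in force foo x1 x2
x1 = abs_effectful(4)
x2 = abs_effectful(5)
result = foo(x1, x2)
# log is now [4, 5]; the other translation would produce [5, 4]
```

In SML, OCaml, or Haskell the single expression foo(abs 4, abs 5) leaves this ordering to the language definition (or doesn't specify it at all); in Levy the programmer has no choice but to pick one.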
That concludes the introduction of Levy, (my slightly de-parenthesized fork of) the language implemented by Bauer and Pretnar. There are two example files here (discusses basics) and here (discusses functions).

Matching

The first new feature in Levy# is the addition of a new kind of computation code, match v with | pat1 -> c1 ... that is a generalization of both let-expressions let x be v in c and if-expressions if v then c1 else c2 from Levy. Patterns like pat1 are a refinement of value code: constants are patterns, variables are patterns (variables in patterns are binding occurrences, like in most functional languages), but addition is not a pattern, and variables can only appear once in a pattern.

The simplest new thing that we can do with this language construct is write computation code that mimics a switch statement in C:
   val f = thunk fun x: int ->
     match x with
     | 1 -> return 4
     | 2 -> return 3
     | 3 -> return 2
     | 4 -> return 1
     | 5 -> return 0
     | y -> return (x + y)
   ;;
This function is included as an example in the "datatype" fork of Levy# on Github here; let's look at what it actually does in the interpreter:
   bash-3.2$ ./levy.byte switch.levy 
... snip ...
Levy. Press Ctrl-D to exit.
Levy> force f 1 ;;
comp F int = return 4
Levy> force f 3 ;;
comp F int = return 2
Levy> force f 5 ;;
comp F int = return 0
Levy> force f 7 ;;
comp F int = return 14
That's what I expected to happen; hopefully it's what you expected to happen too. Let's declare victory and move on.

Using datatypes

The main innovation of Levy# is the introduction of an inductive datatype mechanism, which is basically a bedrock component of every functional programming language (that isn't a Lisp) ever. This should have been a relatively small change, but parser generators are the worst and I'm bad at them so I basically had to rewrite the parser to have a new phase. Sigh.

I'll start by showing how datatypes are used, because that's a little more standard than the way I declare them. Uses look exactly like uses of ML datatypes, except that constructors are curried in a Haskell-ey fashion. The function sumlist, which sums up a list of integers, looks like this:
   val sumlist = thunk rec sumlist : list -> F int is
     fun xs : list ->
     match xs with
     | Nil -> return 0
     | Cons x xs -> force sumlist xs to y in return x + y
   ;;
The other thing that's new in the above example is the use of Levy's fixed-point operator, but there's nothing interesting there from the perspective of reading the code. This and other examples are in datatype.levy, by the way.
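For readers more comfortable in a mainstream language, here's a rough Python analogue of sumlist (my own encoding, with constructors as tagged tuples; this is not how Levy# represents values internally):

```python
# Constructors as tagged tuples: Nil is a nullary constructor,
# Cons takes an int and a list.
Nil = ("Nil",)

def Cons(x, xs):
    return ("Cons", x, xs)

def sumlist(xs):
    # A depth-1 match: we only inspect the outermost constructor.
    if xs[0] == "Nil":
        return 0
    _, x, rest = xs          # Cons x rest
    return x + sumlist(rest)

total = sumlist(Cons(1, Cons(2, Cons(3, Nil))))
```

The recursive call plays the role of "force sumlist xs to y in return x + y": run the recursive computation, name its result, then add.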

Side note: I'm lazy but still wanted a realistic execution model *and* coverage checking (redundant/non-exhaustive matches raise warnings), so I limited the current version of Levy# to depth-1 pattern matching:
   Levy> match v4 with | Cons x1 (Cons x2 xs) -> return 0 ;;
Type error: Cons x2 xs not allowed in pattern (depth-1 pattern matching only)
If I was doing this for real I'd implement real coverage checking and pattern compilation as a code transformation, which would leave the depth-1 pattern matching interpreter unchanged.
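Concretely (this is my own sketch of the transformation, not something the current interpreter does), the rejected nested match above could be compiled into two depth-1 matches:

```
match v4 with
| Cons x1 ys ->
    match ys with
    | Cons x2 xs -> return 0
```

A real pattern compiler would also have to account for fall-through when the inner match fails, which is exactly where the interesting coverage-checking work lives.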

So that's the use of datatypes; one upshot is that my pieces of value code are maybe a bit more interesting than they used to be: I can write a big piece of value code like Cons 4 (Cons 9 (Cons 16 (Cons (-1) Nil))) to create a big piece of data. I think that's kind of interesting, at least.

Declaring datatypes

Datatypes are just a way of telling the computer that you want to work with a particular set of inductively defined structures (at least this is what strictly positive datatypes, the only ones Levy# allows, are doing). A point that I've haltingly tried to make in a previous post was that, well, if you're going to write down a particular set of inductively defined terms, a canonical-forms based logical framework like Linear LF is a perfectly good language to write that in.

So our constants look basically like the declaration of constants in a Linear LF signature, except that we make the constants uppercase, following ML conventions:
   data Nil: list
| Cons: int -o list -o list
It's a little point, but it's kind of a big deal: we can understand all the values in Levy# as canonical forms in a simply-typed linear lambda calculus with integers and some arithmetic operations - kind of like Twelf with the integer constraint domain extension.2

The reason it's a big deal is that we can now observe that the two kinds of "function types" we have in this language are way different. There's a computational arrow ty1 -> ty2 that we inherited from Levy, and then there's a Linear LF linear implication arrow, a value arrow ty1 -o ty2 that we use in value code to construct patterns and values. I didn't invent this idea; it's actually a subset of what Licata and Harper propose in "A Universe of Binding and Computation" (ICFP 2009). Dan Licata would call the Linear LF arrow a "representational arrow," which is probably a better name than "value arrow."

Of course, the values that we actually use are, without exception, at atomic type; if we try to even write down Cons 4, which seems like it would have type list -o list, we get a type error from Levy#: constructor Cons expects 2 arg(s), given 1. So by requiring that all the Linear LF terms (i.e. value code) be canonical (beta-normal and eta-long), I've effectively excluded any values of type ty1 -o ty2 from the syntax of the language.

What would it mean to introduce value code whose type was a linear implication to Levy#? Well, that's the thing I really wanted to write about.


1 Focusing and polarization-aware readers will notice that this has something to do with focusing. In particular: positive types are value types, negative types are computation types, F is "upshift," and U is "downshift."
2 Technical note: Boolean values, from the perspective of the value language, are just two defined constants, as explained here. That's important: we have Boolean constants that are values, but the elimination form for Booleans is computation code not value code. If we had an "if" statement at the level of values, all sorts of crazy (and interesting) stuff could potentially happen. At least for the moment, I'm not considering that.
