Monday, April 18, 2011

Tell me a story of logic

I have a little technical note that I'm working on, in which I tell a story about linear logic that I tried to work out a year ago and finally returned to with Bernardo Toninho and Yuxin Deng last week. But as I tried to write the introduction, the two-year-old voice in my head kept asking "why?" Why do I need to tell this story? What's the point? The resulting thoughts don't really belong in the technical note I wanted to write, but I hope they're just barely coherent enough that they're worth putting here. (Note: there is a similar discussion to this one in Section 3 of my thesis proposal.)

Big picture: before we can contemplate any discussion of the formal correctness of any system (a logic, type theory, programming language, cryptographic protocol, or self-driving car) - before we can even ask ourselves what "correctness" means - we have to have a way of formally specifying the system itself. Mathematicians do this by defining sets, category theorists by enumerating universal mapping properties. However, if you're writing down theorems to be checked by a computer, there's essentially one game in town: systems for mechanized mathematics have a built-in facility for directly giving (co)inductive specifications of systems.

A tool's interface for describing inductive specifications is, therefore, a matter of some importance. In most systems, the statement of an inductive specification looks something like a datatype declaration in a programming language like ML or Haskell. One system for mechanized mathematics, Twelf, takes a rather different approach. Rather than directly specifying an inductive structure, Twelf users specify a signature of type families and constants in the dependent type theory LF. If you asked someone about this while they were in a hurry to catch a plane, they'd probably respond "oh, that's what adequacy theorems are for" and run away.
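For concreteness, here is a hedged sketch of the kind of datatype declaration I mean, written in Haskell (the example language and all the names are mine, not taken from any particular proof assistant):

```haskell
-- An inductive specification of a tiny expression language as an
-- ordinary Haskell datatype: Exp is the least set closed under
-- these two constructors.
data Exp = Num Int        -- a numeric literal
         | Plus Exp Exp   -- addition of two subexpressions
  deriving (Eq, Show)

-- Structural recursion over the inductive definition: the datatype
-- declaration is also a license to define functions by cases on
-- exactly those constructors.
eval :: Exp -> Int
eval (Num n)    = n
eval (Plus a b) = eval a + eval b
```

In most proof assistants, writing down an inductive declaration in roughly this style is the primitive act of specification.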

A word on adequacy

In the first paper on LF, Harper, Honsell, and Plotkin discuss adequacy theorems. When we write down a bunch of type families and constants in an LF signature, that means that we can form a certain (inductively-defined!) set of terms. To use a standard example, given the LF signature
  nat: type.
  z: nat.
  s: nat -> nat.
we can create well-typed terms of type nat that look like z, (s z), (s (s z)), and ((λx. s x) (s z)). That fourth example has a β-redex that, when we reduce it, turns it into the third example. We usually say that the above LF specification is adequate because there is a one-to-one correspondence between natural numbers and normal terms of type nat in this signature.1 In general, to quote Harper and Licata's 2007 "Mechanizing metatheory in a logical framework",
An LF representation of a language is adequate iff it is isomorphic to the informal definition of the language.
I'm currently thinking that this might be the wrong slogan, because the imprecision of "informal" leads to seemingly inevitable misunderstandings. Another slogan that I might consider in its place is that an LF specification is adequate iff it is isomorphic to the inductive specification of the language. It's just deeply ingrained in our collective heads that when we're writing down horizontal lines on paper, that means an inductive specification. But perhaps that's not giving LF enough credit! There's no reason to cede that my scratches on a whiteboard have any special spiritual relationship with inductive definitions and not LF signatures (or sets in ZFC or constructions in category theory, for that matter).
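To see concretely why it is the normal (canonical) terms that we count, here is a hedged Haskell sketch of the nat signature above, using a shallow embedding in which LF application is just Haskell application (the encoding is my own illustration, not Twelf's internals):

```haskell
-- The nat signature as a Haskell datatype: z and s become constructors.
data Nat = Z | S Nat
  deriving (Eq, Show)

-- (s (s z)), the third example term
ex3 :: Nat
ex3 = S (S Z)

-- ((λx. s x) (s z)), the fourth example term; in a shallow embedding
-- the β-redex is a Haskell redex, so evaluation reduces it away and
-- ex4 denotes the same normal term as ex3.
ex4 :: Nat
ex4 = (\x -> S x) (S Z)
```

Counting normal terms rather than all well-typed terms is what makes the correspondence with the natural numbers one-to-one.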

Taking the view that adequacy relates the canonical forms of LF to some inductively defined structure ("on paper" or otherwise) has two advantages that I see. First, it turns adequacy into something that takes place entirely within formal mathematics. In fact, we have a pretty clear understanding of how to write "first-order" or "concrete" LF types that encode inductive definitions (hint: never allow left-nested arrows). But this means that we can prove adequacy theorems in Twelf. My Twelf code proving a bijection between de Bruijn lambda calculus terms and HOAS lambda calculus terms is, in this view, part of an adequacy theorem.2

The second advantage, in my mind, is that it clarifies why we never formally prove adequacy with respect to our "informal definitions" (a point that Daniel Lee and I poked a bit of fun at in our SIGBOVIK 2007 paper Manifest Adequacy). When I've been using Agda for a while, I think of my on-paper notation as corresponding to inductive definitions (or, occasionally, inductive-recursive definitions, the more powerful variant of inductive definitions available in Agda). When I've been using Twelf for a while, I think of my on-paper notation as corresponding to LF signatures that use higher-order abstract syntax. It's only in the event that I want to know that these two ways of formalizing the same system coincide that I would actually think about adequacy in practice. To re-emphasize: in this event, adequacy is an entirely formal concept.
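As an illustration of the kind of bijection involved, here is a hedged sketch of the two representations in Haskell (entirely my own encoding; the "HOAS" side is really parametric HOAS, a common Haskell stand-in for LF-style binding):

```haskell
{-# LANGUAGE RankNTypes #-}

-- First-order, de Bruijn-indexed lambda terms: a variable is the
-- number of binders between its use and its binding site.
data DB = Var Int | Lam DB | App DB DB
  deriving (Eq, Show)

-- "HOAS-style" terms: binding is borrowed from the host language.
data Hoas v = HVar v | HLam (v -> Hoas v) | HApp (Hoas v) (Hoas v)

-- One direction of the bijection: name each binder with its depth;
-- a variable bound at depth k, used under d binders, gets index d-k-1.
toDB :: (forall v. Hoas v) -> DB
toDB t = go 0 t
  where
    go :: Int -> Hoas Int -> DB
    go d (HVar k)   = Var (d - k - 1)
    go d (HLam f)   = Lam (go (d + 1) (f d))
    go d (HApp a b) = App (go d a) (go d b)
```

For example, the K combinator λx.λy.x comes out as Lam (Lam (Var 1)). Showing the inverse direction, and that both directions commute with substitution, is where the real adequacy-theorem work lives.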

Why logical frameworks?

There was no contrarian impulse that led to LF being used to encode systems. Harper, Honsell, and Plotkin made the point as well as I could hope to:
The structure of the LF type system provides for the uniform extension of the basic judgment forms to two higher-order forms introduced by Martin-Löf, the hypothetical, representing consequence, and the schematic, representing generality.
There are two important points to make about the above quote. The more obvious one is that LF, compared to "raw" inductive definitions, gives a clean notion of hypothetical reasoning and uniform substitution (the basis of what was later termed "higher-order abstract syntax"). The less obvious point is that, in the above quote, Harper, Honsell, and Plotkin are referencing Martin-Löf's "On the meaning of logical constants and the justification of the logical laws," which isn't actually about LF. Rather, it's a story about how we understand the meaning of propositions and how we establish that a logic's definition makes sense in the presence of a uniform notion of hypothetical judgments.

Martin-Löf's discussion, which Pfenning and Davies termed the "judgmental methodology" in their judgmental reconstruction of modal logic, is important for people who write systems in Twelf because it sets up a particular world-view that maps quite well onto formalization in LF. To put it another way, the judgmental methodology is one important source of intuitions that guide the mental process of mapping between our "on-paper" notation and LF signatures. And so it's no accident that 1) we use Twelf a lot at Carnegie Mellon and 2) our student reading group usually reads "On the meaning of logical constants and the justification of the logical laws" at least once every other year. The judgmental properties underlying LF and Martin-Löf's reconstruction are the same, so understanding the judgmental methodology is a good way of understanding LF formalization.

This continues to be the case in the Linear LF (LLF) extension of LF, where linear hypothetical assumptions are understood as resources that will be consumed in the service of proving a goal. The underlying story here, which Wikipedia attributes to Lafont, is called the resource interpretation, and it allows us to understand how to do things like conveniently represent programming languages with state, because "a bunch of memory" is a concept that can be represented as "a bunch of resources." The story that we tell about linear logic - the resource interpretation - is important in part because it gives us intuitions for understanding LLF formalizations.
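As a toy version of that intuition (entirely my own illustration, not LLF itself), we can model a store as a multiset of cell resources, where assignment consumes the old cell and produces a new one:

```haskell
import Data.List (delete)

type Loc   = String
type Store = [(Loc, Int)]  -- read this list as a multiset of cell resources

-- Linear-style assignment: consume the resource (l, old) and produce
-- the resource (l, new). If the needed resource isn't in the store,
-- the operation fails, mirroring an unavailable linear hypothesis.
assign :: Loc -> Int -> Store -> Maybe Store
assign l new store = do
  old <- lookup l store
  pure ((l, new) : delete (l, old) store)
```

The point of the exercise: the old cell is gone afterwards, not shadowed, which is exactly the discipline linear hypotheses enforce for free.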

I don't know that I can speak for where my intuitions about inductive definitions come from - as I've said before, computer scientists tend to treat them as pretty foundational. On the other hand, it's difficult to really use the "raw" inductive definitions provided natively by most non-Twelf provers to reason about programming languages, which is why there has been much recent (and, from an outside perspective, seemingly quite successful) work aimed at providing the right tool support for specifying and reasoning about programming languages in highly-developed proof assistants like Coq. The only danger here is that, as soon as you start adding layers of interface between yourself and inductive definitions, the question of adequacy (the informal version, at least) again rears its head: there's not necessarily the same Martin-Löfian story guiding you, so where are your intuitions about the correctness of your system coming from? How would you actually prove that it corresponds to an ostensibly equivalent LF signature?

This, in turn, is one reason why two-level approaches have recently been popularized by Felty's Hybrid system and Gacek's Abella. In these systems, there's a specially supported logical layer where one can use the same kinds of intuitions we use in LF encodings. LF or LLF could, in fact, be used for this logical layer, though these two systems use different variations on that theme. The second layer is a separate facility for reasoning about these encodings.

I'll stop here, though one argument I might be interested in trying to make in the future is that Twelf itself is actually the original two-level approach. There's an even stronger argument to be made in the case of HLF, Jason Reed's implementation of meta-reasoning for LLF on top of Twelf.

1 We usually say canonical terms or canonical forms, not normal terms. The canonical forms are the β-normal, η-long terms. However, η-expansion is vacuous at the base type nat, so the distinction doesn't matter for this example.

2 It would be a full adequacy theorem if I'd shown that the bijection was compositional - that substitution on the de Bruijn terms means the same thing as the substitution LF gives us for free - but I didn't prove that part. Perhaps I should.