Most papers in computer science describe how their author learned what someone else already knew. -- Peter Landin
I'm gonna be perfectly honest: I was specifically thinking of the holey data series when I responded:
@lambda_calculus Presumably the implication is that this is bad for the advancement of science. OTOH, it's a pretty viable model for a blog.
'Cause really, I was kind of surprised that I hadn't found any previous presentations of the idea (and was therefore encouraged when Conor indicated that he at least sort of had thought of it too). Chung-chieh Shan's response was great:
@simrob @lambda_calculus I'm not sure that Landin intended to imply that it's bad!
I guess I hope he's right (or at least that I am), because in the comments to Part 3, Aaron Turon correctly observes that I missed a whopper: Minamide's 1998 POPL paper "A Functional Representation of Data Structures with a Hole." Hooray for blogs, I guess: if this had been a paper review, I would have been quite embarrassed, but as a blog comment I was delighted to find out about this related work.
A Functional Representation of Data Structures with a Hole
Minamide's paper effectively covers the same ground I covered in Part 1 of the Holey Data series: his linear representational lambdas are called hole abstractions. It's a very well-written paper from the heady days when you could talk about the proposed Standard ML Basis Library and it was still common for authors to cite Wright and Felleisen when explaining that they were proving language safety by (what amounts to) proving progress and preservation lemmas.

My favorite part of reading the paper was that it simultaneously confirmed two suspicions that I picked up after a recent discussion with Dan Licata:
- Levy#/call-by-push-value was an unnecessarily restrictive calculus for programming with linear representational functions - ML would work just fine.
- By using call-by-push-value, I'd avoided certain red herrings that would have plagued the development otherwise - somewhere on almost every page I thought "yep, that explanation is less clear than it could be because they don't know about the difference between value (positive) types and computation (negative) types yet."
Overall, this paper made me quite happy - it suggests there is something canonical about this idea, and proves that the idea leads to concrete performance gains. It also made me sad, of course, because it seems like the ideas therein didn't really catch on last time around. But that's part of why I didn't stop with Part 1 in the series: it's not clear, even to me, that hole abstractions/difference lists/linear representational functions are worth adding to a functional programming language if all they can do is be applied and composed. However, with the additional expressiveness that can be found in pattern matching against hole abstractions (hell, that's a way better name than "linear representational functions," I'm just going to call them hole abstractions from now on), I think there's a significantly stronger case to be made.
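To make the "applied and composed" fragment concrete, here's a minimal sketch in Haskell (the names are mine, and Haskell's types don't enforce the linearity that Minamide's system does): a hole abstraction over lists is just a function from the hole's type to the type of the completed structure, and with only application and composition it is precisely a Hughes-style difference list.

```
-- A list with a list-shaped hole at the end.
type Hole a = [a] -> [a]

hempty :: Hole a
hempty = id            -- nothing but the hole itself

hsingle :: a -> Hole a
hsingle x = (x :)      -- the list [x | <hole>]

hcompose :: Hole a -> Hole a -> Hole a
hcompose = (.)         -- fill the first hole with the second structure

happly :: Hole a -> [a] -> [a]
happly h t = h t       -- fill the hole for good, recovering a plain list

-- happly (hsingle 1 `hcompose` hsingle 2) [3]  evaluates to  [1,2,3]
```

Because `hcompose` is just function composition, appending runs in constant time; the closure-allocation cost this sketch pays is what Minamide's compilation scheme is designed to eliminate.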
I've thrown a transliteration of Minamide's examples up on GitHub. The binary tree insertion example, in particular, could come from Huet's paper: it's a beautiful example of how even the "Part 1" language can implement zipper algorithms so long as you never need to move up in the tree. As for the hfun_addone function, I don't really understand it at all; I just transliterated it. It seems not to be entirely tail recursive: it appears to regular-recurse along the leftmost spine of the binary tree (if I'm not mistaken about this, I accuse line 131 of being the culprit).
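To give the flavor of that binary tree insertion example, here's a hedged Haskell sketch of the idea (this is not the code from the repository, and it uses ordinary closures rather than Minamide's linear hole abstractions, which his compiler turns into destination-passing code that fills the hole in place): the accumulated hole abstraction is the copied path from the root, and reaching a leaf fills it, so the traversal only ever moves down.

```
data Tree = Leaf | Node Tree Int Tree
  deriving Show

-- Top-down BST insertion in the "Part 1" style: `hole` rebuilds the
-- copied path from the root once the new subtree is known.
insert :: Int -> Tree -> Tree
insert x = go id
  where
    go hole Leaf = hole (Node Leaf x Leaf)           -- fill the hole at a leaf
    go hole (Node l y r)
      | x < y     = go (\t -> hole (Node t y r)) l   -- extend the hole leftward
      | x > y     = go (\t -> hole (Node l y t)) r   -- extend the hole rightward
      | otherwise = hole (Node l y r)                -- x already present
```

The loop is tail recursive, which is exactly the point: it's a zipper that never needs to move back up.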
6.3 Logic Variable
(Note: I've tried to make sure this section doesn't come across as mean-spirited in any way, but I want to emphasize: this is not intended to be any sort of criticism of Yasuhiko Minamide. His paper was terrific! Go read it!)

My other favorite part of reading the paper was Section 6.3, which discusses difference lists. I plan to actually email Minamide and ask him about Section 6.3. Here's my reconstruction of events: Fictional Minamide has a great idea, implements it in a prototype ML compiler, and tests it. His paper has strong theoretical and practical results, and gets accepted to POPL 1998. However, one of the reviewers says "oh, this is almost exactly difference lists in logic programming, you need to explain the connection." Fictional Minamide is not a logic programming person, has no access to a logic programming person, and doesn't particularly have any reason to care about logic programming, but he does feel the need to address the concerns of a reviewer he can't actually communicate with. Fictional Minamide manages to find, with great difficulty in the pre-Google-Scholar world, some poorly expressed explanation of what difference lists are in some random papers that are mostly about something else.[1] After a couple of hours, Fictional Minamide gives up, exclaiming "okay, this makes absolutely no sense whatsoever," and writes something kinda mumbly about graph reduction that seems vaguely plausible enough to satisfy the reviewer's demand.
The result is a section that mostly misses the point about the connection between hole abstraction and difference lists, both of which are declarative abstractions that allow a programmer to think about working modulo an uninitialized pointer in memory (though the way Minamide and I do it, the types help you way more). This is not intended as any criticism of either Real Minamide or Fictional Minamide. Indeed, it's mostly a criticism of the conference review process: I'm pretty sure you could find similar "Section 6.3"s in my papers as well![2] I do hope, however, that my exposition in Part 1, which was in fact motivated by difference lists and reached more-or-less the exact same setup that Minamide came up with, sets the record straight on how these two ideas are connected.
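To spell out that "uninitialized pointer" reading, here's a hedged imperative sketch in Haskell (every name in it is mine, invented for illustration): a "holey list" is a pointer to the front of a list plus a pointer to its one unfilled tail cell, and appending fills a hole in constant time without traversing anything. This is the memory picture that both difference lists and hole abstractions let you program against; the linearity that Minamide's types (and mine) enforce shows up here only as the unchecked obligation to use each holey value exactly once - reusing one would tie a cycle.

```
import Data.IORef

-- A cons cell whose tail is mutable, so the final tail can be left blank
-- (Nil here stands in for "uninitialized").
data Cell a = Nil | Cons a (IORef (Cell a))

-- A holey list: the front of the list, plus its not-yet-filled tail cell.
type Holey a = (Cell a, IORef (Cell a))

-- The singleton [x | <hole>]: one cell whose tail is the hole.
single :: a -> IO (Holey a)
single x = do
  hole <- newIORef Nil
  return (Cons x hole, hole)

-- Append by filling the first hole with the second front: O(1), no copying.
append :: Holey a -> Holey a -> IO (Holey a)
append (front1, hole1) (front2, hole2) = do
  writeIORef hole1 front2
  return (front1, hole2)

-- Close off the last hole and read back an ordinary list.
toList :: Holey a -> IO [a]
toList (front, hole) = do
  writeIORef hole Nil
  walk front
  where
    walk Nil        = return []
    walk (Cons x r) = do
      rest <- walk =<< readIORef r
      return (x : rest)
```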
[Update Aug 25] I heard back from Real Minamide - while it was a long time ago, he did recall one of the reviewers mentioning logic variables, leading to the inclusion of that section. I win! Well, kind of. There was probably no "this makes no sense" exclamation; Minamide says that his understanding at the time was in line with my comment about working modulo an uninitialized pointer. The comments about graph reduction, which were why I thought the section missed the point, were more of a side comment.
Minamide also remembered another wonderfully tantalizing tidbit: he recalls that, at POPL 1998, Phil Wadler said he'd seen a similar idea even earlier. Perhaps hole abstraction is just destined to be continuously reinvented until it finally gets included in the C++2040 standard.[3]
[1] The works he cites are about adding the logic programming notion of unbound variables to functional programming languages, and (while I haven't looked at them) they certainly don't look like they would give a good introduction to the simple-but-goofy logic programming intuitions behind difference lists.
That said, I basically have never seen a good, clear, self-contained description of what a difference list is - I consider Frank's logic programming notes to be pretty good, but I recognize that I'm not a fair judge because I was also present at the associated lecture.
[2] Grep for "∇" or "nominal", especially in anything prior to my thesis proposal, if you want a head start.
[3] I wonder what brackets they'll use, since as of the current standard they seem to have run out.