One of the things Aaron noticed was that it's nice to define a tail-recursive map over trees this way, and indeed that is true. I transliterated some code Aaron wrote into Levy#: you can see it at turon.levy. The example really starts rubbing against two of the unnecessary deficiencies of Levy# - a lack of polymorphism (the trees have to be integer trees) and a lack of mutual recursion. We really want to define the map in terms of two mutually (tail-)recursive functions `moveUp` and `moveDown`, both of which take a holey `tree -o tree` and a `tree` that belongs "in the hole," the difference being that in `moveUp` I've *already* applied the map function to the subtree in the hole, and in `moveDown` I still need to apply the map function to the subtree in the hole.
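Since the Levy# source isn't reproduced here, the shape of that `moveUp`/`moveDown` pair can be sketched in Rust. This is a sketch under two caveats: Rust has no linear hole abstraction `tree -o tree`, so the holey tree is modeled by an explicit context datatype (the zipper path that hole abstractions would replace), and Rust does not guarantee tail-call elimination, so only the structure of the mutual tail recursion carries over. The names `Ctx` and `map_tree` are my own; only `move_down` and `move_up` mirror the post.

```rust
#[derive(Debug, PartialEq)]
enum Tree {
    Leaf(i32),
    Branch(Box<Tree>, Box<Tree>),
}

// A path from the hole back to the root of the tree being rebuilt.
enum Ctx {
    Top,                        // the hole is at the root
    Left(Box<Ctx>, Box<Tree>),  // hole is a left child; right sibling not yet mapped
    Right(Box<Tree>, Box<Ctx>), // hole is a right child; left sibling already mapped
}

// move_down: the subtree in the hole still needs the map applied.
fn move_down(f: &dyn Fn(i32) -> i32, c: Ctx, t: Tree) -> Tree {
    match t {
        Tree::Leaf(n) => move_up(f, c, Tree::Leaf(f(n))),
        Tree::Branch(l, r) => move_down(f, Ctx::Left(Box::new(c), r), *l),
    }
}

// move_up: the subtree in the hole has already been mapped.
fn move_up(f: &dyn Fn(i32) -> i32, c: Ctx, t: Tree) -> Tree {
    match c {
        Ctx::Top => t,
        Ctx::Left(c, r) => move_down(f, Ctx::Right(Box::new(t), c), *r),
        Ctx::Right(l, c) => move_up(f, *c, Tree::Branch(l, Box::new(t))),
    }
}

fn map_tree(f: &dyn Fn(i32) -> i32, t: Tree) -> Tree {
    move_down(f, Ctx::Top, t)
}
```

Note how `Ctx` is exactly the derivative-style structure the post is arguing we shouldn't have to write by hand: in Levy#, the hole of the `tree -o tree` plays the role that `Top` plays here.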

This only works for monomorphic maps - if we had polymorphism, our hole abstraction zippers would allow us to write maps from `'a tree` to `'a tree` using a map function with type `'a -> F 'a`. It would not allow us to write the maps we're really used to, from `'a tree` to `'b tree` using a map function with type `'a -> F 'b`.

In order to capture more general maps - or fold/reduce operations - the non-hole-abstraction approach is to use the generalization of the derivative that Conor McBride describes in *Clowns to the left of me, jokers to the right*. We observe that a map function like the one in turon.levy sweeps left-to-right through the data structure, always segmenting the data structure into some already-processed "clowns to the left," some not-yet-processed "jokers to the right," and a path through the middle. Then, we either have a sub-tree of clowns, which means we'll now look up for more jokers, or a sub-tree of jokers, which means we need to look down into the jokers to process them. The overall picture is this one:

A nice thing about the hole abstraction framework is that we can represent and dynamically manipulate *precisely this picture*, whereas the derivative/zipper approach generally forces one to think about the derivative-like structure from the "inside out." If the jokers are a tree of ints and the clowns are a tree of bools, we can describe them like this:

```
data ILeaf: int -o itree
   | IBranch: itree -o itree -o itree

data BLeaf: bool -o btree
   | BBranch: btree -o btree -o btree
```

Then, the data structure that we will use as a representation of the intermediate state of computation as illustrated above is either stuck in the middle with clowns to the left or stuck in the middle with jokers to the right.

```
data Clowns: btree -o cj -o cj   # Clowns to the left of me
   | Jokers: cj -o itree -o cj   # Jokers to the right
```

Just like our difference lists from before, there are no closed members of the type `cj`; we can only form values of type `cj -o cj`. The map implementation can be found at clownjoker.levy.
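The clownjoker.levy source isn't reproduced here, but the sweep it performs can be sketched in Rust. One hedge is essential: Rust has no linear functions, so the "no closed members of `cj`" trick isn't available, and the context needs an explicit `Top` constructor standing in for the hole. The type names `ITree`, `BTree`, and `Cj` follow the post's `itree`/`btree`/`cj`; the function names are my own.

```rust
#[derive(Debug, PartialEq)]
enum ITree {
    ILeaf(i32),
    IBranch(Box<ITree>, Box<ITree>),
}

#[derive(Debug, PartialEq)]
enum BTree {
    BLeaf(bool),
    BBranch(Box<BTree>, Box<BTree>),
}

enum Cj {
    Top,                         // stands in for the hole of the Levy# version
    Clowns(Box<BTree>, Box<Cj>), // Clowns to the left of me: already mapped
    Jokers(Box<Cj>, Box<ITree>), // Jokers to the right: still to be mapped
}

// Sweep left-to-right, turning jokers (ints) into clowns (bools).
fn move_down(f: &dyn Fn(i32) -> bool, c: Cj, t: ITree) -> BTree {
    match t {
        ITree::ILeaf(n) => move_up(f, c, BTree::BLeaf(f(n))),
        ITree::IBranch(l, r) => move_down(f, Cj::Jokers(Box::new(c), r), *l),
    }
}

fn move_up(f: &dyn Fn(i32) -> bool, c: Cj, t: BTree) -> BTree {
    match c {
        Cj::Top => t,
        Cj::Jokers(c, r) => move_down(f, Cj::Clowns(Box::new(t), c), *r),
        Cj::Clowns(l, c) => move_up(f, *c, BTree::BBranch(l, Box::new(t))),
    }
}

fn map_itree(f: &dyn Fn(i32) -> bool, t: ITree) -> BTree {
    move_down(f, Cj::Top, t)
}
```

At every step the `Cj` value is exactly the picture from the post: finished `BTree` clowns hanging off to the left, untouched `ITree` jokers waiting to the right, and the traversal stuck in the middle.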

If we did have polymorphism, the clown/joker data structure for a map would look like this:

```
data Clowns: 'clown tree
          -o ('clown, 'joker) cj
          -o ('clown, 'joker) cj
   | Jokers: ('clown, 'joker) cj
          -o 'joker tree
          -o ('clown, 'joker) cj
```

The more general clown/joker data structure for a fold would look like this:

```
data Clowns: 'clowns
          -o ('clowns, 'joker) cj
          -o ('clowns, 'joker) cj
   | Jokers: ('clowns, 'joker) cj
          -o 'joker tree
          -o ('clowns, 'joker) cj
```
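Levy# can't express that polymorphic structure, but it can be sketched with Rust generics, with the same caveats as before (an explicit `Top` in place of the hole, no guaranteed tail calls). Here `C` plays the `'clowns` role (results) and `J` the `'joker` role (elements); the function names and the `fold_tree` wrapper are hypothetical, not from the post.

```rust
enum Tree<J> {
    Leaf(J),
    Branch(Box<Tree<J>>, Box<Tree<J>>),
}

enum Cj<C, J> {
    Top,
    Clowns(C, Box<Cj<C, J>>),            // an already-computed result, to the left
    Jokers(Box<Cj<C, J>>, Box<Tree<J>>), // an unprocessed subtree, to the right
}

// fold_down: descend into jokers; fold_up: combine clowns on the way back.
fn fold_down<C, J>(
    leaf: &dyn Fn(J) -> C,
    branch: &dyn Fn(C, C) -> C,
    c: Cj<C, J>,
    t: Tree<J>,
) -> C {
    match t {
        Tree::Leaf(j) => fold_up(leaf, branch, c, leaf(j)),
        Tree::Branch(l, r) => fold_down(leaf, branch, Cj::Jokers(Box::new(c), r), *l),
    }
}

fn fold_up<C, J>(
    leaf: &dyn Fn(J) -> C,
    branch: &dyn Fn(C, C) -> C,
    c: Cj<C, J>,
    v: C,
) -> C {
    match c {
        Cj::Top => v,
        Cj::Jokers(c, r) => fold_down(leaf, branch, Cj::Clowns(v, c), *r),
        Cj::Clowns(l, c) => fold_up(leaf, branch, *c, branch(l, v)),
    }
}

fn fold_tree<C, J>(leaf: &dyn Fn(J) -> C, branch: &dyn Fn(C, C) -> C, t: Tree<J>) -> C {
    fold_down(leaf, branch, Cj::Top, t)
}
```

The map version is the special case where `C` is itself a tree type and `branch` rebuilds a `Branch` node.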

This is, it must be said, all reasonably unsatisfying. We have seen that hole abstractions can *completely replace* the derivative data structure - we could well and truly "scrap our zippers." But to use the clown/joker generalization of derivatives, we have to define an obvious, boilerplate-y datatype `cj`, which is merely mechanical to compute for any datatype of jokers. Can we do better?
