I made the following provocative but rather opaque tweet a little while ago:
> Ok, so now I think I can fit everything I've learned about logic in the last decade into about ~30 lines of agda pic.twitter.com/nweoRRKMKg
> — jason reed (@jcreed) March 31, 2018

I wanted to start writing down some things working up to what I actually meant by it. So I'm going to start by telling a little backstory.
The line of thinking the tweet refers to is something I've been ruminating on ever since my PhD thesis. The thesis work, in turn, came directly out of a suggestion Frank made some time in the early-to-mid 2000s: hey, wouldn't it be nice if we could take Carsten's LLF — a generalization of LF incorporating linear logic into the types — and actually prove theorems in the same logic-programming sort of way you do in LF? The main problem is that even stating the theorems we wanted to prove seemed difficult!
A fairly representative hello-world sort of thing you prove in LF without any linearity is the cut-elimination theorem for an intuitionistic propositional sequent calculus. You say things like o : type. to declare that o is the type of propositions, and hyp : o -> type. and conc : o -> type. to declare that you're able to hypothesize and conclude propositions. The type of the cut elimination function — to be written as a logic program — is then cutAdmissible : conc A -> (hyp A -> conc C) -> conc C -> type. (with implicit universal quantification over both A and C, since they're capitalized), together with some declarations saying that you intend to run cutAdmissible with the first two arguments being input and the last being output. Then Twelf's totality checker goes to work on your logic program and (if it's written correctly) verifies that logic programming execution will always terminate, yielding something of type conc C.
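Concretely, the signature might look something like the following in Twelf. This is only a sketch: the clauses of the proof are elided, and the real %total declaration needs a more delicate induction order (lexicographic in the cut formula and the two derivations) than the one shown.

    o : type.                % propositions
    hyp  : o -> type.        % "we may assume A"
    conc : o -> type.        % "we may conclude A"

    cutAdmissible : conc A -> (hyp A -> conc C) -> conc C -> type.
    %mode cutAdmissible +D +E -F.    % first two arguments input, last output

    % ... one clause here per case of the structural proof ...

    %block b : some {A : o} block {x : hyp A}.
    %worlds (b) (cutAdmissible D E F).
    %total D (cutAdmissible D E F).  % totality, given a suitable induction order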
Thanks to the magic of higher-order abstract syntax, this can be shown to correspond exactly to proving the cut admissibility theorem as you'd usually state it: if $\Gamma \vdash A$ and $\Gamma, A \vdash C$, then $\Gamma \vdash C$.
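The "magic" is that object-language hypotheses are represented by metalanguage functions, so the context $\Gamma$ never appears explicitly. For instance, assuming an object-language implication connective imp : o -> o -> o (a hypothetical declaration here, but a typical one), its right rule could be encoded as:

    imp : o -> o -> o.    % object-language implication

    % right rule: from a derivation of conc B under an extra hypothesis of A,
    % conclude imp A B; the LF-level function stands for the context extension.
    impR : (hyp A -> conc B) -> conc (imp A B).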
So, I might paraphrase Frank as saying to me: let's think about how the linear cut admissibility theorem looks. It's something like: if $\Delta \vdash A$ and $\Delta', A \vdash C$, then $\Delta, \Delta' \vdash C$.
Bearing that in mind, Frank writes on the chalkboard: cutAdmissible : ((conc A $\tensor$ (hyp A $\lol$ conc C)) $\&$ conc C) $\lol$ type. which seems really slick and nice — just as higher-order abstract syntax uses binding in the metalanguage to simulate binding in an object language, here it looks like we can use linearity in the metalanguage to talk about linearity in the object language. (Note what the $\&$ is doing: it says that the tensored-together inputs and the output each consume the very same linear context, which is exactly the resource accounting the linear statement above demands.)
The only problem is that it totally doesn't work! One obvious problem is that LLF didn't have $\tensor$ (even though CLF did), but it's not terrifically difficult to imagine working around that by coding up something kind of like $\tensor$ even without the elim rules working right — such a thing would probably be good enough for representing 'disjointness of resources' in the sense that cutAdmissible needs.
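A minimal sketch of the kind of workaround I mean, in LLF-style syntax (the names both and pair are hypothetical, and -o stands for $\lol$): a family with an introduction form but no eliminations still records that its two components live in disjoint pieces of the context.

    % a stand-in for conc A (tensor) (hyp A -o conc C), intro only
    both : o -> o -> type.
    pair : conc A -o (hyp A -o conc C) -o both A C.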
Another obvious problem is that instead of the type family having a nice distinct set of arguments joined by $\to$, ending in type, it has just one argument, which is a decently complicated beast. How are we even to name its subparts when we want to talk about logic programming input/output modes, and about which argument is supposed to get smaller during totality checking? Not at all impossible to address, but it sounds a little like an ergonomic nightmare.
But the real kicker, the biggest obstacle, isn't either of these, but something more basic: we didn't know how to actually have kinds of the form $A \lol \rtype$ in the type theory without everything blowing up. If you try to have a dependent function type like $\Pi (x \hat : A) . B(x)$, where the bound variable appears in $B$ but is also supposed to be somehow linear, all kinds of weird phenomena pop up: the variable's two roles, as an expendable resource and as an index for other type families, seem at odds with one another; the variable gets stranded on one side of a multiplicative context split, despite still being necessary on the other side to make a type make sense; and it's confusion everywhere you look.
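To see the stranding concretely, consider (in approximate notation) the multiplicative pairing rule any such system would need:

$$\frac{\Delta_1 \vdash M : A \qquad \Delta_2 \vdash N : B}{\Delta_1, \Delta_2 \vdash M \tensor N : A \tensor B}$$

If the type of some variable in $\Delta_2$ mentions a linear variable $x$ that the split sent to $\Delta_1$, then $\Delta_2$ isn't even well-formed on its own: $x$ has been spent in the left premise while still being needed, as an index, in the right one.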
There has been work on linear dependency that's come out since, things like McBride's I Got Plenty o' Nuttin' — but I'm still not sure it enables stating cut admissibility like the above. I'd love to be corrected if I'm wrong, though!
Anyhow, I'm going to continue by shamelessly discussing how I ended up thinking about the problem, and where it led me, because I do genuinely think it ended up in an interesting place.
This is where I was in the middle of my thesis work: trying to imagine what sense could be made of $A \lol \rtype$. Next time I'll try to summarize what, looking back from the present, I think of as The Point of my thesis, which is a certain way of doing an end-run around the problems of linear dependency by disentangling the linearity of it from the dependency.
(Next: "Everything I've Learned About Logic in the Last Decade or So, Part 2")