
Some Notes on Representing Focused Logical Connectives

or: "Everything I've Learned About Logic in the Last Decade or So, Part 1"

I made the following provocative but rather opaque tweet a little while ago:

Ok, so now I think I can fit everything I've learned about logic in the last decade into about ~30 lines of agda pic.twitter.com/nweoRRKMKg

— jason reed (@jcreed) March 31, 2018
I wanted to start writing down some things working up to what I actually meant by it. So I'm going to start by telling a little backstory:

Linear Types in Logic Programming Proofs

The line of thinking the tweet is referring to is something I've been ruminating about ever since my PhD thesis. The thesis work, in turn, came directly out of Frank suggesting to me, some time in the early-to-mid 2000s, that I think about the following problem: hey, wouldn't it be nice if we could take Carsten's LLF — a generalization of LF incorporating linear logic into the types — and actually prove theorems in the same logic-programming sort of way you do in LF? The main problem is that even stating the theorems we wanted to prove seemed difficult!

Cut Admissibility

A fairly representative hello-world sort of thing you prove in LF without any linearity is the cut admissibility theorem for an intuitionistic propositional sequent calculus. You say things like o : type. to declare that o is a type of propositions, and hyp : o -> type. conc : o -> type. to declare that you're able to hypothesize and conclude propositions. The type of the cut admissibility function — to be written as a logic program — is then cutAdmissible : conc A → (hyp A → conc C) → conc C → type. (with implicit universal quantification over both A and C, since they're capitalized), together with some declarations saying that you intend to perform cutAdmissible with the first two arguments being input and the last being output. Then Twelf's totality checker goes to work on your logic program and (if it's written correctly) verifies that logic programming execution will always terminate, yielding something of type conc C.
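As a sketch (my own reconstruction — the actual sequent calculus rules and the %total declaration are elided), the Twelf signature might look something like:

```twelf
o : type.                % propositions

hyp  : o -> type.        % "A may be hypothesized"
conc : o -> type.        % "A may be concluded"

% Cut admissibility as a relation: given a proof of A, and a proof
% of C under the hypothesis A, produce a direct proof of C.
cutAdmissible : conc A -> (hyp A -> conc C) -> conc C -> type.

% first two arguments are inputs, the last is output
%mode cutAdmissible +D +E -F.
```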

Thanks to the magic of higher-order abstract syntax, this can be shown to correspond exactly to proving the cut admissibility theorem as you'd usually state it:

If $\Gamma \prov A$ and $\Gamma, A \prov C$, then $\Gamma \prov C$.

The $\Gamma$s correspond to whatever assumptions hyp A are floating around ambiently when you run the logic program. It's a definite advantage of the Twelf programming style that you don't really have to think about them except when you actually add variables to the context, or need to extract them from it. They just go along for the ride in a nice way, and what we wanted was exactly that same niceness to generalize to the linear case.

Linear Cut Admissibility

So, I might paraphrase Frank as saying to me, let's think about how the linear cut admissibility theorem looks. It's

If $\Gamma \prov A$ and $\Delta, A \prov C$, then $\Gamma, \Delta \prov C$.

Where $\Gamma$ and $\Delta$ are now contexts of linear assumptions that don't satisfy contraction or weakening. If we try to write down a version of cutAdmissible for this theorem, we need to somehow account for the fact that
  1. The set of hyps used by the first two arguments to cutAdmissible (the things that would correspond to $\Gamma \prov A$ and $\Delta, A \prov C$) are somehow disjoint
  2. The union of those two disjoint sets (which represents $\Gamma, \Delta$ together) is somehow the same as — or, one might say, shared with — the hyps used in the third argument to cutAdmissible, the thing that would correspond to the proof of $\Gamma, \Delta \prov C$.
Now the good news is that linear logic absolutely gives us ways to talk about disjointness and sharing! To be able to obtain $A \tensor B$ is something like the ability to obtain $A$ and $B$ from disjoint pools of resources, and to construct $A \& B$ is to be able to construct either one of $A$ or $B$ from the same set of resources.
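Concretely, the contrast shows up in the right rules for the two connectives (written here in the same $\prov$ notation as above): $\tensor$ splits the context into disjoint pieces, while $\&$ gives the same context to both premises.

```latex
\[
\frac{\Gamma \prov A \qquad \Delta \prov B}
     {\Gamma, \Delta \prov A \tensor B}\;\tensor R
\qquad\qquad
\frac{\Gamma \prov A \qquad \Gamma \prov B}
     {\Gamma \prov A \& B}\;\& R
\]
```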

Bearing that in mind, Frank writes on the chalkboard: cutAdmissible : ((conc A $\tensor$ (hyp A $\lol$ conc C)) $\&$ conc C) $\lol$ type. which seems really slick and nice — just like higher-order abstract syntax uses binding in the metalanguage to simulate binding in an object language, here it looks like we can use linearity in the metalanguage to talk about linearity in the object language.

The only problem is that it totally doesn't work! One obvious problem is that LLF didn't have $\tensor$ (even though CLF did) but it's not terrifically difficult to imagine working around that by coding up something kind of like $\tensor$ even without the elim rules working right — such a thing would probably be good enough for representing 'disjointness of resources' in the sense that cutAdmissible needs.

Another obvious problem is that instead of the type family having a nice distinct set of arguments joined by $\to$, ending in type, it has just one argument, which is a decently complicated beast. How are we even to name its subparts when we want to talk about logic programming input/output modes, and about which argument is supposed to get smaller during totality checking? Not at all impossible to address, but it sounds a little like an ergonomic nightmare.

Linear Dependency

But the real kicker, the biggest obstacle, isn't either of these, but something more basic: we didn't know how to actually have kinds of the form $A \lol \rtype$ in the type theory without everything blowing up. If you try to have a dependent function type like $\Pi (x \hat : A) . B(x)$ where the bound variable appears in $B$, but is also supposed to be somehow linear, all kinds of weird phenomena pop up where the variable's double role — as an expendable resource and as an index for other type families — seems at odds with itself; the variable gets stranded on one side of a multiplicative context split, despite still being necessary on the other side to make a type make sense, and it's confusion everywhere you look.

There has been work about linear dependency that's come out since, things like McBride's I Got Plenty o' Nuttin' — but I'm still not sure they enable stating cut admissibility like the above. I'd love to be corrected if I'm wrong, though!

Anyhow, I'm going to continue by shamelessly discussing how I ended up thinking about the problem, and where it led me, because I do genuinely think it ended up in an interesting place.

To Be Continued

But not right now! This feels like a natural stopping point in the story, which I'm going to try to break up into reasonably small chunks so I don't put off writing because it's too much all at once.

This is where I was in the middle of trying to work on my thesis, trying to imagine what sense could be made of $A \lol \rtype$. Next time I'm going to try to summarize what, looking back from the present, I think of as The Point of my thesis, which is a certain way of doing an end-run around the problems of linear dependency by disentangling the linearity of it from the dependency.

(Next: "Everything I've Learned About Logic in the Last Decade or So, Part 2")