Creative Annealing
This is a feeling I've been having a lot lately. The kernel of it
is something that is, I think, wholly unremarkable in
almost every creative field, but it's something that I've had to
learn and relearn in my guts several times. I'm going to try to
state the thing I'm talking about, and it's going to sound very
bland and obvious, but I'll say it anyway:
You probably have to make lots of wrong things before you're able to make the right thing.
An artist makes sketches. A writer makes rough drafts. A musician rehearses. An
architect makes models
(which, as
such, are completely ineffective at achieving the goals of the actual building).
Ok, sure, why is this interesting?
Is Mathematics Special
Because the notion of 'wrong' — or, maybe it's better to
say, our expectations of the notion of wrongness — in art,
and writing, and music, and architecture, and so on, is typically
somewhat different from what wrongness entails in math. The
'wrongness' that gets remediated through refinement, practice,
etc. is, more or less, a subjective wrongness that is never
absolutely present or absolutely eliminated. The 'wrongness' of a
center-for-ants architectural model is a kind of
wrongness-of-symbolic-representation that isn't even really a
fault. A thing is made which says things about the real
object's structure, but it needn't be (and its usefulness is
predicated on not being) as fully developed as the real object.
But mathematics — good, old-fashioned formal mathematics
with Axioms and Definitions and Proofs — has a very clear notion of wrongness,
and on the face of it, it doesn't seem to have much use for wrong proofs.
And yet, here I am, with a pile of definitions and assertions
of consequences that I've been tinkering with for the last month
or so, and they're just wrong, and I know it. Some of the
asserted consequences do not follow from the definitions. Really
big, red-flag things, things that 100% need to be true for the
system to be taken at all seriously as a proof theory, things like
being able to implement identity functions, for crying out
loud, just flat out don't work yet. And yet I don't feel very
bothered about it!
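(To spell out what that sanity check amounts to, in Lean, say, rather than in the system I'm actually fiddling with: the bar is just that any sensible proof theory should let you write the identity function, or equivalently derive A → A.)

    -- A minimal sketch in Lean, purely for illustration; this is not
    -- the system from my notes.  The identity function at an
    -- arbitrary type:
    def ident {A : Type} : A → A := fun a => a

    -- The same check read through propositions-as-types: A implies A.
    theorem a_implies_a (A : Prop) : A → A := fun h => h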
I mean, don't get me wrong, I'm somewhat bothered. Some
of the section titles in my LaTeX notes are things like Slight
Sense of Panic. But it's not surprising; it's normal. This
is what research looks like.
The conclusion that I'm drawn to is that math
actually isn't very special, and depends on drafts and
naive models and broken-assed garbage as much as any other field.
Certainly I should mention physics, in which QFT is the
shining example of how wrongness — by which I mean in this
case "lack of secure mathematical foundations" — has
provided apparently no impediment whatsoever to making just
stupidly accurate predictions about the physical world. On the
other hand, I am still a mathematician, and I find this
state of affairs unsatisfying as an endpoint — but it
does corroborate, I think, the notion that the human endeavor
of doing mathematics can survive and make progress, even for
decades, on the basis of merely some partially understood
principles and some partial formal probes into their
sensibleness.
How Can This Possibly Work
I think some things that make it possible to work on "wrong proofs" for so long
without becoming discouraged (at least for me) are:
-
The mere fact that it's worked out before — that I've spent
a long time massaging a gradually less and less incoherent pile
of intuitions and conjectures into a formally functioning piece
of mathematics.
-
Being able to 'smell' signals of near-rightness among the
general cloud of wrongness. Things being more symmetric than
you'd expect. Theorems being aaaalmost satisfied. Things going
"the way they usually go" 90% of the way, leaving you with the
knowledge that what you need to think about is that
last 10%.
-
Having notation that's compact enough that you can do small
calculations reliably enough to be confident in which parts of
the whole are wrong or right. I am absolutely the worst
at making frequent random algebra mistakes, and the rate at
which I make them seems to be simply proportional to how many
symbols I have to write down. Figuring out how to present a
small-as-possible cross-section to the bit-flipping cosmic
rays of human stupidity is essential.
-
Being able to use software in various ways that are in
principle unnecessary but in practice incredibly helpful for
guiding one's intuition through the swamp: certainly proof assistants, and
even the humble ability to copy-and-paste-and-modify text in an emacs buffer.