Writing Bankruptcy
Bankruptcy
I have accumulated a lot of writing debt lately. Unfinished series of posts, drafts that need to be reviewed, texts that need to be edited, etc. This is not only a result of me not having enough time: my writing process simply generates more debt than it can pay off.
My process is “Draft bullet points about an idea, split out what can be written right now in a self-contained format from what could be written in the future, write a first draft, send it for review, edit it, send it for a final check, and publish”.
Despite the debt, this process has brought me some benefits: I got much better at writing, gained 300 subscribers here, and, by sending out drafts for review, shared a lot of ideas with the people around me. But overall, it is too heavy a process, and I have accumulated too much debt.
As a result, I am now declaring writing bankruptcy. I am stopping my current writings, and starting a new process instead.
Why write in the first place?
I know a few people who write for the pleasure of it. They enjoy writing, or being read.
This is not my motivation. I write because others are missing information that would make all of our lives better if they had it. Specifically, I write because I need to communicate at scale, and get better at it.
New Format
For the new format, I am planning to keep writing notes every day on my phone and laptop, and to flush them out here regularly, with minimal styling. I’ll try to cap this at one hour per day.
This will result in posts that are lower in quality, less coherent, and less thematic than before. But then again, I have only written one post in the last three months.
Still, to make it clear, I will tag them with Notes: in their title, so that they are easy to separate from the rest. As you can see, this very post is the first one!
Topic: Computational Complexity
Introduction
Computational Complexity Theory is a field of computer science, most famous for its P vs NP problem.
Scott Aaronson wrote a 60-page paper about the philosophical implications of complexity. This paper, titled “Why Philosophers Should Care About Computational Complexity”, is one of my favourite papers.
But here, in the spirit of the new format, I do not want to write my own 60-page text about all the philosophical implications of computational complexity. I just want to share some notes.
Why Computational Complexity?
Maths is very abstract. As a result, it deals with many irrelevant entities, which can easily make mathematical results irrelevant. An example is the Banach-Tarski Paradox: a theorem stating that you can split a ball into a few subsets, and reassemble those subsets into two balls, each the same size as the original.
That you can duplicate a ball just by disassembling and reassembling it seems obviously wrong: it contradicts the conservation of matter. And indeed, the construction requires non-physical, non-computational assumptions (the axiom of choice, which yields non-measurable sets) involving very strange entities. Those exotic entities make intuitions from that sub-field of maths not really translate to most real-world situations.
Keeping track of which entities are relevant or not is hard, and Computational Complexity is one way of doing it. It is not perfect, as relevance is contextual, and hard to formalise.
I see Computational Complexity as the continuation of thinking about which formal patterns can be practically manipulated.
Definability
First, consider all possible mathematical objects. For instance, all the real numbers, all sets of natural numbers (including infinite ones), or all possible spatial concepts (arbitrary subspaces of our universe).
Most of these objects are not only irrelevant, they are not even definable: there are simply too many of them. This means that even though we can talk about “the real numbers” as a set, we cannot talk about each individual real number. There are more real numbers than there are texts we could ever write to uniquely pick one out. The same is true for sets of natural numbers, or for spatial concepts.
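To make the counting argument concrete, here is a minimal Python sketch (names and the two-letter alphabet are mine, just for illustration). Every definition is a finite string over a finite alphabet, and those strings can be listed one by one, so there are only countably many of them; Cantor’s diagonal argument shows the reals cannot be listed this way.

```python
from itertools import count, islice, product

def all_texts(alphabet="ab"):
    """Enumerate every finite string over a finite alphabet.

    Every possible definition appears at some finite position in this
    enumeration, so there are only countably many definitions. Since
    the reals are uncountable, most reals are never uniquely picked
    out by any text.
    """
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            yield "".join(chars)

print(list(islice(all_texts(), 7)))  # ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```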
Computability
Then, even within definable mathematical objects, many are still not computable. There are numbers that we can unambiguously define, but that we are guaranteed never to be able to compute. One of the most famous examples is the Busy Beaver function BB(n): among the programs of a given size n that eventually halt (rather than running forever), it asks how long the longest-running one runs.
This function is not computable, and it has been demonstrated that even for “small” inputs like 748, the value of BB(748) can never be proven within standard mathematics (ZFC).
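To see why it cannot be computable, here is a toy sketch under a simplified machine encoding of my own (not the standard Busy Beaver formalism; all names are mine): if we could compute BB(n), we could decide the halting problem, which is known to be undecidable.

```python
def run_for(machine, steps):
    """Simulate a tiny Turing machine for at most `steps` transitions.

    A machine is a dict mapping (state, symbol) -> (write, move, state);
    reaching a (state, symbol) pair with no transition means halting.
    Returns True if the machine halts within `steps` transitions.
    """
    tape, pos, state = {}, 0, "A"
    for _ in range(steps + 1):  # +1 so a halt after the last step is seen
        key = (state, tape.get(pos, 0))
        if key not in machine:
            return True
        write, move, state = machine[key]
        tape[pos] = write
        pos += move
    return False

def halts(machine, bb_value):
    """Decide halting, *given* BB(n) for this machine's size.

    BB(n) bounds how long any halting n-state machine can run, so a
    machine that runs past BB(n) transitions will never halt. Since
    the halting problem is undecidable, no computable function can
    supply bb_value: that is exactly why BB is uncomputable.
    """
    return run_for(machine, bb_value)

one_state = {("A", 0): (1, 1, "H")}  # halts in one step
looper = {("A", 0): (0, 0, "A")}     # loops forever on the same cell
print(halts(one_state, bb_value=1))  # True
print(halts(looper, bb_value=1))     # False
```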
Computational Complexity
Unfortunately, even though it is more restrictive than definability, computability is still too lax. Most concepts that are computable in mathematical theory can still never be computed in our physical universe.
For instance, EXPSPACE-complete problems provably require an amount of memory exponential in the input size. Solving them, even on moderately sized inputs, would require more memory than there are atoms in the universe.
This is where computational complexity shines. It defines multiple classes of problems and concepts which, in different contexts, are essentially tractable.
The main ones are L (LOGSPACE) and P (Polynomial Time). Though still not perfect, showing that a concept can be constructed (or that a problem can be solved) in L or P shows that it is tractable. It means that we will likely be able to model it, combine it with others, and so on.
This is not exactly true, but it is already much better than completely abstract maths, which is where we started!
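To give a feel for the exponential/polynomial divide, here is a toy illustration (Fibonacci numbers, not tied to L or P specifically; names are mine): the same problem, solved by an exponential-time algorithm and a polynomial-time one.

```python
from functools import lru_cache

def fib_slow(n):
    """Naive recursion: roughly 2^n calls. Hopeless past n of about 40."""
    if n < 2:
        return n
    return fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    """The same function, memoized: O(n) calls. Fine for n in the hundreds."""
    if n < 2:
        return n
    return fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(300))  # instant; fib_slow(300) would outlast the universe
```

The class boundary, exponential versus polynomial, predicts which version is usable in our physical universe, regardless of hardware.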
Reductions
Reductions are a concept from Computational Complexity. Complexity focuses on establishing which problems are harder than others, and reductions are one of its main tools for doing so.
A problem X is said to be “reducible to Y” if you can show that solving Y is enough to solve X. There are two key insights behind this definition.
The first insight is that if you can use Y to solve X, then fundamentally, solving Y is more powerful than solving X. In other words, if you could choose between a device that solves X on demand and one that solves Y on demand (such a device is called an oracle), you’d pick the oracle for Y.
Basically, all of the following are roughly equivalent: Y is harder than X, Y is more powerful than X, Y is more general than X. Conversely, X is easier than Y, X is weaker than Y, and X is a special case of Y.
The second is that if it takes you more effort to use Y to solve X than to solve X directly, then, for your purposes, Y is not more powerful than X. In other words, if you just need to solve X, you should stop thinking about Y.
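A toy example of the first insight in code (duplicate-detection standing in for X, sorting for Y; names are mine):

```python
def has_duplicate(items, sort_oracle=sorted):
    """Reduce duplicate-detection (X) to sorting (Y).

    Any sorting procedure works as the oracle: once the list is
    sorted, duplicates must sit next to each other, so one linear
    scan finishes the job. Solving Y is enough to solve X.
    """
    ordered = sort_oracle(items)
    return any(a == b for a, b in zip(ordered, ordered[1:]))

print(has_duplicate([3, 1, 4, 1, 5]))  # True
print(has_duplicate([2, 7, 1, 8]))     # False
```

The second insight shows up here too: `len(set(items)) < len(items)` solves X directly in linear time, so if duplicates are all you ever need, the detour through sorting is wasted effort.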
—
You can just ask ChatGPT to solve various problems. In that sense, those problems reduce to ChatGPT.
On the other hand, if it takes you more time to get ChatGPT to solve your homework than just doing it yourself, then you should just drop it and actually do the work yourself.
And sure, there might be a magic prompt or instruction that gets ChatGPT to perform well. But from a practical standpoint, what matters is not whether such a prompt exists; what matters is whether you already have access to it.
Topic: Boundaries
I find “boundaries” to be quite an interesting concept. A lot of our world revolves around boundaries, yet I haven’t seen much abstract treatment of them. So I’ll just share a couple of thoughts.
Games
Games are often framed as being about winning.
I think this misses the key part of games. Games are about The Rules: boundaries between acceptable and unacceptable behaviours. You can play a game without a win ever happening, but you can’t play a game without any rules.
Cheating is about violating the boundaries of the game for your own gain.
Skirting the rules is about playing with the ambiguity present within any boundary: all boundaries are fuzzy, but that doesn’t mean they don’t exist.
Both the spirit and the letter of the rules are important. The spirit is, because formal rules often fail at capturing what was meant. The letter is, because there is a cost to always having to adjudicate each situation case by case.
Outside of the boundaries of the game, you get the real world. If you die in a game, you don’t die in real life. But if you play poker against someone armed with a gun, I don’t recommend winning too much if you want to leave the room with your money.
A game is a contract between players to stay within the boundaries. If one of the players stands to lose too much from respecting the contract, they can just stop. This is usually violent.
Very often, someone in a game will forget that the boundaries are relative. They might try to push the other players to lose their cool, so that they break the rules and lose. This behaviour is toxic. The game exists in large part because people take part in it in good faith. This type of behaviour kills that goodwill, and makes it more costly for everyone to keep playing. This does not end well.
I would be interested in a game-theoretic treatment of boundaries, where the focus is not on winning strategies within games, but on the conditions under which the games themselves are stable: their boundaries, a quantification of “good faith”, and the social contract behind participation in the game. If anyone knows keywords or papers, feel free to send them my way.
Geopolitics and Politics
Geopolitics has various rules to it. But as always, international war is the continuation of geopolitics with other means. If countries disagree too much on international law, on the interpretation of treaties, or just have lost all good faith in each other, they can always go to war.
Similarly, civil wars and revolutions are the continuation of democracy with other means. If civilians disagree too much on laws, on the interpretation of the rules, or just have lost all good faith in each other…
VAR
In football, VAR (the Video Assistant Referee) assists the on-field referee in big games by reviewing video footage of key decisions. VAR came about because various referee decisions were being contested by viewers and commentators, who could simply replay the action and see that the referee’s call was wrong.
This might seem like an unalloyed good, a net win.
But obviously, there is a petition and a Twitter account against VAR.
The reasons invoked are that VAR rulings often feel arbitrary, stop the flow of the game, and are sometimes mistaken. (Of course, there is a bunch of typical Twitter BS on top of that.)
Still, I think that laws feel a lot like VAR to more and more people. And I don’t think the solution is banning VAR.
Vagueness
Our world is not binary, and not single-dimensional. This means that boundaries are vague: complex and fuzzy.
As a result, there is always a gap between the spirit of a boundary and the actual boundaries that we put up in the real world: laws, formal rules, Chesterton’s Fences, frontiers, p = 0.05.
Real-world boundaries are almost always arbitrary: they can never capture the full breadth of the spirit behind them. This is not repeated, hammered, and drilled in nearly enough.
When a real-world boundary hits an edge case where it gives an intuitively wrong decision, there are very often two clashing reactions:
“The boundary should be taken down!”
“The boundary is actually justified!”
Both are wrong.
The boundary is often not justified. Real-life boundaries are not perfect, so we should expect edge cases where they give wrong decisions.
Nevertheless, it should not be taken down. Removing a boundary does not yield better decisions; it leads to even worse ones. This is why there are boundaries in the first place.
What should be done is restating where the current boundary is, understanding what led to the wrong decision, discussing where a better place for it could be, and weighing the cost of moving it there.
Conclusion
No conclusion! Those are just random notes that I wanted to share.