How to Identify Futile Moral Debates
A small framework for identifying which moral discussions are worth having
Quick Summary: We do better when we (1) acknowledge that Human Values are broad and hard to grasp; (2) treat morality largely as the art of managing trade‑offs among those values. Conversations that deny either point usually aren’t worth having.
Morality is about pragmatic matters. If we get better at morality, we should get better at living our lives, becoming happy, making compromises together, writing good laws, and building non-profits that durably improve our world.
Sadly, there has been little progress on this in recent decades. Moral discourse is, by and large, bad.
This can be ascribed to emotional causes, such as people slinging shit at each other instead of trying to solve problems together.
This could also be ascribed to structural causes, like Section 230 or the media ecosystem having been taken over by people who refer to their users as “dumb fucks”.
But today, I'm more interested in the rational causes.
Namely, I am interested in two specific truths about morality that people reliably miss.
1. Human Values Are Hard to Figure Out
I have explained before why we should study Human Values, but I'll summarise it quickly here:
In the past, trying to figure out Human Values allowed us to improve morally and design better societies. So there's a track record there.
Furthermore, even now, people often live their lives ignoring huge aspects of themselves and of what they value, which leads to persistent unhappiness.
Thus it seems that we are still missing a lot about Human Values, and that we ought to study them more.
I have also written in the past about ideologues over-fixating on one narrow slice of Human Values and becoming worse people for it.
In general, I think people simply underestimate the extent of Human Values.
It is very easy for anyone to care only about their explicit moral principles (Freedom! Security! Equality! Social Justice!), persistently ignore all of their implicit moral beliefs, and then assume that their opposition are assholes who deny their values.
In reality, the other side is often doing the very same thing. And this leads to very bad attractors.
This dynamic is an unending source of polarising scissor statements:
Pick a situation whose morality completely changes if you consider it through one slice of Human Values (Security!) versus another (Equality!), like Immigration.
Assert that it is Good or Bad.
Let people fight about it.
Let social media reward you for the fight, feeding more people to the mêlée.
I do believe that if smart, reasonable people agreed that Human Values are hard to figure out, we could make a dent in this type of dynamic.
Instead, I see them constantly waste their time getting intellectually trolled by the Controversy du Jour.
2. Morality is Primarily about Trade-Offs
The second truth is the hard one.
Morality is about trade-offs.
For now, we cannot reliably define Human Values, let alone reliably reason about them, let alone quantify them. This makes all of our typical mathematical tools for handling trade-offs unfit for the job: economics, engineering, programming, etc.
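To make the contrast concrete, here is a minimal sketch, in Python with invented numbers, of how engineering-style tools handle trade-offs once values are quantified. The option names, value dimensions and weights below are all made up for illustration; the point is that every step presupposes numeric scores that morality does not give us.

```python
# A toy trade-off solver: pick the option with the best weighted score.
# All numbers here are invented for illustration; the hard part of
# morality is precisely that we cannot produce them reliably.

options = {
    "policy_a": {"security": 0.9, "equality": 0.3},
    "policy_b": {"security": 0.4, "equality": 0.8},
}

weights = {"security": 0.5, "equality": 0.5}  # Who sets these? That is the moral question.

def weighted_score(scores: dict[str, float]) -> float:
    """Collapse several values into one comparable number."""
    return sum(weights[value] * score for value, score in scores.items())

best = max(options, key=lambda name: weighted_score(options[name]))
print(best)  # Meaningful only insofar as the numbers above were meaningful.
```

Once the numbers exist, the machinery is trivial; without them, it is empty. That is the gap between moral trade-offs and quantitative ones.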
Moral trade-offs are everywhere.
Short-term vs Long-term. How do I weigh my short-term preferences against my long-term preferences? If I always focus on becoming a better person instead of enjoying life, I will simply be unhappy. If I always focus on making my partner become a better person instead of pleasing them, they will leave me.
Many vs Few. How do we weigh the interests of the many against those of the few? We do not want a small group of people to have power over everyone else, so the collective does matter. But conversely, we do not want to kill people for their organs or sacrifice a child for the happiness of the many. So the question is how to draw the line, situation by situation: taxes, expropriation, compelled work, military or civic service, etc.
Givers vs Takers. How do we weigh the interests of the people who give to the collective against those of the people who take from it? Should we grant privileges to people who contribute a lot in taxes, in the vein of Noblesse Oblige? In the past, I wrote about the Responsibility of the Weak: if we do not address it explicitly, how do we prevent free-riding, people not pulling their weight even though they could?
Personal vs Collective. How do we weigh our personal interests against those of our family, friends and neighbours? Should my happiness trump that of my children, partner or parents? What if I have no children, and it's about my happiness versus that of my entire family?
Across Countries. When different countries have different values, what should we do? Is war the only solution? Is there any way we could have an international debate about a cross-country issue and reach a better-than-current resolution? Or is the best we can do an unstable geopolitics of implicit threats of war and constant memetic and propaganda warfare, where everyone loses?
Across Rationality. Worse: what should we do when a country doesn't even have coherent values, when it is devolving and in great internal turmoil? What do good diplomatic and trade relationships with it look like, or is any relationship inevitably exploitative post-colonialism? In the real world, the entities that interact are rarely equally coherent and rational.
Across People. To be clear, the last two questions are not only about countries. They apply to all types of entities, most importantly people. What do we do when we morally disagree with people? Are we doomed to fight them, or to outcompete their morals? What if they're too stupid to have proper morals? Do we just overpower them? That seems quite uncivilised; we have already done better than Homo Homini Lupus, the State of Nature and Might Makes Right.
Deontology vs Consequences. At a more abstract level, how do we weigh deontological principles against consequentialist principles? This is the point of the trolley problems. The answers to each variant do not matter that much. What matters is uncovering the principles we use to balance deontology with consequentialism.
This is in large part why a lot of moral thought experiments focus on dilemmas: by picking crazy scenarios where both choices are extreme and yet make sense, the trade-off principles become more salient, and we can reuse them in more realistic situations.
Identifying Futility
Internalising these truths is very important. It allows for better conversations that do not have to rehash the basics, and that bypass common pitfalls like "are you saying that you hate waffles??"
More generally, I think it's good to have a few criteria to identify futility in discourse.
We have limited time and attention, and we'd better spend them first on the people who matter: people who have a chance of changing their mind, and then of doing Good.
It's too easy to be trolled by someone who is wrong on the Internet! The more egregious they are, the more likes they get, and the more we feel like countering them.
But I believe it is much better to look for people who are kinda right, and instead try to help them become more right or do Good. This is my rationale behind starting the Torchbearer Community.
As part of this, I have a couple of rules of thumb, like: if someone directly insults other people, never respond to them. (And by default, mute them, unless they have good content.)
Now that we're nearing the end of this essay, I can share two more!
If someone cannot understand or acknowledge that there are more Human Values at play than the ones they have identified, disengage.
If someone cannot understand or acknowledge that there are Trade-Offs, disengage.
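For fun, here is the same filter written as a tongue-in-cheek Python sketch. The three predicates (`insults_people`, `denies_values_breadth`, `denies_trade_offs`) are hypothetical names I'm introducing for illustration; in practice each one is a judgment call, not a boolean you can read off a profile.

```python
# A tongue-in-cheek triage filter for moral debates, combining the three
# rules of thumb above. Each field stands in for a human judgment call.

from dataclasses import dataclass

@dataclass
class Interlocutor:
    insults_people: bool         # Directly insults other people.
    denies_values_breadth: bool  # Cannot acknowledge Human Values beyond their own slice.
    denies_trade_offs: bool      # Cannot acknowledge that morality involves trade-offs.

def worth_engaging(person: Interlocutor) -> bool:
    """Return True only if none of the futility criteria apply."""
    return not (
        person.insults_people
        or person.denies_values_breadth
        or person.denies_trade_offs
    )

troll = Interlocutor(insults_people=True, denies_values_breadth=True, denies_trade_offs=True)
print(worth_engaging(troll))  # False: disengage, and mute by default.
```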
Conclusion
I think this is a pretty nifty way of thinking about morality in general.
On the one hand, morality is about identifying the Human Values that are relevant in a given situation. This is a more introspective investigation, where the answer lies within. We must ask ourselves and each other what we truly care about, what is actually good for us.
On the other hand, it's about discovering principles that let us resolve trade-offs that are not purely quantitative. This is more of a scientific question. Managing trade-offs is hard, and without good solutions we end up hurting each other, acting in contradictory ways, participating in destructive endeavours, or simply settling for less efficient courses of action than we could.
On this, cheers!