Excuses and foregone conclusions
In this post I talk about excuses.
My definition of excuse is: any argument for which the conclusion is written first, and a reasonable-sounding justification for the conclusion is written afterwards.1
Personal excuses
Imagine my friend says I’m playing video games too much. I look for a reason why it’s ok to play video games as much as I do, and I find one:
Step 1: Ok, so what if I didn’t play video games?
Step 2: I would have a lot more time in which I wouldn’t know what to do.
Step 3: This seems bad. I would just pace around my house bored and without anything to do all the time.
Final output: I have a lot of free time and I like playing video games. What’s the problem?
The problem here is that it took me only 5 seconds of mental effort to come up with a plausible reason to keep playing video games. As soon as this happens, I can stop thinking and blurt out the response.
Another example: my roommate complains that I never wash the dishes.
Step 1: When would I wash the dishes?
Step 2: I never have time to wash the dishes (judging by the past few days, which are what's most salient in my memory).
Step 3: I’m very busy.
Step 4: My roommate is not as busy as me. That’s why they manage to wash the dishes more often than me.
Output: I just don’t have time to be as zealous as my roommate in washing the dishes. They’ll have to deal with it.
Again, as soon as a plausible excuse is found, the stopping criterion is hit. This is no way to change my mind toward the correct belief.
How many times have you failed to find a reasonable-sounding excuse to keep doing what you're doing?
A serious inquiry into this question is much more difficult and can’t be done in 5 seconds.
In the video games example, it might require thinking about what my life would look like if I didn’t play video games for a while, enough time to get other hobbies etc., and whether I prefer my current life to that one.
In the dishes example, I would at the very least have to mentally list all of the engagements that are making me so busy, and check, for each one, whether it is really so critical that I can't spare 10 minutes per day to wash the dishes.
If we stop thinking as soon as we find a plausible reason to do so, we will usually fail to change our mind when we should have.
PR
The most trivial example is a company's PR department, or a CEO doing PR work, having to explain, justify, or apologize for an unpopular choice made by the company.
Their job is only to do damage control on public opinion. The “algorithm” which is producing the PR speak is completely separate from the one making the decision, and the former has zero input into the latter.
All the relevant decisions have already been made by the time a matter is talked about publicly.
Take every time a company says: “We care about the safety of our customers / our impact on the environment / the privacy of our users / any externality X a company might cause”.
What information are we getting from listening to this statement? The company’s CEO or other representatives would say the exact same thing regardless of how relevant decisions are made internally.
So what is the purpose of this statement? To make people feel a warm fuzzy feeling? To fill the air with sound while they're testifying to Congress?
To better understand this, let’s imagine a case where the PR speak does indeed tell us something about the way internal decisions are made.
Consider a non-psychopathic, individual human whose job is to make artisanal chairs. Say this person comes to you and tells you “I deeply care about the quality of my products”. What can you glean from this statement?
This artisan probably goes out of his way to use high quality materials and puts a lot of work into each chair. If he receives a customer complaint about a poorly made product he sold, he’ll probably feel bad and this might spur him to look for ways not to make the same mistake in the future.
In this case, the same underlying factor (whether the artisan cares about the quality of his product) is causally upstream of both the statements he makes, and of the way his products are constructed.
In the context of a company, it is exceedingly easy for this guarantee to break. All it takes is for at least one of these conditions to materialize:
The people speaking the PR speak and taking decisions at a company are different people, so it really doesn’t matter what the PR people care or don’t care about.
Someone appears to have the power to affect internal decisions, but actually doesn't. For example, the CEO of a publicly listed company who would get immediately fired by the board of directors if they took a decision the board doesn't like.
The company CEO is the kind of person who is ok with lying or at least heavily sanitizing information about their own beliefs and actions.
The company CEO feels bad about what is being done, but what will be said and done has already been decided by a strategic cost-benefit analysis, and the “feeling bad” is not enough to override it.
The company CEO does indeed care in his heart of hearts and does have power to affect internal decisions, but the matter being talked about is too complex for the gut feeling of “caring” to guarantee error correction. Refer to the “Personal Excuses” section, except this time it’s more powerful since the whole company is coming up with excuses instead of just one eloquent individual.
The purpose of this type of PR speak is to imitate the noises made by an individual non-psychopathic human when they deeply care about something, subject to the constraints set by the legal department.
However, unlike in the case of the chair artisan, we can’t get any information about how future decisions will be made at the company from PR statements.
Of course there are some exceptions to this “we can get zero information from PR” claim, and we’ll go into an interesting one in the next section.
Cambridge Analytica
The story
I will say Facebook throughout this section even when I refer to events that happened after the rename to Meta. Sue me.
Let’s look at an example of a scandal involving a big company: Cambridge Analytica. I think it’s a good case study.
You should read about it to understand this section, but the short version is that a consulting firm got a bunch of data about Facebook users, used it to perform a psychometric analysis of said users, and then used that to run microtargeted political ads in order to influence US2 elections in various ways.
Mark Zuckerberg went to Congress and profusely apologized for the incompetence that led to the data breach, as well as for the decision not to notify users when Facebook first learned about it in 2015.
And it wasn't all empty words: the company was indeed restructured in ways that probably reduce the diffusion of responsibility for similar failures in handling user data.
Today, a third party would presumably3 struggle to get data out of Facebook which constitutes such a blatant violation of privacy, e.g. data detailed enough to build accurate psychometric profiles.
So, this is good, right? The way Facebook makes decisions was actually changed as a result of the scandal. Doesn't this conflict with what I said in the PR section of this post, that we can't glean information about how a company will make decisions in the future from its PR communications?
Wait a second… what happened? We were talking about microtargeted propaganda and underhanded, nefarious influence on democracy, and now all of a sudden the story became entirely about the data breach?
Sure, the data breach was one of the things people were pissed about, but what about the revelation that people's online data enabled the building of psychometric models detailed enough to support manipulative tactics that could sway their votes?
Not only that, but the fact that all of this data has already been conveniently collected by a single, private entity?
And what about the recognition that a machine had been built which enables actors (including foreign powers) to manipulate or weaken democracies?4
The old switcheroo
One way or another, it looks like the scandal has been reframed as a data breach scandal, or at least that's the part that received the most focus.5
For the sake of simplicity, let’s split the scandal into three parts:
The data breach.
The ability to make psychometric models and use them against people.
The ability to manipulate democracy.
What is it that makes the “data breach” aspect special, distinguishing it from the other two?
Well, this is easily answered by going through each of the three topics.
Does Facebook want to keep leaking data to third parties?
No! Facebook didn’t get paid for the data it leaked to Cambridge Analytica, and it has no interest in giving out data for free…
In other words, the data breach was literally a mistake from the point of view of Facebook: they lost reputation in exchange for nothing.
They didn't do it for gain; they did it out of negligence. For example, they may not have realized it would be that big a problem until it was too late, or they couldn't spare the effort to prevent it.
Does Facebook want to keep building accurate psychometric models about people?
Hell yes!!! Anything to optimize content delivery!!!6
Of course, there’s no line of code or row in a database that says something like:
User john_jones ranks in the 80th percentile for neuroticism.
It's all a black-box neural network, so anyone at Facebook can claim they don't know why this particular post was recommended to this particular user, or at least that they didn't make it happen on purpose.
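To make the contrast concrete, here is a minimal sketch (all names and numbers are hypothetical; this is not Facebook's actual system) of the difference between the explicit field critics imagine and the learned representation that actually exists:

```python
# A minimal sketch with hypothetical names and numbers; not Facebook's code.
# The point: "no row stores the trait" is compatible with the trait being
# encoded implicitly in a learned representation.
import numpy as np

# What critics imagine: an explicit, auditable field.
explicit_profile = {"user": "john_jones", "neuroticism_percentile": 80}

# What a modern recommender actually stores: an opaque vector learned from
# engagement history. No dimension is labeled "neuroticism".
rng = np.random.default_rng(0)
user_embedding = rng.normal(size=256)
post_embedding = rng.normal(size=256)

# Posts get ranked by a score like this one, with no human-readable rationale...
score = float(user_embedding @ post_embedding)

# ...yet a simple probe trained on these embeddings could still recover the
# trait, so "we never stored it" is not the same as "the system doesn't use it".
```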
Does Facebook want there to be an easy way to influence, attack, and/or weaken democracy?
This one is fuzzy.
I’m not aware of any reason to believe that they have a direct interest in doing this. But that doesn’t mean much by itself: oil companies don’t want to cause oil spills, they just want to extract oil and spend less money on… whatever safety measures are required to minimize the chance of spills.7
To answer this question, I imagined that I was one of the groups which hired Cambridge Analytica back in the day. Would I be able to pursue the same strategy today? If not, how close can I get?
It looks like, in this regard, some things have changed since 2015:
Facebook will not let you post ads about political or social issues if you are an entity which is not based in the target country. I can’t be bothered to check, rigorously, if this restriction is effective in practice.
From a quick search, I couldn't find any rules that prevent an organization that is based in the US but funded by foreign entities from posting political ads in the US. For example, this happened in Poland, and a state agency had to find out and request that the ads be taken down, so it looks like Facebook by itself might not catch this type of violation.
Facebook limits the tools you can use to manually microtarget ads in the US; for example, they don't let you target a location more precisely than a certain radius.
To me, these look like minutiae.
The biggest hole in this defense is that you can post anything8, and Facebook will happily microtarget it for you, as long as it drives engagement. It doesn’t even have to be an ad! You can just post!
Not only that, but Facebook's models are presumably way more powerful than Cambridge Analytica's9 basic Big Five psychometric model, and based on so much more data!
For example, whereas Cambridge Analytica only had access to users' likes, Facebook gets to see a full log of all the content they ever saw or clicked on, and how many seconds they spent reading / watching / dwelling on it.
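For a sense of how little machinery the 2015-era approach needed, here is a rough sketch in the spirit of the published likes-to-traits research of that period (synthetic data throughout; not Cambridge Analytica's actual pipeline): a plain linear model over a user × page "like" matrix.

```python
# A rough sketch with synthetic data; not Cambridge Analytica's actual
# pipeline. Published research of that era showed that a simple linear
# model over a sparse user x page "like" matrix predicts Big Five traits.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
n_users, n_pages = 1000, 500

# Binary matrix: likes[u, p] = 1 if user u liked page p (~5% density).
likes = (rng.random((n_users, n_pages)) < 0.05).astype(float)

# Pretend a subsample of users also took a personality survey, giving
# ground-truth "openness" scores to fit against (synthetic here).
true_weights = rng.normal(size=n_pages)
openness = likes @ true_weights + rng.normal(scale=0.5, size=n_users)

# Fit on the surveyed users, then score every other user on the platform.
model = Ridge(alpha=1.0).fit(likes[:800], openness[:800])
predicted_openness = model.predict(likes[800:])
```

A platform that additionally sees dwell time, watch history, and click logs has strictly more signal to feed into the same kind of model.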
So, if things have improved since Cambridge Analytica, this is, as far as I can tell, the extent of the improvement:
A foreign entity10 must post instead of buying ads;
It must compete for engagement against memes, AI slop and any other propaganda that might be running around on the platform.
Doesn’t seem like too bad a deal…
On the other hand, Facebook’s microtargeting will probably be so much better than anything you could’ve done by yourself, so it’s not clear whether Cambridge Analytica’s job would be harder or easier overall than it was in 2015.
Synthesis
Now we can apply my definition of excuse from the beginning of the post to various responses by Facebook to concerns around this scandal and adjacent topics.
Alice: You are collecting an excessive amount of data about me and treating it nonchalantly! I am pissed!
Facebook: We hear your concerns, we are committed to treating your data responsibly and protecting against data breaches.
Not gonna stop collecting it though, we need it to run ads, it’s our business model.
Notice that in each of these cases, Facebook gets plausible deniability about the issue: they show that they care about it, and that they did what they could.
The purpose of the trailing “Not gonna stop…” part of each reply is to show how there is no will to push the reasoning forward past the point where plausible deniability is achieved, in a way which would actually risk causing a change of mind about the core decision.
This is indeed how an individual, appropriately competent11, non-psychopathic human would think if they cared about what is making Alice angry.
They would at least consider whether what Alice really cares about, possibly after being thoroughly informed and after much reflection, is that anyone at all is collecting the data, rather than the data breaches per se.
Continuing:
Bob: I had no idea you were collecting enough data about me to estimate my personality traits and to show me targeted propaganda and I’m pissed!
Facebook: Our goal is to deliver the most relevant and engaging content12 to our users. Our content recommendation systems can sometimes produce unpredictable results, but we would not use the data for nefarious purposes like showing targeted propaganda.
Not gonna stop using the recommendation systems though, or even investigate whether they make decisions based on insights into your personality in a way you would find alarming or unacceptable.
Also, we are aware that our algorithms are delivering microtargeted propaganda, but we didn’t do it on purpose!
And again:
James: I am worried about the effect of your platform on the integrity of democracy.
Facebook: We are committed to free expression, we are not willing to censor posts as this risks undermining open discourse and setting dangerous precedents.13
Not only are we not willing to selectively remove content, but we will selectively promote it if our system says this will drive engagement.
1. Inspired by the old LessWrong post “The Bottom Line”.
2. And allegedly, the UK and others.
3. I say presumably because I haven't tried myself.
4. According to Wikipedia, the reason Christopher Wylie blew the whistle in the first place was to “protect democratic institutions from rogue actors and hostile foreign interference, as well as ensure the safety of Americans online”.
5. From what I can tell, this is the part on which most of Facebook's PR efforts focused, and the one that appeared prominently in the conditions of their settlement with the FTC.
6. “The best minds of my generation are thinking about how to make people click ads. That sucks.” - Jeff Hammerbacher, ex-Facebook engineer.
7. Forgive me, I'm not an oil extraction expert.
8. Subject to caveats, e.g. you can't post content promoting self-harm or directly call for violence, but politically charged content is allowed. Even for things that are prohibited, I'm not sure if moderation is effective in practice, and I don't have time to check now.
9. Just think about how much more powerful Facebook's machine learning models developed in 2025 can be compared to what Cambridge Analytica could do at the time.
10. One which can't afford to set up a shell organization in the target nation, or which wants to post propaganda so obvious that it would be banned instantly…
11. To be clear, this is a very high bar. We don't get it automatically by being good people, or even by being smart. For example, it requires being very good at avoiding thought-stoppers so that we don't accidentally make excuses for ourselves. And we need to be good enough to do it even in a complicated subject.
12. And ads.
13. This is the post-Trump excuse. The pre-Trump excuse would've been about how they did everything they could to fact-check posts.