25 Comments

Thanks for the guest spot, buddy - 10/10 would do again.

There's just too much to respond to here, so this:

- the folks you describe as "rationalists" have been around forever. They are the true scientists (and heretics), while the rest of the population are much closer to sheep. There's a reason why you see "To thine own self be true" carved all over universities.

- there are many Overton Windows. They vary depending on place and time, and are the subject of fierce battles. Such a battle is going on now, with something equivalent to a blitzkrieg having taken place about five years ago. The blitzkrieg has finally been noticed, and the recently installed Wokists are now being challenged. If humanity is lucky they will be soundly defeated.

> - the folks you describe as "rationalists" have been around forever. They are the true scientists

Nah; granted, they are trying when I wouldn't say that of others. It's worth reading EY, but make no mistake, he's an arrogant asshole with his head far up his ass.

Science is just the current social structure; the funding model is obviously flawed, and the weird connection to the military is a historical artifact of the world wars. Rationalists have their founding myth in HPMOR, and other stories are canon, and it's better than the outer science culture; but it's still a bunch of flawed humans running the show.

Very good essay, RC! I think you could extrapolate quite a lot from here about the actual functionality of EA. How well can organizations that care so much about optics outside of "do we achieve our goal X better or worse?" actually focus on doing something better or worse in terms of efficiency? Is there a practical difference between standard charitable organizations and EA-branded organizations if EA orgs are so serious about looking good instead of doing good? I always took EA's competitive advantage to be mercilessly measuring and judging interventions, rejecting those that didn't pay off well compared to others, regardless of how nice they sound or look, or how they make you feel. Seems that is entirely off the table these days.

The awkward thing is that looking good is often a prerequisite for doing good - e.g. if you want to influence policy or raise funds, you need to not be disreputable, which means optics is important.

I don't think EA should be too Machiavellian, since that seems likely to backfire given our generally poor social skills, but while being calculating pragmatists is an important part of the method and brand, "being racist" is definitely something to distance yourself from (as is "being fraudsters" in the SBF case). Obviously you can see this in a lot of ways, but this kind of disavowal is exactly what someone more concerned with outcomes than feelings would do.

Maybe. Note that "being racist" isn't what this is actually about, however. Bostrom was referring to a statistical fact that, while possibly not true or an artifact of the testing process, turns up time and again. The quote saying "This was bad think and we don't like it" wasn't addressing the truth value of his statement, or even the implications, just that it was unacceptable to think. That is very different from being an actual racist.

I also would caution against equating "racist" and "fraudster". If racists were still really good at alleviating poverty in Africa, so be it. "Fraudster" implies they are going to lie about how good they are, or just steal your donations. Again, if someone wants to do good more efficiently, the former has little bearing on the results, while the latter makes you question whether they are even trying to achieve the results.

So, while I agree that to some extent looking good, like you are the kind of person someone should give money to because you will use it well, is a prerequisite for doing good, knee-jerk and inappropriate virtue-signaling reactions probably don't help that a lot, and more likely signal that your definition of "doing good" might not be the same as your stated goals. Especially when the problem with most charitable organizations is that their real goal is getting as much money as possible to funnel into leadership salaries, playing so hard into political optics makes you wonder whether there is really a meaningful difference in EA organizations. Why not just say "That was a rather unfortunate statement, but it was long ago, and look at all the good he has done since. We need to stay clear-eyed and focused on doing the most good we can, and disavowing anyone who made a poorly conceived statement decades ago does not advance that goal."?

I would say it's not unacceptable to think that, but it clearly is unacceptable to say in most social circles, and in the grand scheme of things I would not pick this hill to die on - even if talking about racial IQ differences isn't necessarily racist, it definitely comes with a lot of guilt by association. In the long term I'm far more concerned about the consequences of Effective Altruism being associated with racism than with the consequences of occasionally not making certain observations. I acknowledge this is a trade-off, but always being brutally honest is a great way to lose friends and alienate people.

I probably would go with something similar to that last statement personally, but I will note that Bostrom volunteered to fall on his own sword 25 years ago:

> "I think it is laudable if you accustom people to the offensiveness of truth, but be prepared that you may suffer some personal damage."

Good article.

One way I think you can square the circle is that rationalists are very often "consequentialist" - if I understand correctly, they frequently think that the more rational way to approach an ethical issue is looking at the consequences rather than the intent or the means.

So assuming the EA people are consequentialist rationalists (and I have no idea), there's an argument that it makes perfect sense to disassociate the movement from Bostrom. Our hypothetical EA poohbahs believe that they personally make better decisions when using a rationalist framework, but that framework has led them to believe that any harm done by putting out an anti-Bostrom statement is outweighed by the extra mosquito nets and water purifiers they'll be able to get placed if they're not known as a bunch of racists.

This is what drives some people crazy about Scott Alexander - they want him to be a crusader for "truth," but if I am reading Scott right, he doesn't hold "everything I say is exactly true" as a value in and of itself, and if he can nudge more people towards what he sees as constructive reasoning and debate processes, he's happy to be nicer than some of his fans would prefer.

(Also, he's naturally nice and it seems to really stress him out not to be nice, but rationalists are allowed to have aesthetic preferences too.)

The description of rationalism here reminds me quite a bit of 4chan back in its heyday. The strategies adopted by the two communities are quite distinct: rationalists sought to remove all emotional inflection from belief by relying on pure logic and evidence, whereas 4chan pursued the same goal by deploying shock humor to batter down emotional programming. The basic ethos, that nothing should be off the table for discussion, was very similar; both communities saw the Overton window as a challenge, rather than a guardrail. Both communities also prioritized the argument over the arguer, with 4chan adopting this principle by default due to the simple fact that it was impossible to identify the arguer - every participant in a thread is simply Anon.

It's no accident that anons referred to their style as weaponized autism. It's also no accident that both communities trend overwhelmingly young, male, and brash. The kinds of smart kids who sit in the back of the class and only raise their hands when they've come up with a way to embarrass the nice lady teacher trying to impart her midwit values in the guise of educating them.

Of course, the big difference is that it would never occur to anons to unperson Bostrom for dropping an n-bomb, since they do it so frequently it's essentially a verbal tic.

I would say that the difference between Rationalists and 4chan is that 4channers (in the ideal) go out of their way to be offensive, whereas Rationalists (in the ideal) go out of their way to discuss what is true. A Rationalist would be equally happy discussing the proper taxonomic classification of a newly discovered species of fungus and discussing human IQ differences. A 4channer would find the first discussion pointless, since it offends no one (except maybe a single-digit number of hardcore biologists).

That is indeed the primary distinction. However, 4chan retains a strong truth bias - arguments must stand on their own for the simple reason that the reputation of the author doesn't exist.

This probably all qualifies as 'wrong in little granular ways', but:

> Most EAs were rationalists - one movement sprang from personnel acquired from the other.

This seems wrong, or at least very misleading. The big rationalist names in EA are broadly 'in EA' in the sense that they identified with the rationalist movement, got involved in rationalist organisations, and, because EAs increasingly took longtermism and AI seriously, those organisations became viewed as EA organisations.

Most of the big names and shapers in EA who were at explicitly EA organisations from the start (say, Toby Ord, Brian Tomasik, Julia Wise, Will MacAskill, Holden Karnofsky, Elie Hassenfeld, Nick Beckstead, Rob Wiblin, Owen Cotton Barratt) were involved in rationalism minimally if at all, and inasmuch as they are now it's mainly because of the same confluence described above.

Contra Ro's comment below (and, perhaps, the description of this post), I would say rationalism *in practice* comprises a much broader set of assumptions than merely 'truth-seeking'. The hyperfocus on Bayes' Theorem is an example - ask a statistician and they'll probably tell you it's one useful tool among many equally important ones. There's a culture of expectation around having 'read the Sequences' and internalised their jargon, for example.

There's also a culture of conviction that AI is around the corner and will change everything, and of adherence to certain philosophical schools (preference utilitarianism over hedonistic utilitarianism; value pluralism over value monism; moral antirealism over moral realism; some kind of updateless decision theory over e.g. causal or evidential decision theory; and perhaps the many-worlds interpretation of quantum mechanics over other interpretations; etc.). One might reasonably think any of these things, but it seems to me that their widespread popularity has more to do with selection effects from a certain type of people being drawn to a certain type of people than with 'pure truth-seeking', since there are many intelligent truth-seeking philosophers, mathematicians and scientists who reject each of them.

this is a good post

i'm kind of confused what's going on here (this is karl's substack?)

as i assume you know, even a lot of rank-and-file EA ppl are outraged at the way bostrom is being treated

Razib, Karl is graciously hosting a guest post from me. It's a cool substack feature, as such things go.

cool. unless nick patterson or david reich wants to write a guest post i'll probably never do it :) one man show

Way to crush my dreams, Razib. Here I was halfway to boning up on the entire field of genetics, and now all my hopes are dashed.

don't. who knows where genetics might take u nerd...

> places like r/slatestarcodex started actively purging all their wrong-thinkers

That's a rather misleading way to put it, I think. They didn't purge wrong-thinkers; they just decided to stop hosting culture war discussions. Not every forum needs to allow political discussions. It was done because of unmanageable harassment towards Scott. https://slatestarcodex.com/2019/02/22/rip-culture-war-thread/

These discussions were moved to a separate subreddit (/r/themotte), which later moved again (because of Reddit admins) to themotte.org. This was done to disassociate these from Scott; mods of the new place were mods on /r/slatestarcodex before.

I mean, I was there at the time, and under a different name was pretty pissed off about it. We can bandy about whether telling all the wrong-thinkers to fuck off with anything potentially controversial to one of Scott's Witch-villages is "purging wrong-thinkers", but the next significant step is "ban anyone who has ever said anything controversial and who won't sign a purity statement".

But (potentially oddly) I think arguing about the exact semantics here is burying the lede, because in the best case scenario where all your assumptions are true, we move from a position like this:

1. We approach topics from a data-driven, well argued positioning meant to make us less wrong by getting around the common human cognitive shortfalls which often keep us from coming to correct conclusions

To something uncharitably like this:

1. Except now people are judging us negatively for doing that, so we are putting a purity standard in place and banishing any discussion that doesn't meet it, so it will be clear that we are the good guys and that we got rid of the bad guys

Or charitably, like this:

2. We are willing to talk about anything logically and without emotion unless someone sensitive on the internet would have a problem with it, in which case we won't talk about it anymore.

The uncharitable version there is an asshole, and we can argue that r/SSC isn't assholes. But that leaves us with 2, the "I am willing to talk about anything in a dispassionate, logical way so long as nobody gives me heat for it". 2 isn't an asshole, but it also doesn't offer anything distinct or special compared to "normal people" because it's using the exact same standard as "normal people".

That's sort of my bigger problem here. EAs can get away with "We are willing to talk openly just so long as it doesn't offend anyone at all" because their value-add isn't the rationalist-thought stuff; it's doing charity a certain way. But rationalism's whole thing is the rationalist-thought thing - without that it's just a bunch of indoor kids trying to one-up each other on lesswrong, and the whole world yawns at it, as with the mostly-boring r/ssc.

Bravo.

> I’m really, really interested to see what happens here.

100% they won't defend him

80% they will pretend it didn't happen and quietly downvote people who bring it up

[Comment deleted, Jan 31, 2023]

One thing I'd say about rationalists is that a big portion of them really did mature out of "every little thing is a math problem" thinking, or never participated in the first place. So where I talked about, for instance, emotion being cut out of decision making, at least a few rat-adjacent people pointed out that even if you are making a sterile emotionless decision, emotion has to be taken into account as one of the factors influencing what decision you make.

You still occasionally run into the "I am a robot, I am talking like a robot and have a robot heart" guys, but I'd say they are far from the only flavor available.

[Comment deleted, Jan 31, 2023, edited]

I think there are places where it's more prevalent than others. LessWrong = morerobot. DSL and the slatestarcodexverse tend to = lessrobot.

I think there's a big difference depending on whether someone came into rationalism from, say, a really early entry point. The Sequences were really, really popular for a long time. Are they unreadable nonsense? Yes. Are they *incredibly wordy* unreadable nonsense? Yes. If you find a Sequences guy, someone who still references and likes them, he's gonna be robot guy 9/10.

People who came to rationalism from a less acute angle, who like sidled in, tend to (in my view) be *less* like that. If you find someone who just really, really likes Scott, they usually aren't robots. It's like there's a way to encounter the movement where it's watered down to a reasonable level from the original foundational stuff.

Beep boop, I am a robot. How do you do, fellow humans?

I think the Sequences are a mixture of wordy unreadable nonsense; outright preaching; and useful philosophical ideas re-packaged in more accessible form. Those latter parts of the Sequences are worth reading; the other ones, not so much.

I also would like to stick up for robots a little: sometimes, the only way to discuss an issue is to do so like a robot, because people from different cultures are basically alien to each other, and all they have in common is physics and math. For example, discussing religion with someone of a wildly different faith (or even *gasp!* no faith at all) is IMO only possible in robotic terms. Ok, obviously it's possible to include emotions in the discussion, but not if you want to have a productive conversation (as opposed to a riot).

[Comment deleted, Jan 31, 2023, edited]

Do you have a link to the 'cast yet?
