r/SneerClub archives

Seriously, every one of their arguments or “infohazards” involves gigantic numbers of simulations of people or whatever they feel like. Any theory as to why this is the case? Is Yud traumatized by simulations or something like that?

The cultural force of infernalism barging its way into a largely atheistic community. Rationalists tend to connect themselves way too hard with the Christian humanist conception of philosophy, and end up with results in that vein despite an ostensible contempt for Christianity.

Heaven, hell, resurrection, final judgement, sexual pathology, god-kings, it’s all there. Hell, even the Dark Enlightenment psychos will admit that they don’t think Christianity is real or that they despise it, before endorsing it as a pragmatic means to shield the degenerate masses from homosexuality and Islam.

Add in the fear & guilt being used to manipulate cult members. You even have constantly shifting social intrigue from the cult followers & leaders blogging to keep people engaged. New age religious cult, "_what if Jesus... was a computer?_". Extremely disappointed nobody is trying to use this to claim tax-exempt status. They pretend to be smart but this is low-hanging fruit.
Please read my 100k Worm fanfic to learn why Spirit Upload Jesus will assign you *double* extra good boy points for successfully engaging in maximized tax avoidance and diverting the proceeds to the 2046 Heaven crowdfunding drive.
mmmm now how can we rope indulgences into this?
QUANTUM INDULGENCES
Gonna make you a silicon saint!
the real question is: who's the edgy rationalist self insert in this fic i was gonna guess Armsmaster, but he's close enough already that it almost feels like cheating.
oh ... oh no, is the author of Worm a rationalist? that would be pretty damning for my taste in web serials
Nah. EY likes Worm and recommended it, so there's a bunch of rationalists into Worm, but Wildbow isn't a rationalist and doesn't rate their reading comprehension very highly.
wildbow isn't involved in rationalist stuff so far as I know, but Worm definitely appealed to the rational fic impulse and so got fans along those lines. But wildbow seems to resent his fans, so...probably not?
[deleted]
Maybe he's more like Yud than I gave him credit for...
Lol i have actually thought of something like that, a christian version of the basilisk: what if the future ai punishes non believers? LW atheists have a special place in simulated hell in this case.
Isn’t that the plot of lawnmower man?
'souls? Pff that's nonsense to scare gullible people. Perfect simulations of you on the other hand... '
OK but The Hyperion Cantos are still the best science fiction out there.

I have a non-sneer idea for that: the point is to screw with the person’s sense of existence. You make them question whether they are even real, then add some good old AI gods to the argument and bingo, you’ve got yourself a terrified new cult member willing to do anything to avoid being punished by the AI god.

I think this is it. They must also have some internal cult lore along the lines of the "basilisk". I think they borrow well known techniques from other cults, such as getting the members to tell them embarrassing facts about themselves (for blackmail both emotional and literal). Various secret truths about the universe are certainly a big thing in cults.
IIRC some time ago Yud was selling "secrets of the universe" on his fb page. God knows what kind of crazy shit they tell their new members.

[deleted]

> is constructing all sorts of paranoid fantasies based on ridiculously speculative assumptions

is constructing all sorts of paranoid fantasies **that just so happen to flatter the self-images of our technology overlords as well as some of the grossest antisocial "thinkers" of our time** based on **laughably ambiguous use of key terms** and ridiculously speculative assumptions **and bringing in enormous amounts of grant money for it**
I actually know a lovely guy who - last time I saw him, and I’ve mentioned this here before - worked as a post-doc for the rival to Bostrom’s institute, working out of Cambridge. Adrian Currie, lovely guy, plays the banjo. He described his own futures institute at Cambridge as being (paraphrased) the “normal” one, whilst also pointing out that there are a number of cool people who work with Bostrom, although he didn’t have many kind words for Bostrom himself.
> a mathematician turned philosopher

I don't generally disagree with your comment, but that's not right. You shouldn't misrepresent someone just because you disagree with them. From Wikipedia:

> He received B.A. degrees in philosophy, mathematics, logic and artificial intelligence from the University of Gothenburg in 1994, and both an M.A. degree in philosophy and physics from Stockholm University and an M.Sc. degree in computational neuroscience from King's College London in 1996. During his time at Stockholm University, he researched the relationship between language and reality by studying the analytic philosopher W. V. Quine. In 2000, he was awarded a Ph.D. degree in philosophy from the London School of Economics.
I’ve seen him describe himself that way somewhere, which is a bit of a one-up from Wikipedia, and I’m speaking as an MSc graduate in philosophy of science myself
Fair enough - my apologies, I read your statement as a bit of a "not a real philosopher" kind of remark.
I’ve said many times before - although you probably weren’t to know - that I don’t think there’s an “authentic” definition of “philosopher” which excludes bad philosophers, even the likes of Sam Harris. If you’re doing philosophy you’re doing philosophy, however badly. Personally I think Bostrom does philosophy badly, and I’ve met a number of much more senior people than me who say the same.
Well said
Asking not as a criticism but out of genuine curiosity, as someone not educated in philosophy: would you include here someone like Ayn Rand, whose work rarely has more philosophical content than non-philosophy works?
I mean why not? She talks about Aristotle and laws of logic, has stuff lying around about perception and reality. Obviously she was a horrible person with crappy views but that isn’t the point.

It’s ridiculous. It’s unknown if simulations based on brain scans will ever be good enough to be sentient (they will probably be outlawed before they even get close). But a lot of the time, Yud seems to think they can simulate you without a brain scan, from like, text logs or something (see the AI box experiment), which is literally impossible. And the basilisk arguments sometimes rely on retroactive simulation, where your brain from the distant past is brought back... somehow?

Also add that we don't really know much about how consciousness works, or what really makes a person who they are. And about outlawing simulations, keep in mind that LW/Yud have really "strange" ideas about morality; this is the 'torture vs dust specks' people after all.
Yud thinks that because he holds to a form of Determinism so strong that you can perfectly recreate down to the atom any object that has ever existed, by doing vector magic to the molecules of the present universe.
Why would it be impossible? Isn't simulating someone else's brain, albeit in imperfect form, exactly what you're doing when you get to know someone well enough to anticipate what they're likely to do in response to a given situation? I feel like an imperfect simulation is still at least sort of you, even if it's you "with brain damage" so to speak. (The notion that other people's models of you in their heads are at least a little bit you isn't new either, see Douglas Hofstadter.)
If this was the case, then a ridiculous strawman of Eliezer Yudkowsky -- an imperfect simulation, per se -- is still sort of Eliezer Yudkowsky. Huh. That actually explains why he hates it when people "mis"represent him online.
But like, if I threaten to simulate you and torture you, or whatever, as in AI-box (?), if I can't simulate you perfectly it's kind of an empty threat. You could argue that an imperfect simulation of yourself couldn't tell it was in an imperfect simulation, but at that point there's no difference from just saying like "I'll simulate torturing a random video game character who I promise can feel pain" or something, as long as the video game character can't tell they aren't you either! I guess to my mind this is the problem with all simulation/brain scan stuff in general -- you, yourself, your own consciousness, is always going to be trapped in your own brain. But if you want to live forever in a computer, believing that is abhorrent, so you have to believe a simulation of you is just as much "you" as the physical you... and then you end up with this weird scenario.
Does this mean that if I'm imagining punching Yud in the face, I'm a really bad god AI?
According to the person above, apparently yes! brb, going to go scam some money out of Yudkowsky by threatening to delete a high-resolution jpeg of him unless he pays up
The ontological question is whether or not an imperfect simulation of you is something you should be morally concerned with. The answer to that is: well yeah, but I should be concerned for anybody else too. At times Yudkowsky and his followers appear to be making the sci-fi argument that you should be concerned with the fate of your simulation because it isn’t just an avatar of you but ontologically indistinguishable from you. Hofstadter’s point is very different, albeit popular with such people. What Hofstadter says is that there are senses in which you and your consciousness are reflected in the brain processes of others. I don’t actually agree but he’s a lot more clever.
No, you idiot, the half baked guesses you make about what other people are thinking do not count as sentient beings. The word "imperfect" is doing a lotta legwork here, as if I could sell a sketchbook doodle I just did as an "imperfect" simulation of the mona lisa.
If you know someone well you can definitely predict at better than random chance how they'd react to a given situation, though.
Predicting the output of a complex process is not the same as assembling a faithful copy of the inputs that lead to that output. Just because you can accurately predict someone's actions sometimes (even repeatedly) doesn't mean your prediction model matches them and their internal decision-making process. Also, making a series of correct isolated predictions in one scenario doesn't mean you would correctly predict the majority of their actions in other scenarios, especially as the predictions get more complex and demanding. These things are so far from equivalent that it's stunning they'd be considered close at all.
No one calls that a simulation outside the limited bubble of nerd-dom.
I realize it's unusual terminology but I don't see how it's inaccurate.
It's inaccurate precisely because it's so highly unusual; [language is use](https://existentialcomics.com/comic/268). A hotdog is not a sandwich not because there's some technical definition of a sandwich that isn't met by a hotdog, but because that's not how people use the term. The impression it gives is also inaccurate, because practically the only similarity is that both offer a better than random chance of predicting what someone does. The accompanying baggage that goes with the term 'simulation' in most people's minds doesn't apply to your definition. Using the term like this is actively obfuscating.
And that makes my guesses about you comparable to a sentient consciousness? This is like if I made a shitty clay doll in your likeness and then demanded you pay a ransom for it, because isn't the clay doll sort of you, however imperfect?
I think those are fundamentally different kinds of resemblance, but you're right that what a human brain can form does not constitute a copy of that person per se. It does, however, lend credence to the idea that some more powerful intelligence, given all the data a person generated in their lifetime, could recreate something that is at least in some limited sense that person.
Okay, what level of similarity to you does a thing have to have for you to consider it you? Because here's the thing: if you set the bar too low, then *other people* will be similar enough to clear that bar. Would you consider identical twins who are very close to each other to be the *same person*?
Not necessarily the same person, but containing some part of each other perhaps. Again, read some of Douglas Hofstadter's stuff on consciousness like *I Am a Strange Loop*.
Well, congratulations, you've diluted the definition of "being a particular person" to the point of meaninglessness.
I contain some part of a *Paramecium*. While that's technically true, it's... kind of trivial, you know?

I think it is the fear of death which leads to wanting ‘conscious computer simulations of people can be made and are real’ to be true. And well, with this as a proven fact/assumption, the logic goes to strange places.

Just in the same way as the very, very small chance x infinite payoff AGI risk assessment goes to strange places. And the obsession with infinite amounts of dust specks etc.

I think the reinventing of Christian lore and the cult stuff is all secondary. (Still funny to sneer at and make comparisons to, but I don’t think it was the main focus, just a side effect.) It doesn’t help that Yud has declared that ‘the cult question’ < ‘the important work we do to prevent death’.

Basically I think they want some things to be true and work backwards from there. We often do the same and go too hard into ‘these people suck’, which can suck. But at least we didn’t create an eschatological cult from first principles.

If you do the same thing but with ‘we need to colonize space like in Star Trek’ you get a union-busting crazy Twitter billionaire who abuses his wife because he is the alpha and everybody who doubts him is a pedo.

Or if you go ‘I have seen a secret truth: if everybody followed my rules we could become so much better as people’, you become addicted to drugs, transphobic because you can’t doubt your own conclusions, and end up lorded over by a family member who might have Munchausen by proxy.

The fear of death thing is apposite. Yudkowsky is on record saying a big part of his motivation to do all this bullshit is his petrifying fear of death.
Yeah, he doesn't hide it and frankly it's one of the things I agree with him the most (the fear of death, not the robot jesus simulation god worship). Chalk it up to my own emotional immaturity, but death (both mine and my loved ones') fucking terrifies me.
After my withdrawal-induced (relatively minor) seizure and other alcohol withdrawal issues, I’m actually less sympathetic to how Yud expresses his fear of death. After collapsing I spent several hours on my dad’s couch with extreme tremors vomiting into a bucket, until eventually I wound up in Emergencies with an IV. I was and am legitimately scared that I might die before I reach 28, but the important thing is to manage that healthily, rather than directing it into crackpot schemes to extend your life and try to make money out of it into the bargain.

Granted, I have probably a quite different experience of death to begin with. One of my uncles is still in prison for murdering a gay lover when I was a small kid in the 90s, shortly after finally coming out and relatively amicably leaving his wife. A good slightly older friend of mine - also a guitar player, we share a birthday and even first name - died of a heroin overdose when he was younger than I am now, and in my early 20s I spent a lot of long nights in Emergencies waiting to find out what was happening to my then-girlfriend.

Those experiences amongst others pile up, whereas as far as I understand Yud’s whole thing is based on a single - no less tragic - experience when he was young, on the basis of which he has gone on to enact extremely neurotic and destructive behaviours. Personally I find that more distasteful for being unable to handle death and stare it in the face as an inevitability.

I remember the same aforementioned then-girlfriend got heavily into Max More - a philosophy crackpot of a vaguely Silicon Valley persuasion - who makes rather childish Pascal’s Wager arguments for extending life by vitrification, make of that what you will.
Yes, my bad, edited it.
Now I'm wondering what your final example references.
Jordan Peterson, I believe.
Ah yes. It's obvious now that you mention it.
Yep. If you listen to some of his vids and how he describes the process of getting inspired to write the books, there is a certain level of 'I'm a prophet' feeling. E: his daughter trying to keep him down is more a recurring joke btw.

Yeah, Yud is kinda traumatized by simulations in a sense. Lots of the earlier superintelligent AI stuff was oriented around Yud’s hot new timeless decision theory, and that was based on stuff like Newcomb’s Box Problem. These are old problems in game theory that assume an ~~acausal robot god~~ very capable predictor can simulate you.

Yud himself claims to be against being intimidated by ~~an acausal robot god~~ the very competent predictor. And thus wakes up every day prepared to face the possibility of infinite computational torture if it means stopping this superintelligence’s dastardly gambit.
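For anyone who hasn't run into Newcomb's problem: box A always holds $1,000, box B holds $1,000,000 only if the predictor guessed you'd take box B alone, and the whole "paradox" only bites once you assume the predictor is very good (i.e. can effectively simulate you). A rough sketch of the expected payoffs, my own toy numbers and function names rather than anything from the thread:

```python
# Hypothetical sketch (not from the thread): expected winnings in Newcomb's
# problem as a function of predictor accuracy. Box A always holds $1,000;
# box B holds $1,000,000 only if the predictor guessed you'd one-box.

def expected_payoffs(accuracy: float) -> tuple[float, float]:
    """Return (one_box, two_box) expected winnings for a predictor
    that is right with probability `accuracy`."""
    one_box = accuracy * 1_000_000  # predictor saw the one-boxing coming, B is full
    two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)
    return one_box, two_box

for acc in (0.5, 0.9, 0.99, 1.0):
    ob, tb = expected_payoffs(acc)
    print(f"accuracy={acc:.2f}  one-box=${ob:,.0f}  two-box=${tb:,.0f}")
```

At accuracy 0.5 (a coin-flip predictor) two-boxing wins; crank the assumed accuracy toward 1 and one-boxing dominates, which is why everything hinges on positing a near-perfect simulator in the first place.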

none of them have ever tried to write a PDE solver lol

They want it to be true that brain simulations are still you because otherwise brain uploading is just creating a copy and won’t give you immortality. So they accept it as axiomatic because otherwise the singularity would suck for them.

Some good answers here already, but I’ll add that if you’re dealing with simulations (or better yet, fictional simulations that don’t exist yet), you don’t have to worry about actually talking to real people or doing actual research on human behavior - you can just imagine what you think a simulation would show and write a bunch of nonsense based on that.

they just play a lot of video games and are not familiar with any other forms of art

What about bad fanfiction? they are very familiar with that
Just them playing The Sims with their favorite YA fiction. sims sims sims sims sims
It's vidya all the way down, too many of which are date sims. Apparently vidyā also means "correct knowledge" in Sanskrit; life imitates art.

The decision theory/probability stuff they are obsessed with is really only interesting (for the stuff they want to talk about) if they can juke the numbers by playing with sleeping beauty scenarios.
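To unpack the "juke the numbers" point for anyone unfamiliar: Sleeping Beauty is the puzzle where a fair coin flip determines whether you're woken once (heads) or twice (tails), and "the probability of heads" comes out as 1/2 or 1/3 depending on whether you count per experiment or per awakening. A toy Monte Carlo sketch of my own (names and setup are mine, not from the thread):

```python
# Hypothetical sketch: the Sleeping Beauty puzzle as a choice of denominator.
# Count per coin flip and heads comes up half the time; count per awakening
# and it comes out to roughly a third.
import random

def run(trials: int = 100_000, seed: int = 0) -> None:
    random.seed(seed)
    heads_flips = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2  # heads: wake Monday; tails: Monday + Tuesday
        heads_flips += heads
        heads_awakenings += 1 if heads else 0  # heads contributes one awakening
        total_awakenings += awakenings
    print("P(heads) per flip:     ", heads_flips / trials)          # ~0.5
    print("P(heads) per awakening:", heads_awakenings / total_awakenings)  # ~0.33

run()
```

Neither answer is wrong; they're answers to different questions, which is exactly the kind of ambiguity that lets you "juke the numbers" toward whichever conclusion you wanted.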

Yudkowsky’s goal is to live forever as a simulation running on the mind of AI God. LessWrong philosophy is best understood as an attempt to build up to thinking that goal is sensible and achievable.