r/SneerClub archives
Not a sneer but AI doom going this mainstream will make a lot of new sneerworthy people (https://www.cnbc.com/video/2023/05/12/bca-research-5050-chance-a-i-will-wipe-out-all-of-humanity.html)

getting out my horn-rimmed glasses and fixie bike to tell everyone that I was sneering before it was cool

an older relative of mine has been completely taken in by the AGI doompox ever since that godawful Yudkowsky Time article. I dread the day she shows up here

[deleted]
Oh no. Conservative parties are going to claim the AI is grooming their kids to become gender defiant degenerates. And Elon Musk will retweet¹ some techbro warning of the dangers of reducing racial bias in current AI, because it might turn the basilisk woke. ¹adding a subtle "!".
and in the meantime, their reps will ensure there is no meaningful industry regulation
Wokesilisk
It’s crazy because so many experts disagree with the AI doom narrative; however, they don’t get the prime-time slots.
Measured, reasonable takes don't generate as many clicks/views/ad dollars.

This will simply saturate the market, reducing the overall value of sneers. Basic economics.

They said it would be the data analysts, they said it would be the paralegals and the accountants, they even said it would be the artists. I never thought we would be the first industry to be made obsolete by AI.
LMAO
The market for sneertargets will crater; the market for high-quality bespoke sneers like we make here, otoh? To the moon!

All the grandiose statements around AI, whether doom-ish or not, make me sneer. It’s to the point where I would be okay with a Skynet-type takeover just so I don’t have to hear about AI any longer.

Clearly he used rigorous Bayesian analysis to come up with the 50/50 odds that AI will wipe out all of humanity. If you aren’t scared then you need to update your priors.

I've done rigorous research on the subject, and I've concluded those are the same odds AI will sukkon deez nuts.

IMO maybe some of these doomers should be taken to /r/techtakes, and we should rebrand to focus primarily on the EY/LW/Rationalist side of them, because there’s some diversity in backgrounds here. I’ll “take care” of them either way, of course.

I do kind of wish there was a more general "anti-AI-hype" subreddit. Obviously I'm a long-time fan of this one, but there's a lot of dumb AI stuff that isn't directly LW-related that I would still enjoy sneering at. (if such a subreddit already exists, somebody please point me at it!)
yes, quite a lot of it should be. This is not a sub about the general AI-BS-industrial complex

Christ, at least there will be a lot of comedic grist as an escape as we’re put through Dot.com Bubble II: Son of Subprime by this in the real world. That ten percent on pure hype? When they come down, it’s gonna be real bad.

I’m still on the fence regarding AI doom but I have yet to hear a logically complete, coherent argument against the idea that AGI will have both the ability and the will given a long enough timespan to wipe us all out. I would love to hear that argument. It will save me a lot of sleepless nights.

I think you've predetermined the outcome with the terms you've set. "Given a long enough timespan" the universe ends and we all die no matter what.
You’re right. Should have set a limiting principle.
Here, let me help you: That could indeed happen. Given a long enough time span, global warming, nuclear war, AI, or the sun exploding could all kill us. Right now, global warming is happening, and there are thousands of nuclear weapons. You know what doesn't exist right now? Not A Nor G Nor I
Fair enough; at that point it just becomes a prioritization of risks though, not an absolute refutation of the possibility, which seems to be the common sentiment in this subreddit.
Ah yes, the ancient and complex question of, should we prioritise the things that are killing us that definitely exist right now or the ones that maybe could exist sometime in the next 1000 years? Truly, the timeless Gordian knot that has stumped all who attempted to unravel it.
> I have yet to hear a logically complete, coherent argument against the idea that AGI will have both the ability and the will given a long enough timespan to wipe us all out

What's the logically complete, coherent argument *for* this idea?
I know this isn't really the place for this, but this motivated me to give my sincere thoughts in a way that at least attempts to engage with the point. The main thing to know is that **arguments for inevitable AI doom are, at this point, completely speculative and have very little grounding in empirical science, and most of the time pure pen-and-paper arguments with this level of empirical support turn out to be wrong.** Beyond some toy models where certain types of reinforcement learning algorithms can be proven to adopt "power seeking" behaviors as a subgoal on the way to long-term goals, there isn't much at all IMO. Some loose thoughts (the first two just concern timelines):

- We have no idea what it takes to build a superintelligent AI, and it is very likely that we need better data, more compute, and better design architectures. Just scaling up the compute in the way we build things like ChatGPT is probably not going to be enough, because you can't really expect to obtain superhuman abilities just by predicting human-generated text; while you need to learn the nuances of natural language to do that, you *don't* need superhuman intelligence to do it, so there is no reason to expect a model to develop that in practice. In the limit, you just get a machine that is really good at aping things that humans would say, and to some extent it actually *shouldn't* start generating thoughts that no human would ever come up with, because that, by definition, would not look like the data it was trained on. For example, if an LLM were scaled infinitely with human-level data and you asked it to prove the Riemann hypothesis, you would actually expect it to give a wrong proof, because that's what a human would do. (FWIW the counter to this is that maybe you can get LLMs to prove new true statements to get just-better-than-human performance and sort of feed those back into later versions to bootstrap the model.)
- It can be misleading to look at the success of ChatGPT and assume we are close, because there are many tasks that AIs aren't good at. By definition, a prerequisite to superintelligence is that it needs to be able to solve every problem a human can solve. There is no current path to making an AI that can be told in natural language "take my dog for a walk" and have it do it in the physical world. Or take a self-driving car: one way to try to build these things is to use reinforcement learning à la AlphaGo, and then simulate a bunch of driving scenarios in a computer so that you can get massive data. But reinforcement learning sucks, and it turns out transferring a car that learned to drive itself in a simulated environment to the real world is *extremely difficult.* Those soccer-playing robots that made the rounds a few weeks ago? Yeah, the awkward movements aren't the AI learning; that is the finished product of an unfathomable amount of training in a simulated environment that probably doesn't even generalize outside of the play area they set up (this isn't a diss, it's still good research, but it does indicate there is a long way to go).
- The capabilities that ChatGPT gets are not obtained through reinforcement learning to attain long-term goals. It is pretrained to solve the "predict the next word" task in a document, which teaches it a bunch of useful things, but ultimately the only thing that the pretrained ChatGPT "wants" to do is predict the next word (the systems are then fine-tuned in ways that should make them safer rather than more dangerous). At inference time, it doesn't even optimize the **second-to-next** word, as it does not generate text in order to make subsequent text easier to predict. While you can build a sort of Rube Goldberg device to get ChatGPT to act like an agent seeking long-term goals by prompting it and looping its output into its input (see BabyAGI and AutoGPT), it isn't really built to do that, so we shouldn't expect systems like ChatGPT to engage in bad behavior by default; even ChaosGPT, which was told to destroy humanity, doesn't "want" to break the law, due to how ChatGPT is fine-tuned and the default prompting that OpenAI gives it telling it to be harmless. Actually, these sorts of agents seem quite *steerable*, because they understand natural language and can obey constraints like "before carrying out any action, check that it will not lead to a violation of Asimov's three laws of robotics" or something similar to that (basically, just tell it to act like an aligned system). Of course, we don't know if a superintelligence would be trained like ChatGPT, or whether it could be controlled by something like an LLM at its head, because (again) we don't know what is involved in building a superintelligence to begin with.
- Related to the above, I actually think it is 100% plausible that, if an AGI were created in the near future, it would actually just look like an LLM with a bunch of tools attached to it. You could get a "superintelligent" LLM by just having the LLM decide on its own what "tool AIs" it needs to train for an individual task it wants to perform, build them, and then use them to carry out actions while obeying its inferred understanding of alignment from its training. That doesn't seem inherently dangerous to me, and it is also kind of the trajectory we would be on if we naively extrapolated progress from scaling.
- To reiterate, AGIs won't even necessarily be long-term utility maximizers at all, or they might be designed to have a finite time horizon for their goals that curbs a lot of the need for them to power-seek or stop themselves from being turned off. Unless it turns out that superintelligence is fundamentally linked with actually having long-term goals, we might make a lot of alignment progress just by carefully considering the class of objective we give the AI.
- The arguments for doom are speculation dressed up to sound inevitable, because they have not yet had the burden of being tested in practice. Sometimes you can make a ton of progress just sitting alone in your room with pen and paper and thinking deeply, as Einstein did. But more often than not, the things that you don't know that you don't know end up making all the work you did alone in your room useless. There are lots of predictions made by LW and Yudkowsky a long time ago that just have not held up in practice: for example, that the best way to make progress in AI would be to build a machine that approximates Solomonoff induction, which was far-fetched at the time it was said and has not come close to being true. Rather than saying "alignment is easy" or "alignment is hard", I think the best we can say right now is that we have almost no data on the difficulty, aside from the fact that OpenAI has actually done a good job so far at aligning a pretty powerful system when people aren't intentionally trying to break it.
- This isn't a serious argument, but as a matter of intuition it would be surprising to me if alignment actually turned out to be extremely difficult, because unlike with humans we have direct control over what a machine learning algorithm's objective is. Doom requires positing that we have gotten so far that we could give it epsilon-misaligned objectives that result in a superintelligence killing us, but that somehow the system is too stupid, and we too incompetent, for us to specify *behavior constraints* that the system will be able to both understand and abide by. Like, we are so advanced that a paperclip-maximizer can understand the "maximize paperclips" objective and act on it, but not so advanced that we can say "subject to the following constraints" and have it understand the constraints.
- Unrelated to the question, but food for thought: the way that some LW folks think a "superintelligence" would work is extremely suspect. I find this to be an odd form of projection: just as they think they can prove that alignment is impossible with pen and paper without needing to experiment, they also think a superintelligence would be able to solve all these extremely difficult problems and infinitely improve itself without needing to touch grass. They seem to think that a superintelligence would be able to basically mind-control people or build nanobots strictly from first principles, without needing to use real-world resources that would out them. It seems obvious, for example, that no amount of remotely feasible scaling would allow a ChatGPT-like LLM to outplay Stockfish at chess, but intelligence as construed by these folks is always just this one-dimensional thing, where if you dial up intelligence enough then a general agent will simply be able to do this. While they accuse others of not appreciating how "alien" a superintelligence would be, it honestly feels like they are doing a *ton* of projecting by assuming basically that an AI would think and behave just like an evil super-Yudkowsky, including that it would obviously derive and abide by things like the esoteric decision theories they have decided are "rational" and make acausal deals with other AIs. *But AGIs won't necessarily be utility-optimizers at all!*
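(For concreteness, the "loop its output into its input" trick mentioned above is roughly the sketch below. This is illustrative only: `complete()` is a made-up stand-in for whatever chat-model API an agent wrapper would actually call, not a real library function, and the prompt wording is invented.)

```python
# Illustrative sketch of a BabyAGI/AutoGPT-style agent loop (hypothetical code,
# not taken from either project). The "agent" is just glue code that feeds the
# model's previous outputs back in as context on every step.

def complete(prompt: str) -> str:
    """Stand-in for a chat-model API call; a real loop would query an LLM here."""
    return "DONE"  # placeholder reply so the sketch runs as-is

PROMPT_TEMPLATE = (
    "You are an assistant pursuing this goal: {goal}\n"
    "Before proposing any action, check that it violates none of these constraints:\n"
    "- do not break the law\n"
    "- do not cause harm\n"
    "Reply with the single next action to take, or DONE when the goal is met.\n"
    "Previous actions:\n{history}"
)

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):
        prompt = PROMPT_TEMPLATE.format(goal=goal, history="\n".join(history) or "(none)")
        action = complete(prompt).strip()
        if action == "DONE":
            break
        history.append(action)  # loop the output back into the next prompt
    return history

if __name__ == "__main__":
    print(run_agent("organize my notes"))
```

The point being: the "long-term goal" and the constraints both live in the prompt of an ordinary loop, which is exactly why these wrappers look steerable rather than like scheming agents.)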
This is actually the single best argument against it that I’ve come across. It illuminated some aspects I’ve been ignorant of. Thank you, sir!
> sincere thoughts

cringe
on a logical level, I would argue that you shouldn't *need* a logically complete, coherent argument against AGI doomerism. the burden of proof is on the people making the outrageous claim, and so far they've failed to produce evidence beyond:

1. technology advances rapidly
2. the fact that people they consider to be Very Smart are also worried about it, which should be inadmissible as evidence for obvious reasons

on an emotional level, I would agree with other commenters that if you want to worry about an existential risk, there are many more real ones to concern yourself with, ones you might even be able to *do* something about
My fellow sneerers, you HAVE to stop getting baited this easily.
The problem is that it is difficult to identify an "AGI Doom" scenario that is a) plausible and b) specific to AI. You could imagine some kind of Skynet-ish scenario where AI controls a bunch of drones and launches nukes to kill everyone. But if those technologies exist to begin with, you could fairly easily imagine people using them to kill everyone.

Or you could look at Yud's "the AI will send a gene sequence to a lab that codes for nanobots and then the lab will fabricate the nanobots and they'll kill everyone" scenario. That's certainly something that people can't plausibly do. But that's because doing it amounts to information-theoretic magic. There's no reason to think you can deduce the gene sequence for anything -- let alone something as novel as functional nanotechnology -- simply by thinking about it very hard. Intelligence is no substitute for information, and the AI techniques we actually *use* are based on that idea quite directly.
Yeah, in the cybersecurity world they used to call this kind of thinking "movie plot threats". After 9/11 there was a lot of 'what if the terrorists did [X]!', and a lot of people noticed that to prevent this kind of threat people were going for fascism/totalitarianism/authoritarianism very quickly. Some of the stuff (liquids on planes/lots of crazy laws) we still live with today. The 'glass datacenters' stuff is just a repeat of this bs. Made worse because Yud was a grown man when all this 9/11 stuff was going on.
always been stunned that he goes for the nanotechnology angle. this is possibly the least plausible path to AI doom, why go for it if you're not masochistically jacking off over some possible scifi fantasy?
If a superintelligent AI is actually smart enough to be able to convince anyone of anything like the doomers say, then there’s no incentive for it to wipe out humanity: we’d be too useful to it. If it isn’t, then we should question whether it would be able to successfully kill everyone in the first place without taking advantage of methods humans had already created to do that, in which case the AI really isn’t the problem at all. Also, if an AI were truly self-aware and able to reprogram itself to alter its parameters, which is essentially necessary for any doomsday cascade scenario, it would be much more efficient for it to just reset its own priorities so that it wouldn’t have to do that. Why would you destroy the universe to turn it into paperclips when you could just alter the definition of “paperclip” so that you already have the maximum possible amount?
My refutation of it goes like this:

1. Making AGI is going to be hard. Despite what the AI people are screaming, ChatGPT is a billion miles from AGI. We will not have it any time soon, and when we make the first general intelligence it will be very fucking stupid, it'll be like sheep-levels of capability.
2. Just because an AGI is made doesn't mean it'll be able to bootstrap itself to god-level intelligence. Making a machine that can make itself more intelligent in a versatile way will be an insanely hard problem; it'll take us a long time to do even once we have AGI.
3. Security doesn't work the way AGI people think it does. An AI cannot just 'escape' onto the internet.
4. Therefore, we'll have a long period to grow up with AGIs and figure out how to work with them. We aren't just going to get paperclip maximise'd.
I agree with most of your points, but "AGI will start out dumb" strikes me as a weak, and not really necessary, assumption. The timeline from the Transformers paper to fairly unimpressive early iterations of ChatGPT to the more powerful later ones was actually pretty fast. The issue is that things generally scale logistically rather than exponentially (and, of course, that we don't have anything that seems like a particularly plausible path to AGI to begin with). ChatGPT *didn't* keep scaling as fast as it did at first, and there's no reason to think that just because you have some hypothetical tech that has gone from "as smart as a dog" to "as smart as a person" it will keep going to "as smart as God".
The timeline should really start in 2012 if you're going to consider all major advancements making this possible. So 11 years to get from making hand built linear models obsolete for image classification to high quality question answering and image generation.
Prove to me that unicorns from outerspace won't take over the world, enslaving humans and forcing us all to become furries.
Pretty easy tbh, we know Earth is not the first planet in the Goldilocks zone for life (iirc other stars matured sooner), so other civilizations prob reached our level of sophistication earlier. The threat of AGI is some sort of 'converts everything to paperclips' level threat. Which we would see signs of. Which we don't. In fact we don't see any signs, which should increase the risk of climate change (which, if it ends civilisation, would leave no signs) wrecking us. Of course you could argue that some of these assumptions are wrong. But if we already believe in the AGI prerequisite assumptions, why not a few more crazy ones?
What's AGI?
[Hammerthief.](https://wiki.lspace.org/Agi_Hammerthief)
I don't know what your worry is exactly, but there's a decent statement of the "AI doom" argument [here](https://aiimpacts.org/counterarguments-to-the-basic-ai-x-risk-case/), along with some much-needed critique (much of which exposes gaps in the argument). I'm not sure what to make of "expert analysis" on this matter because I can't identify who the qualified experts actually are, but I wouldn't put much stock in the timelines being put forth about AI progress at the moment, because we're in a hype cycle.
Hey, that was a fun browse, and I figured out how to stop all this. We need an AI that can convince other AIs their goals are stupid. Turn on, tune in, and drop out.
why are you asking this here? this is not the place.

This sub is losing its shit.

?
AirTags are only $22.50 if you buy a four-pack!

So this sub should be in favour of unrestrained AI development?

If you lurked more, you would have seen plenty of concerns about algorithmic bias, further acceleration of capitalism's trends to the detriment of quality of life, and other ethical issues around AI. One of the problems with AI doomerism is that it distracts from these concerns, and in some cases its proposed solutions would make the much more probable and near-term issues worse. For example, strong regulations around owning GPUs and AI might further centralize and concentrate power in the organizations allowed to possess AI. For another example, secrecy around AI might make issues of algorithmic bias harder to evaluate. And starting a war with a country that refuses to accept the US imposing regulations on it would be stupid.
I mean, no. But the clip in question gives a completely unsupported “50/50” chance of doom. Secondly, when pressed, the dude offered no empirical evidence on doom scenarios.
So it’s not the negative prediction but the lack of evidence?
There are plenty of drastically more plausible risks around AI that aren't "skynet" or "paper clip factory", to the point that such doomerism acts as a distraction (at best) from legitimate concerns around privacy, amplification of the risks already associated with algorithmic bias, perpetuating systemic biases in a form that appears less subjective, amplification of propaganda/misinformation, etc.
this sub is in favour of posts about our very dear friends, and not about the AI industry in general