We’re having way too much “Rationalist Debate Club” here, or just normal “Debate Club” and not enough Sneering, so we’re locking this one down before we start just banning people for being too serious.
I’m too lazy to read this thoroughly but I’m not quite seeing the Sneer here - it’s about some guy in AI but he’s not a rationalist and doesn’t seem infected by the LW/acausal robot god mind virus. Now, some reactions to what he’s saying could be Sneered at, but I’m just not sure of the Sneer nexus here.
It's eminently sneerable IMO. Relevant quotes from the article:
> Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze.
>“I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Also inane comparisons with nuclear weapons etc.
It's exactly the same as the rationalist stuff, but phrased in a less insane way by a NYTimes journalist quoting a respectable academic.
Frankly this is the most important stuff to be sneering at. It's easy to see that these things are nuts when Eliezer Yudkowsky says them, but somehow when an elderly academic says them people start to think it's respectable.
Yeah at a first glance it almost looks like legitimate worries of misinformation, job loss, societal disruption… but that’s all added by the reporter writing the article, the direct quotes are more in line with Lesswrong doomerism, just much more carefully and moderately phrased than Eliezer has the sense to do.
I’m actually worried how the conversation is going to constantly get twisted around going forward. Someone or some organization brings up algorithmic bias or societal disruption, another person jumps in with doomerism fears. Many possible solutions to one problem actually disrupt solutions to the other: locking up AI behind layers of secrecy and security makes sense to the doomer but might exacerbate the problem of AI being used as a tool to further consolidate power by capital.
Yeah - some of these concerns can be read as reasonable, just not in the context of implying AGI.
E.g. learning unexpected behavior has already been identified as a major problem. Not because of AGI, but because it can pick up biases/flaws in the training data or inputs that people end up overlooking. It's the same risk as statistic models, just with more abstractions that can get in the way of humans recognizing it.
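To make that concrete, here's a minimal, made-up sketch of that failure mode (hypothetical data and feature names, not any real system): the protected attribute is never given to the model, but a correlated proxy feature plus biased historical labels is enough for it to reproduce the bias anyway.

```python
# Hypothetical sketch: a model picks up bias from a proxy feature even though
# the protected attribute is never in its inputs. All names and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never shown to the model) and a proxy correlated with it.
group = rng.integers(0, 2, size=n)
proxy = group + rng.normal(0, 0.5, size=n)  # e.g. a zip-code-derived "risk" feature

# Historical labels that are themselves biased against group 1.
y = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)

# Train only on the innocent-looking proxy feature.
model = LogisticRegression().fit(proxy.reshape(-1, 1), y)

# The model never saw `group`, yet its predictions differ sharply by group.
scores = model.predict_proba(proxy.reshape(-1, 1))[:, 1]
print("mean predicted risk, group 0:", round(scores[group == 0].mean(), 2))
print("mean predicted risk, group 1:", round(scores[group == 1].mean(), 2))
```

A plain logistic regression from a stats textbook does exactly the same thing; the extra layers of abstraction in bigger models just make it easier for humans to miss.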
> Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
We’re proceeding cautiously *aside to investor* but not TOO cautiously am i right haha
I've said in the past that the rationalists are a modern-day version of the New Age movement, just using language lifted from science instead of spirituality. Given how many cults the New Age movement produced, we are absolutely going to see something similar emerge among the rationalists. And just as those New Age cults turned Americans against any religious tradition that wasn't either old-time Christianity (if right-wing) or hard-nosed secularism (if left-wing), I can see rationalist cults doing the same in damaging people's trust in not just AI, but the internet and Silicon Valley as a whole.
Kim Newman wrote an entertaining short story called ["Tomorrow Town"](https://johnnyalucard.com/fiction/online-fiction/tomorrow-town/) that I think predicted what a lot of these rationalist cults would look like. Its aesthetics are based on '70s retro-futurism rather than the 2010s Silicon Valley asceticism I think these guys are gonna cling to, but the central concept revolves around an experimental community governed by a "master computer" that it turns out isn't as logical, objective, impartial, unbiased, or smart as the town's credulous inhabitants seem to think.
Fucking hell. Chat GPT has genuinely driven people insane. The Hinton article is honestly perfectly reasonable, talking about the issues that automation will have is good. But that entire fucking thread is filled with people convinced skynet is going to happen.
It's not reasonable. This is some pretty nutty shit that's divorced from the reality of how industrialized technology works:
> Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze.
> “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Hinton apparently imagines that someone at the Pentagon is going to e.g. plug ChatGPT in to a squadron of F-35's and let it do its thing.
Just because someone respectable is saying it doesn't make it less crazy.
So a while back people were complaining because too many innocent bystanders were getting killed by explosions from military drone strikes. The military heard these concerns and addressed them by replacing the exploding missiles with [high precision missiles that kill exactly one person using a circular umbrella of giant knives.](https://www.theguardian.com/world/2020/sep/25/us-military-syria-non-explosive-drone-missile-blades)
Skepticism of the military industrial complex is one thing, but if you think anyone over there is plugging random shit they found on GitHub into weapons systems with reckless abandon then your skepticism is based on ignorance rather than politics or moral values or whatever.
The same is true of professional engineering generally. Nobody is creating skynet, and everyone is already aware of any problems that you personally are able to imagine. AutoGPT is a silly project and there's no reason at all to be afraid of it.
That's just the article paraphrasing what he says really badly. AI *does* learn unexpected behaviour, it's just that that unexpected behaviour is stuff like "being racist when used to handle prison sentencing". It *is* true that we should be careful about scaling up automated systems, not because they're going to do Yudkowskyite bootstrapping but because they're really fundamentally stupid systems and the average joe trusts them like they're gospel.
And the point about military hardware is a good thing to worry about. Palantir's been marketing the fact that they're plugging GPT into military software. Again, it's not going to be skynet, it's going to be some dumbfuck ai system bombing a child because we're still fucking awful at training out the edge cases from a system that *looks* smart.
If you look at what's directly quoted vs. what the journalist paraphrases/summarizes, it seems like Hinton is saying (with careful moderation) what is still Eliezer style AI ruin/doom, while the journalist is focused on plausible near-term threats.
> That's just the article paraphrasing what he says really badly
*Why would you think that?* I don't think you should be extending anyone that much charity. Cade Metz isn't a clown, he doesn't just make shit up.
Consider these tweets from Hinton:
- [Reinforcement Learning by Human Feedback is just parenting for a supernaturally precocious child.](https://twitter.com/geoffreyhinton/status/1636110447442112513?s=20)
- [Extrapolating the spectacular performance of GPT3 into the future suggests that the answer to life, the universe and everything is just 4.398 trillion parameters.](https://twitter.com/geoffreyhinton/status/1270814602931187715)
Hinton clearly believes in the robot god. His perspective on this is divorced from reality.
You shouldn't let natural skepticism of the military industrial complex cause your imagination to run away. This stuff
>Palantir's been marketing the fact that they're plugging GPT into military software
is just marketing nonsense. Palantir is not an impressive company, their technology is a fucking GUI for processing large amounts of documentation. It blows people's minds at the government because, before Palantir, they did this by having massive numbers of young military recruits sift through documents *manually*.
Plugging in GPT is just a natural extension of natural language processing for searching through documents. It's not magic, it won't be connected to any weapons, and it won't have any consequences in anyone's lives except for some junior military analysts who can now do their job by typing things like "how many tanks does Russia have on the border of Finland?" and get a sensible answer, as opposed to having to click some buttons to get the same result.
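For a sense of how unmagical that is, here's a toy sketch of NLP document search (made-up documents, plain TF-IDF ranking - obviously not what Palantir or GPT actually run, just the general shape of "ask a question, get the relevant document back"):

```python
# Toy sketch of document retrieval: rank documents by similarity to a
# natural-language query. Documents and query are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Logistics report: roughly 40 tanks observed near the northern border crossing.",
    "Weather summary for the Baltic region, first week of May.",
    "Maintenance schedule for the motor pool, second quarter.",
]
query = "how many tanks are near the border?"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

# Score every document against the query and return the best match.
scores = cosine_similarity(query_vector, doc_vectors).ravel()
best = int(scores.argmax())
print(f"best match (score {scores[best]:.2f}): {documents[best]}")
```

Bolting an LLM onto the end of that pipeline means the retrieved passages get summarized into a sentence instead of shown as a ranked list; it's still search over documents, not a weapons system.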
I love how this man, who is apparently mathematically and technically brilliant, goes to all these lengths to avoid pentagon funding because he doesn’t want war robots and somehow deludes himself into thinking that if you do all the steps to make a war robot except putting the code inside a robot that will somehow stop it from happening. Particularly when he sells his work to Google.
And that Oppenheimer quote he liked to throw around that now leaves a bad taste in his mouth. God, what a guy.
You can be incredibly technically intelligent and still be delusional. In this case the delusion being that developing this technology for a mega corporation like Google might allow one to have any control over its application when all is said and done.
I suspect what really happened was more along the lines of the Oppenheimer quote that he seemed to like so much until recently. The work was exciting and also held the promise of an undying legacy, except now he has realised that that legacy might be one of untold destruction.
I don't want to call him a fool, but one might wonder how he could adopt the Oppenheimer creed for so much of his life and yet not see the ending coming when it was already there, spelled out by the man himself.
When I wrote my comment above I called him a fool as well, but thought better of it and deleted that word before posting. Western society and particularly America values intellect much more highly than wisdom. It’s easy for someone who has a lot of one to be blinded to the importance of the other, especially when vast sums of money come into the picture.
Still it’s breathtaking to see someone who can design such complicated and wondrous things give literally 0 thought to how to control them against misuse until it’s way past too late. The f-word remains on the tip of the tongue, even if it doesn’t ever go farther than that.
What exactly is the sneer here? Geoffrey Hinton is lightyears away from Yudkowsky and the Rationalists. It seems like he is leaning more toward Stuart Russell’s version of the “control problem”, which you can have reasonable academic debate about imo.
Also, leaving a large corporation in order to freely criticise it seems pretty based.
> Extrapolating the spectacular performance of GPT3 into the future suggests that the answer to life, the universe and everything is just 4.398 trillion parameters.
[Link](https://twitter.com/geoffreyhinton/status/1270814602931187715)
apparently the midpoint of all human data holds the answer to life.
Yeah this is kind of how I see it also. This is someone who is actually a leader in their field, right? This guy understands the development of these technologies. I'm not necessarily saying he's right, but this isn't EA or the basilisk or something.
It is exactly the same as what the rationalists believe, but phrased in a more respectable way. *It is literally the same as what the EA people believe about AI.*
Don't be impressed by this guy's credentials. Even well-credentialed people can be crackpots.
The sneer here, for me at least, is a man who frequently used one quote by Oppenheimer to console himself in his work and the potential ramifications of said work, and who now imagines he has found himself at the same point that Oppenheimer also reached wrt the ramifications of his life's work, and he didn't seem to see it coming.
Regardless of whether or not there is any reality to his realisation, him turning around now and saying 'I'm starting to have regrets', is a little amusing, in a Greek tragedy kind of way, given the rest of the context.
> What exactly is the sneer here? Geoffrey Hinton is lightyears away from Yudkowsky and the Rationalists.
I should have originally tacked on the NSFW to indicate no real sneer. Seemed like a lot of overlap here but all reasonable and not advocating worshiping an acausal entity or proposing fascism or eugenics as a solution like the rationalist communities.
Quotes from the article:
> Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze.
> “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Do a few find/replaces to substitute in some made up scifi jargon and you have a bona fide Yudkowsky quote. It's the same shit.
Hinton is someone I deeply respect and admire, so it’s a big deal that he said this. Past few months have definitely shifted the Overton window - seems like AI doomerism is everywhere now
From the article it seems he is very much talking about the potential for catastrophe. He isn't going full Yud style grey goo inevitability, he is saying that AI is powerful enough a technology that a worst case scenario could result in widespread harm.
It's like saying that genetic engineering or nuclear technology, used incorrectly, can result in widespread harm. This isn't unreasonable to me. If we hadn't been jaded by decades of Yud style AI fantasy doom scenarios this wouldn't even be a controversial position.
>It's like saying that genetic engineering or nuclear technology, used incorrectly, can result in widespread harm. This isn't unreasonable to me. If we hadn't been jaded by decades of Yud style AI fantasy doom scenarios this wouldn't even be a controversial position.
Agree - I think Yud style fear mongering gives a bad rap to the more moderate AI safety crowd
There is no moderate version of "I am worried that AI might autonomously decide to kill all humans and succeed in doing so". It is an extremist belief that is not based in any provable mathematics or empirical science.
Yes indeed, there are many sensible concerns about the responsible deployment of AI technology. Geoffrey Hinton mentions some of them for the article.
Geoffrey Hinton *also* says that he's worried about AI autonomously deciding to destroy humanity and succeeding, which is some really nutty shit and we absolutely should not be glossing over the fact that he believes this. It's the same thing Yudkowsky believes.
I'd like to note that "AI risk" is a stupid term because it's not describing anything new. Literally everyone already knows that using AI carelessly or maliciously will have bad outcomes. This isn't a novel observation, and the people talking about it in panicked tones aren't doing so because they're thinking clearly about how the world works.
Consider that the Facebook feed ranking algorithm already caused genocide, and it did so many years ago. Where were all these "AI risk" people when that happened? Why didn't they demand that Zuckerberg be tried for crimes against humanity at the Hague?
These people weren't complaining at that point because feed ranking algorithms are abstract, invisible, and therefore difficult to fear. ChatGPT, on the other hand, can have an almost-human conversation with you, and that's basically magic and therefore very scary.
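The "abstract, invisible" part is easy to illustrate, by the way. Here's a hypothetical toy ranker (not Facebook's actual system, and the weights are invented): posts get sorted by a predicted-engagement score the user never sees, and whatever is predicted to provoke the most clicks and shares floats to the top.

```python
# Hypothetical toy feed ranker: sort posts by a predicted-engagement score.
# Weights and predictions are invented; real systems learn them from user data.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float   # assumed to come from some upstream model
    predicted_shares: float

def rank_feed(posts: list[Post]) -> list[Post]:
    # Simple linear score; real rankers tune this to maximize engagement.
    def score(p: Post) -> float:
        return 1.0 * p.predicted_clicks + 3.0 * p.predicted_shares
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("Calm local news update", predicted_clicks=0.10, predicted_shares=0.01),
    Post("Inflammatory rumor engineered to provoke outrage", predicted_clicks=0.30, predicted_shares=0.20),
    Post("Photos from a friend's holiday", predicted_clicks=0.15, predicted_shares=0.02),
])
for post in feed:
    print(post.text)
```

Nobody sees the scoring function; they just see the feed it produces, which is exactly why it never triggered the fear response that a chatbot does.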
>Geoffrey Hinton also says that he's worried about AI autonomously deciding to destroy humanity and succeeding, which is some really nutty shit and we absolutely should not be glossing over the fact that he believes this. It's the same thing Yudkowsky believes.
Sure, but considering he has some authority in the field - I'd maybe give him some epistemic deference and assume his concerns are valid. Also I've always regarded him, along with Bengio, as really morally conscious people (see [here](https://www.nytimes.com/2017/06/23/world/canada/the-man-who-helped-turn-toronto-into-a-high-tech-hotbed.html) for eg), so I believe he is acting in good faith.
"epistemic deference"? I think you might be lost, this is sneerclub, not LessWrong.
Plus, even if you're a, ahem, *true believer*, you should still be able to read and understand the religious scripture for yourself. Or do you trust the priesthood to correctly interpret everything for you?
>"epistemic deference"? I think you might be lost, this is sneerclub, not LessWrong
I hate to break this to you, but "epistemic" isn't some LessWrong lingo - it's a frequently used word in philosophy. Just because they appropriated it for whatever stupid claims they wanted to sound smart about doesn't change its meaning. I used it here to ofc demonstrate how big the iq difference here is and to signal that I'm one of the "*true believers*" scavenging sneerclub to get more converts. Really sorry for not knowing that "*epistemic*" is some rationalist dog whistle - I'll refine my methods next time.
But to be real: I don't know much about climate change, but I can still defer (unlike *"epistemic"*, this isn't LW lingo btw) to experts because I trust they are acting in good faith and that they don't have some corporate/political interests to advance.
>you should still be able to read and understand the religious scripture for yourself. Or do you trust the priesthood to correctly interpret everything for you?
idk what the fuck this means - do you mean Hinton's a rationalist?
> idk what the fuck this means - do you mean Hinton's a rationalist?
Like, maybe you shouldn't just believe stuff because someone with a fancy job title says it
Hinton isn't just someone with a fancy job title - maybe read his [Wikipedia page](https://en.wikipedia.org/wiki/Geoffrey_Hinton). I have reasons other than Hinton's opinion to care about AI safety. It's just that an expert expressing similar concerns makes me feel more justified.
Sorry, to be clear you also shouldn't just believe stuff because someone with an impressive wikipedia page says it.
If you need Hinton's say-so to shore up your confidence in your belief about something then maybe you don't understand it well enough to be having such strong opinions.
>If you need Hinton's say-so to shore up your confidence in your belief about something then maybe you don't understand it well enough to be having such strong opinions.
I honestly don't know what you are trying to argue. Do I need to have a PhD in Artificial Intelligence to have the *proper* beliefs? And with anything short of that, I'm just some dumb mortal and should just have no opinion about anything at all?
Do I need to have a PhD in climate science to be sure if my judgement of experts is accurate? Do I need to have a PhD in, idk, Christian theology to come to the conclusion that Christianity may not be true?
> Do I need to have a PhD in Artificial Intelligence to have the proper beliefs?
About whether or not it makes sense to believe that artificial superintelligence is a meaningful concept, or about whether or not to be worried that it will emerge by accident and kill us all?
Like, yeah, kinda. It's not a simple issue.
And as Geoffrey Hinton is demonstrating to us now, it is entirely possible to be a world-famous researcher with a PhD and get this stuff wrong. So I guess even a PhD on its own isn't necessarily enough; you need to stay current with your knowledge too.
>Like, yeah, kinda. It's not a simple issue.
ok, since you like playing this game and since you clearly have some strong beliefs - what credentials do you have to justify them?
> used incorrectly, can result in widespread harm.
No, that's not what he's saying. He's saying that he's worried that AI will autonomously decide to destroy humanity.
> Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze.
> "The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
It's exactly the same as what Yudkowsky believes.
https://old.reddit.com/r/SneerClub/comments/134n1m4/the_godfather_of_ai_leaves_google_and_warns_of/jigw4n4/
I think a concern with general intelligence is a real concern and concern over it doesn't really belong to rationalists as a concept. Turing's writings and Norbert Wiener's writings covered lots of it (and on more narrow AI the latter even nailed lots of the impact on media ecosystems in ways similar to what came true with recommendation systems), along with mountains of science fiction.
The stuff I see way wrong with the rationalists is the eugenics/HBD stuff, the weird fascism moldbug stuff with SSC, basilisk, all the cult/power/sex dynamics, Ayn Rand make-your-own-lingoism, Eliezer as the Chosen One, lack of concern with near-term AI issues, the extreme utilitarianism/Effective altruism/bastardization of Peter Singer. But I don't think Hinton has expressed any of that.
> The stuff I see way wrong with the rationalists is the eugenics/HBD stuff, the weird fascism moldbug stuff with SSC, basilisk, all the cult/power/sex dynamics, Ayn Rand make-your-own-lingoism, Eliezer as the Chosen One, lack of concern with near-term AI issues, the extreme utilitarianism/Effective altruism/bastardization of Peter Singer. But I don't think Hinton has expressed any of that.
Okay, but be careful, you're on thin ice: more with the Sneers, less with the "real concerns". This is not a place for reasoned discourse.
I think there is a difference here. This quote in particular:
> “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
To me that says that he is actually also worried about AI acting autonomously by itself and not just as a tool that can be misused. That puts him on the same side as Yud and company compared to regular AI ethics researchers.
I think it's useful to distinguish here between people saying "there's some chance of something very bad happening" and Yudkowsky saying "we're all 100% going to die soon"
Consider the following statements:
`There is a 100% chance that Jesus Christ returns to earth next week`
and
`There is a 5% chance that Jesus Christ returns to earth before 2100`
If you see a meaningful difference between these two things then you've lost an important bit of perspective: *they're both unscientific beliefs about the supernatural and its interests in human affairs.*
That doesn't make them false, necessarily. But it does make them unempirical and I, for one, will not be basing the decisions that I make about how I spend my time on such things.
And I certainly don't want fucking government policy to be based on such considerations.
I don't think this is a strong analogy, for a few reasons: for one, there are more ways to predict things relating to human society than just science, as any historian ought to tell you. Secondly, AI isn't Jesus. It's something that we're building ourselves, and are surprisingly successful at (though also surprisingly failing at, in some aspects).
An analogy to my own view would be "We've managed to create a bunch of these weird superhuman creatures, each somewhat stronger than the previous one. We might, in not too long a time, make one that's sufficiently godlike, and it'll still be just as weird. And once we have, it'll plausibly be too late to undo."
Which *is* substantially different from "We've made pretty powerful AI, so it's definitely going to kill all of us soon".
> We might, in not too long a time, make one that's sufficiently godlike
I understand that you believe this, but you should also realize that it's a religious belief.
There's nothing in mathematics or empirical science that suggests that we can create an AI that will autonomously decide to destroy us and which will be so much smarter than us that it will likely succeed.
>There's nothing in mathematics or empirical science that suggests that we can create an AI that will autonomously decide to destroy us and which will be so much smarter than us that it will likely succeed.
As much as I agree, such an opinion is not *in and of itself* a "religious belief". There's nothing in mathematics or empirical science that shows you can cure cancer, but I'd call a researcher who believed in it an optimist, not a religious fanatic.
Both could be accompanied by or part of a religious system of belief.
I think cancer researchers would agree with you that "curing cancer" is more of a faith-based slogan than a scientific idea.
Notably the NIH does not give out grants for "curing cancer", they give out grants for things like "treating leukemia", which is quite a different matter.
>does not give out grants for "curing cancer"
I specifically chose curing cancer *because* the notion is implausible given present technology. Yes, I am aware medical research focuses on treatments for particular groups of cancers, or (speculatively) personalized therapies. Cancer isn't one disease, et cetera. That doesn't mean that belief in a future where cancer is functionally "cured" is a religious idea, just a speculative and optimistic one.
Sure, I can imagine a future in which cancer is functionally not a serious public health problem. That's not religious, although *to believe that it will happen* probably would be a bit religious.
But you know what would be *really* religious? If a famous elderly cancer researcher started to say things like "gene therapy is supernatural" or "mRNA vaccines will give us the answer to life, the universe, and everything". That shit would be very religious.
It's also literally what Geoffrey Hinton is saying about AI.
I am genuinely deeply alarmed by the fact that there seem to be so many emotionally dysfunctional cultists who are employed professionally in the AI industry.
I don't mind them being given funding to do fun projects etc. Clearly the folks at e.g. OpenAI are good at tuning transformer models.
But your average person - including politicians, business people, etc - has a pathological inability to distinguish between "good at writing and deploying industrial scale code" vs "has a clear understanding of the theoretical underpinnings of their work and/or the consequences of it for society at large".
The fact that well-paid people believe this stuff doesn't make it correct, it just means that making ML models is a different skill from understanding how the world works (or even how the models work). I'd prefer that people understand the limitations of their intellectual abilities before giving grandiose self-important interviews to the liberal arts majors over at the NYTimes.
Maybe this is true; it seems more like this post has created somewhat of an impasse for many.
Hinton seems to be very respected in the field, and an actual computer scientist focusing on ML. I think many here are having a hard time sneering, either because he is someone they respect, or because he is someone to whose knowledge on the subject they defer.
I wouldn't rush to claim astroturfing, unless you think that it is happening on this post specifically for some reason?
People seem to be unclear on the message that he’s communicating here. He’s saying exactly the same thing that the rationalists believe:
AI will become smarter than people, we will lose our ability to control it, and it will destroy us all.
This is insane. There’s a temptation to believe that just because [eminent scholar] says insane things that they’re no longer insane. No; they’re still insane when Geoffrey Hinton says them.
For those who didn’t read the article carefully, have a look at some relevant quotes that support this interpretation:
> Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze.
> “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
> “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
None of these things would seem out of place if you saw them on an unhinged LessWrong post by some basement dwelling teenager.
Geoffrey Hinton clearly believes in the coming of the robot god. Jesus christ just look at this tweet:
> Reinforcement Learning by Human Feedback is just parenting for a supernaturally precocious child.
Remember folks, even eminent academics can be crackpots. People here who’ve been in academia can tell you this.
Fucking Georg Cantor invented the rigorous mathematical concept of infinity because he was trying to get insight into the mind of God Himself. Sometimes real math does come from people who have fundamentally non-secular motivations.
Take a breath, especially before you start tying increasingly eminent academics together with Yudkowsky. Being vaguely concerned AI will kill us is a bit silly but obviously isn't in the same category as being a Yudkowskyite and being 100% sure there's going to be a superintelligence robot apocalypse, on top of a mountain of other inane specifics (e.g. FOOM/nanobots).
I get you think it's literally physically impossible and requires belief in the supernatural or something. For all the valid analogies to religion, to me it's in the same category as humanity being killed by aliens - very unlikely (to put it mildly) and better left to sci-fi but not literally against the laws of physics. If some academic says that they think there's some small chance we'll eventually be killed off by aliens, I'll smirk at them but they're not in the same category as UFO abduction people.
As a professional scientist Geoffrey Hinton should be restricting his public commentary on matters of importance to opinions that he can support with either provable mathematics or with empirical science. He can't do that with this opinion; it isn't based on any kind of evidence or sound reasoning. That's the definition of supernatural, which incidentally is a word that Hinton himself uses in describing what he thinks.
Eminence does not shield academics from scrutiny. On the contrary, it should increase the degree of their responsibility and the strength of our condemnation when they behave unprofessionally.
Eliezer Yudkowsky is much less culpable for saying nutty things about gray goo than Geoffrey Hinton is for worrying that skynet will kill us all. The fact that skynet is less completely insane than gray goo is immaterial, because Eliezer Yudkowsky is a weird, comical, basement dwelling cultist with a middle school education, whereas Geoffrey Hinton is a world-famous scientist employed by one of the most powerful corporations in the world.
Hinton is failing in his professional responsibilities and he should be given no quarter for it.
>opinions that he can support with either provable mathematics or with empirical science.
I think you've staked out a very weird and indefensible epistemology. Nuclear scientists can't opine that "Nukes will destroy the world"? That is neither provable empirically nor mathematically. What of claims about the future effects of climate change?
Not to suggest these opinions are of comparable reasonableness - I'm not defending AI risk itself.
>Hinton is failing in his professional responsibilities
I don't disagree entirely... But if you were in a position to try to crucify him for this belief professionally I think you'd likelier help the lesswrongians gain respect than bury *Geoffrey Hinton.*
I realize that fighting a religious mass hysteria is like trying to fight with the ocean, but frankly I'd feel remiss if I didn't at least try.
And I hope that anyone who actually understands this stuff who keeps their mouths shut because ThE EmInEnT GeOfFrY HiNtOn disagrees goes to sleep each night hating themselves for their feckless cowardice.
Anyway, I'm being far too serious for SneerClub, and too sarcastic, and thus I've probably overstayed my welcome for the moment - moreover your view seems to be canon.
But for any who care: There are people in, say, physics who are silly and believe weird things, like the multiverse. Sometimes their beliefs are perhaps analogous to religion. People are weird, academics are often especially weird. Heck, weird people often do good research. This is how I view Hinton.
I come here to *sneer* at Yudkowsky because Yudkowsky practically *only* believes silly things. I can call some of Hinton's beliefs silly without going further, and while still respecting him for his contributions. Many people have a few silly beliefs. If AI risk is a "religious" view, then I'm not the equivalent of a hardline militant atheist who has to search for and denounce religion everywhere, perhaps excepting the extremes (creationists/rationalists).
Yeah, why not. Let me start ranting about how Geoffrey Hinton (and hell, add Stuart Russell and DeepMind and OpenAI and whoever else made dumb comments about AGI utopia or apocalypse last tuesday) is a religious cultist, as well as a number of my peers. Otherwise I'm a coward.
I come on here to laugh at rationalists, not become literally unhireable. Yudkowsky is a fine target for me.
I don't think everyone who ever expressed any existential concerns is literally religious in the same way committed Rationalists who subscribe to the canon are. I don't know where OpenAI hires fit.
>so you agree that OpenAI hires straight out of the AGI crowd
And DeepMind was founded by an AI risk person. And so on and so forth. My point is that it's not just Hinton, and the idea of *taking a stand* against it all is infeasible even if I thought I was fighting a dangerous religion.
If your coworkers agree with this shit then you're working at a bad organization.
Despite what the cultists would like us to believe, the *vast majority* of AI professionals know that this is all bullshit.
I actually agree with this comment wholeheartedly. People seem to be shoehorning me into an AI risk proponent when I'm saying much as what you're saying - that Hinton is being silly but it is not unusual for scientists who do otherwise good work to make grandiose predictions, and no he isn't a religious fanatic in the Lesswrong cult.
> This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
He’s pretty clearly talking about existential risks, beyond misinformation or just weapons use under human control (given the self-modifying part), so I think people will have to drop the claim that no top experts are worried about these kinds of scenarios, though you already had to dismiss Stuart Russell as not being a leading expert in the field to do that.
> Then, last year, as Google and OpenAI built systems using much larger amounts of data, his view changed. He still believed the systems were inferior to the human brain in some ways but he thought they were eclipsing human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”
I think that is pretty undeniable at this point in terms of breadth, even though it is much less capable than a 5-year-old in other ways.
> “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
You've got the direction of inference reversed. Stuart Russell *loses* his credibility because he buys into this nonsense. And now so does Geoffrey Hinton.
EDIT: if anyone would like to downvote this, I invite them to provide the scientific basis for Hinton's opinion here. Or, failing that, I invite them to explain why a scientist should not lose public respect for making alarmist proclamations that are not based on scientific evidence.
>I’m just not sure of the Sneer nexus here.
Care to elaborate?
https://twitter.com/ESYudkowsky/status/1653054615263514625
My man’s shamelessness is unbounded
Here is the reddit thread about it that exploded: https://np.reddit.com/r/technology/comments/134jnsn/godfather_of_ai_quits_google_with_regrets_and/
Reading the comments of that thread made me a little more worried that any day now we’re going to see AI Cultists. Like, an actual religion spring up.
>Also, leaving a large corporation in order to freely criticise it seems pretty based.
FWIW Hinton rejects any implication that he left Google so he’d be free to criticize Google, which “has acted very responsibly.”
also jfc the top reply