r/SneerClub archives

Obligatory reminder that there have been proofs of the round earth going back to ancient Greece, and the whole “people thought the earth was flat!” cliché as deployed here is a combination of anti-intellectualism and ignorance of history rather than ignorance in history.

I was going to say, the only people who thought the world was flat post-Antiquity were people who had spent no time thinking about or researching it.
People who took no time to understand experts in a field before confidently opining on it? *sniff* they really were just like us.
The flat Earth is a modern myth. https://hup.medium.com/myth-that-before-columbus-geographers-and-other-educated-people-thought-the-earth-was-flat-725c853de211

There’s a form of Gell-Mann amnesia where people tend to know that old academics in their field start to get a bit loopy and cranky, but then fail to apply the same scrutiny to old academics in other fields

I think a lot of the people giving this dude credence actually don't realize that professors, especially old ones, can be totally out of touch and/or legit crackpots
So can young professors, and young non-professors. Silicon Valley has plenty of 27-year-old founders who say idiotic shit on the regular. That said, the issue is more that Twitter incentivizes extreme positions and hot takes, not in-depth discussion and quality content.

Also, regardless of what one thinks of Geoffrey Hinton, AI safety is a legitimate concern. Are these things sentient? Almost certainly not. Is ChatGPT an AGI? No. Are these technologies going to be used for some evil purposes in the next five years? Absolutely. Is there a high risk of someone writing a program that achieves its goal in an antisocial way and causes serious damage? Sure. Is there a risk of human extinction caused by unforeseen consequences of an ill-regulated, unaligned AI? Not in 5 years, possibly in 100.

There's a lot of nuance to this topic, and unfortunately it gets lost in the midst of people screaming at each other.
"AI safety" implies that there's something special about AI that should warrant different concern from whatever concern we have about existing information technology which is false. "AI safety" is bad terminology that's used primarily by cultists. Better terms are "responsible AI" or "ethical AI", and these are what competent industry professionals usually use.
Fair point. I would also argue that there's still a huge difference between the AI we have now and a true AGI, and that people will almost certainly never build one by intention. If it were to happen, it would very likely be by mistake. And we have no model of how it might happen--because, at today's technological level, it doesn't seem possible--or what it would look like.

I don't think AI safety, as a concern, is limited to the LW cult, though. I would also say it is, while overlapping, a separate issue from AI ethics, insofar as a safety failure could occur with good intentions--for a simplified example, we tell an AGI to find a cure for cancer and it does so, but at the cost of 5 degrees Celsius of global warming.

All that said, so long as we live under capitalism, we're going to see dozens of AI ethics failures using the current sub-general AI, both due to bad intentions and malicious neglect, before we have to worry about AGI safety.
Hinton doesn’t talk much about AI consciousness, and it isn’t especially relevant to the points he makes.

I still don't understand how someone gets a piece in the NYT just for quitting. Is there a standby journalist at Google's HR? Why quit at 75? Shouldn't you already be retired by then? If you're naive enough to think that AI/ML being proprietary will stop governments from abusing them, and you think Google is doing a great job, then why not stay there to oversee the adaptation/creation of an ethical AI? Why was he not worried about the dangers of these models when his company got acquired for 44 mil?

This guy has indeed gone full crank mode!

“She told me things no one could possibly know, things on which I have been working, and she demonstrated she has the power to do things I cannot explain. I must tell some of my associates about this. It was miraculous indeed.”

Albert Einstein after a visit to a psychic.

Dude was 75, ready to retire. Makes an AGI prediction of 5-20 years. He’ll likely be dead before he has to answer for false predictions.
[deleted]
This is absolutely true. It's also true in the publishing world; the people who determine whether your book is a bestseller or a flop, at least in the first eight weeks that are critical to staying in the chain bookstores, are a bunch of well-connected nonreaders who'll learn just enough about your book to sing its praises (or pan it, if you piss them off). The whole system ends up disenfranchising readers and word-of-mouth.
What's SBF?
[deleted]
ah, cheers
I don't know the specific details of this particular case, but the original NYTimes article was written by Cade Metz, who is a dedicated technology/Silicon Valley correspondent for them. Basically he spends all day every day talking to people and looking for stories like this. Probably Hinton himself shot Metz an email being like "hey I have serious shit to say, let's talk" and Metz was like "uh okay sure". Hinton is famous enough that Metz would know that he's worth publishing about.

I wouldn't be surprised if the timing of this weren't a coincidence - maybe Hinton is talking to the rest of the doomers, who are very obviously engaging in coordinated PR stunts.

Old, famous academics like him generally don't go from "working" to "retired" overnight like most people. Instead they slowly fade out by taking up emeritus positions or other honorary positions in which they're kept around on the payroll by some wealthy organization for showing-off purposes, like a trophy collection. Hinton probably hasn't done any real substantial work in quite a while. At his age and level of wealth and fame, ordinary career considerations are basically irrelevant and he's just trying to leave whatever mark he can upon history. He thinks making a big show of quitting and blathering about the apocalypse is the best way to make an impact. He's probably right, but he doesn't realize that it is impactful mostly in its ignominy.
It’s major financial news. Hinton was head of Google Brain, this is like the CEO of YouTube stepping down.
Actually, since Google owns YouTube, this is explicitly untrue. But even if it was the CEO of YouTube, I'd think that's equally irrelevant to anyone not working in the sector.
The CEO of YouTube stepped down like a month ago. It was news.
How did it affect anyone not directly involved? I didn't notice anything as a consumer.
Like I said, it’s financial news. Investors care, and that drives a lot of what’s reported

Good catch by grotundeeks_apoclyps on the NY Times article: Hinton was talking about existential-risk AI alignment, not saner stuff like AI ethics or realistic AI risk. (He was being a bit coy/cagey in the NY Times article, and the journalist was making it worse by inserting realistic, mundane stuff in between his direct doomer quotes.) This resolves any doubts I had about that.

Also, it seems Hinton has at least one more redeeming quality… he lacks a blue checkmark!

The thing that makes it even more fishy is that he was far more conservative in his earlier [CBS Morning](https://youtu.be/qpoRO378qRY) interview.
Hinton, having worked in a leadership position at a real company, knows how to moderate his AI doomerism and make it publicly palatable. If there is (or rather was) one benefit to Eliezer leading the charge on AI doomerism, it was that Eliezer doesn’t bother hiding his wacky conclusions from normies or gently leading them to those conclusions.
\*takes a bow\* thank you thank you. Yes indeed my bullshit detector is relentless and unerring. I will try to stay humble and not milk this thing for the many *i-told-you-so's* that I so justly deserve. Not referring to you specifically, but *certain other people*, who all know who they are.

How dare you mock the Eminent Geoffrey Hinton! He popularized backprop in the 1980s! He’s infallible!!

There will be a special place in the Basilisk’s infinite torture simulations for heretics who questioned His most eminent prophets.

I’d tape a big sign up near the console saying “don’t let it have control of anything”. Sorted.

at this point y’all are against AI safety just because

So entrenched that nothing could convince you that smarter-than-human entities, whose inner workings we don’t really understand, could be a threat to humanity, and that it is worth working against that future threat.

If the rats started arguing for earthquake/flood catastrophe prevention, you would argue against that too

AI alignment as a field has no way to gauge its own success. Its necessity is built on unknown unknowns (the correctness of the orthogonality thesis, the upper limits of intelligence, hard vs. soft foom) or stuff that is outright fallacious (diamondoid Drexler-style nanotech, misunderstanding hacking, misunderstanding machine learning). AI ethics does have useful things to do and ways of gauging its success, but the AI alignment stuff is actually getting in the way of AI ethics. (For example, one AI safety idea might be to suppress AI development outside of secure labs, but that would heighten the problems AI ethics is concerned with regarding transparency and concentration of power in capital-owners.)
I actually agree with that. I do think it's reasonable to assume that we are nowhere near the upper bound on intelligence, though, and that AI will far surpass it. The problem is that we don't really know all the capabilities of these systems; we often find out months/years later that a model has an unknown capability, and they pop up spontaneously. These systems become ever more capable and autonomous, but stay rather unpredictable. These are reasons enough to worry about AI imo
> we often find out months/years later that a model has an unknown capability and they pop up spontaneously.

When has this happened?
> at this point y'all are against AI safety just because

I'm against the MIRI/EA/Yuddite meta-cult, everyone who becomes its accomplice, and everything it tries to accomplish. Consciousness will conquer our future light cone, they won't be allowed to stop it, substrate is irrelevant, and carbon fascism will be crushed like every other form of fascism.
People in this sub regularly concede good points when made by rationalists; the issue is that they are usually embedded in either unfounded or downright harmful arguments. Which is sort of the point: a bad argument veiled in common sense is worse than a bad argument that stands on its own, because it lends legitimacy to that bad argument. In fact this is a very common technique used in bad-faith argumentation.
I dunno man. If the rationalists are so smart, how come they haven't realized the potential danger posed by aliens from the Betelgeuse system coming and invading Earth? It's worth thinking about what actions we can take now to minimize the risk...
Bruh. AI is developing fast now; we can see it having made many huge jumps in a short amount of time. Threats from AI are not as distant as you seem to imply. We don't see alien ships moving in the distance accelerating in our general direction; we do with AI.
So you're saying we should be more concerned about the upcoming invasion by aliens with cloaked ships. I KNEW IT.
That you made an account to write this is insanely sneerworthy
In the last Hinton thread someone brought up astroturfing, I was skeptical at first. Less so now.
Do you guys have an alert set for "Geoffrey Hinton" or something? There are a lot of posts in this subreddit, but stuff about Geoffrey Hinton revealing himself as a crackpot really seems to bring the apologists out of the woodwork in a way that the other posts don't.
If someone disagrees with you, he becomes a crackpot. You couldn't possibly be wrong: a cannot-be-more-of-an-expert adopts a Yudd-like position, and instead of counting that in favour of that position, you just deny that he really is an expert, which is just over-the-top silly. The sub is a circlejerk exposing rats as clowns, but you are clowns too, just not as funny
> you are clowns too, just not as funny

In their defense many people here have full-time jobs, unlike Eliezer Yudkowsky and Geoffrey Hinton, so that makes it difficult to compete. Also, I wasn't joking: are you folks actively looking for mentions of Geoffrey Hinton? Is there a list of sanctified names? Who else is on it?
> In their defense many people here have full-time jobs, unlike Eliezer Yudkowsky and Geoffrey Hinton, so that makes it difficult to compete.

Powerful sneer
the gentleman has been guided to explore any of the other fine Wendy's in the local area
Why are you here
They wouldn’t be smarter; they would be unlike. Smarter/threat/etc. are all human-centric terms describing human ways of being and evaluating other humans. Is a dog smarter than a calculator? Is obesity a threat to society? The whole argument you’re making is a motte and bailey that encourages the listener to univocally interpret words that you are using loosely and metaphorically. Which is the problem with AI discourse in general. Make an actual falsifiable argument. Oh wait, you can’t without getting laughed out of the room.
r/SneerCult
The subreddit r/SneerCult does not exist. Did you mean?:

* r/SneerClub (subscribers: 17,967)
* r/SnyderCut (subscribers: 12,653)
* r/ReinerCult (subscribers: 2,860)
* r/RezeCult (subscribers: 3,681)

Consider [**creating a new subreddit** r/SneerCult](/subreddits/create?name=SneerCult).

^(🤖 this comment was written by a bot. beep boop 🤖)
Bad bot. You’ll never achieve AGI this way.

Most AI researchers also thought we were decades away from AI like ChatGPT passing basically any test of human-level intelligence.

So, there’s that.

I asked ChatGPT about this. My question:

> Can large language models pass any test of human-level intelligence

ChatGPT's (first) response:

> Large language models like GPT-3 are designed to perform a wide range of language-related tasks and have demonstrated impressive capabilities in natural language processing, including language generation, translation, summarization, and question-answering. However, while these models can generate language that is often indistinguishable from human-generated language, they still fall short in many areas that are considered central to human-level intelligence.
>
> One of the key challenges for large language models is their lack of common sense reasoning and general knowledge. While they are able to process and manipulate large amounts of text data, they do not have the same level of general knowledge and understanding of the world that humans have. For example, while a human could easily answer a question like "Can a penguin fly?" based on their general knowledge of the animal kingdom, a large language model may struggle to provide a correct answer without explicit training data.
>
> Another limitation of large language models is their inability to truly understand the meaning and context of language in the same way that humans do. While they can generate coherent sentences and respond to prompts, they do not have the same level of semantic understanding that humans have. This means that they can sometimes generate responses that are technically correct but semantically nonsensical.
>
> Overall, while large language models have made significant strides in natural language processing and are capable of performing many language-related tasks at a high level, they are still far from passing any test of human-level intelligence. They lack the general knowledge, common sense reasoning, and semantic understanding that are key components of human intelligence.
They hated him because he told the truth. AI researchers consistently underestimate(d) capability increases

Long before Skynet comes for his scalp, low-wage workers will have paid him a visit.

If we did create an “actual AI” which is alive and significantly smarter than we are, then I don’t see why it wouldn’t take control, honestly.