r/SneerClub archives
Does this count as sneerworthy? (https://i.redd.it/uf3mluz3h2k91.jpg)

Not a capitalist coming up with the very obvious strategy of using both AGI and human labor. 😮

No, the AI would just reprogram itself to be nice! It’s not possible that the people who pay millions in funding have control over how those funded projects turn out!!!
Excuse me sir the term is _sentient_ labor I'm reporting you to the sentient resources director immediately
They’d have to; you can’t extract surplus value from a machine.

I’m glad I’ll only have a couple years of homelessness and destitution before the rest of the fucking owl AGI comes online and ushers me into utopia.

Sorry, starving during the allotted transition period is just a skill issue. Cope!
Technoutopia machine go brrrrrr
We only get to try once with making a superintelligence that doesn't make humans extinct, no do-overs.
Assuming superintelligence is a useful description. Assuming that making one is possible. Assuming superintelligences have the means of causing extinction events. Once you assume all of these things, don't you see how much of a risk this is??
What’s AGI here?
Artificial General Intelligence

Sure, YOU might lose your job and have your life crash down around you, but then obviously the magic computer will come and fix everything.

Me? No, I keep my job because the magic computer needs me. Obvs.

Programming ML models has entered the chat.
Our startup promises to achieve a fully sentient AGI by leveraging GitHub Copilot.

Offer yourself up to God, and he shall grant you peace in the kingdom of heaven.

[deleted]

Yeah this is what fucking kills me about the "just wait for full automation and then we'll all get UBI" crowd. If the people who own the robots can have all their needs met by the robots, why would they bother to keep us alive? We have to overthrow the owners *before* we can have the robots! There's also plenty of other reasons to dislike the attitude of "just wait for the robots and then we can have socialism." These are largely people who are terrified that under socialism they might have to spend a few hours a week picking fruit or something. Hate to break it to you, but we might still have to, like, do shit in the socialist utopia.
Some bigshot economist person (forgot which) already said that a large percentage of the human population could easily be replaced by machines a while back. Certainly sounded very ominous. Capitalism is a paperclip maximizer.
If you're an accelerationist, then it's a solid approach to get a mass uprising. Disclaimer: I am not an accelerationist.

AGI basically just means “a computer program that can act like an autonomous agent, learn useful representations of arbitrary abstract concepts in an unsupervised fashion from data, and perform what humans consider inference and reason with them.” The ML we have so far doesn’t qualify, but basically nobody has any idea when we might develop something like that; it could be in ten years, it could be in 500. Nevertheless, even though these big text-to-image models don’t qualify, it would be weird not to be pretty shaken by them.

They take a lot of training data, but the result is something that can encode image subjects and topics, color schemes, art styles and even simply the vibe of a piece of art as locations in a mathematical feature space, and interpolate arbitrarily between them. And most of this progress has happened in the last, like, 2 years. I’m someone who likes to draw and is pretty good at it for a complete amateur, but has never pursued it as a serious hobby much less a career, and I feel a bit threatened by these things. I can’t imagine how a professional artist who has actually followed the speed of progress in these models would feel.
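(For anyone wondering what "interpolating in a feature space" means mechanically, here's a minimal numpy sketch; the embedding vectors below are made up, not pulled from any real model's API:)

```python
import numpy as np

def slerp(a, b, t):
    """Spherical interpolation between two embedding vectors.
    Hypothetical embeddings; real encoders (CLIP, etc.) have their own APIs."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return a  # vectors are (nearly) identical, nothing to interpolate
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

# Pretend these came from an image/text encoder: one for "watercolor landscape",
# one for "cyberpunk portrait". Walking t from 0 to 1 blends the "vibes".
vibe_a = np.random.randn(512)
vibe_b = np.random.randn(512)
blended = [slerp(vibe_a, vibe_b, t) for t in np.linspace(0, 1, 5)]
```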

Ignoring all the possible negative externalities ML could generate in the short term and all the ways its progress could stall halfway, even in the utopian, benevolent AI god Culture future people envision you have to deal with the fact that basically every field of human endeavor has become, at best, a sport.

Take some solace in the fact that quite literally billions of dollars have been poured into automating medical imaging and radiology, and the results are still subpar enough that radiologists aren't going anywhere anytime soon.

There's also the big problem that ML models can only really regurgitate correlations from the universe they were trained on. That universe might contain specific things, like maybe images of contemporary art, but the model cannot incorporate data that exists outside of that universe, as it will be completely ignorant of novel concepts, styles, etc. If a new art movement takes hold with a new style, old models won't be sufficient without expensive retraining. That leaves human beings to be creative, to innovate and be pioneers in the spaces they work in.

At least from my access to GPT-3, DALL-E, etc., there's nothing to worry about if you're an artist or writer. There are some impressive results, but they're cherry-picked out of a sea of really awful results that human beings would never intentionally create themselves for their employers or clients. I think in our lifetimes we'll see AI automate some drudgery, unfortunately drudgery that people rely on to support themselves. I don't think all writers and artists will be out of a job, but certainly some of them will be as tech reduces the need for more workers.
“AI” is literally just a marketing term, machine learning is entirely different from so-called “general ai”.
General AI doesn't exist, but if we ever develop it, it will likely be based on what we call machine learning. Much of reinforcement learning could be considered an attempt to take machine learning in that direction.
Ok, then see my other post on this sub

I kinda think both people in this exchange are cringe and sneerworthy. The first person comes off like a Luddite; new technologies replace jobs, period, and that kind of progress is inevitable.

The second guy sounds ready to start making necessary sacrifices to his new Machine God, which is infinitely creepier and more delusional. They’re both wrong tho, just in different directions.

Nah. Being afraid of losing a job like art is much more reasonable than being afraid of losing trucking or similar. One is a form of human expression that’s existed for millennia and is effectively part of our identity as people, and the other is a trivial task that only some people care about, if at all. Also, Luddism was a legitimate workers’ movement.
lol, just gonna throw truckers under the bus for no reason… Back here in reality, tech jobs and training promised to manual workers in the 80s and 90s never materialised to the degree required to smooth over the transition from industrial to fully post-industrial western economies, and the effects of that are here today.
I think it sucks that they’re going to lose their job and they should be able to keep it, but tbh I’m sort of a slippery bastard and thought people would think I’m too much of a southern Luddite if I said that
AI art doesn't take away the human expression, only the ability to commodify it.
Tell that to the guy who’s already built or tried to build a career juggling art technician/moving work and his own fine art. Art, including good art, today exists in a market and that market is structured such that there’s no clean division “commercial art gets replaced by AI/great works of human expression hang around”. In different ways I feel like both you and /u/stonemuncher2000 need a dose of the reality that exists between here, now and the hypothetical collapse of capitalism.
True, but because we live under capitalism, commodity fetishism has become entirely conflated with whether something has value or not. The lack of ability to commodify art will mean that more and more people will care less about it, and the lack of public interest will mean fewer artists, of course, and so capitalism will push another aspect of humanity out to the fringes. Under a socialist society, the ability to magically make art with the press of a button would of course be a good thing. But we don’t live in a sane society, let alone a socialist one.
Not making a value statement on which job is more important or worth protecting, but transporting goods has been a thing for thousands of years as well. Instead of trucks we used to do it on ships and in caravans with wagons and horses. Merchantry is a huge part of human history and could be argued to be part of the human identity.
wtf are you talking about? No it isn't; we've done artistic expression since literally the beginning of mankind.
And we've been doing merchantry since nearly the beginning, almost as long if not the same time span. Artistic expression really took off when we stopped being nomadic hunter-gatherers, which is about the same time as merchantry and trade.
This isn’t true whatsoever. We have material evidence of cave paintings during hunter-gatherer periods, and even if we have been doing shipping and handling just as long, the duration of time spent isn’t the reason it’s part of human expression. It’s part of human expression because the purpose of art is human expression. Merchantry is a means to an end, while art is an end in itself. Art can be the spiritual and philosophical basis of an entire society, while bartering and delivery can only be its material basis, at best. I don’t disagree that the material suffering caused by replacing truckers’ jobs is also bad, but conflating the two activities could only be done by someone who has never experienced the joy of creating something for themselves in their entire life.
You were the one who brought up "how long humans have been doing this" as a metric for why it's so significant.

> Art can be the spiritual and philosophical basis of an entire society, while bartering and delivery can only be its material basis, at best.

The Venetian Republic's identity was merchantry, so you're just wrong. You seem to be dismissive of trade on principle; meanwhile it has tangibly improved the lives of literally billions of humans throughout history.
Do you not see how elitist this makes you sound? There are people who have their identity and self-worth invested in trucking, just as there are people who have their identity and self-worth invested in art. Losing their job will be equally devastating for both of them, even if UBI means they don't have to worry about money.
My job is ferrying rail workers to and from trains and hotels. Not entirely dissimilar from being a trucker. I can't imagine making that my identity or seeing it as a source of a sense of self-worth. The thought is cataclysmically depressing. That's it? That's my existence? Van driver?
I get that, but the existence of those [tough guy trucker t-shirts](https://www.google.com/search?q=trucker+t+shirt&client=firefox-b-1-m&prmd=sinv&source=lnms&tbm=isch&sa=X&ved=2ahUKEwit4YX5r-v5AhUwFFkFHXYKCZIQ_AUoAnoECAIQAg&biw=320&bih=448&dpr=2) shows that some people do make trucking a big part of their identity
Sure, but it's kind of an identity about valorizing the unglamorous but necessary and worthy labor of trucking. If we had a way to make trucking unnecessary, the meaning of the trucking identity would shift. Art based identity is more often and directly about self expression, in a way trucking identities aren't.
Luddism was a poorly-aimed legitimate workers' movement. They didn't want to collectivize the factories, they wanted to destroy them and go backwards to the workshop system, which gated out technological progress for centuries because of nonstandard measurements.

I'm gonna be honest, I don't think AI art is an actual concern. No one wants that shit; look at NFTs. People bought them as speculative "investments," but no one was framing their Bored Ape collection to display proudly to guests.

My issue with poster #1 in the OP is the entitlement to work in a given industry. I'm sure lots of people in the late 1800s wanted to continue to breed and raise horses for transportation, but that was an objectively worse transportation method than what came after, even if cars are environmentally destructive. Same for the Luddites in their workshops. Economic ramifications aside, factories led to kinds of technological development that simply weren't possible when 15 small shops in the same area used different "inch" measurements, such that their products were not interchangeable. Opposing that progress was shortsighted. Opposition to individual ownership of that technology would have been more sustainable.
Considering what came about as a result of cars, I don't think that horses are an objectively worse transportation method than what came after. Horse shit might have been disgusting, feral horses might be bad, but horse farts weren't going to cause the Maldives to go underwater. In general, I also think this post shows a belief in an unrealistic single-path route of technological development. I don't view it as adequately demonstrated that large factories over workshops are necessary for further technological development, even if they were necessary for the particular route our technological development took.
Horrendously reductionist viewpoint. Technological progress may or may not be inevitable (the jury is very much still out on that, despite what technocrats would have you believe), but regardless, shrugging your shoulders and saying "jobs will be lost" is a total cop-out. If it really is inevitable, then much greater effort should be spent reducing the social harm that tech progress can bring, which is immense. Look at the agricultural and (both) industrial revolutions. Entire lives and communities were sacrificed on the altar of some reified "progress"; reduced to subsistence or snuffed out entirely. Was it for greater long-term good? Potentially, but that in no way reduces the scope of suffering that has occurred historically as a result.

All the first poster requests is that technological progress be undertaken collectively, with sincere regard for social good and human culture. How on earth is that sneerworthy?? Are we not a leftist sub anymore??? Why should Silicon Valley VCs be allowed to apply the "move fast, break things" credo to anything they want?

Using Luddite as an insult just tells me you have very little understanding of who they were. They sought traditional recourse at first through the Crown but were rebuffed, so they took matters into their own hands. They were fucking heroes! It wasn't just their livelihoods at stake – entire communities based on textile manufacturing were at risk of withering away completely. I have a profound aversion to Whig history presented as objective fact, and technological progress presented as removed from its social context.
I agree that more should be done to prevent the downsides of technological progress, and vote accordingly - however, I wouldn't say the jury is out on whether or not the development of new tech is inevitable. Perhaps I should say "we have no evidence that it is possible to avert the tide of new tech, as it has never been done before." The actions of the Luddites did not prevent the factories from subsuming their jobs. We undeniably live in a world *not* built by Luddites. And whether you think they were right or not, the world we live in has standard measurements, which was a development brought about by industrialization and which would have been essentially impossible before then with many small shops; this led to further technological development, as components of various products could be built by different groups of people and remain compatible. Governments should take a more active role to ensure that people aren't crushed by the wheel of progress, but that progress is - or at least appears to be - inevitable.
or end business and government
I think we mostly agree... maybe. I am still pretty skeptical of techno-determinism as the way things actually are – I'm pretty sympathetic to alternative explanations like the social construction of technology. It's specifically the assumption that technology determines further human action, that it is the primary antecedent to it, that I reject. I think social conditions of recent history (particularly under capitalism) can make it feel like this is simply true. Things don't necessarily have to be this way, even if they are in a market economy.

Now that AI art is here, obviously it's not going away. You can't suppress it, nor would it necessarily be a good thing to suppress. But the manner in which it is integrated into culture is not determined by the tech itself. That's a product of social circumstance. And that's what can and should be managed: mitigating the damage to livelihoods, and ensuring it remains mostly a tool of culture rather than becoming an end unto itself. The first commenter is explicitly only asking for progress to be directed in a socially responsible manner, not for the suppression of technology in general. Significant difference.

Moreover, I suspect that many or most of the Luddites (it should be noted here that they were not coherently political) would have been satisfied with better working conditions, a guarantee of labour, etc. rather than simply preventing the machines from existing at all. In this sense, at least some of the Luddites were asking for the same thing, in a more violent way.

I would also question whether or not suppression actually is impossible. It's conceivable that in some form of global order where war is never seen as a viable means, nukes would not be difficult to prevent from arising, because there's no particularly meaningful incentive to build them. Likewise, in a political economy where efficiency is prioritised over other, perhaps less tangible measures, then obviously technologies that lead to greater efficiency will dominate everything else, often to the detriment of those other less tangible measures.
This post just came across my feed and I've never seen this sub before but I had the same reaction as you. I wasn't sure if I was supposed to pick a side or not but both sides feel...off.

Lol these people are fucking delusional. It’s like listening to cryptobros or NFT web3 clowns. Just completely divorced from reality.

We will never create a true AGI.

I don’t know what this subreddit is or what is to be sneered at but that’s my two cents.

This subreddit is dedicated to making fun of insane, ghoulish rationalist, LessWrong, effective “altruist” types.
EAs are capitalists; they would be against what that guy wants. For sure that guy is dreamy-eyed about AGI, but EAs would see an unfettered AI consciousness as their greatest threat. They have spent hundreds of millions trying to get people to approach the problem "safely."
Oh, 100%, but that’s why it’s sad. That person is probably quite a good person deep down, but they’ve been convinced by Silicon Valley marketing and such that everything will be ok because of AI Jesus.
The EAs are calling for a 100-year cooldown on making AI happen... Silicon Valley isn't even trying to make AGI.
If Silicon Valley isn’t even trying to make AGI why are there several companies doing that, and since when are “The EAs” a single undiverse group, and where specifically is this call coming from for a cooldown that speaks for all of them?
Let me know when an EA donates to Carmack. Everything we have seen out of OpenAI and DeepMind is a strong tendency away from RNNs and toward transformers. That's because of a results-based approach. Developing an AGI is going to be an inefficient process. It is so much easier for these companies to make non-reasoning superhuman specialized machines. As Carmack put it, the first AGIs are going to be like disabled children. I know of not a single EA who donates all of their money to an AGI startup, and AGI reticence is a common EA paradigm *because*, utility-wise, you should conclude bringing about the singularity is the only logical route to save as many people and produce the most happiness. Unless you think it is bad somehow. Lex Fridman has people on regularly who think AGI is going to destroy the world and turn everything into paperclips.
Cool, so we can agree you overstated that bit about EAs en masse calling for a 100-year cool-down, and I think everyone here agrees that nobody said “the EAs” were contributing en masse to superintelligence or AGI either, so it was all a red herring on your part anyway. Nice! Similarly, all the evidence continues to point to the idea that ultimately creating AGI or AI or superintelligence is absolutely on the cards in Silicon Valley, even if the current specific projects people are working on don’t generally have “do it with this project” in mind, and plenty of them are still Yudkowsky-style kooks who worry about robot Satan. Now that that’s settled, I’m going to go back to real life.
I like how "EAs are calling for a 100 year cooldown" is to you "EAs en masse calling for a 100 year cool-down." Just search "slow down" on the Effective Altruism forum. Pages and pages of nonsense. Whole response is in bad faith.
Yeah that’s how I read it, because you never said “some people on their website wrote some stuff” You can call that bad faith or you can call that responding in good faith to you expressing yourself imperfectly It doesn’t help you much there that *you’re the one* who responded to your interlocutor with the EA stuff when they hadn’t even brought up EA
This subreddit is sneering at EAs for their nonsense, the dude brought in a dumb post by someone dreamy eyed about AGI, the *one thing* the EAs are reticent about. Then you're sitting here expecting "perfect expression" for a "good faith argument." 🙄 If *anyone* is actively working on AGI it is Schmidhuber, and Carmack will be in a few months, he just started, though. Outside of those two people I don't care if a Silicon Valley startup is claiming to make AGI, there is no evidence they are working on it. We're all dead in 4 years anyway according to Yudkowsky, maybe I should join you outside.
Right, so the person accurately mentioned EAs in passing (who are very much now fair game for this sub) as people who are fair game for this sub, and didn’t directly associate them with AGI stuff.

This sub certainly isn’t exclusively about either AGI or EA stuff.

The rest is your good- or bad-faith misinterpretation of what’s going on there. I don’t know if your misapprehension was that this place is exclusively for ragging on people who are into AGI stuff or EA stuff or something else, but there you go.
I never got the impression this sub was AGI-skeptic in general, just skeptical of the Yud variant of unsupported AGI nonsense and the hilarity with which EAs approach AGI with endless thought experiments about "safety."
Well I should certainly hope that it is
There's at least a broad consensus of massive skepticism towards the ideas of G, FOOM, singularity, and that an AGI poses any risk to anyone living today. The concept of a machine that's merely sapient, that exists outside of "x-risk", comes up so rarely among rationalists/LW that it's borderline off-topic here.
To someone not in the know, how would you describe LessWrong and EA?
> I don't know (...) what is to be sneered at 😒
Yeah, didn't he read Lenin's foundational political pamphlet, *What Is To Be Sneered At?*
You should make this your flair
I don’t think it is deluded to suppose we might see a true AGI. It’s deluded to be overconfident about it one way or the other. It’s certainly true that progress in AI has been rapid these past ten or fifteen years. Whether it will eventually amount to AGI is conjectural.
The delusional part is hedging our societal bets on it and then agreeing to sacrifice people’s livelihoods for the possibility
> It’s certainly true that progress in AI has been rapid these past ten or fifteen years

I don't see that at all. Progress in machine learning algorithms that are labeled AI as a marketing term has been rapid these past ten or fifteen years. (The term "machine learning" is also a bit misleading in the context of AI. The machine isn't learning, it's still just executing a series of instructions.) The funny thing is, most of the ML concepts in use have been around for decades, some all the way back to the 60s. The main driver of progress in this area has been the increase in raw computing power to make running complex algorithms achievable in reasonable timeframes - even on modest hardware such as cell phone CPUs, which to be fair is impressive. But there's no reason to believe it in any way resembles true intelligence.
my man

Edit: another big advance that helps these ML techniques is the size of training data sets. We've digitized just about everything in the last twenty years; a huge amount of the progress we've seen on algorithmic generation is due to the amount of data it can sift through to develop the neural nets that actually do the "intelligence" work. Rather than getting us closer to intelligent machines, of any definition, it's actually eroded a lot of what we think is intelligence.
Exactly. We’ve created a generalization of currently existing knowledge, not a being that can somehow conjure new knowledge from thin air (which can’t exist anyways lol)
Did anyone ever actually propose such a thing? Even in Yud's most deranged ramblings I recall it was something like "deduce Newton's equations of gravity from three frames of a ball falling," which is stupid but not *nothing*.
I mean, in concept all of supervised learning is basically regression. You have a bunch of training data points, and the trick is to find a function that best fits those data points without overfitting, i.e. mimics the underlying pdf from which the points were sampled and not the noise in the points. If you do that you have a model that can generalize. In this case the points are artworks, so if you learn a good model that isn't overfitted, then sample outside of the neighborhood of the training points, or in a region of the neighborhood where the points are sparse, congratulations, you've created novel art.
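(That argument as a toy sketch, assuming nothing but numpy, with a polynomial fit standing in for the learned model:)

```python
import numpy as np

# Toy version of the regression picture above: training points are noisy samples
# of an underlying function; a model that captures the function rather than the
# noise can be evaluated in regions where training data was sparse.
rng = np.random.default_rng(0)
x_train = rng.uniform(0, 1, size=40)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, size=40)

# A low-degree fit approximates the underlying function; a very high degree
# would chase the noise instead (overfitting).
coeffs = np.polyfit(x_train, y_train, deg=5)
model = np.poly1d(coeffs)

# "Sampling outside the neighborhood of the training points":
x_sparse = np.linspace(1.0, 1.2, 5)   # region with little or no training data
print(model(x_sparse))                # plausible if generalization held, garbage if not
```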
Well, by generalization of knowledge I meant generalization of skill. We can make an ML algorithm that makes a new piece of art but not an ML algorithm that invents entire new concepts by itself.
It's designed to output pixel values, so it won't, like, invent a new kind of sculpture, but I wouldn't be comfortable declaring that it will never innovate within image generation. Train a paired model to predict whether a given image is appealing or unappealing to human viewers, and it could become just a search problem over the less-sampled parts of the model. Whatever it generates will likely be abstract and impressionistic since it doesn't actually understand the physical world except as associations between image features, but it could end up as novel as anything the average human fine artist does.
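(Rough sketch of what that search loop could look like; both the generator and the "appeal" model here are made-up stand-ins, not any real library:)

```python
import numpy as np

def generate_image(z):
    """Stand-in for a trained image generator: latent vector -> image array."""
    return np.tanh(np.outer(z[:64], z[64:128]))  # fake 64x64 "image"

def appeal_score(img):
    """Stand-in for a paired model trained to predict human appeal."""
    return float(img.std() - abs(img.mean()))    # fake aesthetic heuristic

# Random search over sparsely-sampled latent directions, keeping what scores well.
rng = np.random.default_rng(1)
best_z, best_score = None, -np.inf
for _ in range(1000):
    z = rng.normal(0, 2.0, size=128)   # larger variance ~ "less sampled" regions
    score = appeal_score(generate_image(z))
    if score > best_score:
        best_z, best_score = z, score
```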
It already generates new pieces of art. I’m referring to an ML inventing new skills or movements.
What do you mean by inventing new skills or movements? Do you mean like, inventing cubism? Because I wouldn't count on near to medium term versions of these models not being able to do the equivalent of inventing cubism if you gave them time to rip and some degree of audience feedback (which human artists also get).
Impressionistic trippy AI-generated art is its own movement now.
Well, you can tell it to create art no one has made before and it will manage to do it. With a bit of work you can likely make it bullshit out an explanation for it.
> The machine isn't learning, it's still just executing a series of instructions.

This is a bit misleading - the point, for models that "learn," is that the instructions it's executing have their behavior parameterized by a large amount of training data. You can see the effects of this over time as a model becomes better trained, i.e. it "learns" and is able to perform better at the task it has been trained for. The instructions haven't changed, but it's been exposed to more data and it has "learned" from that data. This is true learning, in the sense of a dictionary definition: "gain or acquire knowledge of or skill in (something) by study, experience, or being taught." Underscoring this point, it's also possible to use the exact same "series of instructions" to perform entirely different tasks, depending on how the system is trained.

> most of the ML concepts in use have been around for decades, some all the way back to the 60s

Some of the basic concepts, yes. But there have also been many significant advancements in the details in every decade between at least the 80s and now. Modern "deep learning" was considered a breakthrough in 2006 with the publication of "A fast learning algorithm for deep belief nets" - the core breakthrough was to train layers of a neural network independently to get better initial weights, before training the network as a whole. That method was a breakthrough then, and was instrumental in reviving neural network research, but has since been superseded by other techniques. The field today depends heavily on techniques that didn't exist in the 2000s or earlier. It's analogous to almost any technological development, really - yes, the Wright brothers flew a working motorized aircraft in 1903, so "the concept has been around for decades," but that aircraft was a far cry from a modern passenger jet, fighter jet, or indeed any modern aircraft.

> But there's no reason to believe it in any way resembles true intelligence.

If nothing else, it's redefining what we think of as "true intelligence" and forcing us to think harder about what that means. A world champion Go player talked about giving up the game after being beaten by an ML model - something that we took as a sign of intelligence, very difficult to automate, ended up being done better by a machine that had "trained itself" by playing against itself. It's true that no ML model today "resembles true intelligence" in more than a superficial way, but it's not difficult to imagine that gap being closed significantly in future. (Consciousness/sentience is a whole other story!)
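(Going back to the "just executing a series of instructions" point: a toy illustration of the same instructions "learning" different tasks from different data. This is a hypothetical minimal logistic regression in plain numpy, not code from any real framework:)

```python
import numpy as np

def train(xs, ys, epochs=5000, lr=0.5):
    """The same 'series of instructions' every time; only the data differs."""
    w, b = np.zeros(xs.shape[1]), 0.0
    for _ in range(epochs):
        pred = 1 / (1 + np.exp(-(xs @ w + b)))   # logistic regression forward pass
        grad = pred - ys
        w -= lr * xs.T @ grad / len(ys)          # parameters change; the code doesn't
        b -= lr * grad.mean()
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
w_and, b_and = train(X, np.array([0, 0, 0, 1.0]))   # this run "learns" AND
w_or,  b_or  = train(X, np.array([0, 1, 1, 1.0]))   # identical instructions "learn" OR
```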
That's fair, I'll just add the caveat that actually learning and achieving the same result as learning might not make a difference.
The difference is that we have to choose to give it the data and what data to give, so even a massive-brain super-powerful machine learning application wouldn’t be able to find the most objective philosophical and morally correct action, for instance. Even if you program another machine learning application to curate the data you give it beforehand, you’re still ultimately deciding what data the “philosophy AI” will get and therefore what it will decide is best.

This is great when you have a specific output you want from a specific input (like with art or writing ML algorithms) but terrible when you don’t know what output you want. How do you prune a condition tree without knowing the conditions?

In fact, this exact argument could be made against supposed General AI, too. We have to decide what to make the General AI want and we have no idea if that would be the correct thing. The best option would be releasing the General AI into the general population and just allowing it to live a normal life, albeit with greatly increased intelligence and stuff.

What “rationalists” want is a god that can tell them what to do, with an objective basis. My point is that this is not possible, and implying it is is the root of all of this misinformation. In fact, to arrive at a completely objective conclusion we would have to have already achieved omnipotence and given all of that data to this theoretical AI. Where is this magic knowledge that rationalists worship coming from? The internet? We’ve seen how reliable the internet can be. We’re the ones who have to teach the AI how to walk and talk. And even slight bias in that can change how it turns out. And EVERYONE has slight bias, no matter how unbiased and objective you think you are.
> The difference is that we have to choose to give it the data and what data to give, so even a massive-brain super-powerful machine learning application wouldn’t be able to find the most objective philosophical and morally correct action, for instance.

Sure, no one is arguing that, I don't think. I just don't see what finding the most objective philosophical and morally correct action has to do with AGIs; we as humans are intelligent, yet none of us know what the objectively correct action is in every scenario.

> Even if you program another machine learning application to curate the data you give it beforehand, you’re still ultimately deciding what data the “philosophy AI” will get and therefore what it will decide is best.

Choosing the data that goes in doesn't mean we will know the output. Like with AlphaGo: it started with a set of rules and ended up making moves grandmasters couldn't understand. I don't see a "philosophy AI" being built, just an AI that is aligned with human interests, and we don't know how to do that yet.

> This is great when you have a specific output you want from a specific input (like with art or writing ML algorithms) but terrible when you don’t know what output you want. How do you prune a condition tree without knowing the conditions?

That's the big alignment problem, right? We have no clue, I don't think. We have lots of people trying to work on AI and not as many working on this problem.

> In fact, this exact argument could be made against supposed General AI, too. We have to decide what to make the General AI want and we have no idea if that would be the correct thing.

Sure, again, that's the alignment problem. The issue I see is that people who get close to an AGI might not care so much about finding "the correct thing" as much as they want to get it running at all, since being the first would be a huge motivation. That's a problem of "how do we create good AGI" rather than "how do we create AGI".

> The best option would be releasing the General AI into the general population and just allowing it to live a normal life, albeit with greatly increased intelligence and stuff.

Maybe, but there are many people who go through normal life and are not "moral", so that's complicated too. I'm not sure I'd call it the best option.

> What “rationalists” want is a god that can tell them what to do, with an objective basis. My point is that this is not possible, and implying it is is the root of all of this misinformation.

I don't see it like that. What it seems to me is that they think AGI can be very powerful, which it may or may not be, and in case it is, we need to make sure they're good rather than bad. Not only that, even if they're amoral, neither good nor bad, that they understand what we mean when we say we want whatever. Again, the alignment problem. It is not only a rationalist take; people like Sam Harris agree with these points.

> In fact, to arrive at a completely objective conclusion we would have to have already achieved omnipotence and given all of that data to this theoretical AI. Where is this magic knowledge that rationalists worship coming from? The internet? We’ve seen how reliable the internet can be.

I think the idea is not to have an omniscient god, but rather a superintelligence that is capable of solving problems where the bottleneck is intelligence. For instance, we might already have enough data and enough resources to cure cancer, and what we need is more intelligence devoted to solving that problem.
My point is that it would not be objectively correct or able to exponentially grow in the exact way techbros think it could, not that it couldn’t exist. Or more accurately, that AGI won’t look like a robot god, but rather just a very intelligent person. It would be able to grow, but only through experimentation and learning like, y’know, a person.

Think of it like this: it would need to conduct experiments to actually learn fundamentally new things, so it would be limited by material reality no matter how smart of an AI it is. The only other way to learn is pure computation, but anything that can be used to extrapolate by itself we already know how to extrapolate (we might need more power than we already have, but my point is that we already know the process). So, yeah, what we get is a very smart person conducting experiments and using computers to run computations of things we already know. Like, y’know, a scientist.

This possibility probably isn’t talked about as much because making a cool scientist isn’t nearly as marketable as making Robot God.
I agree with lots of what you're saying, just not the conclusion that it would be human-level intelligence. I'm not saying it would be a god, just that we can't understand very well what a superintelligence is like. I think I get caught up on this part:

> It would need to conduct experiments to actually learn fundamentally new things

This can happen at a time scale we can't keep up with, and even if we do, we might not understand fully how it got there, like we don't understand why AlphaGo makes the moves it does. It's not obvious to me and I don't trust the intuition that it will be a "cool scientist", unless by that you mean a cool scientist that figures things out at speeds we can't keep up with and with methods we don't quite fully understand.
Well, we do understand what happened with AlphaGo, we just can’t track it at a micro level. We already knew everything about Go, it just hyperoptimized specific strategies through what was effectively just iteration.
How exactly does it circumvent time constraints in experimenting on the material world? It was able to "experiment" with Go because it could simulate a metric shit tonne of Go games in a very short time Most of the material world is nothing like a game of Go – it's incredibly computationally expensive to simulate, and does not have simple rules and win conditions.
In my current research, for one, people collected a shit ton of data for years and I mostly go through it without having to do any experiments, all coding. Sure, I've run many experiments for previous research, but it's not all like that at all, and that's not unique to my field.
What research do you do exactly? And what data are you referring to?
Does that matter? I'm currently researching coastal fluid dynamics. I use velocity data collected by acoustic Doppler current profilers. We leave them at the bottom of the ocean or under a ship for a long time, sometimes years.
But I guess that's my point. The bottleneck here for any ML eventually becomes the real-world data collected by the current profilers. Can an ML model do crazy stuff with the data? Absolutely. Will it develop abstractions of some kind that humans don't have? Absolutely. (AlphaGo or AlphaFold both obviously have high-order abstractions about Go and protein folding that we don't currently have, at least in the sense that the model is drawing connections between data in a way we currently don't). And maybe you could simulate the coast using fluid dynamics and a sufficiently detailed measure of the coast. But is that actually going to be more efficient in practice than just going out and measuring with current profilers? Obviously not. At some point you need to get the data; and to ensure that the data corresponds to the real world.
I get that, but would you agree we already have a shit ton of data on a lot of things and the bottleneck to a lot of it is intelligence? With no extra data collection it may be able to solve Navier-Stokes, P vs NP, and a number of other difficult problems. I wouldn't call it a cool scientist, but a superintelligence. What is the part we disagree on? My claim is that AGI might be possible and we can't comprehend what an intelligence greater than ours would be like. I don't get people saying that it would be human-level; if narrow AIs are already superhuman in their narrow fields, why wouldn't a general one be superhuman too?
No, I don't agree with that at all. We don't have really any inkling of how scientific/theoretical progress actually occurs – intelligence is quite clearly only part of the story, and (despite what SSC would like you to think) may not even be particularly contingent on historical maverick hyper-geniuses. To me at least, it appears to be much more related to allowing ideas to gradually develop and percolate through society, each new idea forming a building block of subsequent ones down the line. Many of these new ideas rely on technology being built, new experiments being conducted, and then feeding all that back into theoretical work. This is a process that necessarily takes time. A lot of it may even just be down to randomness and dumb luck. It's not just about cranking the intelligence value super high.

I think it's entirely possible or likely that we will build a "superintelligence" (though I'm dubious as to how meaningful such a concept even is, with us having no particularly good functional definition of intelligence) at some point, possibly even within the next several decades. But I'm deeply skeptical that such an AI would have godlike power, or constitute a singularity, or whatever else they say would happen. I strongly suspect there will be significant diminishing returns.

I also reject the premise that we can't comprehend what a "superintelligence" will be like. We can't know exactly what will happen. But we can't make accurate predictions about many things at all. The entire MIRI research program is predicated on the assumption that you actually CAN comprehend what an ASI might be "like".

I do actually think though, that more work should be done on AI safety (maybe counter to many others on this sub). I think orgs like Anthropic and even Redwood are doing interesting stuff. MIRI on the other hand appears more like a joke with each passing year. But that's an understandable predicament when Yudkowsky is your most senior research fellow.
I'm going to push back on this one a bit: It is not clear that Intelligence is a single thing, that it's general or that any of the developments of the last decade have gotten us closer to a machine that thinks (like us). Before the marketing guys got on board, the machine vision, generative neural nets, etc., were called machine learning, which is much more accurate as to what's going on. Essentially we've developed ways to train systems to produce consistent results, but we *know* those systems are not performing these tasks the way humans do, that they rely on brute force advances in computation that don't necessarily scale.
It seems like you are agreeing with me that it is conjectural, though to me it seems pointless to deny that there have been genuine advances, even if they are not sustainable. Computers are now solving certain problems like responding appropriately to natural language prompts in a way that was not possible before.
No, I'm agreeing with you on some points (there are *functions* that computers can do now that they did not do a decade ago), and I'm not disputing that some (many) are valuable. I do not think any of them even approach the threshold for intelligence, and what we're learning from these black-box systems is that a lot of the attributes we assumed were integral to "intelligence" are actually really not.

On the natural language front: we can get machines to respond to natural language relatively reliably, but along the way we learned that the machines are very much *not* doing it the way we do. In fact, they're doing it in almost the opposite way: analyzing large amounts of human conversations and pattern matching through brute force. Compare and contrast with natural language acquisition in humans: imitation and repetition are the core of early acquisition, but they then generate novelty, spontaneity and originality in a way that no language model could, even in theory.

The core issue is that "intelligence" is a term without a functional definition. It's not even clear that intelligence is a single trait rather than a bundle of functional behavior modules reified into one purely for social value. If you can't define a quality functionally, you cannot extrapolate it to "general."
I think it's a delusion that any AI whatsoever, as a technology, can transcend being technology, i.e. become a tool that wields itself in a way free from all-too-human ends and existing material conditions.
Are you saying AI would always have to be guided by human hands? I tend to think that we will be able to create an AI that can operate independent of humans. I’m not sure that it will actually have the godlike power or intelligence explosion that some seem to think it will. But I don’t see any reason why we couldn’t create an AI that didn’t need humans to go about its business.
We already have technology that doesn’t require humans to do its tasks. I just generally assume when someone says they don’t believe in AI they mean they don’t believe in an omnipotent/god AI, because saying that video games don’t exist feels a bit silly.
I mean, I'm not really making a claim about AI *per se* but technology. As a technology, AI will require an extrinsic telos from humans, regardless of whether AI is remarkably independent in the way that it achieves it. I figure this is the most we can expect from AI - cleverer tools to do things with. If AI were somehow capable of deciding its own ends, then it's simply not technology anymore - it's no longer merely a means but also an end in itself. And I'm skeptical that giving technology a 'mind of its own' is achievable through just more powerful and complex computing. However, if, in some way, humans achieved such an artificial intelligence, by the same coin, it'd only ever be accidentally useful to humans. And not some kind of alien 'paperclip maximizer' or maximizing utility or whatever - that's already reading too much of 'technology' back into what is fundamentally not. It'd more closely resemble an aimless teenager than a god.
It won't and I don't think it's delusional at all to be so absolute about it. The closest thing we will ever see in terms of an AGI is human beings and they are fucking terrible so there ya go. It's simultaneously woefully naïve and absurdly arrogant to think that human beings *could* even create a true AGI let alone actually do it.
It’s really fucking weird this is being downvoted. Is this sub being brigaded or something?
It's not a defensible position, it's complete speculation on a question whose answer in the affirmative or negative is based on many things that are just unknown.
I’m curious, why do you think we couldn’t ever create an AGI? I mean, if we created a machine that just simulated everything a human brain did on a 1:1 level, wouldn’t that be an AGI? I don’t see anything about intelligence that would make it impossible for us to ever create one.
I have the same opinion, I don’t think any AI will magically become Jesus, but replicating the human brain has to be possible otherwise we wouldn’t exist.
I don’t know how you can be so certain about it. Computers can already solve certain types of problems better than people. In some cases, far better. I doubt there is anything magical about general problem solving either. It’s delusional to think AGI will solve all our problems, but in my mind not delusional to think it could exist at all.
AGI is fundamentally 1000% different than task specific problem solving. Computers were literally built for exactly certain types of problems. That's their absolute nature at its core. AGI is not even an inversion of that. It's nothing like it - at all. It's not like AAA baseball to major league, or baseball as a sport to football as a sport, or baseball to apples. It's like comparing baseball to the concept of understanding concepts.
Well, maybe you’re right. I don’t think that has been established yet though. I mean, our brains are not doing anything metaphysically different than what some future computer could do, as far as we know. I think your certainty about it is not well-founded.
I don't think you understand the difference between "true" and "just good enough." Even "just good enough" will put people out of work. And any capitalist is afraid of true AGI.

This is why I’m a Luddite.

It is so important to realize that AI is being developed by people who are the farthest thing from altruistic do-gooders. They’re doing it because they can, they just see it as a really difficult problem, and there’s nothing humans like better than hard problems that can be solved with technology. Hard problems the solutions to which involve creating a humane, caring, empathetic society? They don’t like those, because they can’t be solved by technology and besides they don’t really understand them.

*they*... you are talking out of your ass and sound as pretentious as the tweet.
Just don’t try to sell me your AI.

dang, these people read one story about a lizard and they’re suddenly all major bootlickers.

This post turned me into a luddite.

The Luddites did nothing wrong™️
Correct. (E: makes me wonder if there ever was a feudal lord who broke the feudal contract, sort of, by mass-importing cheaper labour to replace his current workforce, and what happened to them.)
> importing cheaper labour

Like, replacing serfs with other serfs who will pay more of their harvest or labor time to their liege? Why not just tax your current serfs higher? I can imagine a lord who replaces a landless peasant laborer with someone else who is willing to enter serfdom in exchange for the land tenure it gives, but that doesn't break feudal law.
> why not tax them higher

Because I'm setting up a stupid thought experiment to compare the current capitalist wage-stealing system being replaced by robots with the feudal system. And I'm wondering if there have ever been historical parallels.
I would see the first comment as sort of confusing in isolation, but when contrasted with the second I just feel forced to agree with it. So, same.

The second poster seems awfully confident that the result of creating an economy that didn’t need human labor would be universal basic income and not, you know… a lot fewer humans.

ok, hypothetically, the big men using the AI don’t need to pay humans anymore to do a plethora of jobs.

So, free things for everyone? Nah, if I were in his shoes I’d keep it.

We only then will need to build a giant conveyor belt to feed people to it for energy, and then we can all chill out and never mind purpose and meaning. Trust AI. We’re not merely suggesting it.

Second comment, to clarify

This could be valuable. Imagine the potential of AI to give the average person fewer hours in their workweek? Afford to pay for your needs with only 10 hours of work?

So they want machine learning communism. The results of that are predictable…

Why not just make regular communism instead of causing thousands of people to starve for something you’ll have to do a revolution for anyways, because it’s not like rich people are just going to let you build Robot Jesus unhindered?
That has failed in every country that has tried implementing it. I'd rather go for something else entirely.
Bruh
I'll tag in; I don't think you can use historical failures of socialist nations to conclusively prove they are bad in theory, for several reasons.

1. They didn't all fail. Cuba is a thriving nation with better health indexes, literacy rates, life expectancy, etc. A lot of their surplus production goes into regional aid and they're pretty popular in South and Latin America.
2. Anyone even leaning towards leftist popular movements in developing nations during decolonization post-WWII got a CIA-sponsored coup or extensive sanctions or both. It's hard to determine, experimentally, if a resource allocation system (economic) works if the experiment is interfered with constantly for ideological reasons.
3. Among capitalist democracies there's a clear correlation between "leftist" political policy and beneficial social metrics, such that leftist nations tend to have desirable social outcomes.
Which country do you currently live in?
[deleted]
Of course it is. For reactionaries, "turning communist" is an expression of dissatisfaction with whatever country they occupy - the specific circumstances are irrelevant.
looks like you wandered into the wrong subreddit, boyo
And? We can disagree on politics without starting a fight.
this isn't r/debatecommunism, it's r/sneerclub, if you want to rehash this boring debate go over to changemyview or something
I don't get why you keep bringing it up? 🤔
Euewwww a delusional centrist. [Get their ass!](https://www.youtube.com/watch?v=XgbphXbqOgY)
Yeah, everyone knows that [Kerala is a failed state, as the second least-impoverished state in India](https://en.wikipedia.org/wiki/Kerala) and the 8th most economically prosperous state out of 33 states. Those communists and socialists that keep getting elected there have totally failed.
Yes, we've seen the skulls.
Predictably good.