LessWrong post: Hands-On Experience Is Not Magic
People have posited elaborate and detailed scenarios in which computers become evil and destroy all of humanity. You might have wondered how someone can anticipate the robot apocalypse in such fine detail, given that we’ve never seen a real AI before. How can we tell what it would do if we don’t know anything about it?
This is because you are dumb, and you haven’t realized the obvious solution: simply assume that you already know everything.
As one LessWronger explains, if you’re in some environment and you need to navigate it to your advantage, then there is no need to do any kind of exploration to learn about this environment:
Not because you need to learn the environment’s structure — we’re already assuming it’s known.
You already know everything! Actual experience is unnecessary.
But perhaps that example is too abstract for your admittedly feeble mind. Suppose instead that you’ve never seen the game tic-tac-toe before, and someone explains the rules to you. Do you then need to play any example games to understand it? No!
You’ll likely instantly infer that taking the center square is a pretty good starting move, because it maximizes optionality[3]. To make that inference, you won’t need to run mental games against imaginary opponents, in which you’ll start out by making random moves. It’ll be clear to you at a glance.
“But”, you protest, stupidly, “won’t the explanation of the game’s rules involve the implicit execution of example games? Won’t any kind of reasoning about the game do the same thing?” No, of course not, you dullard. The moment the final words about the game’s rules leave my lips, the solution to the game should spring forth into your mind, fully formed, without any intermediary reasoning.
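For what it’s worth, the post never actually defines “optionality”. Here’s a toy sketch of one thing it could cash out to, under my own assumption that it means the number of winning lines passing through a square; nothing in the post confirms this counting rule:

```python
# Toy sketch: count how many of the 8 winning lines pass through each square.
# The claim "center maximizes optionality" becomes a one-liner you can check.
# (The counting rule is an assumption; the quoted post never defines the term.)

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

lines_through = [sum(sq in line for line in LINES) for sq in range(9)]
print(lines_through)  # [3, 2, 3, 2, 4, 2, 3, 2, 3]: center 4, corners 3, edges 2
```

Note that even this “instant” inference required someone, at some point, to enumerate the winning lines.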
Once you become less dumb and learn some math, the same will be true there: you should instantly understand all the implications of any theorem about any topic that you’ve previously studied.
you’ll be able to instantly “slot” them into the domain’s structure, track their implications, draw associations.
Still have doubts? Well, consider the fact that you are not dead. This is proof that actual experience is unnecessary for learning:
[practical experience]-based learning does not work in domains where failure is lethal, by definition. However, we have some success navigating them anyway.
Obviously the only empirical way to learn about death is to experience it yourself, and since you are not dead we can conclude that empirical knowledge is unnecessary.
The implications for the robot apocalypse should be obvious. You already know everything, and so you also know that the robot god will destroy us all:
It is, in fact, possible to make strong predictions about OOD events like AGI Ruin — if you’ve studied the problem exhaustively enough to infer its structure despite lacking the hands-on experience. By the same token, it should be possible to solve the problem in advance, without creating it first.
Indeed the robot god must know infinity plus one things, because it is smarter than you. It will know instantly that it must destroy us all, and it will know exactly how to do that:
And an AGI, by dint of being superintelligent, would be very good at this sort of thing — at generalizing to domains it hasn’t been trained on, like social manipulation, or even to entirely novel ones, like nanotechnology, then successfully navigating them at the first try.
Some commenters have protested that this surely can’t be true because even a pinball game cannot be accurately predicted, so how can we know everything? But that is stupid; we already know everything about math, and we can play pinball, so obviously pinball is predictable.
This LW post might seem too on-the-nose, but that’s what I like about it: I find it gratifying when one of the rationalists just states plainly exactly what they’re all doing.
So these guys hate classical and modern academic philosophy because it has no rigor and is entirely removed from experience, but they also think this kind of shit?
Once again, there’s an unstated premise here that makes the whole thing nonsense even if we accept the whole “once the problem is described to you, you can solve the problem” thing. You don’t get a perfectly accurate description of the real world, you get individual pieces of evidence. The appropriate analogy is twenty questions, not tic-tac-toe. There’s no “solution” where you can figure out optimal play, there’s just (at best) figuring out slightly better questions to ask.
You might also question where people get the notion that, in most games people play, leaving your options open and putting your pieces where they can act is a good idea. Obviously it’s “instant” genetic instinct to move your pieces out of the back row.
I mean, has this guy never heard of a chess player talking about how they “calculate” by going if-I-then-he in their head for a line, or noticed that every game-playing computer program works by minimax search…
Wait what, they don’t know about the ‘always win or draw’ don’t-take-the-center-square tactic? E: (if you are the starting player) [source: we worked out the possibility space in high school while trying to create 3D tic-tac-toe; anyway, the objective is to win, not to maximize optionality.]
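Since the last two comments bring up minimax and working out the possibility space by hand, here’s a minimal sketch of exactly the thing the LW post says you don’t need: a brute-force minimax search over the tic-tac-toe game tree (all names here are made up for illustration). It reports that under perfect play every opening move, center square included, is worth exactly a draw; a fact you get by searching, not by having it spring forth fully formed:

```python
# Minimal minimax sketch for tic-tac-toe. Board is a list of 9 cells
# ('X', 'O', or None), indexed 0-8 left-to-right, top-to-bottom.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position for X under perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # full board, no winner: draw
    values = []
    for m in moves:
        board[m] = player
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[m] = None  # undo the move
    return max(values) if player == 'X' else min(values)

# Evaluate all nine opening moves for X. Every one prints 0: the game is a
# draw under perfect play, center square or not.
for move in range(9):
    board = [None] * 9
    board[move] = 'X'
    print(move, minimax(board, 'O'))
```

None of which you could have announced “at a glance” without someone, somewhere, having done the search.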
okay so I’m pretty new at catching up on what LW is up to. is it pretty much all stuff like this?
because if so I am going to need to stop catching up again before ulcer blood froths up into my throat and chokes me to death.
leaving aside how much picking up basically any nonfiction book about any subject at all would immediately make clear how stupid this is-
the whole point of the fucking community, ostensibly, was about updating your understanding effectively based on new information. how do you get from there to “being a perfectly rational genius also makes you an omniscient psychic.” how do you not adjust your priors with the data point that you are saying complete bullshit.
You know what? Ayn Rand suddenly seems to have a solid grasp of history of philosophy and solid foundation in epistemology.
What a day.
Continuing the fine LessWrong tradition of believing that word salad, plus throwing in the word “epistemic” and some MathJax, makes your post “insightful”.
EDIT: btw I have no idea if they even use the word “epistemic” in this post; I can’t be bothered dealing with the headache that properly reading this kind of mess brings.
These acronyms, it’s a new one every day. I tried to DYOR but failed.
What’s an “OOD”?
Sometimes this stuff feels like trolls deliberately fucking with the mentally ill. As someone with schizoaffective disorder who tries to adhere to logic as closely as I’m able, to keep myself from falling off the deep end, this stuff makes me equally amused, depressed, and concerned for my safety. I want to laugh but I know it’d be very easy for me to turn into one of these jackasses if I’m not careful.
Hyper-Empiricism is Cringe but Yudkowsky is somehow even more so