EDIT: Some users correctly pointed out that the title of this post does not literally reflect the content of the linked article, and is therefore misleading. See this comment and the answers below.
Language models seem to be much better than humans at next-token prediction - LessWrong
It seems like the human and the language model are each being asked to repeatedly guess the next token in a real document.
ah yes, because there definitely is something that it is like to be a language model
Here’s the game they used to collect the data: http://rr-lm-game.herokuapp.com/
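The game is essentially scoring top-1 next-token accuracy: at each position you guess the next token given the text so far, and you get a point only for an exact match. A minimal sketch of that scoring loop, with a toy bigram frequency predictor standing in for the language model (the predictor and all names here are illustrative, not what the game actually uses):

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    # Count, for each token, how often each token follows it.
    succ = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        succ[a][b] += 1
    return succ

def top1_accuracy(model, tokens):
    # Same setup as the game: at each position, guess the next token
    # from the context; score 1 only if the guess is exactly right.
    correct = 0
    for i in range(1, len(tokens)):
        prev = tokens[i - 1]
        guess = model[prev].most_common(1)[0][0] if model[prev] else None
        correct += (guess == tokens[i])
    return correct / (len(tokens) - 1)

text = "the cat sat on the mat and the cat sat on the rug".split()
model = train_bigram(text)
print(round(top1_accuracy(model, text), 2))  # 0.83 on this toy text
```

The human version of the game is the same loop with a person supplying `guess`; the linked article compares those per-token hit rates against a real language model's.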