“Best humans still outperform”
One turning point in the history of cope around artificial intelligence
A few years ago I was tickled by an article headline in a serious academic journal:
Best humans still outperform artificial intelligence in a creative divergent thinking task
Remarkable! Man bites dog! It had become newsworthy, worth checking (and, I perceive, worth a little self-congratulatory celebration), that there remained any domain where mere man could still hope to contend with the machines — at least, the best humans still could! (Could you?)
A message from the future
That was 2023. I think what stood out to me at the time[1] was that this was in some sense early. Not early in the story of AI — although ChatGPT and StableDiffusion, each less than a year old, had captured the public attention in a way which earlier AI hadn’t, these were merely the latest in a long lineage of gradual developments — but an early sign of a reckoning, an attitude shift in how humanity would grapple with these new machine capabilities we were conjuring fitfully into being.
I’d already been worrying for years that things might get out of hand with AI (and had even started writing about it). I was hardly the first![2] But this had felt almost like a perversely secret concern (how can people not see what’s coming?? — but they didn’t), one which humanity at large appeared destined to ignore until either it was too late… or, if we somehow played it right, until a splendid apotheosis of world peace, unlimited bounty, health and longevity delivered by machine intellect. (In fact I think those remain real prospects, and it’s absolutely in our hands to determine which outcomes we get.)
What this headline implicitly spoke of, the subtextual worldview shift betrayed by the phrasing — “Best humans still outperform” — was that we had woken up and viscerally felt the reality that even the ‘best’ humans might genuinely need to watch their backs. The machines were coming. It was no longer (had never been) a joke or a fairy tale.
This headline, seemingly from a near future in which it was taken for granted that machines, in general, dominated human capabilities, showed what was coming. Headlines like it are now commonplace — perhaps more common than those (now almost boring!) headlines adding to the litany of tasks AI now outcompetes human experts at.
The world changed
The world changed. Not because the world had actually yet changed (much), but because humanity, in our limited and faltering foresight, had noticed that, soon, it might. That murky perception of the future, humanity’s near-unique hallmark and blessing, memetically reverberated and has worked its way into our collective discourse.
In this way, I’m incredibly grateful to the ‘ChatGPT moment’. Rather than implicitly relying on a plucky band of vaguely foresighted but ultimately underpowered ‘sci-fi weirdos’, humanity as a whole is entering the conversation. We’re all stakeholders in the trajectory of this world-transforming sphere of technology, and all kinds of people are beginning to act like it: people with skillsets and perspectives which we’ll need, which had been lacking, in earlier debates. Law theorists, philosophers, engineers, anthropologists, economists, statespeople. It’s a thickly textured problem. It’ll need more than people like me (aspiring polymath though I may be) to solve it!
These cultural conversation shifts are fickle but surely incredibly consequential. 2025 felt like another shift, to me, and 2026 so far — with AI now carrying genuine national security implications and sitting at the centre of dirty political manoeuvring — seems to suggest that both the training wheels and the gloves are off, as Dean Ball recently put it. It’s a little scary: powerful and not altogether friendly forces have turned their eye to the potential potency of emerging tech, and they[3] may wrestle for it, even under the risk that they destroy much in the process or that the tech spills entirely out of their control.
The world, changed
We can be doing better! People can get curious, find out what’s what, consider stakes and what realistic paths we might prefer. Don’t make the mistake of ‘nowsight’ bias — today’s AI are the least capable there will ever be! Take seriously where things might go, and notice if the conversation seems to miss something important that you understand well: it’s still early and the ‘experts’ are mainly that by virtue of noticing the importance of AI a little sooner than everyone else[4]. Let’s also grab the new tech building blocks we have and bootstrap the way we do foresight, collective intelligence, and coordination.
Don’t mistake me for naively assuming machines will blast through every bottleneck in short order. There are plenty of adaptability, dexterity, and generality bottlenecks between here and self-sufficient machines. Perhaps I’ll write something about that soon.
1. (I intended to blog about it at the time, but… you know how it is with drafts.)
2. Quoth Turing, some time in the 1950s:
once the machine thinking method had started, it would not take long to outstrip our feeble powers… At some stage therefore, we should have to expect the machines to take control.
Even Turing was not the first to perceive that thinking machines could pose takeover hazards.
4. I’ve been bemused several times recently upon being referred to as an ‘expert’, that mythical breed.