The stupidity of AI fear-mongering
This blog, it has begun to dawn upon me, has become far too preachy and pessimistic—even for me. I’ve recently written essays on the perils of so-called “thought leadership”; how the recent (successful) lawsuit against the Internet Archive is a worrying indicator of our deeply misplaced cultural priorities; how Italian museums have lost the plot; how obnoxious political spin has insinuated itself into every nook and cranny of our lives; how honorary doctorates debase the core values of a university; how the French have lost whatever ability to think critically they might have possessed; and even how art history research has significantly declined from the rigorously objective standards set decades ago by the likes of John Shearman.
It’s time, I think, for a bit of optimism. And in my own perversely obdurate fashion, I’ve decided to provide it by trumpeting something that’s receiving an increasing amount of negative attention these days: artificial intelligence (AI).
I don’t pretend to be an expert in AI, but the few times I’ve knowingly encountered it—like when Photoshop improves its algorithms sufficiently to allow me to eliminate the ridiculous proprietary watermarks that the Uffizi Gallery insists on sticking on photographs of Renaissance drawings in its collection—I’m invariably delighted.
Lately, I’ve decided to independently produce my upcoming series of narrative art documentaries in full French, Italian, German and Spanish versions as well as English, by directly harnessing the rapidly advancing and hugely impressive AI linguistic technologies that are all around us.

But such unfettered enthusiasm goes very much against the societal grain these days, as dire warnings about the evils of AI have become a sort of cottage industry, particularly among those who trumpet themselves as “thought leaders” (OK, I’ll stop).
But in keeping with my new-found spirit of optimism, I’m here to tell you the good news: they’re all wrong.
So far as I can tell, there are three basic fears about AI rumbling through the Zeitgeist:
- AI will eliminate a significant number of current jobs;
- AI will make it much easier for those determined to propagate false information to do their thing;
- AI will make us all irreparably lazy and stupid and therefore enable machines to take over our world and enslave us.
All of these concerns have their merits. But all are both wildly exaggerated and significantly at odds with the lessons of history. Relatedly, and even more to the point: none of them actually has anything properly to do with AI at all.
Let’s take each in turn.
It’s inevitable that AI, like any successful technology, will eliminate jobs, just as the advent of the automobile (“horseless carriage”) naturally diminished the number of farriers, and the appearance of the dictaphone and the word processor made knowledge of shorthand and precision typing largely redundant.
Not only is the phenomenon not limited to AI historically, it’s not even limited to AI within our modern world: given the rapidly evolving definition of what, precisely, constitutes AI, singling out “artificial intelligence” as the one overarching technological disrupter often makes little sense at the level of algorithms, let alone in terms of anything most of us encounter in our daily lives.
Nobody is going to want to stop genuine technological progress, and modern-day Luddites are deservedly going to fare no better than their loom-destroying predecessors (in particular, it seems abundantly clear to me that anyone thinking about becoming a professional translator these days should seriously reconsider).
But that doesn’t mean that we should just collectively shrug our shoulders and indifferently watch as those displaced by cheaper and more effective technologies swell the unemployment rolls. The solution to this centuries-old problem has long been mapped out by thoughtful and knowledgeable people: a caring society does its best to psychologically and financially support those whose livelihood has been made redundant by new technologies, while vigorously retraining those who are willing and able to shift into the new fields that the very same technology has created.

Because there will always be such new fields (those would-be translators, for example, might well turn their attention to the fascinating challenge of interspecies communication—see here).
There are those who maintain that, this time around, the number of new employment opportunities will be considerably smaller than the number eliminated. I have no idea what they base such a conclusion on (other than the fact that it terrifies large numbers of people and thus greatly assists them with their primary goal of getting substantial media attention to promote their daring “thought leader” credentials—OK, I’ll really stop now).
Of course it’s logically possible that the situation is different now because we’ve suddenly reached some sort of techno-capitalistic tipping point, but in order to make such an argument, you have to—well, actually make an argument. Whenever somebody stands up and says, “This time things will be strikingly different than at any other well-documented time in human history”, the onus is naturally on them to convince us of why that should be so. And that’s almost never even attempted in our current “AI is the ruin of society” social maelstrom, let alone convincingly formulated.
So that’s the first point.
Moving on to the second concern: unquestionably, developments in AI will make it far easier for both people and governments to engage in odious activities, from controlling information to averring manifestly untrue happenings and all the rest. Yes, yes, yes. But once again the problem here isn’t AI per se, but the understandable suspicion that those in positions of power will be out to deliberately dupe us to satisfy their political, personal or corporate agendas.
Put another way, Hitler and Stalin controlled millions of people through the repeated bombardment of villainous lies explicitly constructed to support their own heinous ends. And they did so well before the advent of the personal computer, let alone AI.
Nowadays, there is considerable anxiety that the news media (for lack of a better term) is determined to significantly distort reality in order to satisfy its own corporate (and political) interests, together with the fear that nefarious external sources (e.g. Russia and its allies) are deliberately spreading misinformation to further their own global agendas.
Those threats are very real, and they are doubtless considerably amplified by the new technologies, such as AI and social media. But the way to address them isn’t to simply target our ire at the technologies themselves, any more than focusing our efforts on the twin evils of radio and photography would naturally have led to the overthrow of the likes of Hitler and Stalin.
Instead, the way forward is clearly to equip the citizenry with the means to protect themselves from being led astray, through a combination of forward-thinking laws and regulations that can prevent and detect such abuses in their early stages, and the active promotion of individual critical-thinking skills.
Which brings me to the last of the three oft-mentioned worries: that AI will make us all sufficiently lazy and stupid that the machines will eventually enslave us. Aside from the fact that the specter of AI enslavement is clearly nothing more than a silly Hollywood creation that no reasonable person would even bother engaging with (a truly intelligent presence would hardly go to all the trouble of enslaving us, but would simply ignore us—just as there’s no imminent threat, so far as I’m aware, of us setting out to “enslave” ants or iguanas), there’s the obvious point that history amply demonstrates that whenever people are determined to be lazy and stupid, there’s little that can stop them from doing so.
Blaming AI for our apparently burgeoning penchant for laziness and stupidity is no more reasonable than blaming sudokus or video games or the Eurovision Song Contest for our seemingly unbounded collective resolve to waste our time.
Our current tragedy is not that the new “generative AI” might produce impressively engaging new content; it’s that not enough humans (particularly young humans) feel motivated to do so themselves. True, developments such as ChatGPT certainly make it much easier for students to pretend that they know things they don’t care about, but students have been faking their way through educational hoops for as long as we’ve been mandating that they be jumped through.
The challenge, as always, is to find a way to expose people to the many deeply fascinating things out there that they might actually care about, while encouraging them to manifest their interest in the most creative and impactful way possible.
And for that, AI—like many other aspects of modern technology—is a simply wonderful tool to have in one’s armory.
Howard Burton, September 30, 2024
Howard is the author of six non-fiction books on various topics and the creator of Pandemic Perspectives (2022), Through the Mirror of Chess: A Cultural Exploration (2023), Raphael: A Portrait (2024) and Botticelli’s Primavera (2025), the first film in our Renaissance Masterpieces Series.
