“AI” Brainrot
1 February 2026
We have mentioned previously how intensely the Silicon Valley tech bros and bro-esses are pushing their latest Big Thing, the thing they are calling Artificial Intelligence. We have also previously looked, at a fairly high level (more of an overview than a low-level deep dive), at how the idea of “Artificial Intelligence” is nothing more than a pipe dream and a marketing term.
It seems, though, that there is no shortage of people who still fall for Big Tech marketing hype and are happy to be sold a pipe dream, which usually manifests as one of two promises:
- You can do more in less time (e.g. you’ll be 10x or 100x more productive), or
- You can do something without the previously required hard skills (e.g. you can code with zero coding skills or experience).
Generative AI, meaning Large Language Models plus the models aimed more at image and video generation such as Diffusion Models and Generative Adversarial Networks, is sold on the two main promises listed above. As it turns out, only a small percentage of actual people spend any money on this stuff despite the crack-dealer sales tactics, and that metaphor is entirely appropriate, because the people who do end up spending money on it, either their own or someone else’s, get completely hooked, as with a drug or gambling addiction… One more prompt, just one more, and the hallucinating chatbot will give me what I actually asked for. It usually doesn’t, and after wasting yet more time (and money, if they are paying users) they end up settling for something that isn’t what they asked for.
The drug metaphor continues to work: in the same way that people become more and more dependent on getting their “fix”, as their use continues over time their ability to think sensibly and rationally, and to do the things they used to be able to do, diminishes.
Outside the “AI” panacea-grifting Silicon Valley area, the next set of groups that have completely bought into the dream in one way or another are:
- Governments
- Academia
- CEOs
Governments love it as it promises to grant them more powers to surveil and tyrannise their populations. Academia loves it as it panders to their biases, sycophantically feeds their egos and means they can do less work than ever. CEOs love it because they are mostly greedy, ignorant cretins who get one-shotted by faked demos, are immediately and totally convinced by the hype, and think they are going to make a name for themselves as they “transform” their companies into infinite-money-glitch machines and fire all those pesky, demanding workers.
In this article we’re going to take a quick look at an example from Academia, mostly for comedy value, and also for the ironic “cautionary tale” experience.
On 22 January 2026 the following article was published (archive) on the website of the prestigious journal Nature…

We’ll go through the article and make some observations. The soon-to-be-devastated Marcel begins…
Within a couple of years of ChatGPT coming out, I had come to rely on the artificial-intelligence tool, for my work as a professor of plant sciences at the University of Cologne in Germany. Having signed up for OpenAI’s subscription plan, ChatGPT Plus, I used it as an assistant every day — to write e-mails, draft course descriptions, structure grant applications, revise publications, prepare lectures, create exams and analyse student responses, and even as an interactive tool as part of my teaching.
Well that’s super-nice. I am sure the students paying for tuition at the University of Cologne are thrilled that their professor is just using a chatbot to do almost his entire job, and that they could probably have cut out the middleman and saved themselves a small fortune. If nothing else, the fact that this professor seemingly has no shame in admitting, on a platform as high-profile as Nature, his near-total reliance on a chatbot to do his job proves this is apparently nothing to be embarrassed about, because it is now part of The New Normal[tm] and just what everyone does.
He continues…
It was fast and flexible, and I found it reliable in a specific sense: it was always available, remembered the context of ongoing conversations and allowed me to retrieve and refine previous drafts. I was well aware that large language models such as those that power ChatGPT can produce seemingly confident but sometimes incorrect statements, so I never equated its reliability with factual accuracy, but instead relied on the continuity and apparent stability of the workspace.
He found it “reliable in a specific sense: it was always available”. Kind of like herpes, then. Oh, and it also “remembered the context of ongoing conversations and allowed me to retrieve and refine previous drafts”. Aside from the bizarre anthropomorphizing from a professor who should know better (he thinks he is actually having conversations, that some actual conversing is happening with a collection of transistors arranged as logic gates made from metal and melted sand), what he is describing is any kind of document filing system.
So he’s trying to make this seem like “oh yeah, I know it can be wrong, I’m not stupid”, stressing how he depended only on the “continuity and stability of the workspace”. It sounds like he could have managed just fine with a filing cabinet, which would have cost maybe one or two months’ subscription to Chat Jippety.
Of course, it’s not remotely about the “continuity and stability”; it’s about just how much “work” he was able to not do, palming off AI slop as teaching materials on his presumably paying students.
The unhappy Marcel then explains…
But in August, I temporarily disabled the ‘data consent’ option because I wanted to see whether I would still have access to all of the model’s functions if I did not provide OpenAI with my data. At that moment, all of my chats were permanently deleted and the project folders were emptied — two years of carefully structured academic work disappeared. No warning appeared. There was no undo option. Just a blank page. Fortunately, I had saved partial copies of some conversations and materials, but large parts of my work were lost forever.
Oh noes. I took away my consent for my data to be stored, and it removed all my data. “Large parts of my work”, he says, were lost forever. Everything about that paragraph is so mind-numbingly stupid it is almost impossible to fathom how this guy gets through each day without needing to be reminded to breathe in and then out, or without running himself over with his own car like Oliver St John-Mollusc.

There was “no undo option”, complains the implacable Marcel. No way to undelete the data he selected to delete for data-privacy reasons, because if there were, it wouldn’t exactly fulfil the purpose of the action invoked, that being the complete removal of all personal data.
At first, I thought it was a mistake. I tried different browsers, devices and networks. I cleared the cache, reinstalled the app and even changed the settings back and forth. Nothing helped.
What? None of those things magically brought the data back? Outrageous! Surely now it’s time to get in touch with OpenAI and demand they restore the data he selected to delete, data which, by law, they are supposed to delete…
When I contacted OpenAI’s support, the first responses came from an AI agent. Only after repeated enquiries did a human employee respond, but the answer remained the same: the data were permanently lost and could not be recovered.
Here we begin the descent into pure absurdity. After spending two years cheerfully and proudly replacing himself with AI slop, he is enraged that the company that owns the AI slop generator he has used on a daily basis to short-change his students responded to his complaint first with an AI “agent”. The fact that this peak irony is completely lost on the buffoon only adds to the comedic value. Sadly for poor Marcel, the humans were also unable to assist with his request that magic happen.
Marcel is now going Super Saiyan with rage and adds a heading to the article. This dude is not messing around. The heading is:
Accountability gap
That’s right. Marcel is pounding the metaphorical meeting-room table demanding accountability, embarrassingly for him, seemingly not realising that he alone was singularly accountable: for becoming totally dependent on the chatbot, for not making full backups, for not understanding that relying on any single piece of technology is totally stupid and irresponsible, for his fundamental failure to grasp basic logic and the principles of data privacy, and ultimately for his choice to click the button. That was all on him and him alone. The only gap, it would seem, is between his ears.
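As an aside, the “full backups” part really is not hard. Any cloud service worth a subscription lets you export your data, and keeping dated local copies of those exports is a few lines of script. A minimal sketch (the function name and file paths here are purely illustrative, not anything OpenAI provides):

```python
import shutil
from datetime import datetime, timezone
from pathlib import Path


def backup_export(export_path: str, backup_root: str) -> Path:
    """Copy an exported data file into a timestamped folder under backup_root.

    Each run creates a new UTC-stamped directory, so successive backups
    never overwrite each other.
    """
    src = Path(export_path)
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    dest_dir = Path(backup_root) / stamp
    dest_dir.mkdir(parents=True, exist_ok=True)  # one folder per backup run
    dest = dest_dir / src.name
    shutil.copy2(src, dest)  # copy2 preserves file metadata where possible
    return dest
```

Run that against each export (e.g. `backup_export("conversations.json", "~/chat-backups")`) and two years of “carefully structured academic work” survives a consent toggle just fine.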
He says:
This was not a case of losing random notes or idle chats. Among my discussions with ChatGPT were project folders containing multiple conversations that I had used to develop grant applications, prepare teaching materials, refine publication drafts and design exam analyses. This was intellectual scaffolding that had been built up over a two-year period.
That he calls this “intellectual scaffolding” is simply laugh-out-loud funny. If he had any semblance of intellectual capacity and honesty he would not be in this situation at all, but he chose to have the chatbot do his job, and his sheer ignorance led him to delete his repository of generated garbage. And guess what? In true Academia fashion it’s someone else’s fault, not his.
That he voluntarily chose to `sudo rm -rf /*` his “work” is not the point, according to Marcel, who explains:
We are increasingly being encouraged to integrate generative AI into research and teaching. Individuals use it for writing, planning and teaching; universities are experimenting with embedding it into curricula. However, my case reveals a fundamental weakness: these tools were not developed with academic standards of reliability and accountability in mind.
Obviously he is correct that the use of Generative AI is being “encouraged” in the strongest possible ways, but that does not absolve anyone who actually uses it of the responsibility to be sensible and understand a bit about the basics. It is like hitting your thumb while using a hammer and whinging that the hammer was not developed with carpentry standards in mind.
Marcel finishes off this feeble diatribe with two paragraphs containing yet another complaint that OpenAI did not keep a backup of the private data he explicitly asked them to delete for privacy reasons, and how OpenAI had ultimately “fulfilled what they saw as a commitment to my privacy as a user by deleting my information the second I asked them to”.
So after all that: yeah, they did what I asked them to, when I asked them to do it. Watch out guys, don’t make the same mistake I did.
Chatbot-induced brain-rot is a thing, and the unhappy Marcel is a very good example of it.