We live in a corrupted system. The way to tackle corruption is to first acknowledge it exists. Only then is it possible to come up with ways of dealing with it, but don't make the mistake of believing the system can or will uncorrupt itself.

Elon Announces The Singularity

So we just had this from everybody’s favourite benefit scrounger…

https://x.com/elonmusk/status/2007738847397036143

For anyone who gives the remotest shit what Musk has to say about anything and wonders what the Singularity is, just like the term “AGI” it depends on who you ask, when you ask, and whether they are currently trying to raise more billions in their latest funding round, sell some more books or are doing the rounds on the tech bro/AI shill podcast circuit.

For example there is a helpful and nicely written article on the Wibu Systems website that is well worth reading. Actually it isn’t; it’s quite obviously AI-generated trash attributed to a woman named Daniela Previtali, but it does neatly demonstrate the ludicrous moving-goalposts syndrome that has become the hallmark of everything in the tech world, from Elon Musk announcements to “Expert” futurist prognostications about “AI” and related matters generally. In fact the goalposts are not merely moving any more, they are teleporting around like the crew of the Starship Enterprise.

Daniela’s blog post encapsulates this so well that it moves the goalposts and changes definitions multiple times in less than 700 words. It begins dramatically with the headline “When Software Thinks Faster Than You: Who’s in Charge?” and then after 82 words we have the subheading “Singularity is not about machines thinking”.

Wait, what? Then, among the many bullet-pointed lists and single-line sentences Daniela’s ghost-writer Chat Jippety has generated, we have this list to explain how, “in practical terms, singularity describes a moment when software systems”:

  • improve themselves autonomously,
  • make decisions at machine speed,
  • scale globally without friction,
  • and operate beyond the cadence of human oversight.

Let’s look briefly at those four things.

Are software systems improving themselves autonomously in 2026, and is this new? As always, the devil is in the details: it depends on what is meant by “improving themselves”. Are we about to see this happen?

Well, no. Self-improving could mean computer programmes that rewrite themselves to make themselves better, but no matter how much lying grifters like Elon Musk, Sam Altman and Mark Zuckerberg want you to believe that is either happening or about to happen at some point in the future, it will not. The Halting Problem and related undecidability results tell us that unambiguously, and none of the fast-talking Silicon Valley spivs will be able to change that. Self-improving could also mean machine learning. That is a thing; so-called neural networks do this to a degree. Are they new?

No.
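As an aside on the Halting Problem point above: Turing’s classic diagonal argument can be sketched in a few lines of Python. The function names here are purely illustrative; `halts` is the hypothetical universal halting decider that the argument assumes into existence in order to derive a contradiction.

```python
# Sketch of Turing's diagonal argument. `halts` is a HYPOTHETICAL decider
# that supposedly answers "does program(argument) ever finish?" for any
# program. No such total function can exist -- which is the whole point.

def halts(program, argument):
    """Hypothetical halting decider, assumed to exist for contradiction."""
    raise NotImplementedError("No such total decider can exist.")

def troublemaker(program):
    # Do the opposite of whatever `halts` predicts about `program`
    # when it is run on itself.
    if halts(program, program):
        while True:      # decider said "halts", so loop forever
            pass
    else:
        return           # decider said "loops", so halt immediately

# Feeding troublemaker to itself is the contradiction:
# halts(troublemaker, troublemaker) can be neither True nor False.
```

Whatever answer `halts` gives about `troublemaker`, `troublemaker` does the opposite, so no decider can be correct on every input, and “software that provably improves itself” runs straight into the same wall.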

We won’t go into too much detail, but the first artificial neural network was devised and proposed in 1943 by neurophysiologist Warren McCulloch and mathematician Walter Pitts. They developed a computational model based on algorithms and mathematics to simulate how neurons might work using electrical circuits. This was a theoretical model though, and the first practical implementation came in 1951: Marvin Minsky and Dean Edmonds built the SNARC (Stochastic Neural Analog Reinforcement Calculator), the first neural network machine, simulating a rat navigating a maze using 40 “neurons”. The first trainable neural network was called the Perceptron, developed by Frank Rosenblatt in 1957 and shown working in 1958. It was designed to recognise patterns, like the position of a black dot on a card.
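Rosenblatt’s learning rule is simple enough to sketch in a few lines of Python. This is an illustrative toy, not the 1957 hardware: the “card” is reduced to two made-up features indicating which half holds the dot, and the function names are mine.

```python
# Toy perceptron using Rosenblatt's learning rule: when a sample is
# misclassified, nudge the weights towards the correct answer.

def train_perceptron(samples, epochs=20, lr=1.0):
    # samples: list of (features, label) pairs, labels in {-1, +1}
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:                            # misclassified:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y                          # nudge weights and bias
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Dot position encoded as (left_half, right_half); +1 means "dot on right".
data = [((1, 0), -1), ((0, 1), 1)]
w, b = train_perceptron(data)
```

After training, `predict(w, b, (0, 1))` reports the dot on the right. That is the entirety of the mechanism: weighted sums and error-driven nudges, in 1957.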

Earlier than that we had Russian mathematician Andrey Andreyevich Markov bring Markov chains to the world in 1906. AI fanboys/girls will be at great pains to tell you that Markov chains are not the same as neural networks, and they’re right, they are not the same. But Markov chains did introduce the concept of probabilistic sequence modelling applied to (among other things) early natural language processing, which neural networks in the form of the modern day large language models have expanded upon.
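Probabilistic sequence modelling of the kind Markov introduced can likewise be sketched in a handful of lines: count which word follows which, then predict the most likely successor. The corpus and names here are made up for illustration.

```python
# A minimal bigram Markov chain over words: count transitions, then
# predict the most frequently observed next word.
from collections import defaultdict, Counter

def build_chain(text):
    chain = defaultdict(Counter)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current][nxt] += 1        # count transition current -> nxt
    return chain

def most_likely_next(chain, word):
    followers = chain.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat the cat ran"
chain = build_chain(corpus)
```

Here `most_likely_next(chain, "the")` returns `"cat"`, because “cat” follows “the” more often than “mat” does in the corpus. Scale the counts up by a few trillion tokens, swap counting for gradient descent, and you have the conceptual lineage of today’s text predictors.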

In the same way cars have improved (some aspects of which are admittedly debatable) since Karl Benz unveiled his Motorwagen some twenty years before Markov chains were devised, the efforts to get machines to learn and predict have improved a lot, and they go back over a century.

So 2026 neither marks the Rise of Skynet, nor the first instance of a machine that can “learn” in some capacity. Doing it faster, and more of it, with more processing and electrical power does not fundamentally change what is actually happening, in the same way that one of these…

…is not foundationally different to one of these…

This is not to say that the Bugatti Chiron is not an aesthetic and mechanical improvement on the Benz Motorwagen, or doesn’t provide a faster ride… of course it does. It is still however a mechanically propelled vehicle subject to the laws of physics and logic as we understand them.

What about the second claim in Daniela’s generated list, that of systems that “make decisions at machine speed”?

This tautological drivel is so ridiculously self-evident and redundant it barely requires any attention further than noting that of course a machine will make decisions, or indeed perform any task at “machine speed”. Perhaps this is the same speed as being referred to by Pfizer executive Janine Small when she said to the EU Parliament that they were “moving at the speed of science”. In other news, water has been discovered to have the innate properties of wetness and liquid.

How about claim three, to “scale globally without friction”? The big “AI” companies have scraped all the data they possibly can to train their precious models and have now run out, to the point that they are being sued and are talking about needing “synthetic training data”. I am not sure that meets the definition of “without friction”. Nor does the fact that they have run out of energy providers and are trying to power their monstrous datacentre constructions with jet engines, talk about building datacentres in space, and presumably plan in all seriousness to construct Dyson Spheres around entire stars to harvest energy. People’s electricity bills and the cost of everything else are going up as these huge companies leverage their Ponzi empires to buy up the power, water and computer components the rest of the world has become dependent on. The notion that any of this is scalable “without friction” is laughably false.

Finally we have claim four: to “operate beyond the cadence of human oversight”. What does that even mean? Setting aside the absurd use of the word “cadence”, if a machine operates beyond human oversight, that can only be because a human or humans programmed it to do something and then deliberately chose to ignore what it goes on to do. If I start a car engine, put it in gear, drop a brick on the accelerator pedal and then turn my back on the consequences, that is not a machine operating “beyond the cadence of human oversight”; it is blatant irresponsibility over something I am directly responsible for. Adding a few layers of abstraction and complexity to the process of sending a car speeding off into the distance does not make it somehow magical, or absolve me of responsibility when it inevitably crashes into something or someone. Factor in the young people who are now dead as a result of their interactions with “AI” and it becomes clear that Daniela Previtali and the rest of the “AI” boosters care about nothing other than money and status.

The article, after walking back its initial claim about machines that think and switching to how the Singularity is NOT about machines thinking, finishes with a dramatic question about software that “thinks and acts faster than you”, because of course it does.

None of this makes any sense. These people have completely detached themselves from reality, and from responsibility, either unknowingly or, worse, knowingly. While people like Daniela are (according to her bio) “currently leading both corporate and channel marketing activities, innovating penetration strategies, and infusing her multinational team with a holistic mindset”, whatever any of that corporate word-salad means, people are dazed and bamboozled by this enormous clown-show, being told that if it all sounds confusing, it’s because technology is moving faster than humans can keep up with.

This is not true. Technology is not moving especially quickly. All forms of computing have been at the point of diminishing returns for quite some time. We used to measure advances in computing by achieving the same amount of work, or more, with fewer instructions, fewer CPU cycles, fewer resources and less energy. That has largely not been the case for a while now. All the handwaving, the CEOs running around on stage in front of gigantic screens hyping up their wares and screeching that we should give them our money or be “left behind”, all the billions of dollars literally being set on fire in the name of “AI”, and for what? A fancy auto-completer and meme-generator that is wrong most of the time?

AI bros and gals will accuse the doubters of not being smart enough to understand. There is nothing new under the sun, and the Emperor’s New Clothes trick will keep getting used for as long as it works. Try asking anyone who tells you how “AI” is going to take over all our jobs. Get them to explain how. Get them to explain to you how probabilistic text completers that require a constant influx of human input to (barely) function, are going to replace all the humans.

I am working on a more extensive piece that will demonstrate how all of this is just smoke and mirrors, that “intelligent machines” are a myth and a scam, and how this has been proven repeatedly over the years. I will announce it on this site as soon as it is ready. For now, as the wise man Flavor Flav once said… “don’t believe the hype”.