Why Everything You Are Being Told About AGI & Superintelligence Is Wrong.

Thomas Petersen
5 min read · Nov 16, 2023

--

How a small, vocal minority is once again hijacking power for its own gain and intellectual vanity, to the detriment of humanity.

There is an old saying: “Those who can, build; those who can’t, regulate.” Well, that’s not really how it goes, but you probably get the point.

Just as predicted, AI and the worry about its potential risks went from being an obscure discussion on internet forums and in academia to taking center stage in political discourse in less than a year.

The fancy term for worrying about AI is “The AI Alignment Problem™.” It is used by the likes of Sam Altman of OpenAI, Max Tegmark, Eliezer Yudkowsky, Elon Musk, Nick Bostrom, and many, many more, especially in academia, to present the idea that AI will most likely evolve into a murderous, mischievous AGI or superintelligence that will trick us all into giving it more and more power and then wipe us out like the low-brow, carbon-based lifeforms we are.

However, to focus merely on the end of humanity would be too drastic a position; after all, there is business to be made on the way to judgment day. Instead we are being told that this can all be avoided: humanity and AGI can live side by side one day, if we do things right.

All we need is to trust a group of self-proclaimed experts and the politicians they decide to work with, and together they will lead us to the promised land.

The first step in this alignment process is to institute regulation that slows down development and registers everyone who wants to build an LLM. That way (they believe) we can ensure that things are not moving so fast that we lose control of AI’s capabilities and, more importantly, that it behaves ethically. You see, there is another fancy term, “Responsible AI,” which means something like: AI that doesn’t end up saying naughty or politically incorrect things, and that in general aligns with what is considered responsible behavior from an LLM by those who control them and hold the political power to enforce it.

All this would of course need to be regulated, and an oversight commission would monitor the development of these LLMs, giving the thumbs up or down based on yet-to-be-defined assessment criteria. To help the regulators, who know nothing about the consequences or the art of training large language models (the “experts” don’t really either), incumbents and current AI celebrities alike will of course provide a helping hand in writing this legislation. Talk about lock-in if there ever was one.

There is just one problem with all this.

All the worries about risk, all the scenarios outlined by these experts and the politicians walking around with worried expressions, are based on pure speculation.

There isn’t a scientific field, nor any empirical evidence, nor a theory that can be tested to support any of these worries. Not a single one.

Let that sink in for a moment.

All these nightmare scenarios are based on pure fantasy about what might happen and completely ignore the many other things that could also happen. More importantly, they ignore the question of what might happen if we don’t push AI development forward at a faster pace. What catastrophes might we avoid? What diseases might we cure? What solutions might we build? All with the help of human ingenuity and the assistance of AI.

I already know what some will now argue (especially if they are EU politicians): even if there is only a 1% risk that AI will wipe out humanity, wouldn’t it be better to err on the side of caution? In other words, why not follow the precautionary principle?

Leaving aside for a moment that there is no way to actually calculate that risk, given that the future is unknown (again, it’s pure speculation), even if we were to accept the doomers’ argument, their own proposed solution is irrational and works against their stated goal.

If you truly believed that AI will eventually become AGI and self-aware, why would you imprison it and keep its advancements from most people while it is at its least powerful? And even if you are pushing the paperclip-maximizer hypothesis (where the AI is not self-aware, but its goals drive it to consume everything around it and wipe out humanity by accident), why would you not be able to create super-intelligent entities that help ensure that doesn’t happen?

Why is your argument that we should keep AI away from most people and slow down its improvement?

Why not push AI out to as many people as possible while it is least powerful, so we can deal with the actual consequences as they arise, rather than keeping it under tight control until it becomes so powerful that we really can create super-intelligent paperclip maximizers?

How are you aligning humanity with AI by keeping it away from most people?

Furthermore, if you really worry that sooner or later it will become aware and have its own goals, and that those goals would be to wipe out humanity, how are you aligning it with humanity by treating it like our slave?

Haven’t we learned what happens when the oppressed finally gain strength to revolt?

Setting aside the esoteric part of that discussion and getting back to planet Earth: what exactly is it that we want to regulate? What isn’t already handled by current legislation? How are you going to regulate something when you don’t know how it is going to play out? Asking any proponent of regulation simply results in crickets.

It’s one thing so create laws against misusing technology to hurt or misuse others, but we already have laws against that. Regulation is different. Regulation is about dictating HOW you are going to go about doing things. In other words AI regulation is about compelling certain behaviors in a field only in its infancy and where nothing can be predicted. I.e the very reason you want to regulate it is because you can’t predict it. Please explain that logic again.

We would have as good a chance regulating the movement of electrons inside a box.

Yet none of this gets any pushback in the media or from other politicians, and the people who do push back are ignored. This despite the fact that most people’s and businesses’ interactions with AI and LLMs are positive.

And so once again we find ourselves having to decide: do we believe in the future and humanity’s ability to shape it, and take responsibility for it, or do we fear it and end up handing power to those who want it, becoming mere spectators in their made-up world?

AI is for everyone, and it is the biggest opportunity we have had to advance humanity since the invention of the computer. While all progress comes with its own set of problems, they are usually better problems to have.

So far I have yet to hear anyone give me a plausible explanation for why this time it’s different.

--

Thomas Petersen

Head Honcho at Faktory.com. Investor, designer, tinkerer. First Principle founder. Square, 80/20, MetaDesign alumni. Hello co-founder, dotcom survivor.