My 2024 Predictions

Thomas Petersen
9 min read · Jan 3, 2024


As has become tradition, I am once again putting my reputation on the line and sharing my thoughts and predictions about the year 2023 that has passed and the year 2024 that is upon us.

As per tradition, I will also share last year’s predictions so you can judge how well I have fared.

If you just want to read the predictions, scroll to the bottom of this essay. Before that, as always, I will also share some more philosophical/political observations.

2023 was in many ways a very predictable year, at least from my perspective. Boring, however, it was not.

What made it particularly exciting was that although many of the episodes that unfolded were predictable, their exact form and content were not.

On the personal front, I co-founded the AI platform Faktory with Jesus Rodrigues and was lucky to persuade Bob Lund and Laura Halaby to join the founding team. So for me, 2023 was almost entirely about AI.

That said, there was a particular incident that surprised everyone, in more than one way, and that was the Hamas attack on Israel’s civilian population on October 7th.

The incident itself was shocking enough. But perhaps more shocking was the relatively broad support, if not directly for Hamas, then for the Palestinians.

This support was particularly widespread among the young on the left (but not exclusively), who did not hesitate to take to the streets under the slogan “From the river to the sea,” simultaneously accusing Israel of both wanting to eradicate the Palestinians and of having introduced Apartheid.

This stance became particularly pronounced when a series of incidents at American Ivy League universities involving harassment of Jews led to a Congressional hearing that revealed how deep the relativization of morals and ethics has actually become. When the leaders of these prestigious institutions were asked whether it was okay that students at their institutions had called for the genocide of Jews, they answered that “it depends on the context.”

I will not, like many, use the term anti-Semitism as it is too often brought out as a way to shut down debate. Instead, I will point to something else that is even more worrying.

Most university environments in the West have developed into fundamentally postmodernist institutions. The result is that literally EVERYTHING is taught through the lens of power dynamics.

We’re not just talking about fringe analyses of social structures from a small group of system critics. It goes much further. Students are divided by their intersectional affiliations and assessed not on their individual actions and achievements, but on whether their group belongs to the oppressors or the oppressed.

And it is in this intellectual environment that a country like Israel, and therefore Jews in general, can be viewed exclusively as oppressors and villains rather than victims, while the Palestinians, regardless of historical circumstances and their support for repeated attacks against Israel, are seen exclusively as the oppressed and morally superior.

During the Cold War, the USA and USSR waged a number of proxy wars in countries other than their own. Today, these wars are fought on social media, over who is the oppressor and who is the oppressed. Whether it’s trans issues, climate, feminism, capitalism, or racism, there are power dynamics in everything and therefore a war to be fought for the (right) oppressed minorities.

This is a dangerous and problematic trend. Not because many of these areas are not important; they are in their own way. But because these battles are primarily fought by young people who, due to their technological skills, have a proportionally large amount of power (ironically enough) and can define “what we’re talking about” without having any personal skin in the game.

Those who shout the loudest have nothing at stake and can therefore without consequence take the side of the weak, thereby highlighting their own moral superiority.

But a society where weakness is valorized and strength vilified is a society in crisis. It is a society without the courage to be anything but the victim. With the praise and adoration of weakness comes the rejection of all that symbolizes strength, and with it comes fear.

The climate is dangerous, growth is dangerous, meat is dangerous, plastic is dangerous, the weather is dangerous, language is dangerous, freedom is dangerous, AI is dangerous, masculinity is dangerous, energy is dangerous, femininity is dangerous, trans is dangerous, heteronormativity is dangerous. The future, in general, is dangerous, and everything is done to avoid it.

But not believing in the future is the most dangerous thing we can do. Because if we don’t believe in the future, we won’t invest in it. And the future always happens, whether we want it or not.

Unfortunately, large parts of the younger generation in the West have grown up and been taught to fear the future, suffer from collective anxiety, are on antidepressants, and have decided not to have children because they don’t think the future has anything good to offer.

In the West, sci-fi films are always dystopian and show how mankind has destroyed nature. In China, by contrast, sci-fi films are about how mankind overcomes the many challenges nature exposes us to (see The Wandering Earth as an example).

There are two views of the future. One dystopian and fearful, the other utopian and hopeful.

With this as a growing trend over the past many years, it was no surprise that one of the other major political points of contention in 2023 became AI, or more precisely, the fear of AGI.

Two major events illustrated this fear. In November, OpenAI’s non-profit board attempted to fire Sam Altman as CEO. The board seems to have reacted to claims that OpenAI had achieved a breakthrough toward self-thinking AI, and this scared some board members enough that they wanted to slow down development. They thought Altman was too aggressive in getting AI out to everyone and wanted the pace reduced.

But how can you fire the CEO of a company doing as well as OpenAI (they turned over $1.5 billion in 2023)?

OpenAI’s structure is a bit of a complicated affair. At the top sits the not-for-profit alignment board, which has full control over OpenAI, the for-profit company.

The idea was to have a board that was not motivated by profit and guided by the very popular ideological movement “Effective Altruism” (a branch of ESG).

The movement mixes altruism and capitalism: it is an attempt to do good with capital, not just create it.

On the surface, this is, of course, a sympathetic thought. But the OpenAI debacle revealed how difficult it is to realize such lofty ideological goals in reality. It showed us clearly what the downside is of having a board that is not motivated and guided by profit.

OpenAI was valued at around $90 billion and was about to offer “secondaries,” which meant that many of its employees were about to be able to sell some of their shares. Furthermore, Microsoft had invested many billions in owning part of the commercial part of OpenAI.

From one day to the next, the board succeeded in reducing OpenAI’s value to almost nothing when over 600 of the 700 employees threatened to leave OpenAI together with Altman and Brockman. The board stood firm: it is completely in accordance with the non-profit’s charter to destroy OpenAI if deemed necessary.

Fortunately, this did not happen, but just consider that logic again. A movement whose primary purpose is to use capital to do good in the world has no problem destroying the very capital that is its raison d’etre.

However, that wasn’t all we saw.

The whole reason for firing Altman was that people were afraid the progress that had been made could be dangerous. There is a widespread fear among many, reaching far into the ranks of OpenAI, that AGI will kill us all.

This fear is based on a series of fictional thought experiments and is promoted particularly in academic institutions, often by people who do not work directly with AI but only study it theoretically.

It is the same academics who were behind and pushed for two other major AI-related events in 2023: first the attempted pause in the development of AI, and then the EU’s AI Act. A regulatory opus meant to keep AI from harming society, but suffering from exactly the same problem as all other EU regulations normally do: it solves absolutely nothing and just pushes the EU even further back when it comes to technological strength and significance.

More fear, more impotence. There is nothing in the new AI Act that is not already addressed by existing legislation. It is a rubber-band law whose meaning politicians can decide on an ongoing basis. Talk about not just undermining European companies’ ability to compete, but also leaving the rights of citizens anything but protected.

As it turns out, no one knows if AI will develop as some fear. What we do know is that just as continuing the development of AI could be dangerous, not developing AI fast enough can be just as dangerous. For who knows how quickly we could solve the riddle of cancer, create new forms of energy and materials, and tackle all sorts of other problems that are existential to our survival.

And thus we arrive at the real question, the one that is relevant to ask ourselves.

How do we deal with a future that is unpredictable?

Why is it that extreme caution is so revered these days when there has never been a point in history where humanity has benefited from fearing what will happen and what might happen?

There will always be existential risks, we will always face major problems no matter how many we solve. Because solutions to problems create entirely new problems. The difference is that the new problems are better problems to have.

The more knowledge we create, the more advanced we become. The more advanced we become, the more advanced our problems become, but also the more advanced solutions we can create.

This is how we have moved from a situation at the mercy of nature to a situation where we set the premise for nature and one day will be able to terraform planets throughout the universe.

May 2024 be the year where we face our fears, take risks, drop the victimhood, and create solutions for all that we are afraid of. We have never been better equipped to do so.

Well, those observations were not actually the point, even though they have become a fixed part of my annual forecasts. Let’s dive into what is the whole point of this writing: the predictions.

First, the macro eco-political predictions. As always, none of this is investment advice:

  • Several states will attempt to ban Trump from running.
  • The Supreme Court will reverse the bans against Trump running.
  • Biden will not run.
  • Newsom or Hillary takes his place.
  • The election in the USA will be won by Trump if he can handle the abortion debate and stay out of prison.
  • At the latest, when a new president comes in, Ukraine will be forced to make a deal with Russia that gives certain geographical concessions.
  • Covid what?
  • Nvidia will continue its ascent.
  • Our use of fossil fuels will also continue to increase from last year.
  • EU’s energy prices will rise further.
  • Inflation will remain high.
  • Private debt will rise further.
  • The first AI cult will arise.
  • The USA will follow suit with AI regulation.
  • Israel will continue to go after Hamas until the end of the year or until they are sufficiently subdued.
  • More will follow the New York Times and sue OpenAI for copyright infringement (or “duplicative language,” as they called Gay’s plagiarisms).
  • X will report the best financial result since Musk took over.
  • Support for nuclear power will really take off.
  • Support for wind and solar will further decrease and many more projects like in 2023 will be shelved.

And now for the technological ones.

  • Stable Diffusion will implode.
  • The first full-length AI-generated feature film will be started.
  • OpenAI will introduce GPT-5.
  • It will be fantastic but not as big a leap as expected.
  • The first open-source model will reach the same level as a basic GPT-4.
  • Solana and Ethereum will win the crypto battle and become the primary standards.
  • We will finally get legislation in the crypto area.
  • AI agents will become the primary battleground when it comes to AI.
  • ChatGPT will still be the best and most successful LLM at the end of 2024.
  • We will still not be anywhere near AGI.
  • Investment in AI will continue to rise.
  • The first AI-based pop music will top the charts.
  • We will see a series of cancer products get approved.
  • Investment in fusion and thorium companies will be the highest ever.
  • SpaceX will succeed with Starship and Super Heavy.
  • Google and Facebook will begin to see the impact of AI on their ad revenue.
  • More progress on the fusion front.

May you all have a fantastic 2024.


Thomas Petersen

Head Honcho at Faktory.com. Investor, designer, tinkerer. First Principle founder. Square, 80/20, MetaDesign alumni. Hello co-founder, Dotcom survivor.