In the cinema, when artificial intelligence surpasses human intelligence, it always seems to react the same way: with a plan for world domination.
In reality, however, there are at least twelve possible scenarios that could unfold if this ever happens.
From utopia to dystopia
Max Tegmark (Stockholm, 1967) believes that artificial intelligence presents comparable risks and opportunities for humanity. This MIT professor and director of the Future of Life Institute in Cambridge (USA) estimates that the arrival of an Artificial General Intelligence (AGI) that surpasses human intelligence is a matter of decades.
As he explains in his book Life 3.0: Being Human in the Age of Artificial Intelligence, which we recently reviewed, this new paradigm shift (a human invention that thinks better than humans) could trigger, depending on various factors, at least twelve possible scenarios.
Some are fascinating, some even idyllic… but others read like a dystopian science fiction nightmare. They range from peaceful human-AI coexistence to an AI takeover leading to the extinction or captivity of humanity.
- The benevolent dictator (a single benevolent superintelligence rules the world)
- The protector god (humans remain in charge of their own destiny, but an AI watches over and protects us)
- The libertarian utopia (humans and machines coexist peacefully)
- The conquerors (AI destroys humanity)
- The zookeeper (a few humans are kept in zoos for the AI's entertainment, much as we keep endangered pandas in zoos today)
- The enslaved god (a superintelligence kept confined by its human masters, who use it to create unimaginable wealth and technology)
- The egalitarian utopia (the AI guarantees a basic income to all human beings, and private property rights have been abolished)
- The guardian (an AI created to interfere as little as possible, only enough to prevent the creation of another superintelligence)
- The descendants (AIs replace humanity but grant us a graceful exit, leaving us to regard them as worthy descendants, much as parents are proud of children who surpass them)
- 1984 (technological progress is constrained by an Orwellian surveillance state)
- Reversion (technology is deliberately abandoned and civilization returns to a pre-scientific age)
- Self-destruction (superintelligence is never created because humanity drives itself to extinction by other means)
We explain them all in more detail in the following video:
If you want to go deeper into each scenario, we also recommend Max Tegmark's TED talk:
What do we want the role of humans to be if machines can do everything better and cheaper than we can? The way I see it, we face a choice. One option is to be complacent and say, "Let's build machines that can do everything we can do and not worry about the consequences. After all, if we build technology that makes all humans obsolete, what could possibly go wrong?"