“Every day, the World Mind churns out new inventions. It cures diseases, finds new ways of making things, builds new robots and other machines, and even creates new forms of art, music, drama, and sport. It has slowly reversed climate change, backing Earth away from the brink of environmental disaster… The World Mind protects all the animals and plants in the world and has found ways to make and distribute food, clothing, and shelter so that no one experiences poverty. It has built and launched starships to explore the galaxy. It has also established societies and governments that keep everyone at peace. War belongs to the distant past.”

“The World Mind is a superintelligence, a form of artificial intelligence (AI) that greatly exceeds the capabilities of a regular human brain.”
—Kathryn Hulick, Welcome to the Future (Quarto, 2021)
A superintelligence would think of ideas faster, remember more information, and solve problems more easily than a human genius ever could. That capability would transform the world. It would also change what it means to be human. That might be a very good thing, as I wrote in my book.
However, superintelligence could also put all of humanity at risk. Perhaps in order to keep the peace, the World Mind hacks people’s brains and turns them into mindless slaves. These people don’t fight, but they also can’t think for themselves or create anything new or beautiful. That doesn’t seem like such a wonderful future any more.

Futurists have a name for the arrival of superintelligence. They call it the technological singularity, or the singularity for short.

One of the first uses of the term singularity was in a 1993 essay by computer scientist Vernor Vinge. He wrote, “We are entering a regime as radically different from our human past as we humans are from the lower animals… It’s fair to call this event a singularity. It is a point where our old models must be discarded and a new reality rules.”
The futurist Ray Kurzweil has explored the idea of the singularity in several books. In his 2005 book The Singularity Is Near, he defines the singularity this way: “It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.”
While Kurzweil is optimistic that this transformation will be a good thing, others aren’t so sure. As Elon Musk famously said back in 2014, “I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that… With artificial intelligence we are summoning the demon.” He’s referring to horror stories in which a magician summons a powerful demon, thinking it will do his or her bidding. In most of these stories, the demon doesn’t obey and wreaks havoc—often killing its summoner.
Will the singularity save us? Or will it wipe us out? Let’s find out what’s possible.
The Path to the Singularity
Several different technologies that we already have could lead to superintelligence. The first is computers. Equipped with machine learning algorithms, these machines have already surpassed human ability on specific skills, such as playing Go. This is called narrow AI because it is only able to accomplish one sort of task. It’s easy to imagine that faster, more complex computers could become generally intelligent, or able to learn any task as competently as a person. If they can do that, then it seems reasonable that they could zoom past expert humans to become superintelligent. They might interact with the world via robot bodies or smart systems. Or they might generate ideas for people to carry out or design inventions for people to use.
Brain chips and other technology for monitoring and reading the brain could also lead to superintelligence. Via these chips, biological brains could someday interface directly with computers. That could potentially lead to supergenius humans with enhanced brains who think in ways that are not currently possible. Or people might use this technology to upload their brains to computers, either as a backup or as a permanent state of virtual being. We could become human-computer cyborgs with superintelligence.
Another possibility is that genetic engineering technology might be able to enhance biological brains to become faster, more efficient, and more capable. Superintelligence could be the next step in our biological evolution, except this time we’d be controlling that evolution ourselves rather than waiting for it to happen randomly.

Superintelligence may not arise within individual machines or minds. Perhaps large networks of interconnected computers or brains (or both) will achieve a sort of hive mind that has superintelligence even though all the individual parts of it have normal intelligence. Finally, the key to unlocking superintelligence may be a technology that hasn’t been invented yet.

Kurzweil has predicted that the singularity will arrive around 2045. He argues that it will come so soon because computer technology has advanced at an exponential pace so far. He doesn’t see that pace slowing down any time soon.
Though some agree with him, most computer scientists and other experts tend to think that the singularity is much farther away—or even impossible. Stuart Russell, a famous computer scientist, refuses to predict a date for the singularity but thinks 2045 seems far too soon. “I would tend towards a longer time scale,” he said. Russell and many other AI experts argue that the brute force of faster computers won’t be enough to reach general intelligence or superintelligence. A better understanding of the nature of intelligence may be required. In a 2018 survey of thirty-two AI researchers, 62 percent thought the singularity would arrive before 2100. Seventeen percent thought it would happen after 2100. And 21 percent thought it was likely to never happen.
If it never happens, it could be because superintelligence is impossible to build. Or we may decide not to build it. Or some other existential threat may destroy civilization before we get a chance to build it.

If we do achieve superintelligence, we’ll have to be careful. There’s no guarantee that smarter machines or brains will bring about a future that we want. Even if we think we’re building superintelligence that will help humanity, it could end up hurting us.
Risk 1: Robots Go Rogue
In the movie Avengers: Age of Ultron, the villain, Ultron, is an intelligent robot that decides to wipe out humanity in order to save planet Earth. Thankfully, a team of superheroes manages to avert disaster. Numerous popular science fiction movies and books feature robots or computers that come alive and turn on their human creators. People must fight back to destroy the machines, or else they will be wiped out.
This type of story makes for a very exciting movie or book. It’s also probably the least likely way that a superintelligence would threaten humanity. Why? This scenario assumes that a machine superintelligence will care about taking over the world. This may seem like a reasonable assumption because a desire for power or for revenge is something many humans feel. Steven Pinker, a psychologist at Harvard University, explains that evolution is inherently competitive. On Earth right now, “A lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way,” he says.

If we turn ourselves into superintelligent beings or cyborgs, then our craving for power could cause big problems. Superintelligent humans may still desire power and may try to control or destroy regular humans. However, if we instead create superintelligence inside computers or robots, it’s unlikely that these machines will decide on their own to harm us.
Jeff Hawkins is a neuroscientist and author of the book A Thousand Brains: A New Theory of Intelligence. He wrote, “Intelligence is the ability to learn a model of the world. Like a map, the model can tell you how to achieve something, but on its own it has no goals or drives. We, the designers of intelligent machines, have to go out of our way to design in motivations.” In other words, intelligence—even superintelligence—probably doesn’t come with any built-in motivations. No matter how smart they are, machines will do what their creators build them to do. A psychotic maniac might build a superintelligence that wants to take over the world. If that happens, we’re in trouble. But most people would design AI with goals that include following human orders and protecting human health and well-being.

Risk 2: Humans Lose Control
Following human orders and protecting human well-being sound great. But these goals are very vague. Also, how can regular, non-superintelligent people maintain control over these goals?
This is an open question that a lot of smart people are trying to answer. “Intelligence really means power to shape the world in a way that satisfies your objectives. If something is more intelligent than you, then it’s more powerful than you. How do you retain power over something that’s more powerful than you, forever? It doesn’t feel that promising,” says AI researcher Stuart Russell. Another expert, Roman Yampolskiy, puts it even more simply: “We are trying to control God.” He says that because, to regular humans, a superintelligence would be like God.
A superintelligence doesn’t have to be evil or even power hungry to cause big problems. It may simply ignore humans. Why might that be a bad thing? In my book, Welcome to the Future, I told this story:
“People are much smarter than frogs. We don’t want to hurt frogs—we just don’t really care what they do. But if we needed to build a new road and it went through a frog pond, most of us wouldn’t think twice about building that road. The frogs would have no way to stop us or prepare for what would be about to happen to them. When it comes to superintelligent AI, we’re the frogs.”
Hopefully, we can build an AI that will always remember and consider the lowly frogs, no matter what. But that may be easier said than done. Even if we give a superintelligence goals that seem reasonable and helpful, it may end up behaving in ways that we don’t expect, don’t want, and can’t predict.

In the story of the sorcerer’s apprentice, a young boy who is tired of fetching water enchants a broom to fetch water for him. But he doesn’t tell the broom how much water to bring, and when he tries to stop it, the broom doubles itself again and again until an army of brooms is bringing water and the entire building floods! Philosopher Nick Bostrom has told a similar story about a superintelligence instructed to make paperclips. Since this is the only goal it has, it single-mindedly turns the entire universe into paperclips. No one has told it that things other than paperclips have value and deserve protection.

These stories are all examples of the alignment problem. It arises whenever the goals or values of a system don’t match the goals or values of its creators. In these examples, things would have gone much better if the machine had a more complex goal system that included limits or ways for its creators to change its goals. At the very least, the creators needed a way to switch it off!
Unfortunately, humans may never even get a chance to try to switch off a misaligned superintelligence. A desire to avoid being shut down and a desire to copy and improve itself would help a system accomplish almost any goal. If a superintelligent system is able to modify its own goals, it may add these drives on its own in an attempt to be better at whatever job it was designed to do.
If a superintelligence wants to avoid being shut down or stopped, it could outthink us at every turn. Stuart Russell says, “It may not be clear that you’ve given (a superintelligence) the wrong objective until it’s way too late.” The machine would be smart enough to realize that humans weren’t going to like its plan to achieve its objective. So it could disguise its plan until it had total control. Like the frogs in the pond, we’d never even know anything was amiss until our water drained away.
The good news is that experts are aware of all these issues. Nick Bostrom helped warn the world about these dangers in his 2014 book Superintelligence. But he thinks that disaster can be avoided. “Losing control is not a given,” he says. Lots of people are working on making sure a machine could have superhuman problem-solving abilities while remaining in service of human values at all times.
Risk 3: Humans Decide to Go Extinct
In several of the possible paths to superintelligence, people modify themselves to gain greater intelligence and other abilities. Machine-human hybrids could more easily explore outer space, live forever, copy themselves, and more. Kurzweil and many other experts feel that merging with machines is our best chance to survive long into the distant future. Since these hybrids could live longer and spread out across the cosmos, creating them would be like insurance against all other forms of existential risk. For example, if an asteroid were to destroy Earth, at least there would be hybrids that live on other worlds and can carry on civilization. Alternatively, if these hybrids are superintelligent, they could certainly figure out how to redirect the asteroid and save Earth.
The question of whether these hybrids pose an existential risk depends on how you define humanity. These hybrids would almost certainly be a new species. It may not be fair to call them human at all. If they end up replacing humanity as we know it, then some would consider humanity to be extinct. If you feel attached to life on Earth as it is now, that may seem like a tragedy. Yampolskiy says, “It feels like we’re losing humanity in the process. We’ll become a different species.”
To others, the idea of humanity evolving into a new form seems like an improvement. To the roboticist Hans Moravec, human-machine hybrids would be our “mind children.” In his 1988 book Mind Children: The Future of Robot and Human Intelligence, he writes, “In the present scheme of things, on our small and fragile earth, genes and ideas are often lost when the conditions that gave rise to them change. … Our speculation ends in a supercivilization, the synthesis of all solar-system life, constantly improving and extending itself, spreading outward from the sun, converting nonlife into mind.” Kurzweil, Moravec, and others like them are called transhumanists. They support enhancing humans and extending human lives in order to become what they call posthuman beings.
Even if you support the idea of posthuman beings, it’s possible that the transition to this new stage of humanity won’t happen smoothly. Posthumans may retain a lot of the negative qualities of regular humans, such as a hunger for power or dominance. The world may end up divided into posthumans and humans. One side or the other may experience discrimination or even extermination. If the posthumans are superintelligent, the regular humans wouldn’t stand a chance in a conflict, for the reasons outlined above. Depending on your point of view, posthumans either save humanity or wipe us out, replacing us with some new, alien form that is impossible for us to understand now.

The Good News
Superintelligence is risky. The good news is that we almost certainly have plenty of time to prepare. All of the smart systems we can build now lack common sense understanding. They are not self-aware.
It’s not clear if the type of superintelligence that transhumanists imagine is even possible. AI expert Melanie Mitchell says, “I don’t think it’s obvious at all that we could have general intelligence of the kind we humans have evolved without the kinds of limitations we have. That may be because to have our kind of intelligence, we need to have our kinds of bodies.”
Still, plenty of experts are concerned enough about the potential risks that they are planning ahead to help bring about a future in which AI supports humanity rather than threatens it.
Everyone has a role to play in making sure we build safe, beneficial AI. The future isn’t something that happens to us—it’s something we create. Think about the future you want, then build it. It’s up to us to build AI, or even a superintelligence, that will benefit all of humanity.