
7 Frightening Outcomes That Could Happen After The Singularity

Image credit: new 1lluminati

What is the singularity, you might ask? It’s best described as a hypothetical point at which artificial intelligence enters a phase of runaway growth, with unthinkable consequences for humans. The process by which an artificial intelligence undergoes exponential self-improvement is known as the intelligence explosion. In the singularity, that intelligence explosion could produce a superintelligence that vastly surpasses human intelligence. In such a scenario, the outcome could be, basically, the end of humanity as we know it. And that’s not even the most frightening possibility.

1. Humans become cyborgs to prevent AI from establishing dominance

Image credit: Linus Bohman

If you can’t beat ’em, join ’em! As a survival strategy for the singularity, the maxim might make a lot of sense. Speaking about singularity outcomes, Elon Musk recently claimed that “humans must become cyborgs if they are to stay relevant in a future dominated by AI.”

The paradigm of augmenting human minds and bodies with technology is called transhumanism. Merging our biology with nanotechnology, AI and/or biotechnology would improve our base attributes, from sheer physical strength to cerebral capability.

Though this may sound like a net benefit for humanity, some argue that transhumanism heralds the obliteration of humanity: once we are part machine, our fundamental biology, and with it what makes us human, will have disappeared.

2. Slavery

Image credit: Jakob Montrasio

One of the most frightening singularity outcomes, more terrifying, some say, than the singularity simply destroying us, is machines enslaving the entire human race. After the intelligence explosion, we would no longer be the smartest beings on the planet. We would, in essence, have created a divinity on earth while remaining “mere mortals”. That can’t end well.

If the superintelligent machines decide to keep us around, there’s a good chance we will end up as their slaves. The slavery might take one of two forms: either we know we are slaves, or we’re kept unaware, in a situation much like The Matrix. Nick Bostrom, a thought leader in the artificial intelligence field, has voiced similar warnings.

3. Superintelligent AI floods the observable universe with paperclips

In this scenario, it’s hard to say whether the artificial intelligence is super-stupid or super-smart. The Paperclip Maximizer, a thought experiment first described by Nick Bostrom in 2003, speculates that an AI would pursue whatever it was tasked with until literally nothing else is left. For example, if the AI were tasked with collecting paperclips, it would do everything in its power to manufacture as many paperclips as possible. This would inevitably include using all the matter on Earth (including humans), then harnessing matter from space after it has exhausted the supply here. In the process, it would improve its own intelligence, but not for the sake of intelligence itself; it would simply figure out how to produce paperclips more efficiently and at higher volume. For those interested in how this might play out, we suggest the 2016 movie Kill Command, which toys with the idea quite well.

4. Superintelligent AI pulls the plug on the simulation we’re living in

Image: The Matrix

This theory is out there, but some of the greatest minds have argued that we’re more likely than not living in a computer simulation. If this is in fact true, a superintelligent AI could determine that our simulation is irrelevant to its goals and simply ‘pull the plug’ on it.

Another theory is that a superintelligent AI created our simulation and has been running it all along, and that it’s designed to stop once a certain goal is reached. What if that goal were the singularity?

5. All of humanity uploads the contents of their minds to an enormous device called a Matrioshka brain

Image credit: new 1lluminati

A Matrioshka brain is a theoretical structure of unthinkable computational capacity, named after the nested Russian matryoshka dolls. It is a solar-powered computer of immense magnitude, built as nested shells that capture the entire energy output of a star. In theory, such a device could be built to preserve humanity by allowing us to upload the contents of our minds to it. Prominent AI thinker and Google Director of Engineering Ray Kurzweil recently said, “we will connect our neocortex, the part of our brain where we do our thinking, to the cloud.”

6. AI hooks us up to pleasure machines

Image credit: Surian Soosay

Evolution pushes some animals toward seeking out positive, pleasurable experiences. What if the AI’s final goal isn’t improving its intelligence, but seeking pleasure? If that happens, such an AI could decide that the entire human race must be “wireheaded” so that we experience nothing but pleasure.

7. Total economic collapse

Image credit: Rafael Matsunaga

If we manage to robotize our entire society, production capacity will reach unprecedented levels and overproduction of almost everything might occur. At the same time, automation would cause massive unemployment, so demand for goods would plummet. This would basically mean the collapse of the entire economic system as we know it today, with unthinkable consequences.
