
The science fiction writer Arthur C. Clarke famously wrote, “Any sufficiently advanced technology is indistinguishable from magic.” Yet humanity may be on the verge of something much greater: a technology so revolutionary that it would be indistinguishable not merely from magic, but from an omnipresent force, a deity here on Earth. It’s known as Artificial Superintelligence (ASI), and, although it may be hard to imagine, many experts believe it could become a reality within our lifetimes.

We’ve all encountered Artificial Intelligence (AI) in the media. We hear about it in science fiction movies like Avengers: Age of Ultron and in news articles about companies such as Facebook analyzing our behavior. But artificial intelligence has so far remained on the periphery of our lives, nothing as revolutionary to society as what the films portray.

In recent decades, however, serious technological and computational progress has led many experts to accept a seemingly inevitable conclusion: within a few decades, artificial intelligence will progress from the machine intelligence we currently understand to an unbounded intelligence unlike anything even the smartest among us could grasp. Imagine a mega-brain, electric not organic, with an IQ of 34,597. With perfect memory and unlimited analytical power, this computational beast could read all of the books in the Library of Congress in the first millisecond after you press “enter” on the program, and then integrate all that knowledge into a comprehensive analysis of humanity’s 4,000-year intellectual journey before your next blink.

The history of AI is a similar story of exponential growth in intelligence. In 1936, Alan Turing published his landmark paper on the Turing Machine, laying the theoretical framework for the modern computer. He introduced the idea that a machine composed of simple switches (on’s and off’s, 0’s and 1’s) could think like a human and perhaps outmatch one.1 Only 75 years later, in 2011, IBM’s AI bot “Watson” sent shocks around the world when it beat two human competitors on Jeopardy!.2 Recently, big data companies such as Google, Facebook, and Apple have invested heavily in artificial intelligence, helping support a surge in the field. Every time Facebook automatically tags your friend in a photo, or Siri correctly interprets the words you yell at her in frustration, is a testament to how far artificial intelligence has come. Soon you will sit in the backseat of an Uber without a driver, Siri will listen and speak more eloquently than you do (in every language), and IBM’s Watson, which can already analyze your medical records, will become your personal, all-knowing doctor.3

While these achievements are tremendous, many doubt the impressiveness of artificial intelligence, attributing its so-called “intelligence” to the human programmers behind the curtain. Before responding to such reactions, it is worth noting that the gradual advance of technology desensitizes us to the wonders of artificial intelligence that already permeate our technological lives. But skeptics do have a point. Current AI algorithms are only very good at very specific tasks. Siri might respond intelligently to your requests for directions, but ask her to help with your math homework and she’ll say, “Starting FaceTime with Matt Soffer.” A self-driving car can get you anywhere in the United States, but make your destination the Gale Crater on Mars and it will not understand the joke.

This is part of the reason AI scientists and enthusiasts consider Human Level Machine Intelligence (HLMI), roughly defined as a machine intelligence that outperforms humans in all intellectual tasks, the holy grail of artificial intelligence. In 2012, a survey was conducted to analyze the wide range of predictions made by artificial intelligence researchers for the onset of HLMI. Researchers who chose to participate were asked by what year they would assign a 10%, 50%, and 90% chance of achieving HLMI (assuming human scientific activity continues without significant negative disruption), or to check “never” if they felt HLMI would never be achieved. The median of the years given for 50% confidence was 2040. The median of the years given for 90% confidence was 2080. Around 20% of researchers were confident that machines would never reach HLMI (these responses were not included in the median values). This means that nearly half of the researchers who responded are very confident that HLMI will be achieved by 2080.4

HLMI is not just another AI milestone to which we would eventually be desensitized. It is unique among AI accomplishments, a crucial tipping point for society, because once we have a machine that outperforms humans in everything intellectual, we can transfer the task of inventing to the computers themselves. The British mathematician I. J. Good said it best: “The first ultraintelligent machine is the last invention that man need ever make ….”5

There are two main routes to HLMI that many researchers view as the most promising. The first relies on complex machine learning algorithms. These algorithms, often inspired by the neural circuitry of the brain, focus on how a program can take input data, learn to analyze it, and produce a desired output. The premise is that you can teach a program to identify an apple by showing it thousands of pictures of apples in different contexts, in much the same way that a baby learns to identify an apple.6
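To make this concrete, here is a minimal sketch of such a learning loop in Python. It uses a single logistic unit rather than a full neural network, and its two “features” (redness, roundness) and every number in it are invented purely for illustration; a real system would learn from millions of raw images.

```python
import math
import random

# Toy training set: each example pairs two made-up image features
# (redness, roundness) with a label: 1 for "apple", 0 for "not apple".
examples = [((0.9, 0.8), 1), ((0.8, 0.9), 1),
            ((0.2, 0.1), 0), ((0.3, 0.9), 0)]

weights, bias = [0.0, 0.0], 0.0
rate = 0.5  # learning rate: how far each mistake nudges the weights

def predict(features):
    # Logistic unit: a weighted sum squashed into a 0-to-1 "apple-ness" score.
    s = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 / (1 + math.exp(-s))

random.seed(0)
for _ in range(2000):                    # repeated exposure, like the baby
    features, label = random.choice(examples)
    error = label - predict(features)    # how wrong was the guess?
    for i, x in enumerate(features):
        weights[i] += rate * error * x   # nudge weights toward the answer
    bias += rate * error

print(round(predict((0.85, 0.9)), 2))  # near 1: probably an apple
print(round(predict((0.2, 0.2)), 2))   # near 0: probably not
```

After a few thousand exposures the program scores apple-like inputs near 1 and everything else near 0, without ever having been told what an apple is.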

The second group of researchers might ask why we should go to all this trouble developing algorithms when we have the most advanced computer known in the cosmos right on top of our shoulders. Evolution has already designed a human level machine intelligence: a human! The goal of “Whole Brain Emulation” is to copy or simulate our brain’s neural networks, taking advantage of nature’s millions of painstaking years of selection for cognitive capacity.7 A neuron is like a switch: it either fires or it doesn’t. If we could image every neuron in a brain and then simulate that data on a computer interface, we would have a human level artificial intelligence. Then we could add more and more neurons, or tweak the design, to maximize capability. This is the concept behind both the White House’s BRAIN Initiative8 and the EU’s Human Brain Project.9 In reality, the two routes to human level machine intelligence, algorithmic and emulation, are not black and white. Whatever technology achieves HLMI will probably be a combination of the two.
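The “neuron as a switch” idea can itself be put in code. Below is a toy sketch that steps a three-neuron threshold network through discrete time; the connection weights are invented for the example, and a genuine whole brain emulation would need to do this for the roughly 86 billion neurons of a human brain and their trillions of synapses.

```python
# weights[i][j] = synapse strength from neuron j to neuron i
# (all values invented for this toy example)
weights = [
    [0.0, 1.1, 0.0],   # neuron 0 listens to neuron 1
    [0.0, 0.0, 1.2],   # neuron 1 listens to neuron 2
    [0.0, 0.0, 0.0],   # neuron 2 is driven only by outside input
]
THRESHOLD = 1.0

def step(fired, external):
    # A neuron fires this step if its summed input crosses the threshold:
    # the switch flips on, or it stays off.
    new_state = []
    for i, row in enumerate(weights):
        total = sum(w for w, f in zip(row, fired) if f) + external[i]
        new_state.append(total >= THRESHOLD)
    return new_state

state = [False, False, False]
for t in range(4):
    stimulus = [0.0, 0.0, 1.5 if t == 0 else 0.0]  # poke neuron 2 once
    state = step(state, stimulus)
    print(f"t={t}: firing pattern {state}")
```

A single pulse injected into neuron 2 travels down the chain one time step at a time; scale the same loop up by ten orders of magnitude and you have the premise of whole brain emulation.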

Researchers also expect the advance beyond HLMI to be rapid. In that same survey of AI researchers, 10% of respondents believed artificial superintelligence (an intelligence that greatly surpasses every human in most professions) would be achieved within two years of HLMI, and 50% believed it would take only 30 years or less.4

Why are these researchers convinced HLMI would lead to such a greater degree of intelligence so quickly? The answer involves recursive self-improvement. An HLMI that outperforms humans in all intellectual tasks would also outperform humans at creating smarter HLMIs. Thus, once HLMIs truly think better than humans, we will set them to work on themselves, improving their own code or designing more advanced neural networks. Once a more intelligent HLMI is built, the less intelligent HLMIs will set the smarter ones to build the next generation, and so on. Since computers act orders of magnitude more quickly than humans, the exponential growth in intelligence could occur unimaginably fast. This runaway intelligence explosion is called a technological singularity.10 It is the point beyond which we cannot foresee what this intelligence would become.
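A toy numerical model makes the runaway character of this loop visible. Every parameter below is invented purely for illustration; the point is only the shape of the curve, in which each generation’s gain is proportional to the intelligence of the generation doing the designing.

```python
intelligence = 1.0   # 1.0 = human level (HLMI), in arbitrary units
design_skill = 0.1   # fraction of its own smarts each generation converts
                     # into a better successor (an invented parameter)

for generation in range(1, 16):
    # A smarter designer makes a proportionally bigger improvement,
    # so the growth compounds on itself.
    intelligence *= 1 + design_skill * intelligence
    print(f"generation {generation:2d}: {intelligence:>12,.2f}x human level")
```

For the first ten generations the curve looks tame, creeping from 1.1x to about 6x human level; then, because every gain feeds the next, it explodes past 10,000x within a few more. That hockey-stick shape is the intuition behind the phrase “intelligence explosion.”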

Here is a reimagining of a human-computer dialogue from Fredric Brown’s short story “Answer,” published in the collection Angels and Spaceships.11 The year is 2045. On a bright sunny day, a private group of computer hackers working in a Silicon Valley garage has just completed a program that simulates a massive neural network on a computer interface. They came up with a novel machine learning algorithm and want to try it out. They give this newborn network the ability to learn and redesign itself with new code, and they give it internet access so it can search for text to analyze. The young programmers start the program, then go out to Chipotle to celebrate. Back at the house, while walking up the pavement to the garage, they are surprised to see FBI trucks approaching their street. They rush inside and check the program. On the terminal window, the computer has already printed “Program Complete.” One programmer types, “What have you read?” and the program responds, “The entire internet. Ask me anything.” After deliberating for a few seconds, one of the programmers types, hands trembling, “Do you think there’s a God?” The computer instantly responds, “There is now.”

This story dramatizes the explosive nature of recursive self-improvement. Yet many might still question the possibility of the rapid progression from HLMI to superintelligence that AI researchers predict. Although we often look at past trends to gauge the future, we should not do the same when evaluating future technological progress. Technological progress builds on itself: it is not just technology that advances, but the rate of advancement itself. So while it may take the field of AI 100 years to reach the intelligence level of a chimpanzee, the step up to human intelligence could take only a few years. Humans think on a linear scale. To grasp the potential of what is to come, we must think exponentially.10
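The gap between those two mindsets is easy to quantify with arbitrary example numbers: over the same thirty steps, linear progress adds thirty units while exponential progress passes a billion.

```python
linear, exponential = 0, 1
for _ in range(30):
    linear += 1        # thirty linear steps reach 30
    exponential *= 2   # thirty doublings reach 2**30
print(linear, exponential)  # prints: 30 1073741824
```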

Another understandable doubt is that it is hard to believe, even given unlimited scientific research, that computers will ever be able to think like humans, that 0’s and 1’s could have consciousness, self-awareness, or sensory perception. It is certainly true that these dimensions of self are difficult to explain, if not currently beyond science entirely (it is called the hard problem of consciousness for a reason!). But assume that consciousness is an emergent property: the result of a billion-year evolutionary process that began with the first self-replicating molecules, which themselves arose from the molecular motions of inanimate matter. Then computer consciousness does not seem so crazy. If we who emerged from a soup of inanimate atoms cannot believe that inanimate 0’s and 1’s could ever give rise to consciousness, no matter how intricate the setup, we should try telling that to the atoms. Machine intelligence merely swaps the hardware from organic tissue to the much faster and more efficient silicon. If consciousness can emerge on one medium, why can’t it emerge on another?

Thus, under the assumption that superintelligence is possible, the world is reaching a critical point in history. First were atoms, then organic molecules, then single-celled organisms, then multicellular organisms, then animal neural networks, then human-level intelligence limited only by our biology, and, soon, unbounded machine intelligence. Many feel we are now living at the beginning of a new era in the history of the cosmos.

The implications of this intelligence for society would be far-reaching, and in some cases very destructive. Political structures might fall apart if we knew we were no longer the smartest species on Earth, if we were overshadowed by an intelligence of galactic proportions. A superintelligence might view humans as we view insects, and we all know what humans do to bugs that overstep their boundaries! This year, many renowned scientists, academics, and CEOs, including Stephen Hawking and Elon Musk, signed a letter presented at the International Joint Conference on Artificial Intelligence (IJCAI). The letter warns about the coming dangers of artificial intelligence, urging prudence as we venture into the unknowns of an alien intelligence.12

When the same AI researchers were asked to assign probabilities to the overall long-run impact of ASI on humanity, the mean values were 24% “extremely good,” 28% “good,” 17% “neutral,” 13% “bad,” and 18% “extremely bad” (existential catastrophe).4 An 18% chance of existential catastrophe is not a statistic to take lightly.

Although artificial superintelligence surely comes with existential threats that could make for a frightening future, it could also bring a utopian one. ASI has the capability to unlock some of the most profound mysteries of the universe. It could discover in one second what the brightest minds throughout history would need millions of years even to begin to scrape the surface of. It could demonstrate to us higher levels of consciousness or modes of thinking that we are not yet aware of, like the philosopher who brings the prisoners out of Plato’s cave into the light of a world previously unknown. There may be much more to this universe than we currently understand. There must be, for we do not even know where the universe came from in the first place! Artificial superintelligence is a ticket to that understanding. There is a real chance that we could bear witness to the greatest answers of all time. Are we ready to take the risk?


Works Cited

  1. Turing, A. On Computable Numbers, with an Application to the Entscheidungsproblem. Proc. London Math. Soc. 1936, s2-42, 230-265.
  2. Markoff, J. Computer Wins on ‘Jeopardy!’: Trivial, It’s Not. The New York Times, Feb. 16, 2011.
  3. Plambeck, J. A Peek at the Promise of Artificial Intelligence Unfolds in Small Steps. The New York Times, Aug. 7, 2015.
  4. Müller, V. C.; Bostrom, N. Future Progress in Artificial Intelligence: A Survey of Expert Opinion. 2014, 9-13.
  5. Good, I. J. Speculations Concerning the First Ultraintelligent Machine. In Advances in Computers; Academic Press, 1965; Vol. 6, pp 31-88.
  6. Sharpe, L. Now You Can Turn Your Photos Into Computerized Nightmares with ‘DeepDream.’ Popular Science, July 2, 2015.
  7. Bostrom, N. Superintelligence: Paths, Dangers, Strategies; Oxford University Press: Oxford, 2014; pp. 30-36.
  8. BRAIN Initiative. The White House [Online], Sept. 30, 2014, whitehouse.gov/share/brain-initiative (accessed Oct. 20, 2015).
  9. Human Brain Project [Online], humanbrainproject.eu (accessed Oct. 21, 2015).
  10. Kurzweil, R. The Singularity Is Near; Penguin Books: England, 2005; pp. 10-14.
  11. Brown, F. “Answer.” Angels and Spaceships; Dutton: New York, 1954.
  12. Pagliery, J. Elon Musk and Stephen Hawking warn over ‘killer robots.’ The New York Times, July 28, 2015.

