Book Review: Our Final Invention: Artificial Intelligence and the End of the Human Era

By Jason Worth

There is a meaningful chance that, at some point in the next few decades, the intelligent machines mankind is creating now will kill us. That is the premise of James Barrat’s book Our Final Invention: Artificial Intelligence and the End of the Human Era. Barrat’s viewpoint, however sobering, is an important counterweight to the cheerleading and optimism of Ray Kurzweil and the Transhumanist movement, who gleefully proclaim an enlightened and beneficial merging of man and machine in the near future.

To understand Barrat’s concerns, we must first understand the machine that is being built. We are all familiar, to some degree, with the concept of artificial intelligence (AI) and the types of programs that currently use it. AI is built into the microchip that controls the anti-lock brakes in our cars, so that when we slam on the brakes on a wet road, the computer weighs all of the relevant variables to bring the car to a safe stop without skidding. AI is built into Google’s and Amazon’s search algorithms, so that when we search on a given term or review a product for sale, these sites give us highly refined search results and alternative product suggestions. And AI is built into IBM’s Deep Blue computer, which was built specifically to play chess and is so good that it has beaten human chess champions on multiple occasions.

These are the examples of AI we commonly encounter today, and they are benevolent. They make our lives easier, and they provide helpful suggestions that enable us to find information or products more quickly. Certainly, this AI is not a threat to us. But AI is evolving very rapidly, and it is the future versions of these tools that cause Barrat to lose sleep.

As AI develops, it is becoming more sophisticated and taking on more and more tasks. For example, roughly 70% of all equity trading on Wall Street is done automatically by high-frequency trading computers using proprietary trading algorithms. And the “Flash Crash” of 2010, in which the stock market fell nearly 1,000 points in a 36-minute period because high-frequency trading computers over-responded to certain market signals, is evidence of the dangers we can expect from well-intentioned AI. The Flash Crash also illustrates that although the actions of any one AI machine may not be problematic, the interactions of multiple AI machines responding to one another can create unexpected and negative outcomes.

The Amazon and Google predictive search tools we use today, and IBM’s impressive Deep Blue chess machine, are examples of “narrow” artificial intelligence. They are programs designed to do one thing, or a limited number of things, well. They by no means have the capacity to replicate the broad and deep decision-making abilities of a human mind. But that is coming, and that capability is what AI researchers and journalists call AGI, or artificial general intelligence. AGI machines will be impressive, and they will be able to do much more than the AI tools to which we are accustomed today. But AGI, Barrat argues, will inevitably lead to ASI, or artificial superintelligence: machines with the capability to think and process data hundreds or thousands of times more quickly than humans.

Artificial superintelligent machines will be very formidable. If given the initial capability to analyze and improve their own software code, which is anticipated, they will self-evolve rapidly. Depending upon what goals they are initially programmed to pursue, and whether any limitations are placed upon their own self-improvement, Barrat sees a future where these machines evolve very quickly. As they write themselves new software code and, hence, new capabilities, it is not inconceivable that they will propagate themselves onto other machines across networks like the Internet, culminating, perhaps, in the ability to control every computer process on the planet. Such a scenario is not unlike the dystopian vision of Skynet presented decades ago in the Terminator series of action films. Because these machines will self-evolve of their own accord, and will operate at speeds that already exceed, and will increasingly outstrip, mankind’s own processing capabilities, they will improve so rapidly that scientists call the process an “intelligence explosion.” The ultimate danger to mankind will be the merging of ASI’s thinking and processing capabilities with the physical capabilities of nanotechnology, at which point these supercomputer processors will be able to construct their own devices and, possibly, weapons. Taken to the extreme, Barrat foresees even the possibility of nanotechnology-wielding AI building its own interstellar spacecraft and dominating not just Earth but the galaxy.

Mankind has created inventions that caused unintended harm and danger to the human race before; nuclear power, which gave us Three Mile Island, Chernobyl and Fukushima, is one example. But what makes ASI uniquely dangerous is its inherent ability to evolve and improve of its own accord. With its ability to recursively self-improve, it will not be long before ASI machines far eclipse the capabilities of mankind. Those capabilities could be of great benefit to us. Imagine taking an all-powerful ASI machine and instructing it to devise a solution to world hunger or a cure for cancer. Researchers are very hopeful about the amazing things that can happen when a superintelligent entity is put to work on currently intractable problems for which we have not yet found viable solutions.

But what will be that entity’s attitude toward mankind, and how will it treat us? This is what keeps Barrat up at night, and why he calls the advent of ASI machines our “final invention.” In his discussions with current AI researchers and scientists, he finds that too little attention is being paid to the potential threats that ASI might pose to mankind. Many AI developers have either not thought much about it, or have assumed, in a very anthropomorphic fashion, that these machines will be like us and therefore respect our wellbeing. But there is no reason to assume a self-aware machine will view us positively or care about our wellbeing. The human race is meaningfully more intelligent than our closest cousins, chimpanzees and gorillas. But if our smarter machine inventions treat us anywhere near the way that we treat primates, with our zoo prisons and medical testing laboratories, we have much to be concerned about from AI.

In fact, depending upon the goals initially programmed into smarter-than-man AI machines (which may easily include an imperative for self-preservation), these machines could view humans sociopathically: as something in their way, as competitors for resources they need (particularly electricity and silicon), or perhaps even as a resource to be harvested for useful atoms (provided the machines have the ability to create objects from atoms using nanotechnology).

Barrat finds that those AI researchers who have considered the potential dangers to the human race from future ASI machines assume that some form of safeguards will be built into the machines. Science fiction writer Isaac Asimov’s “Three Laws of Robotics” are frequently cited as such safeguards. These are: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
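
The laws amount to a strictly ordered set of vetoes, and their logic is easy to make concrete. Here is a minimal sketch in Python, purely illustrative and not from Barrat’s book, with invented boolean flags standing in for judgments no real robot could reduce to true or false:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Invented, drastically simplified flags: a real system could not
    # collapse "harm" or "obedience" into clean booleans like these.
    harms_human: bool = False      # First Law: harm through action
    allows_harm: bool = False      # First Law: harm through inaction
    disobeys_order: bool = False   # Second Law: defies a human order
    endangers_self: bool = False   # Third Law: risks the robot itself

def permissible(a: Action) -> bool:
    """Apply the Three Laws as vetoes, in strict priority order."""
    if a.harms_human or a.allows_harm:   # First Law outranks everything
        return False
    if a.disobeys_order:                 # Second Law yields only to the First
        return False
    if a.endangers_self:                 # Third Law yields to both
        return False
    return True

# When every available action trips some law, the filter leaves nothing:
options = [Action(disobeys_order=True), Action(endangers_self=True)]
print([a for a in options if permissible(a)])  # prints []: a paralyzed robot
```

Note that nothing in this scheme tells the robot what to do when every option trips a veto.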

The trouble with blindly assuming that safeguards will prevent such danger to humans is evident when you consider that the robot in Asimov’s own story broke down and became useless when confronted with unsolvable conflicts among these three laws. A similar fate befell the fictional HAL 9000 in 2001: A Space Odyssey, and we all remember that HAL killed nearly every human crew member aboard. But, fiction aside, Barrat alleges that safeguards are not even being built into AI developments in the real world today! In the quest to build the first superintelligent machine, very few AI researchers are attempting to write such code into their machines. Many figure safeguards can come later, after the initial AI capabilities are in place and the need seems more pressing. But we may find that our efforts to infuse an emotionless machine with a sense of morals, or a respect for mankind, come too late, once that machine has evolved beyond our ability to add such traits after the fact. And, as Barrat points out, some 56 countries are already developing battlefield robots. How do you infuse a respect for mankind into a machine designed to kill humans?

In a recent poll of computer scientists and professionals in AI-related fields (engineering, robotics and neuroscience), 10% believed that AGI, or human-level machine intelligence, will be created before 2028. More than 50% believed it will be developed by 2050. And 90% believed it will occur before the end of the century. Barrat believes it will happen much sooner than most experts anticipate. And when you consider that much of this development is being done secretly, with DARPA funding and under the cloak of national-security non-disclosure agreements, it is not surprising that some of the AI experts Barrat spoke with think 2020 is not too soon to anticipate human-level artificial intelligence. The time gap between AGI and ASI could be brief. AGIs tasked with the goal of self-improving into ASIs may do so rapidly, since each iteration of improvement yields an even smarter machine, which then improves itself again in a never-ending compounding, carried out by machines that can work ceaselessly, 24 hours a day, on their own self-refinement.
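
The compounding argument is, at bottom, arithmetic. A minimal sketch, with a wholly invented 10% gain per improvement cycle (no such figure appears in the book), shows how quickly even modest per-cycle gains stack up:

```python
# A toy model of compound self-improvement. The 10% gain per cycle and the
# cycle counts are invented for illustration; nothing here comes from the
# book beyond the idea of compounding itself.

def capability_after(cycles: int, start: float = 1.0, gain: float = 1.10) -> float:
    """Each improvement cycle multiplies capability by `gain` (compound growth)."""
    return start * gain ** cycles

for n in (10, 50, 100):
    print(f"after {n:3d} cycles: {capability_after(n):>10,.1f}x starting capability")
# after  10 cycles:        2.6x starting capability
# after  50 cycles:      117.4x starting capability
# after 100 cycles:   13,780.6x starting capability
```

The invented numbers do not matter; the shape of the curve is the point. A machine that improves itself around the clock rides an exponential, and exponentials look flat right up until they do not.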

The development of ASI is the new arms race. The benefits afforded to the first company or country that develops ASI capabilities will likely be enormous. It is sometimes said that the country that controlled the ocean sea lanes in the 1700s and 1800s controlled the world, and it is easy to argue that the countries that controlled the world’s capital markets in the 1900s and early 2000s controlled the world. By extension, the country or countries that first develop ASI capabilities this century will probably control the world. Just imagine the ferocious military advantage enjoyed by a side that can send machine-gun-wielding, autonomous, tracked robots (like those being developed by a company called Foster-Miller for the U.S. Army, and briefly tested in Iraq) into battle against enemies without similar arms.

The benefits of possessing a superintelligent machine will extend far beyond the battlefield. That ASI will be developed seems unquestionable. The question is: what will happen to mankind when superintelligent machines, capable of outthinking their developers a thousand times over, become fully autonomous and capable in their own right? This keeps James Barrat up at night, and it should worry us, too.
