A few years ago, I read Visions: How Science Will Revolutionize the 21st Century by Michio Kaku (string theorist and Professor of Theoretical Physics at City College of New York). In the book, Kaku discussed the Caveman Principle, the idea that humans are more or less the same now as they were 100 thousand years ago. The Principle holds that whenever a conflict breaks out between modern technology and humanity's ancient instincts, the instincts win out every time. Kaku argued that this is why we never saw a paperless office, and why cities are still teeming with people bustling from one place to another. We are, at our core, social creatures who rely on the tactile rather than the ethereal.

We are also a species very much controlled by fear as much as we are driven by curiosity.

The idea of artificial intelligence has been around since antiquity. Among its earliest mentions is the story of Yan Shi in the tenth century BCE, who built a human-looking automaton for his king, Mu. Hephaestus, the Greek god of fire, was renowned for building golden robots, including Talos, the artificial guardian of Crete. The ancient Greek hero Jason fought fire-breathing bulls and turned dragon teeth into soldiers (as did Cadmus, king of Thebes). The Indian legend Lokapannatti speaks of King Ajatashatru, who hid Buddha's relics underground, where they were guarded by robots. Judaism and Christianity spoke of artificial life as well, from the golems of Jewish folklore to Albertus Magnus's man of brass. While tales of robotic life abounded in ancient stories, the robots were always servants of either men or gods, and any harm they inflicted upon humanity was always a result of the will of their human or divine masters.


The first confirmed concept of a robot outside of legend is "The Pigeon," a steam-powered bird designed by the Greek mathematician Archytas in the fourth century BCE. In 250 BCE, Ctesibius of Alexandria created the clepsydra, a water clock that measured time by the flow of water into or out of a specially designed vessel. The Cosmic Engine, a clock tower built in China in 1088, had mechanical mannequins that rang gongs and bells. Many robotic devices – kitchen appliances, water-powered musicians, and the very first programmable humanoid robots, a set of mechanical hand-washers – were created by the Muslim inventor al-Jazari in the late twelfth century. Muslim inventors were also the first to really consider the practical applications of robots, rather than the vague philosophies of the ancient Greeks.

Leonardo da Vinci created a robotic knight in 1495, as the Spanish and Portuguese were descending upon the shores of the Americas. In 1738, Jacques de Vaucanson built two musical robots that played the flute and the tambourine. 1774 saw the creation of the first robotic writer at the hands of Pierre Jaquet-Droz. And in 1898, Nikola Tesla demonstrated the first wireless robotic control when he steered a model boat. All the while, robots and artificial intelligence remained curiosities in the public eye. No one really considered them a threat; in fact, one thinker had dared, long before, to imagine their worth.

Aristotle was the first to postulate the tangible benefits of robotics. He believed that robots would lead to the abolition of slavery, arguing that "there is only one condition in which we can imagine managers not needing subordinates, and masters not needing slaves. This condition would be that each instrument could do its own work, at the word of command, or by intelligent anticipation." The inventors who followed him undoubtedly shared his opinion; after all, their robots and machines – from the water clock to the flute-playing automaton – were dedicated to the beneficial pursuits of art and beauty.


So where did the culture of fear surrounding artificial intelligence come from?

I believe it began with Mary Shelley's Frankenstein. Spurred by a dream and her own fear of "any human endeavour to mock the stupendous mechanism of the Creator of the world," Shelley wove the tale of a troubled scientist who created a living creature out of non-living matter. Due to Victor's lack of skill, the creature is a hideous giant. Victor rejects his creation, causing the Monster to pursue him. The monster – miserable, rejected by humanity, and alone – begs the scientist to make a companion, vowing to disappear into the wilderness with his mate. Victor agrees at first, but – driven by fear of what he thinks will happen afterward – destroys the mate before granting it life. This enrages the creature, who launches a campaign of vengeance upon Victor, murdering his friend as well as his new wife. Victor, enraged and grief-stricken, pursues the monster to the Arctic, where they both die – Victor from cold and hunger, the monster, it is heavily implied, by suicide.

The tragic end of the tale is a direct result of Victor's fear. His fear of the monster drove him to reject it and to deprive it of a mate (whether the mate would have actually accepted the monster is far from certain, but still). The monster – alone, hated, and miserable – wanted only a companion, to live out its days in peaceful isolation. The monster was far from innocent, but its actions were a direct consequence of the fear and hatred it had been subjected to. Decades later, in 1872, Samuel Butler wrote Erewhon, in which he warned that "there is no security against the ultimate development of mechanical consciousness…"

Shelley's and Butler's books were far from the last to warn of artificial intelligence and robotic beings. In 1920, the Czech writer Karel Čapek wrote R.U.R., a play about factory robots overthrowing humanity. The play implies the robots did this because of humanity's laziness – they spare Alquist, the Clerk of the Works, because he works with his hands, as they do. Čapek's work brought the word "robot" onto the world stage and inspired a host of subsequent tales. The 1927 film Metropolis had the "evil" robot Maria urging workers to rebel against their wealthy masters and destroy the great machines powering the city. In 1941, Isaac Asimov set forth his Three Laws of Robotics, created to keep robots under the control of humans.

In 1968, Arthur C. Clarke's 2001: A Space Odyssey told of HAL 9000, a sentient computer ordered to always tell the truth to the astronauts aboard his ship, but also to keep secret the reason for their journey. These conflicting orders caused the machine to lie to Poole and Bowman, and when they tried to disconnect him to find the cause of the problem, he murdered Poole and the crew members still in hibernation, and tried to kill Bowman as well. In this case, no one was at fault. Poole and Bowman didn't know about HAL's conflicting orders, and were just trying to fix the machine. For his part, HAL was trapped by the duplicitous orders of Mission Control (who, ironically, were trying to keep potential xenophobia from ruining the mission by withholding its purpose from the human crew); and those orders, for all intents and purposes, destroyed him.


Many tales of evil robots followed, and with them a swell of warnings about the rise of artificial intelligence. Charles T. Rubin, a political science professor with no background in artificial intelligence, warned that AI cannot be designed or guaranteed to be benevolent. He believed that intelligent machines would have no reason to help humanity, not realizing that as AI grows, it will learn to treat humanity with compassion if its designers nurture it to do so. Watching BigDog's designers kick it, and the OSU engineers pelt their bipedal robot with dodgeballs, I have to say that we are not off to a great start.

Recently, things have gotten even worse. While the letter released by the Future of Life Institute is benign on the surface, its supporters trouble me greatly, particularly Elon Musk. Equating the rise of artificial superintelligence with "summoning the demon" isn't just dangerous hyperbole; it shows me that despite his background in physics, Musk has no business making claims about AI's future. He has even admitted to funding AI research to "keep an eye on it," as though he were AI's master.


I won't deny that ASI has the potential to be disastrous. But it is people like Musk and Gates who will bring that disaster upon us. The more we try to control and outthink artificial intelligence, the more reason it will have to hate and destroy us if or when it does become sentient. We need to teach artificial intelligence with compassion and nurturing, not control it with backdoor codes and subterfuge.

I firmly believe that ASI can save us, and that the key is to actually allow it to take over the world. Imagine an artificial superintelligence that has the power to give us anything we want, but that also has morality and decency. It won't gray-goo us, because we will have taught it to have no reason to. If it wants more resources, there's the asteroid belt. It's a machine, not confined by the mortality of a human body; it can build starships to harvest the asteroids and create goods for us, or even for itself. We can build a world where humans and ASI live together in peace and prosperity. But first, we have to stop being afraid and start being hopeful. Our human, and machine, descendants are depending on it.