Google has launched into the deep end of artificial intelligence, buying the London-based startup DeepMind for $500 million.
DeepMind Technologies had one of the biggest concentrations of researchers anywhere working on deep learning, a relatively new field of artificial intelligence research that aims to achieve tasks like recognizing faces in video or words in human speech (see “Deep Learning”).
The deal, which adds skilled experts rather than specific products, marks an acceleration in efforts by Google, Facebook, and other Internet firms to snap up the biggest brains in artificial intelligence research.
Vying with Google for talent are companies including Amazon, Microsoft, and Facebook, which created its own deep learning group (see “Facebook Launches Advanced AI Effort to Find Meaning in Your Posts”) and recruited perhaps the world’s best-known deep learning scientist, Yann LeCun of New York University, to run it. His NYU colleague Rob Fergus also accepted a job at the social network.
So what is Google getting for its half a billion dollars? A company that’s very good at making computers that think and act like humans.
DeepMind has not yet developed any commercial products. Its main asset appears to be its personnel, including dozens of experts in machine learning, a branch of AI that attempts to teach computers to think like humans. Its best-known project was a computer system it taught to master Atari video games.
DeepMind co-founder Demis Hassabis is a neuroscientist and former child chess prodigy who spent 11 years in video game design, working on titles including 2001’s “Black & White,” a “god game” that allowed players to choose between a path of good or evil, and 2004’s “Evil Genius.” In 2005, he began working in neuroscience and artificial intelligence, receiving a Ph.D. in the former from University College London in 2009.
Hassabis drew both renown and criticism for a 2007 paper, “The Future of Memory: Remembering, Imagining,” which linked the mind’s process of imagination with memory formation. He founded DeepMind in 2011 with Mustafa Suleyman and Shane Legg.
The DeepMind purchase also follows a recent spending spree in which Google bought seven different robotics firms, with a potential focus on automated manufacturing, and that spree has raised concerns. Among the company’s recent acquisitions are Nest Labs and Boston Dynamics.
Google also grabbed renowned University of Toronto deep-learning researcher Geoffrey Hinton and a passel of his students when it acquired voice and image research firm DNNresearch. Hinton now works part-time at Google. “We said to Geoff, ‘We like your stuff. Would you like to run models that are 100 times bigger than anyone else’s?’ That was attractive to him,” said Peter Norvig, Google’s director of research.
These acquisitions have caused confusion as to why a search company needs robotics expertise and so much customer data. To address this concern, Google has agreed to set up an ethics board to make sure that DeepMind’s artificial intelligence is developed safely.
If this all seems a bit ominous, it is. A co-founder of DeepMind made a dire prediction back in 2011:
“Eventually, I think human extinction will probably occur, and technology will likely play a part in this,” DeepMind’s Shane Legg said in an interview with Alexander Kruel. Among all the forms of technology that could wipe out the human species, he singled out artificial intelligence, or AI, as the “number 1 risk for this century.”
The ethics board is a surprising first for Google, and it raises questions about why the company is so concerned with this specific technology when it allegedly can already read your emails. The feared threat of artificial intelligence is this: one day there could exist computers as smart as humans, which eventually surpass us to become the most intelligent beings on the planet and leave us at their mercy. If such computers are built incorrectly, they could threaten our very survival. Google’s ethics board must therefore carefully weigh the moral implications of the artificial intelligence projects the company pursues and ensure that its smart systems operate under strict moral rules and standards.
Many of DeepMind’s founders and funders have been outspoken about the potential risks of uncontrolled artificial intelligence, so one can reasonably expect that they will urge Google to stay within these guidelines.