Computers have become so smart during the past 20 years that people don’t think twice about chatting with digital assistants like Alexa and Siri or seeing their friends automatically tagged in Facebook pictures.

But making those quantum leaps from science fiction to reality required hard work from computer scientists like Yoshua Bengio, Geoffrey Hinton and Yann LeCun. The trio tapped into their own brainpower to make it possible for machines to learn like humans, a breakthrough now commonly known as “artificial intelligence,” or AI.

Their insights and persistence were rewarded Wednesday with the Turing Award, an honor that has become known as the technology industry’s version of the Nobel Prize. It comes with a $1 million prize funded by Google, a company that has made AI part of its DNA.

LeCun reflected on his career and the early days of deep learning with Bengio and Hinton in the 1980s in a blog post at Facebook.

“All three of us got into this field not just because we want to build intelligent machines, but also because we just wanted to understand intelligence — and that includes human intelligence,” said LeCun.

“We’re looking for underlying principles to intelligence and learning, and through the construction of intelligent machines, to understand ourselves.”

The award marks the latest recognition of the instrumental role that artificial intelligence will likely play in redefining the relationship between humanity and technology in the decades ahead.


The teamwork behind AI

Working independently and together, Hinton, LeCun and Bengio developed conceptual foundations for the field, identified surprising phenomena through experiments, and contributed engineering advances that demonstrated the practical advantages of deep neural networks. In recent years, deep learning methods have been responsible for astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics—among other applications.

While the use of artificial neural networks as a tool to help computers recognize patterns and simulate human intelligence had been introduced in the 1980s, by the early 2000s, LeCun, Hinton and Bengio were among a small group who remained committed to this approach. Though their efforts to rekindle the AI community’s interest in neural networks were initially met with skepticism, their ideas recently resulted in major technological advances, and their methodology is now the dominant paradigm in the field.

— From the award announcement


“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society,” said Cherri Pancake, president of the Association for Computing Machinery, the group behind the Turing Award.

Although they have known each other for more than 30 years, Bengio, Hinton and LeCun have mostly worked separately on technology known as neural networks. These are the electronic engines that power tasks such as facial and speech recognition, areas where computers have made enormous strides over the past decade. Such neural networks are also a critical component of robotic systems that are automating a wide range of other human activities, including driving.

Their belief in the power of neural networks was once mocked by their peers, Hinton said. No more. He now works at Google as a vice president and senior fellow while LeCun is chief AI scientist at Facebook. Bengio remains immersed in academia as a University of Montreal professor in addition to serving as scientific director at the Artificial Intelligence Institute in Quebec.

“For a long time, people thought what the three of us were doing was nonsense,” Hinton said in an interview with The Associated Press. “They thought we were very misguided and what we were doing was a very surprising thing for apparently intelligent people to waste their time on. My message to young researchers is, don’t be put off if everyone tells you what you are doing is silly.”

Now, some people are worried that the results of the researchers’ efforts might spiral out of control.

While the AI revolution is raising hopes that computers will make most people’s lives more convenient and enjoyable, it’s also stoking fears that humanity eventually will be living at the mercy of machines.

Bengio, Hinton and LeCun share some of those concerns — especially the doomsday scenarios in which AI technology is developed into weapons systems that wipe out humanity.

But they are far more optimistic about the other prospects of AI — empowering computers to deliver more accurate warnings about floods and earthquakes, for instance, or detecting health risks, such as cancer and heart attacks, far earlier than human doctors.

“One thing is very clear: the techniques that we developed can be used for an enormous amount of good, affecting hundreds of millions of people,” Hinton said.