
Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter

Deep Learning expert Yann LeCun leads Facebook’s AI research lab.

NOTE: This article was NOT written by David Wolf; it was POSTED by David Wolf. It originally appeared in IEEE Spectrum.

Artificial intelligence has gone through some dismal periods, which those in the field gloomily refer to as “AI winters.” This is not one of those times; in fact, AI is so hot right now that tech giants like Google, Facebook, Apple, Baidu, and Microsoft are battling for the leading minds in the field. The current excitement about AI stems, in great part, from groundbreaking advances involving what are known as “convolutional neural networks.” This machine learning technique promises dramatic improvements in things like computer vision, speech recognition, and natural language processing. You probably have heard of it by its more layperson-friendly name: “Deep Learning.”

Few people have been more closely associated with Deep Learning than Yann LeCun, 54. Working as a Bell Labs researcher during the late 1980s, LeCun developed the convolutional network technique and showed how it could be used to significantly improve handwriting recognition; many of the checks written in the United States are now processed with his approach. Between the mid-1990s and the late 2000s, when neural networks had fallen out of favor, LeCun was one of a handful of scientists who persevered with them. He became a professor at New York University in 2003, and has since spearheaded many other Deep Learning advances.

More recently, Deep Learning and related fields have grown into one of the most active areas of computer research, which is one reason why, at the end of 2013, LeCun was appointed head of Facebook’s newly created Artificial Intelligence Research Lab. He continues with his NYU duties.

LeCun was born in France, and retains from his native country a sense of the importance of the role of the “public intellectual.” He writes and speaks frequently in his technical areas, of course, but is also not afraid to opine outside his field, including about current events.

IEEE Spectrum contributor Lee Gomes spoke with LeCun at his Facebook office in New York City. The following has been edited and condensed for clarity.

Yann LeCun on… (In the actual article, the lines below are live links that take you to the portion of the interview indicated. At the bottom of this page is a link to the whole article.)

  1. Explaining Deep Learning . . . in Eight Words
  2. A Black Box With 500 Million Knobs
  3. The Pursuit of Beautiful Ideas (Some Hacking Required)
  4. Hype and Things That Look Like Science but Are Not
  5. Unsupervised Learning: The Learning That Machines Need
  6. Facebook Does Deep Learning
  7. Can Deep Learning Give Machines Common Sense?
  8. The Inevitable Singularity Questions
  9. “Sometimes I Need to Build Things With My Hands”

  1. Explaining Deep Learning . . . in Eight Words

    IEEE Spectrum: We read about Deep Learning in the news a lot these days. What’s your least favorite definition of the term that you see in these stories?

    Yann LeCun: My least favorite description is, “It works just like the brain.” I don’t like people saying this because, while Deep Learning gets an inspiration from biology, it’s very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn’t deliver.

    Spectrum: So if you were a reporter covering a Deep Learning announcement, and had just eight words to describe it, which is usually all a newspaper reporter might get, what would you say?

    LeCun: I need to think about this. [Long pause.] I think it would be “machines that learn to represent the world.” That’s eight words. Perhaps another way to put it would be “end-to-end machine learning.” Wait, it’s only five words and I need to kind of unpack this. [Pause.] It’s the idea that every component, every stage in a learning machine can be trained.

    Spectrum: Your editor is not going to like that.

    LeCun: Yeah, the public wouldn’t understand what I meant. Oh, okay. Here’s another way. You could think of Deep Learning as the building of learning machines, say pattern recognition systems or whatever, by assembling lots of modules or elements that all train the same way. So there is a single principle to train everything. But again, that’s a lot more than eight words.
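    To make that concrete, here is a minimal sketch, written with PyTorch (a library LeCun does not mention in the interview), of a learning machine assembled from modules that all train the same way: a single loss and a single gradient-based update rule applied end to end. The architecture and numbers are purely illustrative.

```python
# A minimal sketch (PyTorch is an assumption, not part of the interview) of
# "assembling modules that all train the same way": every stage, from the
# convolutional feature extractor to the final classifier, is trained by the
# same principle -- gradient descent on one loss, via backpropagation.
import torch
import torch.nn as nn

model = nn.Sequential(                        # a stack of trainable modules
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                          # 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),              # 10 output classes
)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on a dummy batch of 28x28 grayscale images.
images = torch.randn(8, 1, 28, 28)
labels = torch.randint(0, 10, (8,))

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()        # gradients flow back through every module, end to end
optimizer.step()       # every parameter is updated by the same rule
```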

    Spectrum: What can a Deep Learning system do that other machine learning systems can’t do?

    LeCun: That may be a better question. Previous systems, which I guess we could call “shallow learning systems,” were limited in the complexity of the functions they could compute. So if you want a shallow learning algorithm like a “linear classifier” to recognize images, you will need to feed it with a suitable “vector of features” extracted from the image. But designing a feature extractor “by hand” is very difficult and time consuming.

    An alternative is to use a more flexible classifier, such as a “support vector machine” or a two-layer neural network fed directly with the pixels of the image. The problem is that it’s not going to be able to recognize objects to any degree of accuracy, unless you make it so gigantically big that it becomes impractical.
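    As a toy illustration of the “shallow learning” recipe LeCun describes, the sketch below (not from the interview; it assumes scikit-learn and its small built-in digits dataset, and the feature extractor is an invented example) hand-designs a “vector of features” and feeds it to a linear classifier. The particular features are arbitrary; the point is that a person, not the learning algorithm, had to invent them, whereas Deep Learning makes those stages trainable.

```python
# A toy sketch of the "shallow learning" recipe: a hand-designed feature
# extractor followed by a linear classifier. (scikit-learn and the specific
# features are illustrative assumptions, not anything LeCun prescribes.)
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()            # 1797 handwritten digits, 8x8 grayscale

def handcrafted_features(images):
    """A hand-designed 'vector of features': row sums, column sums, mean intensity."""
    row_sums = images.sum(axis=2)                  # (n, 8)
    col_sums = images.sum(axis=1)                  # (n, 8)
    mean_val = images.mean(axis=(1, 2))[:, None]   # (n, 1)
    return np.hstack([row_sums, col_sums, mean_val])

X = handcrafted_features(digits.images)            # feature vectors, not raw pixels
X_train, X_test, y_train, y_test = train_test_split(X, digits.target, random_state=0)

clf = LogisticRegression(max_iter=5000)            # a simple linear classifier
clf.fit(X_train, y_train)
print("accuracy on hand-crafted features:", round(clf.score(X_test, y_test), 3))
```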

    Spectrum: It doesn’t sound like a very easy explanation. And that’s why reporters trying to describe Deep Learning end up saying…

    LeCun: …that it’s like the brain.

NOTE: This fascinating interview goes on quite a bit longer. To see it all, use this link:   http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/facebook-ai-director-yann-lecun-on-deep-learning
