A talk in two parts: (1) AI Neuroscience: How much do deep neural networks understand about the images they classify? (2) Robots that can adapt like animals.
- 👤 Speaker: Prof. Jeff Clune (U Wyoming)
- 📅 Date & Time: Monday 07 November 2016, 11:00 - 12:00
- 📍 Venue: Dyson Meeting Room on the Ground Floor
Abstract
The first part of the talk describes our sustained effort to study how much deep neural networks know about the images they classify. Our team initially showed that deep neural networks are “easily fooled,” meaning they will declare with near certainty that completely unrecognizable images are everyday objects, such as guitars and starfish. These results suggested that deep neural networks (DNNs) do not truly understand the objects they classify, but instead latch onto a few discriminative features per class. However, our subsequent results reveal that DNNs actually have a surprisingly deep understanding of objects. These new techniques can also be applied to hidden units in the network, enabling us to study the features that each neuron has learned within a network. Our techniques also generate high-resolution, realistic images, and can thus be thought of as generative models. I will present a new, improved, unpublished, generative model that we believe may represent the state of the art in terms of generating a diverse collection of high-quality, high-resolution images.

The second part of the talk describes our Nature paper on learning algorithms that enable robots, after being damaged, to adapt in 1-2 minutes and soldier on with their mission.
AI Neuroscience:
- Nguyen A, Yosinski J, Clune J (2015) Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. CVPR (video summary)
- Nguyen A, Dosovitskiy A, Yosinski J, Brox T, Clune J (2016) Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. NIPS
- Nguyen A, Yosinski J, Clune J (2016) Multifaceted Feature Visualization: Uncovering the different types of features learned by each neuron in deep neural networks. ICML Visualization for deep learning workshop.
- Li Y, Yosinski J, Clune J, Lipson H, Hopcroft J (2016) Convergent Learning: Do different neural networks learn the same representations? ICLR (video of talk)
- Yosinski J, Clune J, Nguyen A, Fuchs T, Lipson H (2015) Understanding neural networks through deep visualization. ICML Deep Learning workshop (video summary)
- Yosinski J, Clune J, Bengio Y, Lipson H (2014) How transferable are features in deep neural networks? NIPS (video of talk)
Robots that can adapt like animals:
- Cully A, Clune J, Tarapore D, Mouret J-B (2015) Robots that can adapt like animals. Nature (video summary)
More at http://www.evolvingai.org
Series: This talk is part of the Machine Learning @ CUED series.