MathWorks: MATLAB and NVIDIA Docker: A Complete AI Solution, Where You Need It, in an Instant
- 🎤 Speaker: Dr. Jos Martin, Senior Engineering Manager - Parallel Computing
- 📅 Date & Time: Monday 25 November 2019, 13:00 - 14:30
- 📍 Venue: FW26, Computer Laboratory
Abstract
MATLAB's deep learning, visualization, and C++/CUDA code generation technology make it a uniquely complete solution for your entire AI workflow. In MATLAB, you can easily manage data, perform complex image and signal processing, prototype and train deep networks, and deploy to your desktop, embedded, or cloud environments. Using GPU Coder technology, MATLAB generates CUDA kernels that optimize loops and memory access, and C++ that leverages cuDNN and TensorRT, providing the fastest deep network inference of any framework. With MATLAB's NVIDIA Docker container, available through the NVIDIA GPU Cloud, you can now easily access all this AI power, deploy it in your cloud or DGX environment, and get up and running in seconds. In this presentation we will demonstrate a complete end-to-end workflow that starts from 'docker run', prototypes and trains a network on a multi-GPU machine in the cloud, and ends with a highly optimized inference engine to deploy to data centers, clouds, and embedded devices.
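The 'docker run' starting point of the workflow can be sketched as below. This is a hedged illustration, not the speaker's exact commands: the registry path `nvcr.io/partners/matlab` and the `r2019b` tag are assumptions based on the NVIDIA GPU Cloud partner registry at the time of the talk, and may differ from the current image.

```shell
# Pull the MATLAB deep learning container from NVIDIA GPU Cloud (NGC).
# Image path and tag are illustrative assumptions; check the NGC catalog
# for the current name and release.
docker pull nvcr.io/partners/matlab:r2019b

# Launch it interactively with all GPUs visible to the container.
# The --gpus flag requires Docker 19.03+ with the NVIDIA Container
# Toolkit installed on the host.
docker run --gpus all -it --rm nvcr.io/partners/matlab:r2019b
```

From inside the container, the same MATLAB session can then be used for multi-GPU training and for generating the optimized inference engine described in the abstract.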
Series: This talk is part of the Technical Talks - Department of Computer Science and Technology series.