GPU Coder™ generates readable, portable CUDA® code from MATLAB® algorithms, leveraging CUDA libraries such as cuBLAS and cuDNN. The generated code can then be cross-compiled and deployed to NVIDIA® GPUs, from Tesla® boards to the embedded Jetson™ platform.
Learn more about GPU Coder: https://goo.gl/iur976
Download a free Deep Learning ebook: https://goo.gl/2u1M99
The first part of this talk describes how MATLAB is used to design and prototype end-to-end systems that combine a deep learning network with computer vision algorithms. You’ll learn about the capabilities in MATLAB for accessing and managing large data sets, as well as pretrained models that let you get started quickly with deep learning design. Then, you’ll see how the distributed and GPU computing capabilities integrated with MATLAB are employed during training, debugging, and verification of the network. Finally, most end-to-end systems need more than just classification: data must be preprocessed before classification and the results postprocessed afterward, often as inputs to a downstream control system. These traditional computer vision and control algorithms, written in MATLAB, interface with the deep learning network to build up the end-to-end system.
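As a rough sketch of the prototyping steps described above, the following MATLAB fragment manages a large image set with a datastore and starts from a pretrained model (the folder name and split ratio are illustrative assumptions, not from the talk):

```matlab
% imageDatastore manages a large image collection without loading it
% all into memory; folder names double as class labels.
imds = imageDatastore('trafficSigns', ...
    'IncludeSubfolders', true, 'LabelSource', 'foldernames');
[trainSet, testSet] = splitEachLabel(imds, 0.8, 'randomized');

% Start from a pretrained reference network (transfer learning).
net = alexnet;
layers = net.Layers;
% ...replace the final layers for the new classes, then retrain on GPU:
% opts = trainingOptions('sgdm', 'ExecutionEnvironment', 'gpu');
% net  = trainNetwork(trainSet, newLayers, opts);
```

Training on a GPU or a cluster is selected through the `ExecutionEnvironment` training option, which is how the distributed and GPU computing capabilities mentioned above plug into the same workflow.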
The second part of this talk focuses on the embedded deployment phase. Using representative examples from automated driving to illustrate the entire workflow, see how GPU Coder automatically analyzes your MATLAB algorithm to (a) partition the algorithm between CPU and GPU execution; (b) infer memory dependencies; (c) allocate data across the GPU memory hierarchy (including global, local, shared, and constant memories); (d) minimize data transfers and device synchronizations between CPU and GPU; and (e) generate CUDA code that leverages optimized CUDA libraries like cuBLAS and cuDNN to deliver high performance.
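The workflow above is driven from a few MATLAB commands; a minimal sketch, assuming an entry-point function named `myDetector` and a 224x224x3 input (both illustrative), might look like:

```matlab
% Configure GPU Coder to generate a CUDA static library,
% with deep learning layers mapped onto cuDNN.
cfg = coder.gpuConfig('lib');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');

% Generate CUDA code for the entry-point function; -report opens
% the code generation report for inspecting the CPU/GPU partition.
codegen -config cfg myDetector -args {ones(224,224,3,'single')} -report
```

The `-args` example input tells the code generator the size and type of the data, which is what enables the memory analysis and allocation steps described above.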
Finally, you’ll see that the generated code is highly optimized: benchmarks show that the deep learning inference performance of the auto-generated CUDA code is ~2.5x faster than MXNet, ~5x faster than Caffe2, and ~7x faster than TensorFlow®.
Watch this talk to learn how to:
1. Access and manage large image sets
2. Visualize networks and gain insight into the training process
3. Import reference networks such as AlexNet and GoogLeNet
4. Automatically generate portable and optimized CUDA code from the MATLAB algorithm
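For items 2 and 3 above, a brief sketch of importing and inspecting a reference network in MATLAB (the choice of GoogLeNet here is just one of the networks named above):

```matlab
% Load a pretrained reference network from Deep Learning Toolbox.
net = googlenet;

% Visualize the layer graph and check for architectural issues
% before training or code generation.
analyzeNetwork(net);
```

From this point, the same network object can be passed through the GPU Coder workflow to produce portable, optimized CUDA code (item 4).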