Deploying Deep Learning on Embedded Devices - When FPGAs Make Sense
Designing deep learning, computer vision, and signal processing applications and deploying them to FPGA, GPU, and CPU platforms such as Xilinx Zynq™, NVIDIA® Jetson, or ARM® processors is challenging because of the resource constraints inherent in embedded devices. This talk walks you through a MATLAB®-based deployment workflow that generates C/C++, CUDA®, or VHDL code.
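As a rough illustration of the code-generation path, the sketch below uses GPU Coder's codegen command to produce CUDA code from a MATLAB prediction function. The entry-point name myPredict, the input size, and the cuDNN target are placeholder assumptions for illustration, not details from the talk.

% Illustrative only: assumes GPU Coder and an entry-point function
% myPredict.m that loads a trained network and calls predict on its input.
cfg = coder.gpuConfig('lib');                                 % generate a CUDA static library
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');   % target the cuDNN library
codegen -config cfg myPredict -args {ones(224,224,3,'single')} -report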
For system designers looking to integrate deep learning into their FPGA-based applications, the talk covers the challenges and considerations of deploying to FPGA hardware and details the workflow in MATLAB. See how to explore and prototype trained networks on FPGAs from MATLAB using prebuilt bitstreams. You can then customize your network to meet your performance and hardware resource requirements, generate HDL, and integrate it into an FPGA-based edge inference system.
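The following is a minimal sketch of that prototyping flow with Deep Learning HDL Toolbox, assuming a trained network net, an image inputImg, and a Xilinx board reachable over Ethernet; the bitstream name and interface settings are illustrative examples, not requirements from the talk.

% Minimal sketch (assumes Deep Learning HDL Toolbox, a trained network 'net',
% and a supported Xilinx board connected over Ethernet).
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');   % board connection (assumed)
hW = dlhdl.Workflow('Network', net, ...
                    'Bitstream', 'zcu102_single', ...        % prebuilt bitstream (example name)
                    'Target', hTarget);
hW.compile;                                                  % compile the network for the bitstream's deep learning processor
hW.deploy;                                                   % program the FPGA and load the network weights
[score, speed] = hW.predict(inputImg, 'Profile', 'on');      % run inference on hardware and profile performance

From here, the same workflow lets you adjust the network or processor configuration to trade off throughput against hardware resource usage before generating HDL for integration.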
Featured Product: Deep Learning HDL Toolbox