Optimize TensorFlow Models For Deployment with TensorRT

4.6

59 ratings

Offered by

3,860 already enrolled

In this free Guided Project:

  • 1.5 hours

  • Intermediate

  • No download needed

  • Split-screen video

  • English

  • Desktop only

This is a hands-on, guided project on optimizing TensorFlow models for inference with NVIDIA's TensorRT. By the end of this 1.5-hour project, you will be able to optimize TensorFlow models using the TensorFlow integration of NVIDIA's TensorRT (TF-TRT), use TF-TRT to optimize several deep learning models at FP32, FP16, and INT8 precision, and observe how tuning TF-TRT parameters affects performance and inference throughput.

Prerequisites: In order to successfully complete this project, you should be competent in Python programming, understand deep learning and what inference is, and have experience building deep learning models in TensorFlow and its Keras API.

Note: This course works best for learners who are based in the North America region. We're currently working on providing the same experience in other regions.
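The throughput comparison the project describes boils down to timing repeated inference calls and converting the elapsed time to samples per second. Here is a minimal sketch in plain Python; the `benchmark` helper and the `dummy_infer` stand-in are hypothetical illustrations, not part of TF-TRT. In the project you would pass the inference function of a loaded (TF-TRT optimized) SavedModel and a real batch of input tensors instead:

```python
import time

def benchmark(infer_fn, batch, n_warmup=10, n_runs=50):
    """Time an inference callable and return throughput in samples/sec.

    infer_fn and batch are stand-ins: substitute the optimized model's
    inference function and a batch of input data.
    """
    for _ in range(n_warmup):          # warm-up runs exclude one-time setup cost
        infer_fn(batch)
    start = time.perf_counter()
    for _ in range(n_runs):            # timed runs
        infer_fn(batch)
    elapsed = time.perf_counter() - start
    batch_size = len(batch)
    return (n_runs * batch_size) / elapsed   # samples processed per second

# Toy stand-in "model": sums each sample. Replace with real model inference.
def dummy_infer(batch):
    return [sum(sample) for sample in batch]

if __name__ == "__main__":
    batch = [[1.0] * 128 for _ in range(32)]   # 32 samples of 128 features
    print(f"{benchmark(dummy_infer, batch):.1f} samples/sec")
```

Running the same harness against FP32, FP16, and INT8 variants of a model is one way to make the precision/throughput trade-off concrete.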

Prerequisites

Skills you will develop

  • Deep Learning

  • NVIDIA TensorRT (TF-TRT)

  • Python Programming

  • TensorFlow

  • Keras

Learn step-by-step

In a video that plays in a split-screen with your workspace, your instructor will guide you through each step:

How Guided Projects work

Your workspace is a cloud desktop right in your browser; no download required

In a split-screen video, your instructor guides you step-by-step

Reviews

Top reviews from OPTIMIZE TENSORFLOW MODELS FOR DEPLOYMENT WITH TENSORRT

