
Learner reviews and feedback for Optimize TensorFlow Models For Deployment with TensorRT, offered by Coursera Project Network

4.6
59 ratings

Course Overview

This is a hands-on, guided project on optimizing your TensorFlow models for inference with NVIDIA's TensorRT. By the end of this 1.5-hour-long project, you will be able to optimize TensorFlow models using the TensorFlow integration of NVIDIA's TensorRT (TF-TRT), use TF-TRT to optimize several deep learning models at FP32, FP16, and INT8 precision, and observe how tuning TF-TRT parameters affects performance and inference throughput. Prerequisites: In order to successfully complete this project, you should be competent in Python programming, understand deep learning and what inference is, and have experience building deep learning models in TensorFlow and its Keras API. Note: This course works best for learners who are based in the North America region. We're currently working on providing the same experience in other regions.
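As a rough illustration of the TF-TRT workflow described above, the sketch below converts a TensorFlow SavedModel with TrtGraphConverterV2 at FP16 precision. It is not material from the project itself: the ResNet-50 directory names are hypothetical, and it assumes a TensorFlow build with TensorRT support and a GPU available.

# Minimal TF-TRT conversion sketch (assumes TensorFlow built with TensorRT
# support and a SavedModel exported to the hypothetical directory below).
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# Start from the default conversion parameters and request FP16 precision.
conversion_params = trt.DEFAULT_TRT_CONVERSION_PARAMS._replace(
    precision_mode=trt.TrtPrecisionMode.FP16,
    max_workspace_size_bytes=1 << 30)  # 1 GiB workspace for engine building

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir='resnet50_saved_model',  # hypothetical input path
    conversion_params=conversion_params)

converter.convert()                                # build the TF-TRT graph
converter.save('resnet50_saved_model_TFTRT_FP16')  # hypothetical output path

For INT8, the same converter is used, but convert() additionally takes a calibration_input_fn that yields representative input batches. Tuning parameters such as max_workspace_size_bytes and minimum_segment_size, and comparing the resulting inference throughput across precisions, is the kind of experiment the project walks through.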

Top Reviews

LS

Jun 3, 2021

Great workshop, all the concepts were very well explained.

AA

Mar 14, 2022

The first to introduce such a rare and important topic.


1 - 10 of 10 reviews for Optimize TensorFlow Models For Deployment with TensorRT

By Awais A

Mar 28, 2021

By Jorge G

Feb 25, 2021

By Luis S

Jun 4, 2021

By Abdelrahman A

Mar 15, 2022

By Fabian I M N

Apr 20, 2021

By Nusrat I

Apr 16, 2021

By Chandra S

Dec 13, 2020

By Maftuna E

Sep 10, 2020

By Vignesh R

Jul 8, 2021

By Yilber R

Oct 1, 2020