Hello, and welcome to the Intermediate Intel Distribution of OpenVINO toolkit tutorial. My name is Rio and I will be your instructor for this course. In this video, we will introduce our main subject, the Intel Distribution of OpenVINO toolkit, and then go over the plans for the tutorial. OpenVINO stands for Open Visual Inference and Neural network Optimization. It is a toolkit for application developers to deploy inference applications. Now, if you are familiar with the deep learning field, you may know that there are many different deep learning frameworks out there that can be used for inference, and you may be wondering what the benefit of yet another one is. So let's begin by discussing the value that the toolkit brings to the table. First and foremost, the toolkit is for deploying machine learning applications. It is not a framework for training models. With this focus on deployment, the toolkit was designed to address a problem that an application developer might face: diversity in hardware and frameworks. Let's elaborate on this issue. There is a wide variety of compute devices that can be used to deploy computer vision applications. From Intel, we have CPUs, GPUs, VPUs, and FPGAs. Unlike in training, where the main metric data scientists generally care about is how quickly a model can be trained, there are many factors to consider beyond speed when actually deploying a machine learning application. For some applications, power cost might be a concern; then images per watt might be more important than images per second. For other applications, device maintenance might be an issue: a system that can survive decades without maintenance may be favored over a machine that is powerful but requires frequent maintenance. And for yet others, how much the devices cost might be the deciding factor. Each device has strengths and weaknesses that make it more suited to one situation or another.
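The tradeoff between raw speed and efficiency can be made concrete with a toy calculation. Note that every number below is invented purely for illustration; it does not describe any real Intel device:

```python
# Hypothetical numbers for illustration only: throughput (images/s)
# and power draw (watts) for two fictional devices.
devices = {
    "edge_vpu":   {"fps": 30.0,  "watts": 2.5},
    "server_cpu": {"fps": 450.0, "watts": 150.0},
}

# Derive the efficiency metric: images processed per watt of power.
for name, d in devices.items():
    d["images_per_watt"] = d["fps"] / d["watts"]

# Ranked by raw speed, the server CPU wins; ranked by efficiency
# (images per watt), the small VPU wins. Which ranking matters
# depends on the application.
fastest = max(devices, key=lambda n: devices[n]["fps"])
most_efficient = max(devices, key=lambda n: devices[n]["images_per_watt"])
```

With these made-up numbers, `fastest` is the server CPU while `most_efficient` is the VPU, which is exactly the kind of disagreement between metrics that makes device selection a real decision.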
An FPGA might have low power requirements but may not be able to handle a large volume of images. On the other hand, a server CPU might be able to handle a large volume of requests, but may also require a lot of power. On top of all of this, there is sometimes a need for heterogeneous computing, where an application takes advantage of two or more types of devices. Which devices to use, and what part of the workload to assign to each, may be yet another consideration. With this variety of devices to choose from, an application developer often needs to be ready to deploy the application on many different types of devices, and to do this, a developer sometimes has to learn multiple libraries and workflows, and maintain several versions of the source code. To add to the complexity, there are many machine learning frameworks that AI developers use. We have TensorFlow, Caffe, Torch, and Apache MXNet, to name a few. Each framework generally has a different format for its machine learning models, different libraries and syntax, and, most importantly, varying degrees of support for different hardware. An application developer may not have a say in which framework the computer vision model they use was trained on, and may even get models from several different frameworks. So not only do developers need to handle multiple device types, they may have to handle multiple frameworks as well. This process of mixing and matching between frameworks and devices can be quite time consuming, and developers may opt not to use a device or model that might be beneficial because of it. This is the issue that the toolkit is trying to solve. One way to interpret the role of the toolkit is as a translator between the frameworks and their models on one side, and the hardware used for inference on the other. The toolkit is designed so that the workflow and code are nearly identical regardless of the choice of framework or hardware.
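To give a feel for what "nearly identical code regardless of hardware" means in practice, here is a minimal, untested sketch using the toolkit's Python Inference Engine API. The file names are placeholders, and the exact API surface varies between toolkit versions, so treat this as an illustration of the idea rather than a reference:

```python
from openvino.inference_engine import IECore  # the toolkit's Python API

ie = IECore()

# "model.xml"/"model.bin" stand in for any model already converted to
# the toolkit's intermediate format; the file names are placeholders.
net = ie.read_network(model="model.xml", weights="model.bin")

# Switching hardware is a one-string change: "CPU", "GPU", "MYRIAD"
# (a VPU), or even a heterogeneous combination like "HETERO:FPGA,CPU".
# Everything before and after this line stays the same.
exec_net = ie.load_network(network=net, device_name="CPU")
```

The rest of the application, preparing inputs and reading outputs, is unaffected by which device string was passed in, which is the portability the toolkit is selling.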
Using the tools provided by the toolkit, developers can take a model from a wide range of frameworks and use it for inference on a wide range of devices with little to no change in the code. This is the value proposition of the toolkit. Now, let us move on. When learning a tool, one of the most important things to do first is to understand what it can and can't do. We've already discussed the main use case for the toolkit, but let's quickly go over some other things the toolkit can and can't do. First, and perhaps foremost, the Intel Distribution of OpenVINO toolkit is specifically for inference. It cannot be used to train computer vision models. So this tool is for developers who already have a model they want to deploy; it is not for data scientists who want to train a model. Next, the toolkit can be used for any number of deep learning tasks, not just vision. At its core, the toolkit takes a multi-dimensional array as input and outputs a multi-dimensional array. The input array is usually images, but it does not need to be. That said, the main focus of the toolkit is computer vision, and operations for computer vision get preferential support and development. Additionally, the toolkit can be used to do some traditional computer vision. Before neural networks became the dominant tool, there were other tools being used for computer vision tasks. These tools are sometimes referred to as traditional computer vision tools. The toolkit comes with the libraries OpenCV and OpenVX, which support some traditional computer vision tools. Finally, the toolkit will not interpret for you the results of the machine learning workloads you run with it. Since deep learning models generally have different outputs depending on the model, the toolkit simply cannot determine what the numbers in the output arrays mean. So you have to know what the model you deploy returns, and interpret the results yourself. Now, let's talk a little bit about this tutorial.
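As an example of what interpreting the output yourself looks like, here is a minimal sketch assuming a hypothetical image classifier whose raw output is a (1, 1000) array of unnormalized class scores. Random numbers stand in for a real model's output; the shape and the classification task are assumptions for the sake of the example:

```python
import numpy as np

# Stand-in for the raw output of a hypothetical classification model:
# a (1, 1000) array of unnormalized scores, one per class. The toolkit
# hands you an array like this as-is; giving it meaning is your job.
rng = np.random.default_rng(0)
raw_output = rng.standard_normal((1, 1000))

# Softmax turns the scores into probabilities that sum to 1
# (subtracting the max first for numerical stability)...
scores = raw_output[0]
probs = np.exp(scores - scores.max())
probs /= probs.sum()

# ...and argmax picks the most likely class index. A real application
# would then look this index up in the model's own label file.
top_class = int(np.argmax(probs))
confidence = float(probs[top_class])
```

Nothing in this post-processing comes from the toolkit: the choice of softmax, the argmax, and the label lookup all depend on knowing what the particular model's output array represents.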
The main audience for this tutorial is developers who want to learn how to use the toolkit. We will focus on practical examples, going through code snippets and programming exercises. If you're interested in a more general overview of the toolkit, I recommend taking the beginner tutorial for the Intel Distribution of OpenVINO toolkit instead. The tutorial is split into two courses. In the first course, we focus on learning to use the tool: we cover the various libraries and tools you need to run inference. In the second course, we focus on actual deployment: we discuss strategies for deploying applications, what to think about when comparing results on different hardware, and optimization techniques to squeeze extra performance out of your hardware resources. The programming language used for this tutorial is Python. An understanding of Python and some common libraries like NumPy will be essential for some of the discussions in the videos. I will not be going over Python concepts in this tutorial, so I recommend taking some basic Python courses first if you're not familiar with the language. This tutorial does not require any prior knowledge of machine learning or deep learning to complete, but both are highly recommended. Throughout the tutorial, I'll be referencing concepts from machine learning and deep learning as they apply to the topic at hand, but I will not be going into depth about them. Finally, the tutorial does come with several quizzes and programming exercises. While only the quizzes are graded, there will be quiz questions that can only be answered if you did the exercises. That is it for this video. Thank you for joining me. In the next video, we will look at an overview of the toolkit workflow.