- [Rudy] As part of securing your workloads, you need to protect your application endpoints. These endpoints are more than likely the ones that end users go through to interact with your application, and therefore, it's critical to protect them. Let's look at the traditional server-based infrastructure model, with, say, an EC2 fleet. As a reminder, EC2 is the computational building block at AWS, where you choose specs, such as memory, disk space, and OS, and then spin up an EC2 instance. You can start with one instance, then scale vertically to a larger instance type, or scale out horizontally to a whole fleet of servers. When you deploy a fleet, you want to spread the traffic across it evenly, like honey. A way to achieve this is with an AWS service called Elastic Load Balancing. There are a few different types, with the two notable ones being the Application Load Balancer, or ALB, and the Network Load Balancer, or NLB. The ALB operates at the individual request level, or Layer 7, and the Network Load Balancer operates at the connection level, or Layer 4. If you'd like to learn more about these load balancers, please make sure to check out the Resources section. In our example, we're using the NLB, and you can see that we're load balancing traffic across our fleet using a round-robin algorithm, which rotates traffic in order between the instances. Our fleet hosts a website, and since we want it to be secure by default, we require that all traffic is transported over HTTPS. HTTPS, in this case, is the secure extension of Hypertext Transfer Protocol, or HTTP, and means that requests are encrypted in transit. To achieve this on our EC2 instances, we route the encrypted traffic through the NLB and terminate it on the instances themselves. Requests are then decrypted on the instances, and the application can proceed. This is a recommended practice for securing traffic to your EC2 instances. 
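To make the round-robin idea concrete, here's a minimal sketch in plain Python. This is just the rotation logic, not an actual Elastic Load Balancing API, and the instance IDs are made up for illustration:

```python
from itertools import cycle


class RoundRobinBalancer:
    """Rotates incoming requests across a fleet of targets in order."""

    def __init__(self, targets):
        # cycle() repeats the target list endlessly, in order.
        self._rotation = cycle(targets)

    def next_target(self):
        # Each call hands back the next instance in the rotation.
        return next(self._rotation)


fleet = ["i-0aaa", "i-0bbb", "i-0ccc"]  # hypothetical EC2 instance IDs
lb = RoundRobinBalancer(fleet)
print([lb.next_target() for _ in range(5)])
# → ['i-0aaa', 'i-0bbb', 'i-0ccc', 'i-0aaa', 'i-0bbb']
```

Notice that with three instances, the fourth request wraps back around to the first instance, which is exactly the "rotates traffic in order" behavior described above.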
But, it does incur a small performance impact for the actual decryption and encryption. So, can we improve upon it? Of course we can. By using the NLB, we can terminate our HTTPS traffic on the load balancer itself, which frees our EC2 instances from the compute-intensive work of encrypting and decrypting all of that traffic. You also get some added benefits, like simplified management, access logs, and improved compliance. For a full list of these benefits, check out our Resources section. The next service we'll cover is called API Gateway, where API stands for application programming interface. It's a fully managed service that allows you to create, publish, maintain, monitor, and secure APIs at any scale. By using API Gateway, you can expose certain application logic via REST or even WebSockets. In essence, you're only allowing access to the needed parts of your application, and therefore, you're not opening up, say, your backend to the whole world. This notion of protecting your backend, and other critical systems, such as your database, is a best practice for any architecture, and we highly recommend it when designing your next workload. Now, say we're exposing a createBeeHive API call via our API Gateway. How do we make sure only authorized users can use it? Well, you can add optional authorization processing, such as AWS Signature Version 4, or SigV4, or even Lambda authorizers. SigV4 works via AWS credentials, that is, your access key and secret access key, which authorize access by signing the requests to your service. When you create your API Gateway service call structure, you can generate a custom SDK for your service, and this custom SDK will handle the signing of your requests. If you don't want to use permanent credentials, you can even use Amazon Cognito to retrieve temporary, role-based credentials, and use those to make calls to your API. The other option for securing your API Gateway is custom Lambda authorizers, which are actually AWS Lambda functions. 
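To give you a feel for what "signing a request" means under the hood, here's a minimal sketch of the key-derivation step of SigV4, using only Python's standard library. In practice the generated SDK does all of this for you; the credentials and service name below are illustrative placeholders, not real values:

```python
import hashlib
import hmac


def sigv4_signing_key(secret_key, date_stamp, region, service):
    """Derive a SigV4 signing key via chained HMAC-SHA256:
    secret -> date -> region -> service -> 'aws4_request'."""
    def sign(key, msg):
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()

    k_date = sign(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")


# Fake credentials for illustration only. The resulting key is what
# ultimately signs the request before it's sent to your API.
key = sigv4_signing_key("EXAMPLE-SECRET-KEY", "20240101", "us-east-1", "execute-api")
print(len(key))  # → 32, the HMAC-SHA256 digest size
```

The takeaway is that the signature is scoped: change the date, region, or service and you get a completely different key, which is part of why signed requests can't simply be replayed elsewhere.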
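Before we dig into how Lambda authorizers work, here's roughly what one looks like: a function that receives the incoming bearer token and answers with an IAM policy. The token check below is a deliberate placeholder; a real authorizer would validate something like an OAuth or JWT token:

```python
def lambda_authorizer(event, context=None):
    """Token-based Lambda authorizer sketch: map a bearer token to an
    Allow or Deny IAM policy for the incoming API Gateway method."""
    token = event.get("authorizationToken", "")

    # Placeholder check for illustration; a real authorizer would
    # verify the token against your identity provider.
    effect = "Allow" if token == "allow-me" else "Deny"

    return {
        "principalId": "user",  # hypothetical caller identity
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }


# API Gateway would invoke the function with the token and method ARN,
# for example on a call to our createBeeHive API:
resp = lambda_authorizer({
    "authorizationToken": "allow-me",
    "methodArn": "arn:aws:execute-api:us-east-1:123456789012:abc123/prod/POST/createBeeHive",
})
print(resp["policyDocument"]["Statement"][0]["Effect"])  # → Allow
```

The key design point is that the authorizer doesn't let the request through itself; it hands API Gateway a policy, and API Gateway enforces it.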
And as a reminder, Lambda functions are the serverless compute offering on AWS, and they allow you to write your code as reusable functions in a language of your choice. We have several officially supported languages, such as Java, Ruby, Python, and Node.js, but you can bring any language via custom runtimes. Now, these functions are executed in response to events, so they can scale up and scale down as demand varies, with a maximum execution duration of 15 minutes. For a full explanation of AWS Lambda, please check out our Resources section. Lambda authorizers determine access to APIs using a bearer token strategy, such as Open Authorization, or OAuth, which is an open standard for token-based authentication and authorization. This means that if a Lambda authorizer is configured, API Gateway calls the Lambda function with the incoming authorization token, and depending on the strategy you implemented, the function returns an IAM policy, which is used to authorize the request. If the policy returned by the authorizer is valid, API Gateway will then cache the policy associated with the incoming token for up to one hour. And that's how you can secure your API Gateway. But, that's enough of this for you busy bees. Thanks for watching, and hopefully, you learned more about securing your AWS endpoints. I mean, after all, wasn't that the point of this video? [punchline drum beat] Okay, okay, my producers in the back are cringing, so I'll bid you adieu until next time.