Hi again, Evan here. In this video, I'm going to walk you through three labs that provide solutions to help your engineering teams drive cost optimization. One quick way to optimize cost is to get rid of resources that you aren't using. In our first lab, you'll learn how to identify and then automatically delete unused IP addresses. In the next lab, we'll walk you through identifying unused and orphaned persistent disks, another easy way to reduce your costs. In the third lab, we'll show you how to use Stackdriver, Cloud Functions, and Cloud Scheduler to identify opportunities to migrate storage buckets to less expensive storage classes.

In this first lab, we'll use Cloud Functions and Cloud Scheduler to identify and clean up wasted cloud resources. In Google Cloud Platform, when a static IP address is reserved but not used, it accumulates a higher hourly charge than if it were actually attached to a machine. In apps that depend heavily on static IP addresses and large-scale dynamic provisioning, this waste can become very significant over time. So what are you going to do? You'll create a Compute Engine VM with a static external IP address, plus a separate, unused static external IP address. Then you'll deploy a Cloud Function to sniff out and identify any unused IP addresses, and create a Cloud Scheduler job that runs every night at 2:00 AM to call that function and delete those addresses. Once again, just remember that the GCP user interface can change, so your environment may look slightly different from what I'm going to show you in this walkthrough. Let's take a look.

We find ourselves back in another Qwiklab. This one is all about using Cloud Functions to do magical things. In this set of labs, we're creating Cloud Functions to clean up resources that aren't being used; in this particular case, a function that cleans up unused IP addresses. The actual function code, as you'll see way down there, is just a little bit of code, I think it's written in plain, functional JavaScript. The great news is that you don't have to write the function yourself; Google Cloud engineers have provided a lot of these functions in an external GitHub repository, which is cool, so you can take literally the same things you're using inside this lab right now and copy and paste them into your own project at work as well.

Highlighting the things you're going to be doing: you first need to create a virtual machine. Like we said in the introduction, you're creating a couple of external IP addresses, one that you're going to use and one that will sit unused. Then you'll deploy the code that actually goes through, sniffs out any addresses that aren't in use, and deletes them. Now, that's only if you manually trigger it. The second part of this lab is to schedule that Cloud Function to run, in this particular case nightly at 2:00 AM, I believe, which will automatically invoke the function and do that cleanup for you. Once you set it up once, which is great, it'll just run in perpetuity.

A couple of different things I want to highlight. The first thing to know is that inside the lab, you'll be working off of code that already exists in a GitHub repository. In fact, that's true for all three of these labs: anything that has to do with cleaning things up is going to be based on this repository, which I'll show you very briefly.
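As a quick aside before we look at the repository, here's a minimal sketch of what that first setup step can look like from the command line. The resource names, region, and zone here are hypothetical placeholders for illustration; the lab instructions have the exact commands to copy and paste:

```bash
# Reserve two static external IP addresses: one we'll attach to a VM,
# and one we'll deliberately leave unused.
gcloud compute addresses create used-ip unused-ip \
    --region=us-central1

# Create a VM that attaches one of the reserved addresses as its external IP.
# The other address stays reserved but unattached, which is exactly the waste
# the Cloud Function will look for.
gcloud compute instances create static-ip-instance \
    --zone=us-central1-a \
    --address=used-ip

# Confirm both addresses exist: the unattached one shows STATUS: RESERVED,
# the attached one shows STATUS: IN_USE.
gcloud compute addresses list --filter="region:us-central1"
```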
The first cleanup lab is on unused IP addresses, the second is on unattached persistent disks, or PDs, and the third is on migrating storage to a cheaper storage class if a bucket isn't being used that actively. Sounds cool.

The code for the unused IP addresses is just this JavaScript code here. Again, you don't have to write it; you just have to deploy and invoke it as your own function code. But you can see what it's doing: when the function is called, it lists however many IP addresses are out there, and for each of them, if the address is reserved but not actually in use, it deletes it; if it can't, it logs that it could not delete the address. Then boom, it's just 60 or so lines of JavaScript that basically say: there are statuses associated with these IP addresses, so iterate through all the IP addresses that people on my team have created throughout my project and remove the ones that aren't being used. That's the actual nuts and bolts of the lab, this public JavaScript code in the GitHub repository.

Let's take a look at how you actually deploy and invoke it. After you've cloned the project code, what you need to do is simulate a production environment. You'll create the unused IP address and the used IP address, associate them with a particular project, and then confirm that they were actually created. I'll show you just this one command right here. This says: hey, gcloud (that's how these commands are structured, by the way: Google Cloud), then which service or product do you want to use? Compute Engine. Then for IP addresses, which are just called addresses, run a list, and filter down to my particular region, which is just a filter flag you can add. I've actually already run through this lab, so you can see there's no address left that's marked as not in use, because I already ran the function and it deleted it. But as you work your way through your lab, you'll start with a list of unused IP addresses that the function trims down to just the ones that are in use, which is pretty cool.

Most of the magic, since this lab uses the command line to deploy the Cloud Function, is going to happen there. But once you validate that it works and it has cleaned up the IP addresses that weren't in use, what you can do at the end of the lab is say: hey, I don't want to come into the command line every time and invoke this function myself. I'll show you what that deploy and trigger look like, and then the last part of the lab is to schedule it. That uses Cloud Scheduler, a relatively new product; it's essentially a glorified cron job where Google manages all of the maintenance and hardware behind the scenes for you. You can create the job from the command-line terminal, but I also like to go into the console and see where it actually lives, under the admin tools.
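Before we jump into the console, here's a rough sketch of what that deploy, trigger, and schedule flow can look like. The function name, runtime version, region, and scheduler job name are assumptions for illustration, and the commands assume you run them from the directory containing the cloned function code; your lab has the exact values:

```bash
# List the reserved addresses in a region; before the cleanup runs you'd see
# the unused address with STATUS: RESERVED alongside the in-use one.
gcloud compute addresses list --filter="region:us-central1"

# Deploy the cleanup code from the cloned repo as an HTTP-triggered Cloud Function.
# (Function name and runtime are placeholders; --allow-unauthenticated keeps the
# demo simple in a throwaway lab project.)
gcloud functions deploy unused_ip_function \
    --trigger-http \
    --runtime=nodejs10 \
    --allow-unauthenticated

# Manually trigger the function once to verify it deletes the unused address.
gcloud functions call unused_ip_function

# Then schedule it so you never have to trigger it by hand again:
# "0 2 * * *" is cron syntax for 2:00 AM every night.
gcloud scheduler jobs create http unused-ip-job \
    --schedule="0 2 * * *" \
    --uri="https://REGION-PROJECT_ID.cloudfunctions.net/unused_ip_function" \
    --http-method=POST
```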
In the navigation menu under Tools, you'll find Cloud Scheduler with the little clock icon. This job was for the unused IP addresses; in the next lab you'll create one for the unattached persistent disks. And instead of invoking the function via the terminal, you can also click "Run now," which is kind of cool. It runs lightning fast because it's just executing that JavaScript code and deleting the IP addresses that are unused, which is great. So I'm a big fan of this: after you've done all your work inside the terminal, you can view all of your jobs either via the terminal or within the UI, and then, boom, it automatically runs at the frequency you set. Again, much like a cron job, this schedule means 2:00 AM every night. There are websites and utilities out there that help you convert a time into cron job syntax, so don't worry about that too much.

All right, so that's the first deep dive you've had into using a Cloud Function to do something a little more sophisticated than Hello World. Our first cleanup use case was removing unused IP addresses. Go ahead and try that lab, and all the knowledge you pick up there will make the next two labs very easy, because you'll be doing the same kinds of things, operating off of the same repository. Good luck.

In this lab, you'll use Cloud Functions and Cloud Scheduler to identify and clean up wasted cloud resources. In this case, you'll schedule your Cloud Function to identify and clean up unattached and orphaned persistent disks. You'll start by creating two persistent disks and then create a VM that only uses one of those disks. Then you'll deploy and test a Cloud Function, like we did before, that can identify those orphaned disks and clean them up so you're not paying for them anymore. Let's take a look.

Here we are in the Qwiklab for cleaning up unused and orphaned persistent disks. Again, one of my favorite things about these Qwiklabs is that as you work your way through, you get points as you complete the lab objectives automatically. Qwiklabs is smart and knows whether or not you actually did the work, and it's also really fun to get that perfect score at the end.

As you scroll down through this lab, you're already starting to get familiar with Cloud Functions. Again, those are those magical serverless triggers that can watch for something to happen, be triggered, and then do other things. The lab you worked on just before this cleaned up unused IP addresses, and you set that up to run as a cron job via Cloud Scheduler at 2:00 AM. It's the same general concept for this lab, except here you don't care about IP addresses, you care about persistent disks. Those are the hard drives attached to your virtual machines, and inside of Google Cloud you have a separation of compute and storage: just because you have data on a disk doesn't mean you need to keep a virtual machine running 24/7 just to keep that data alive. So if you need compute power for an hour but you need persistent storage in perpetuity, you can actually separate those, which is kind of cool. But say you don't want that data hanging around when there's no virtual machine associated with it anymore: you can identify those orphaned persistent disks. As we mentioned in the introduction, you'll be creating two of those persistent disks.
The VM is only going to use one of them; we'll detach that disk, and then we're going to copy some code from the repository that can look through and find any disks that were never attached or never used, and basically say: hey, why are you paying for stuff that you're not using? Then you'll deploy the Cloud Function that removes those persistent disks, and lastly, so you don't have to wake up every morning and press a button labeled "remove persistent disks" (that would be a really boring job), you'll create a cron job via Cloud Scheduler to do it for you automatically.

Again, if you already did the last lab or watched its demo video, you'll be working off of the code in this public Google repository, gcf-automated-resource-cleanup; GCF is just Google Cloud Functions. Here we have the unattached persistent disks. Instead of JavaScript, this time the function is actually written in Python, which is pretty cool, and it's a little bit more involved. It basically says: I want to find, inspect, and delete the unattached persistent disks. Much like you iterated through the IP addresses in the prior lab, here you get the list of all the disks and iterate through them. Then, if a disk was never attached, and there's metadata on the disk that tells you that (specifically, the last-attach timestamp is not present), you basically say: all right, this disk was never attached to anything and never used, so I'm going to go ahead and delete it. The code runs and handles all of that for you automatically. You're not going to be writing Python yourself, so don't worry about it; this is code you can lift and shift and use in your own applications. The main thing to be thinking about here is deploying this code as a repeatable Cloud Function and then having it invoked at a regular nightly interval, say every night at 2:00 AM, which Cloud Scheduler will help you do.

Back inside the lab on orphaned persistent disks, let's take a look at some of the things we can do, and we'll run some of this too. We just looked at the repository; after that, you're going to create those persistent disks. Here's where you name them: an orphaned disk, great, and an unused disk, great. You're actually going to create those two disks, so I'll go ahead and run these now inside Cloud Shell. Let's see what directory I'm in. I'm in the root directory, so I need to change into the directory where the code for the unattached persistent disks is. Now I'm in there. As you saw, we were just looking at that Python function, main.py. By the way, if you're not familiar with Unix commands, a couple of useful ones: ls lists the contents of the current working directory; cd means change directory, which is like double-clicking into a particular folder, in this case double-clicking into unattached-pd; and cat shows the contents of a file. It doesn't do anything with the file, it just shows its contents, so that same Python code you saw before is now visible on the screen here. So what do we want to do? We want to create some persistent disks, have some that aren't going to be used, and then delete them.
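Before we create anything, a quick aside on the signal that Python function keys off of: you can see the same disk metadata straight from the command line. This is an illustrative sketch; the field names are the ones I'd expect on a Compute Engine disk resource (users, lastAttachTimestamp), and the disk name and zone are placeholders:

```bash
# Disks with no "users" field are not attached to any VM right now.
gcloud compute disks list --filter="NOT users:*"

# Disks that additionally have no lastAttachTimestamp were never attached at all;
# these are the "never used" disks the cleanup function deletes.
gcloud compute disks list --filter="NOT users:* AND NOT lastAttachTimestamp:*"

# You can also inspect that metadata for a single disk.
gcloud compute disks describe orphaned-disk \
    --zone=us-central1-a \
    --format="json(name,users,lastAttachTimestamp,lastDetachTimestamp)"
```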
We're in the directory, and we're going to define some names. This is literally what you'll be doing inside the lab: working your way through, hovering over these boxes, clicking the clipboard icon to copy each command, and running them all. I need to make sure that my project ID is set, so let's see. It's probably because I skipped an earlier step inside the lab, but the great news is that if your project ID is not set, there's a command for that as well, so we'll set the project ID. It's updated properly, so now we'll try again. Export just defines something as a shell variable. Create those two disks. No, it's because I didn't run the export of the project ID up here. Boom, done. This is why it's super helpful to go through the lab in order. Then let's create the disks; that didn't work at first, so let me make this a little bit larger. Now it's creating all of these disks automatically, which is exactly what you could also do through the UI. Here we go, we've got some disks that are ready.

Let's validate that these disks were created before we blow away the ones that are unattached. What disks do we have? We've got a lot, great. We've got an orphaned disk and an unused disk, and I have other stuff that I really don't want deleted, so hopefully this code works as intended. Orphaned disk and unused disk, keep your eyes on those. Of course, as you work your way through the lab, you click "Check my progress" in your real lab instance as well.

I've created the VM already, so I'll give it a slightly different name this time. Here we're creating that virtual machine instance, and look, we're giving it the disk named orphaned-disk, so I bet you can tell exactly what we're going to do with it. Right now we have a virtual machine that's using this disk. The next thing, in order to make it orphaned, is to detach it. Let's inspect the instance to make sure the disk is actually attached; there's a last-attachment time and everything in there. Now let's orphan it: detach the disk marked orphaned, which is just a detach command. For this demo, my instance name just has a "1" on the end. It detaches the disk, and now we can view the detached disk: it is orphaned, it is detached. Great.

The last part of this lab is deploying the Cloud Function that will sniff through all the disks that are out there and clean them up. The lab has you inspect that Python code just to be familiar with it. Again, you don't have to write any of the Python yourself, but getting familiar with it can't hurt. Okay. Now, I already deployed the Cloud Function before recording this video, and I've scheduled it. What I want to do now is list all the disks that are there (this is the magic you'll be doing inside your own lab), and it shows the orphaned disk and the unused disk. Now, if I've got everything set up correctly, I'll go into Cloud Scheduler. I'm going to show you using the UI here, but you can use the command line if you wish. There's the unattached persistent disk job; run that, and it takes a second to run. Let's see if the disks are still there. Are they gone? As you see here, we've just kicked off that cleanup of the unattached persistent disks; we had an orphaned disk and one that was just never used. Let's check with gcloud compute disks list, since we've already run the function.
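To recap that sequence in one place, here's a rough sketch of the commands behind what I just ran. The names, zone, disk size, runtime, and scheduler job are placeholders for illustration; the lab's copy-and-paste boxes have the exact versions:

```bash
# Set the project ID once so later commands and scripts can use it.
export PROJECT_ID=$(gcloud config get-value project)

# Create two persistent disks: one we'll attach and then detach (orphaned),
# and one we'll never attach at all (unused).
gcloud compute disks create orphaned-disk unused-disk \
    --zone=us-central1-a \
    --size=500GB

# Create a VM and attach the orphaned disk to it (a boot disk is created separately).
gcloud compute instances create disk-instance \
    --zone=us-central1-a \
    --disk=name=orphaned-disk

# Detach the disk again so it becomes orphaned: it has a lastAttachTimestamp
# but no current users.
gcloud compute instances detach-disk disk-instance \
    --disk=orphaned-disk \
    --zone=us-central1-a

# Deploy the Python cleanup function from the repo and schedule it nightly at 2:00 AM.
gcloud functions deploy delete_unattached_pds \
    --trigger-http \
    --runtime=python37
gcloud scheduler jobs create http unattached-pd-job \
    --schedule="0 2 * * *" \
    --uri="https://REGION-PROJECT_ID.cloudfunctions.net/delete_unattached_pds" \
    --http-method=POST
```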
The scheduler job submits the function right away, but it can take up to a minute for the cleanup itself to finish, so sometimes the code takes a little longer. I've gone ahead and run it, and gcloud compute disks list shows the disks that are out there. If you notice, there are two disks that are no longer in the list: the one that was unused and the one that was orphaned. So I can say with certainty that the code works, at least as of when I recorded this video. Go ahead inside your own lab, experiment with it, maybe create three unused disks or a couple of orphaned ones, and just get familiar with how to create and deploy those Cloud Functions, and then how to invoke them manually via Cloud Scheduler or automatically on a cron-job frequency. Give it a try.

Google Cloud Storage provides object lifecycle rules that you can use to automatically move objects to different storage classes. These rules can be based on a set of attributes, such as their creation date or their live state. However, they can't take into account whether or not the objects have been accessed. One way you might want to control your costs is to move objects to Nearline storage if they haven't been accessed for a certain period of time. In this lab, you'll create two storage buckets and generate loads of traffic against just one of them, and then you'll create a Stackdriver monitoring dashboard to visualize each bucket's utilization. After that, like we did before, you'll create a Cloud Function to migrate the idle bucket to a less expensive storage class, and then you'll test it by using a mock Stackdriver notification to trigger that function. Let's take a look.

Now, the last optimization strategy you're going to see here says: all right, I've got objects that I'm storing inside a Google Cloud Storage bucket, or GCS bucket. What happens if they're in a storage class like Regional or Nearline and there's a more efficient way to store those assets, depending on how they're actually used? How can I move them between those storage classes automatically? One of the first things I want to show you is what all the different storage classes are, and you'll experiment with these inside your lab. This is just the documentation page for Google Cloud Storage, and it shows the storage classes that are available. Generally, if you just create a Google Cloud Storage bucket and don't specify any particular class, it defaults to Standard storage, which is readily accessible. But if you don't use your data that often, say it's not a public bucket that gets a lot of traffic and you want to enable some cost savings for archival data, or you want to automatically say "if we're not using it, let's put it on something that costs a little less and is accessed a little more infrequently," that's when you can take data stored in a GCS bucket on Standard storage and reclassify it into something like Nearline storage or even Coldline storage, which might be accessed once a quarter or once a year instead of once a day like Standard storage. Now that you're familiar with the fact that different buckets can have different storage classes, let's get back to the lab. The lab walks you through the different types of storage, and then you're going to be creating the storage buckets.
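Here's a rough sketch of what creating those two buckets can look like with gsutil. The bucket names are placeholders (bucket names are globally unique, so yours will differ), and the exact flags in the lab may vary:

```bash
# A "serving" bucket that will receive lots of traffic, plus a test object
# made publicly readable so Apache Bench can hammer it over HTTP.
gsutil mb -c standard -l us-central1 gs://${PROJECT_ID}-serving-bucket
echo "this is a test" > testfile.txt
gsutil cp testfile.txt gs://${PROJECT_ID}-serving-bucket
gsutil iam ch allUsers:objectViewer gs://${PROJECT_ID}-serving-bucket

# An "idle" bucket that gets no traffic at all; this is the one the Cloud
# Function will eventually demote to a cheaper storage class.
gsutil mb -c standard -l us-central1 gs://${PROJECT_ID}-idle-bucket
```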
I've already created these buckets a little while before recording this, but you'll be running through the same repository as before, in the migrate-storage part. You'll be creating a public bucket, uploading a text file that just says "this is a test," and then creating a second bucket that doesn't have any data in it. Spoiler alert: we'll call that the idle bucket, the bucket that's not going to do anything. Once you've got those two buckets, one of the really cool things you'll do is set up a Stackdriver workspace and monitoring dashboard that shows the usage of each of those buckets. Similar to how you monitored CPU usage in a previous lab, in this lab Stackdriver is monitoring the usage of a bucket. Again, Stackdriver is very flexible when it comes to finding a resource on Google Cloud Platform and monitoring how heavily it's used.

After that, one of my favorite things to do is use an Apache library, in this case Apache Bench, to serve fake traffic to that particular text file. Let's do that right now; it's fun. I don't want to be in the unattached persistent disk directory, I actually want to be in migrate-storage, so let's change into migrate-storage. This is where the Python code that handles the storage migration actually lives, which is cool. Let's see if we can just generate the traffic. The ab command is not found, so one of the things you'll have to do is install the Apache Bench serving library. We'll go ahead and install that, and once it's available, we'll send 10,000 requests to one of the public buckets.

As you can see here, I'm on the Google Cloud Storage page; you can get here from the navigation menu, not under Compute but under Storage this time, by going into the browser. I have a couple of buckets. The two that you'll be creating as part of this lab are the serving bucket, which has that text file and is marked as public, meaning anyone on the internet can access it, and the idle bucket, which is doing nothing. The idle bucket has already been reclassified to Nearline storage, as opposed to something like Standard or Regional, because I already ran the Cloud Function to make sure this demo worked before recording it.

Let's serve a ton of traffic. We've run that command and the benchmarking is going; be patient. A thousand requests; look at that, it's like a thousand different people went and hit that text file, then 4,000, 5,000. If you're on your Stackdriver dashboard, you can see it spiking up through the roof. What you're going to be doing later is saying: all right, this one particular file, or this bucket, is getting a ton of traffic, so Regional storage is perfectly fine for it. But this other one has nothing; nothing is being accessed, and there's nothing in there to be accessed, so let's move it from, say, Regional to Nearline. That's exactly what the Python function you'll wrap in a Cloud Function does, and then you'll schedule it with Cloud Scheduler as well. Back inside the lab, after you've generated that artificial traffic, which is really fun, you can see the actual code that does the migration. It basically says: if the bucket isn't used that much, update it to Nearline storage instead.
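For reference, here's a sketch of the traffic generation, plus the manual gsutil equivalent of the kind of reclassification the function performs (the function itself does it through the API). The bucket names are the same placeholders as before, and the install command assumes a Debian-based environment like Cloud Shell:

```bash
# Install Apache Bench (the "ab" command) and send 10,000 requests,
# 100 at a time, to the public test object in the serving bucket.
sudo apt-get install -y apache2-utils
ab -n 10000 -c 100 "http://storage.googleapis.com/${PROJECT_ID}-serving-bucket/testfile.txt"

# Manual equivalent of demoting the idle bucket: change its default storage class...
gsutil defstorageclass set nearline gs://${PROJECT_ID}-idle-bucket

# ...and, if the bucket already held objects, rewrite them into the new class.
gsutil rewrite -s nearline gs://${PROJECT_ID}-idle-bucket/**
```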
Same as in your previous labs, you deploy that function and give it an HTTP endpoint that Cloud Scheduler can invoke, then you make sure logging is set up so you can see the function runs, and you test it by sending that mock Stackdriver notification as a JSON payload. In my case, I've already invoked the function, so let's just confirm that the idle bucket is now in Nearline storage. It has moved from a more frequently accessed storage class, likely Regional or Standard, and been reclassified into something a little bit cheaper, because the assumption is that you'll be accessing that data only infrequently, as evidenced by the fact that it didn't get any of those 10,000 requests of traffic, and so it was reclassified to Nearline automatically.

That's it for this lab. Good luck with your attempt at it, and keep in mind that with Qwiklabs you can run a lab more than once. Don't worry if the timer runs out or if you didn't complete all of your activity-tracking objectives; you can always click End Lab and start it again for another fresh hour in the lab. Good luck.