Hey there, I'm Rudy. But since I'm in a bee suit, you can call me Buzzy for today, and have I been quite the busy bee the last few days. Our colony is shaping up more and more as we deploy infrastructure to the Martian surface. We are therefore generating lots of data, and we need somewhere to store it. We could just archive it, but that only works if the data isn't needed immediately. So what are the options for securing our data for the busiest of bees?

Well, firstly, we should revisit the notion of least-privilege access. What this means is that you should grant only the minimum permissions required by the users of your AWS resources. These users could be infrastructure users, end users, busy bees like myself, or even AWS services themselves. For example, if we needed a Lambda function to pull some info out of an S3 bucket, it would need read access to that bucket, and nothing more. Similarly, if the data was stored in an Amazon DynamoDB table, we would provision read access to the relevant table. One of the main reasons we do this is to prevent exposure of critical data to unwanted parties. I mean, we could have our honey stored in DynamoDB, but maybe an alien is trying to access it using compromised credentials. We're also trying to prevent accidental creates, reads, updates, or deletes, also known as CRUD operations. I'll show a quick sketch of a least-privilege policy in a moment.

Another thing to consider is the access frequency of your data. Does it need to be accessed frequently, or only occasionally? Does it need to be stored for days, months, or years? And what type of data are you storing? Objects, rows, time-series data, or any of the other types?

As a starter guide, if you are looking to store objects, you can use Amazon Simple Storage Service, or S3. S3 allows you to make data accessible over the Internet, and it's one of the most cost-effective storage mechanisms offered by AWS; you can even store log files or archived data in S3. Speaking of archiving, if you don't need the data in a hurry, you can archive it to Amazon Glacier, which lowers your cost even more. Better yet, you can set up lifecycle policies to archive data automatically after a certain period. So if you needed the data to be in S3 for 30 days, you'd set up a policy to automatically move it, or archive it, to Glacier after those 30 days; there's a sketch of that below too.

The next thing you'll probably be looking into is storing data for your applications: user-generated data with relationships between the datasets. In that case you're looking for a relational database, and there are various options offered under the Amazon RDS, or Relational Database Service, portfolio. Some options include PostgreSQL, MySQL, and even commercial offerings like Oracle and SQL Server. For those of you who want a more performant and cost-effective database, we recommend Amazon Aurora, which comes in two flavors: MySQL-compatible and PostgreSQL-compatible. The MySQL version is up to five times more performant than standard MySQL, and the PostgreSQL version is up to three times more performant than standard PostgreSQL. For a full feature comparison, you can check out our resources section.

But you know what, not all applications require relational databases. Therefore, we offer a non-relational, or NoSQL, alternative called Amazon DynamoDB. This key-value database offers single-digit-millisecond access times and can scale up and down automatically. As an added performance gain, you can attach Amazon DynamoDB Accelerator (DAX) to your DynamoDB tables, an in-memory cache that lowers response times to microseconds. That is lightning fast. Speaking of speed, pop quiz, hotshot.
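While you mull the quiz over, here's a minimal sketch of what that least-privilege setup could look like using the AWS SDK for Python (boto3). The bucket and policy names are made up for illustration; the resulting policy would then be attached to the Lambda function's execution role.

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical bucket name, used only for illustration.
BUCKET_ARN = "arn:aws:s3:::honey-colony-data"

# A least-privilege policy: the Lambda function gets read access to one
# bucket and nothing else -- no writes, no deletes, no other resources.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [BUCKET_ARN, f"{BUCKET_ARN}/*"],
        }
    ],
}

response = iam.create_policy(
    PolicyName="lambda-read-honey-bucket",
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])
```

The same idea applies to DynamoDB: grant only `dynamodb:GetItem` and `dynamodb:Query` on the one table the function actually reads.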
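And here's a similar sketch of the 30-day lifecycle rule mentioned earlier, again with a hypothetical bucket name and prefix:

```python
import boto3

s3 = boto3.client("s3")

# Moves objects under the logs/ prefix to Glacier once they are
# 30 days old; bucket name and prefix are placeholders.
s3.put_bucket_lifecycle_configuration(
    Bucket="honey-colony-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs-after-30-days",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```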
Welcome back. I hope you got the correct answer; I mean, how could you not get it right? The answer was clearly bee.

Three other databases worth mentioning are our time-series-optimized database, aptly called Amazon Timestream; our fully managed ledger database, Amazon Quantum Ledger Database (QLDB); and last but not least, Amazon Neptune, our graph database service. So instead of leaping between options, you can now go straight for home and use one of these database options in your applications.

Apart from databases, you might recall we rolled out some EC2 instances throughout our planetary endeavor. These EC2 instances are generating data, and we need a place to store it. The first option we have is Amazon Elastic Block Store, or EBS, which provides block storage. EBS volumes are automatically replicated within their Availability Zone and offer a non-transient option for EC2 instance storage. With EC2 instances, if you stop them, you're going to lose anything that was on the instance store itself. However, if you attach an EBS volume, that data is retained even when shutting down the instance; there's a sketch of that below. The second option for storage is Amazon Elastic File System, or EFS. As the name implies, it's used if you need a file system attached to your EC2 instances. It's fully managed and can automatically grow or shrink as you add or remove files.

For secure data warehousing, we recommend Amazon Redshift, which stores your data in a columnar format. This, along with parallel query execution, means you'll get results up to ten times faster than other solutions. As an added option, you can even access data stored in S3 via a feature called Redshift Spectrum, allowing even more data to be queried to ensure accurate and up-to-date results. For those of you wondering whether you have to use Redshift for data warehousing, the answer is a most resounding: no, you don't. You can use a service called Amazon Athena, which scans data housed in your Amazon S3 buckets via standard SQL queries. It's a serverless offering, so there's no infrastructure to manage, and the cost is based on the amount of data each query scans; there's a sketch of an Athena query below as well.

But this sounds like we're going into data lake territory here. You know what, customers kept telling us that setting up their own data lakes was a priority, but they weren't sure how to do it. So we listened and created AWS Lake Formation, which allows you to do that data lake setup in mere days.

One of the last few service areas to touch on is our AWS data transfer options. I mean, since we're on Mars, we have no idea what the Wi-Fi is like, or whether there's even enough bandwidth to copy terabytes of data to the AWS Cloud. This is where AWS Snowball comes in. It's a portable device built for petabyte-scale data transfer. Data is copied over to the device with encryption, and it ships out to us at AWS. Once we've copied it to your account, you'll get notified of its availability. If petabytes aren't enough, we offer an actual truck, called AWS Snowmobile, to move exabyte-scale datasets into and out of AWS. I mean, this thing is huge. But before I digress into the intricacies of trucking solutions, I'll mention that the offerings I've described have various encryption options to further secure your data, and that you can control the encryption keys as well. We'll touch upon those in subsequent videos. If you want to learn more, check out our resources section for a link to a nice matrix of AWS storage options.
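To make the EBS idea concrete, here's a minimal boto3 sketch that creates a volume and attaches it to a running instance. The Availability Zone, instance ID, and device name are placeholders; the volume must live in the same Availability Zone as the instance.

```python
import boto3

ec2 = boto3.client("ec2")

# Create a 100 GiB general-purpose SSD volume in the same Availability
# Zone as the target instance (IDs here are placeholders).
volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,
    VolumeType="gp3",
)

# Wait until the volume is ready, then attach it. Data written to this
# volume survives stopping the instance, unlike the instance store.
ec2.get_waiter("volume_available").wait(VolumeIds=[volume["VolumeId"]])
ec2.attach_volume(
    VolumeId=volume["VolumeId"],
    InstanceId="i-0123456789abcdef0",
    Device="/dev/sdf",
)
```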
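And here's a rough sketch of running an Athena query from boto3, assuming a hypothetical database, table, and results bucket; Athena writes query output to the S3 location you specify.

```python
import time

import boto3

athena = boto3.client("athena")

# Database, table, and output location are placeholders for illustration.
execution = athena.start_query_execution(
    QueryString="SELECT sensor_id, AVG(temp) FROM readings GROUP BY sensor_id",
    QueryExecutionContext={"Database": "colony_telemetry"},
    ResultConfiguration={
        "OutputLocation": "s3://honey-colony-data/athena-results/"
    },
)
query_id = execution["QueryExecutionId"]

# Poll until the query finishes, then fetch the result rows
# (the first row returned is the column header row).
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    results = athena.get_query_results(QueryExecutionId=query_id)
    for row in results["ResultSet"]["Rows"]:
        print([col.get("VarCharValue") for col in row["Data"]])
```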
I mean, we can lead you to the door of the solutions, but only you can open it. Thanks for learning about the available secure storage solutions, and remember: there can be only one. Well, actually, there are several, but till next time. Goodbye. Cheers.