So, in this lecture, we're going to see how we can actually read JSON and CSV files into Python objects. So far, we've seen how we can open those files using the csv.reader function or the json library, but what do we actually do once we've opened them? We'll also introduce some new libraries, in particular the gzip library, which is going to allow us to manipulate gzipped files on the fly.

Previously, we covered the basics of reading CSV and JSON files using a few different libraries. So, the question is, what comes next? How do we go from just opening those files to reading them into appropriate data structures?

The first thing we're going to want to do is just read one of the files we've been working with. In this case, we'll look again at the Amazon Gift Card data, which is a TSV file. So far, we've been able to read it by doing something like this: importing the csv library, specifying the path to the file, opening the file, and providing that open file to csv.reader along with a delimiter option. Then we can read the header and all of the lines in that file.

So, the questions we'd like to answer to extend that are, first, how are we going to handle large CSV or JSON files without having to unzip them? So far, we've just operated on raw CSV, TSV, or JSON files, but many of the datasets we'll actually look at come zipped; can we exploit that to our advantage? Secondly, how do we extract the relevant parts of the data for performing analysis? Often, we'll be looking at very large datasets, and not all parts of those datasets are relevant. So, how do we filter them, or build a relevant subset of the data to work with? Third, what are some data structures that will make accessing these types of data more convenient?

First, we'll look at the gzip library. One issue we might sometimes want to overcome is that very large CSV, TSV, or JSON files are going to be cumbersome to store on disk if we have to extract all of them beforehand. Datasets like the ones we've been working with, such as the Amazon dataset, actually come in gzipped format. So, is there some way we can work with that file without having to unzip it? That's exactly what the gzip library is going to do. To read the file in gzipped format, we import the gzip library and specify the path to the file, which now includes the .gz extension. Then we open the file using the gzip library; this looks very similar to opening a regular file, with a few subtly different options. Just looking at this file, it's a 12 megabyte file when it's compressed and a 39 megabyte file when it's unzipped, so it's already worthwhile to try manipulating this dataset in its native gzipped format. When we open that file using the gzip library, we specify the mode 'rt'. The 'r' is to read, and the 't' specifies that the file should be treated as a text file, as opposed to reading the gzipped file in byte format, which would be inconvenient. Otherwise, once we've opened the file using the gzip library, we can manipulate it pretty much like we would any other regular file. So, we can now pass the open gzipped file to csv.reader, rather than passing a regularly opened file. Otherwise, it's going to be exactly the same: we can read the header and all of the following lines exactly as we would for a regular unzipped file. That's all we need to know about the gzip library. It essentially allows us to read zipped files in the .gz format without having to unzip them.
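To make that concrete, here's a minimal sketch of what reading the gzipped TSV might look like; the file name below is just a placeholder for wherever your copy of the Gift Card data actually lives:

```python
import csv
import gzip

# Hypothetical path: wherever your copy of the gzipped Gift Card TSV is stored
path = "amazon_reviews_us_Gift_Card_v1_00.tsv.gz"

# 'rt' = read the archive as text, so csv.reader sees strings rather than bytes
f = gzip.open(path, 'rt', encoding="utf8")

reader = csv.reader(f, delimiter='\t')  # the file is tab-separated
header = next(reader)                   # the first row holds the column names

for line in reader:
    # each line is a list of strings, one entry per column
    pass

f.close()
```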
The next concept I'd like to introduce is, how can we read and filter datasets line by line? If we're manipulating a very large file, having it gzipped isn't going to help us if we then try to read the entire file into memory all in one go, because we're just going to run out of memory. So, the next concept we'd like to introduce is to say, "How can we construct a data structure containing some reduced subset of the file that we'd really like to work with?" Perhaps, in the case of our Amazon dataset, we'd like to build a subset that ignores the text fields, because we'd just like to do some operations on the rating, or the votes, or the user data. That's what we'll do in this example. So, rather than reading the whole file and then trying to remove the text fields, which could cause us to run out of memory, what we'll do instead is read the file line by line, delete the text fields, and store the reduced entries of each line inside an appropriate data structure, which in this case is a list.

All that's happening in this code is the following: we read the file one line at a time, just by passing our csv.reader object into a for loop. The second thing we do is drop some entries from each line. In this case, we're dropping the last three entries, which correspond to the text portion of the review. To show you another example of what we might do, we also discard unverified reviews as we read the file. So, there are two ideas we've covered here. First, we should read files line by line, rather than reading the whole file into memory and then trying to pre-process it. Secondly, we should perform filtering as we read the data, so that the entire dataset is never stored in a data structure in memory, which could be cumbersome for larger files than this one.

The next idea is something I personally do and find very useful. We take our CSV-structured data and store it in key-value pairs, much like we would have for a JSON object. Trying to manipulate a CSV file by looking for entry number two, which we remember corresponds to the user ID, or entry number 21, which corresponds to the index of the review field, could be very cumbersome. Rather than doing that, we might like to use something like a dictionary data structure that stores key-value pairs for us, indicating which key corresponds to which entry. In this case, we might do that as we read the file by using the dictionary constructor. This is going to take the header and the line we're currently reading, and convert them to a dictionary which maps each key in the header to each value in the line. So, it's essentially going to convert that line to a dictionary that we can index by keys from the header. The second thing that might be useful to do, since we're reading the file as strings, is to convert some of the numeric fields into Python types, such as integer or boolean types. We have fields here like the number of helpful votes or the star rating. As we read the file, they're going to be represented as strings, so it might be more useful to convert them to floats, integers, booleans, or whatever type they natively come in. The same thing applies to the verified purchase and vine fields, which in this case are yes or no, or just the characters Y or N, which we might convert to true or false values; that will make it easier later on to perform logic on those fields.
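Putting those pieces together, a sketch of that reading loop might look like the following; the specific column names (star_rating, verified_purchase, vine, review_headline, review_body) are assumptions about how this particular dataset labels its header, so check your own file's header line before running it:

```python
import csv
import gzip

path = "amazon_reviews_us_Gift_Card_v1_00.tsv.gz"  # hypothetical path

f = gzip.open(path, 'rt', encoding="utf8")
reader = csv.reader(f, delimiter='\t')
header = next(reader)

dataset = []
for line in reader:
    # pair each column name from the header with the corresponding value,
    # then turn those pairs into a dictionary we can index by key
    d = dict(zip(header, line))

    # convert the Y/N fields to booleans
    d['verified_purchase'] = (d['verified_purchase'] == 'Y')
    d['vine'] = (d['vine'] == 'Y')

    # filter as we read: discard unverified reviews
    if not d['verified_purchase']:
        continue

    # drop the free-text fields so the stored entries stay small
    # (assumed column names for the text portion of the review)
    del d['review_headline']
    del d['review_body']

    # convert numeric fields from strings to Python integers
    d['star_rating'] = int(d['star_rating'])
    d['helpful_votes'] = int(d['helpful_votes'])
    d['total_votes'] = int(d['total_votes'])

    dataset.append(d)

f.close()
```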
So again, there are two ideas here. First of all, we use the dict operator to turn each line into a Python dictionary, which is going to make it much easier for us to index the different fields by keys, rather than by the index of each field. Secondly, we convert strings to numbers and booleans where possible.

So, that's about it for reading files into Python data structures. We did a few things in this lecture. First of all, we introduced the gzip library, which is going to be very convenient when we want to manipulate large files that maybe we don't want to unzip. We also saw some techniques for pre-processing datasets as we read them. So, on your own, you should now be able to work with some of the larger Amazon datasets, or the Yelp review data, and compile some simple statistics for them by reading them in their native gzipped format. You should also be able to experiment with the dict operator, which you can use to convert CSV or TSV data into dictionary objects mapping keys from the header to fields from each line.
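For example, a small end-to-end sketch like the one below, which computes the average star rating among verified purchases, is the kind of simple statistic you might try compiling on your own; again, the path and column names are assumptions about this particular dataset:

```python
import csv
import gzip

path = "amazon_reviews_us_Gift_Card_v1_00.tsv.gz"  # hypothetical path

f = gzip.open(path, 'rt', encoding="utf8")
reader = csv.reader(f, delimiter='\t')
header = next(reader)

ratings = []
for line in reader:
    d = dict(zip(header, line))        # map header keys to this line's values
    if d['verified_purchase'] == 'Y':  # only count verified purchases
        ratings.append(int(d['star_rating']))

f.close()

print("Average star rating (verified purchases):", sum(ratings) / len(ratings))
```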