Welcome back, everybody. In this video I want to talk a bit about the process of evaluating user interfaces, how it fits into the design process, and what we're going to do in this course. As a reminder, we said in an earlier course in this specialization that a design process is a systematic method for designing user interfaces, and there are a few key aspects of the design process we have been teaching here. It puts users, and user goals, at the center of the process. Another key aspect is iterative design: it is easier to improve a design than to get it right the first time. When we think about iteration, that phrase "easier to improve" is what brings us directly to evaluation.

So we talked about the three aspects of the user interface design process we are teaching: user research, design and prototyping, and evaluation. User research lets us find out what problems users have and what their context is, and it starts to help us brainstorm solution approaches. It helps us gather knowledge about users and capture it in an actionable way, so we understand what tasks they're doing and what problems they have. That is then input to a design and prototyping process, where we sketch out our ideas, perhaps in a low-fidelity form, and maybe later in a higher-fidelity form. And then we do evaluation. The evaluation methods we employ may lead us to identify problems, which may cause us to go back to users, try out new ideas, come up with new design ideas, and then continue to iterate.

Now, a few key points I want you to take away from walking through that diagram. First, evaluation is part of an overall iterative process. As you will see when we go through the methods in this course, a lot of them look like the kinds of things you might do in a social science study. You observe people, you interview people, maybe you run controlled studies where you measure time and errors. Those are the kinds of things you might do in a psychology study, and indeed a number of the methods we use stem from that discipline. However, a key distinction when we do these things in the user interface design process is that we always have a design focus. We're focused on understanding how people use the thing we're creating, what their problems are, and how we can come up with design improvements to address those problems.

We also think about setting goals. What counts as good enough usability? What are we trying to do in a particular evaluation? Are we trying to test our concept, or the efficiency of the design? That helps us determine what sorts of methods to apply, how many people to test our design with, and how we know when we're done.

Another key point is that evaluation can occur at multiple times during the design process. It can occur at different phases, early or late, and that means it can be done on different types of interfaces. We can evaluate low-fidelity interfaces, even paper sketches, and of course we can also evaluate running interfaces.
Now, a point that we have made in earlier courses and will make again here is that it's always cheaper to find your mistakes, to find issues, as early as possible rather than late, after you've done implementation, after you've put in expensive time and effort, after you're committed to a particular approach. That means a number of the evaluation methods we will show you work with low-fidelity interfaces. Another thing I want to mention is that we will be using different types of methods, so let me segue directly into talking about doing evaluation without users.

Now, when I say that you can evaluate a user interface without users, maybe that should strike you as surprising. It should, because if you're evaluating user interfaces, don't you need users? It also should surprise you because we've said we do user-centered design. So why do we talk about it, and how can it make sense to do evaluation without users? Well, first, it can be cheaper. Recruiting users and bringing them in can be time consuming; it can be hard to get people, their time is valuable, and in some cases you may have to pay people to participate. So if you can find problems with your design before you bring users in, that's great. It's also the case that methods based on general design principles, or on structured ways of examining an interface, can find things that users might not notice. So evaluation methods without users can complement what you learn by doing evaluation with users.

These without-user evaluation methods are systematic methods for stepping through an interface looking for problems, and each of them provides a focus, a sort of lens, for examining your design. For example, one focus is: does the interface satisfy a checklist of well-known principles, or heuristics, of good design? That method is called heuristic evaluation. Another example: you can step through the key tasks your interface is supposed to support, carefully considering, through a set of questions, whether a typical user will be able to complete each step of a task. Will they understand that they need to do this? Will they figure out how to do it? And so on. That method is called a cognitive walkthrough (a small illustrative sketch of what a per-step walkthrough worksheet might look like appears below). Those are two of the main methods we will teach you for evaluating an interface without users, and they are widely used out in the field as well.

Now, of course, we have to do evaluation with users as well, and one of the issues we'll touch on is ethics. When you're involving people in studies, you have to make sure you're treating them properly and obtaining their consent, and depending on what country you are in, there may actually be laws that govern how you conduct these studies. We'll talk about that as well.

There are a number of types of evaluation out there, and we'll be talking through most of them. Qualitative usability studies are a main method that we will be talking about and that's used widely. These studies are rather informal, but the focus is to make sure you understand whether your entire design concept makes sense. Are you on the right track? Do users understand your design? Can they do the main tasks? Are there things they just don't get at all? Are there errors that everybody seems to run into? Is the language you use, the vocabulary and concepts, something people understand? That's what qualitative usability studies focus on.
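To make the cognitive walkthrough idea mentioned above a little more concrete, here is a minimal, purely illustrative Python sketch of a per-step worksheet. The task, its steps, and the exact wording of the questions are assumptions invented for this example (the questions paraphrase the kinds of per-step questions the lecture mentions); a real walkthrough is carried out by evaluators filling in answers and notes, not by running code.

```python
# Purely illustrative: a tiny worksheet generator for a cognitive walkthrough.
# The task, steps, and question wording below are invented for this example.

TASK = "Send a photo to a contact"

STEPS = [
    "Open the conversation with the contact",
    "Tap the attachment icon",
    "Choose 'Photo' and pick an image",
    "Tap 'Send'",
]

# Per-step questions, paraphrasing the kind of questions the lecture mentions
# ("Will they understand that they need to do this? Will they figure out how?").
QUESTIONS = [
    "Will the user know they need to perform this step?",
    "Will the user notice that the right control is available?",
    "Will the user connect the control with the effect they want?",
    "Will the user see that progress is being made afterwards?",
]

def print_worksheet(task: str, steps: list[str], questions: list[str]) -> None:
    """Print a blank worksheet that evaluators fill in with yes/no and notes."""
    print(f"Cognitive walkthrough for task: {task}")
    for i, step in enumerate(steps, start=1):
        print(f"\nStep {i}: {step}")
        for q in questions:
            print(f"  - {q}  [yes/no, notes]")

if __name__ == "__main__":
    print_worksheet(TASK, STEPS, QUESTIONS)
```

The point of the worksheet form is simply that every step of every key task gets examined against the same small set of questions, which is what makes the method systematic.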
Controlled lab studies, on the other hand, tend to be more formal and, as the name says, controlled. Here you want to focus on things like: how many errors do people make as they try to use my interface? How long does it take people to do particular tasks? You might use a controlled lab study to compare your design to a previous design or a competitor's design, to show that you've actually made quantitative, measurable improvements in the interface.

Field studies and field experiments are studies that try your ideas out not in the lab, but in practice. A field experiment is the more controlled of the two: you're running an experiment by putting multiple versions of a feature into a deployed system. Probably the most well-known example is what industry calls A/B testing, where a company like Google, Facebook, Microsoft, or Apple tries out multiple versions of an interface. In the Google search interface, for example, they might try two different wordings or two different ways of displaying search results. It's out there in the field, but they're gathering careful experimental data to understand: are users more satisfied with one version? Do they perform faster? Do they get better results? And so on (a small illustrative sketch of the basic A/B mechanics appears at the end of this section). A field study, again, is putting a new feature out in the field and having people use it, but it can be more informal: you have people try things out, and maybe you interview them to understand how they use it, whether they like it, and whether it fits into their daily lives.

Now, a few aspects cut across a number of these methods. One is preparation. We'll be talking about how, if you're going to do a study, you can prepare: how you can create a plan for being effective, including making sure you know what goals you're trying to achieve, what equipment you need, exactly how you're going to instruct your participants, how you're going to gather data, and so on. A couple of things that are really worth mentioning are the think-aloud method and eye tracking. The think-aloud method sounds very simple: as people go through a task with the interface you are evaluating, you ask them to think aloud, to share their thoughts. What are they trying to do? How are they perceiving things? What do they notice? It's a great way to gather data. Eye tracking may be a technology some of you are familiar with. It's a great tool in more controlled studies, in high-tech labs, for understanding what people are even noticing. Are they seeing the things you think they're seeing, or not? We'll talk later about how that works and how it fits into the whole evaluation process.

So that's it for this brief look at the process of evaluating user interfaces. I've talked about the main methods, both with and without users, and a little about how we're going to do things in this course. We hope to see you again in the next few videos.
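As a rough illustration of the A/B testing idea mentioned above, here is a minimal Python sketch of the basic mechanics: each user is deterministically assigned to one of two variants, and a simple metric is compared between the groups. The variant wordings, user IDs, and timing numbers are all invented for this example; a real A/B test would use production logging and proper statistical analysis rather than a simulated comparison of means.

```python
import hashlib
import random
from statistics import mean

# Hypothetical button wordings; names and values are invented for illustration.
VARIANTS = {"A": "Get started", "B": "Try it free"}

def assign_variant(user_id: str) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# In a real deployment these events would come from production logging;
# here we simulate task-completion times (in seconds) purely for illustration.
events = []
for i in range(1000):
    user_id = f"user-{i}"
    variant = assign_variant(user_id)
    base = 12.0 if variant == "A" else 11.0   # pretend variant B is a bit faster
    events.append((variant, max(0.0, random.gauss(base, 2.0))))

# Compare a simple metric between the two groups.
for variant, wording in VARIANTS.items():
    times = [t for v, t in events if v == variant]
    print(f"Variant {variant} ({wording!r}): "
          f"n={len(times)}, mean completion time {mean(times):.1f}s")
```

The deterministic hashing is what keeps the experience consistent for any returning user; in practice the comparison at the end would be done with significance testing rather than a bare comparison of means.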