Hi everybody. In this series of modules, we're going to be talking about non-response. In this particular module, we're going to look at the different reasons why people don't respond to surveys. We can ask them, but they don't always say yes. We'll talk about why that matters for our survey results, and then we'll cover some basic theories about how to encourage people to respond to the surveys we send out.

As we do that, think about the last time you were asked to participate in a survey. Maybe you were sitting down to dinner and somebody called your house; maybe you were on a website and a pop-up appeared asking you to participate in a customer feedback survey. What was your response? How did you perceive that request? What did you think about it?

It can be really hard to get people to respond to surveys. In general, we want people to respond because it's better to get a high response rate from a small sample of people than a low response rate from a big sample. Even if the big sample yields the same total number of responses, a bad response rate means you could be systematically missing out on a whole set of people's experiences.

To back up a second: response rate is basically the number of usable surveys divided by the number of people in your sample; there's a short sketch of that arithmetic below. Now, if you remember our lectures on probability and non-probability sampling, you'll understand that response rate is a much bigger deal for probability sampling. If you have a probability sample, you're always going to calculate a response rate, which tells you basically how well you're representing the population you're looking at. For convenience samples and purposive samples, response rates can still be really important, for a few reasons.

First, even in a non-probability sample, you want a high response rate because you want to feel confident that you've detected a change or a pattern in the sample you're looking at. The people you're asking will of course have a range of experiences, and if some people are more likely to respond than others, even in a convenience sample, you don't want to miss out on those experiences.

Second, you want to be able to detect small changes or small patterns within your population. If you're trying, for instance, to pick up a subtle difference between people, you want as much heterogeneity as possible in who responds to your survey, and if systematic biases are built in by who actually responds, that's not great for what you can say confidently at the end of your survey.

Finally, if your population is very heterogeneous, you want a good response rate. If you're studying the most homogeneous population in the world, you don't actually need a big response rate. But if you have lots of different types of people, you want to collect experiences from all of those various types of people, and that means you really do want a good response rate.

So, a question I often get from people who are launching a survey is: what is a "typical" response rate I should expect? And they're particularly interested in response rates by mode.
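To make that concrete, here's a minimal sketch of that response rate arithmetic in Python. The counts and sample sizes are purely illustrative, not drawn from any real survey:

```python
def response_rate(usable_surveys: int, sample_size: int) -> float:
    """Response rate = usable completed surveys / people in the sample."""
    return usable_surveys / sample_size

# Two designs that both yield 400 completed surveys:
small_sample = response_rate(usable_surveys=400, sample_size=500)    # 0.80
large_sample = response_rate(usable_surveys=400, sample_size=8000)   # 0.05

print(f"Small sample, high response rate: {small_sample:.0%}")   # 80%
print(f"Large sample, low response rate:  {large_sample:.0%}")   # 5%
# Same number of responses, but in the second design 95% of the people
# we asked never answered -- and they may differ systematically from
# the 5% who did.
```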
Remember, we defined mode as the channel through which you deliver your survey: it could be telephone, web, mail, email, or anything like that. It's really hard to define a typical response rate, which is why I have the word "typical" in quotation marks here. For one thing, it depends on whether you're looking at an internal versus an external audience. Just throwing a survey up on your web page is a very different experience from sending something out to a company email list, or to a specific purposive sample that you have. It's also hard to define because of the design features of your survey: how are you actually sampling? How are you constructing the channel you deliver the survey through? All of this affects your response rate.

One thing to keep in mind when you're thinking about response rate is that people need to see the invitation first. A good example of how challenging this can be: a lot of marketing firms have found that the email open rate, the share of people who open the email you send them, is around 8%. Spam filters catch a lot of email, and a lot of people automatically delete anything that's not from a person they recognize. So it's really hard to get people even to open an email, and that's just the first step before they respond to the survey request inside it.

Another challenge we see is online ad clickthrough rates. How many times have you actually clicked on an ad on Facebook or another website you've visited? You're not atypical: most people don't click on those. Clickthrough rates are less than one percent for most websites. That means a survey invitation on a website can be really hard for people even to see, much less click through to the invitation itself.

Some interview modes do better than others. For instance, it's much harder to say no to an actual person, so if you call somebody's house, they're going to say yes more often. But even in random-digit-dial household telephone surveys, we see very low response rates as more and more people say no to surveys, and I think there's a little bit of survey fatigue out in the world that people are responding to. The short sketch that follows pulls those invitation-funnel numbers together.
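Here's a rough back-of-the-envelope version of that funnel in Python. Only the roughly 8% open rate echoes a figure mentioned above; the click and completion rates are assumptions I've made up purely for illustration:

```python
# Back-of-the-envelope invitation funnel for an emailed web survey.
invitations_sent = 10_000
open_rate = 0.08         # share who open the email (~8%, per above)
click_rate = 0.25        # assumed share of openers who click the link
completion_rate = 0.50   # assumed share of clickers who finish

completes = invitations_sent * open_rate * click_rate * completion_rate
effective_rate = completes / invitations_sent

print(f"Expected completed surveys: {completes:.0f}")    # 100
print(f"Effective response rate: {effective_rate:.1%}")  # 1.0%
```

Even with fairly generous assumptions downstream, the open rate alone means only a small fraction of the people you invited ever see the request at all.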
So, the question of what is a good response rate really comes down to what you need to be able to do with your survey responses at the end of the day. If we look back at those original questions (is my population heterogeneous? am I looking for subtle changes?), the response rate you need is tied to those kinds of research questions.

So, let's take a step back and talk about why people say no to surveys in the first place. I'm sure you can anecdotally think of a lot of reasons why you say no to surveys. The typical image of a survey is a family sitting down to dinner at home when, at the worst possible moment, a survey researcher or a pollster calls up and asks a bunch of questions about their lives. Often, when people refuse to do a survey, one of the things they think to themselves is: I don't have time for this. Of course, that's a proxy for a whole richer set of reasons why they don't want to do the survey.

Designing a survey request is really about the costs and benefits facing any individual participant or respondent. They're constantly running an almost subconscious cost-benefit analysis: what are the potential costs of participating, in terms of my time, risk to me, and a whole bunch of other things, versus what's the benefit? What do I get out of participating in the survey? Why would this be fun or interesting for me to do?

So, a lot of people, for instance, don't care about a particular topic, or they think their responses don't matter. We see this a lot. For instance, I've done surveys where we asked about the financial health of the United States. A lot of respondents think: I don't know much about that, so I don't feel like I'm the person to respond, so I'll say no. Or: I don't want to seem stupid to this researcher, so I'll say no. Or: I don't care about that topic at all, it has no meaningful impact on my life, so I'll say no. If respondents don't feel like they're the right person, like they have unique value to the survey, they're going to turn that survey down. You also have to remember that just because the survey topic is important to you does not mean it's important to your participants. I've done a lot of surveys where I was super invested in what people thought and cared about with a product or a social network site I was studying, and of course, for the people I was trying to get that information from, it wasn't as close to their hearts.

A lot of people also worry about the safety of their data. There's been a proliferation of surveys out there recently, lots of different ones, and not all of them are trustworthy. Sometimes people worry about what data they're giving out, and especially for online surveys that can be a real challenge, because a lot of people have a heightened sensitivity to online scams and phishing attempts and things like that. Questions like: is this spam? Is this a phishing attack? Is this a scam where people are trying to get my data? All of those are ways that people can find to say no as well, because if they don't trust how their data is going to be treated, they're going to say no to participating in that survey.

A lot of people also just distrust surveys overall. The growth of bad surveys has led to what I think of as a crisis of confidence in surveys as a technique for getting data. People need to understand why you're collecting data and how you're going to use it, so they can connect your authenticity and professionalism to their experience of taking the survey.

So, if we think about surveys again as a cost-benefit calculation, we've seen a lot of the potential risks people feel they might incur by doing a survey. How do we leverage that? How do we think about theories of social interaction that might help us encourage people to participate in surveys? I like to think about it like this: when I'm asking someone to participate in a survey, I'm basically asking for some volunteer effort. I'm asking them to help me out. It's not dissimilar from going door to door soliciting money for a charity or a non-profit organization. How do I encourage a stranger to do me a favor?

One consistent set of theories that survey researchers often use is social exchange theory. In social exchange theory, the belief is that people weigh the costs and benefits of complying with a request when it's made to them. So, if I ask you for help, you're going to, subconsciously and very quickly, think about those costs and benefits.
Social exchange relies on trust in an outcome, rather than on the specific calculations of something like a monetary exchange. In a monetary exchange, if I pay you to do my survey, trust doesn't have to be part of that relationship. If I'm asking you to participate voluntarily, that changes the nature of the exchange we have.

An important concept to consider when you're thinking about why people decide to participate in surveys is the idea of heuristics. A heuristic, as a term, means that people often don't make conscious decisions about everything they do. There's a whole set of automated processes, like habits or biases, that shape how we make choices. Most of the choices we make, in fact, never hit the front part of our brains; we make those decisions subconsciously throughout the day. So, social exchange doesn't mean that people deliberate over every choice. Using social exchange theory doesn't necessarily mean a person sits down and thinks: oh, well, I trust this person, but there's a risk here. It's much more organic and fluid than that. The trick we have as survey researchers, or as anyone trying to get people to volunteer their time, is breaking through those automated cognitive responses to get people to actually consider our survey request. If people have a strong heuristic of never participating in surveys, we want to break through that and get them to consider the survey we want to do.

So, in summary, a few things to take away from this particular module. One is that a good response rate is important even if you're using convenience or purposive sampling; response rates help improve the quality of the data we get from survey research. Two, people don't respond to surveys for multiple, overlapping reasons. It's not necessarily that they just distrust surveys or just distrust your organization; it could be both, for instance. We have to think about the different ways a respondent is going to experience our request as we make it. Finally, we should think about surveys and survey requests as a social exchange. This is mostly not a monetary exchange we're making with people; we're asking for their help. So, what are the different ways we can ask for help, and how do we break through the automated processes by which people might reject our request? In the next modules, we'll talk about more specific techniques for increasing response rates. But for now, thanks very much.