Hello. As science and technology advance rapidly, it is said that a car that takes you to your destination without your having to drive it yourself, that is, a self-driving car, will come onto the market soon. I can imagine how convenient that would be. On the other hand, I am concerned about how safe it is for a machine, rather than a person, to drive a car. So in this lesson, I'd like to look at some issues surrounding self-driving cars after we listen to a conversation with an expert on the topic. Well then, shall we listen to the news?

More than 30 companies, including not only automobile companies but also computer-related companies, are working hard on developing self-driving technology. The technology is expected to be nearly complete, so self-driving cars should come onto the market in three to five years. But it is said that more time will be needed to realize the perfect self-driving that people expect. Let's hear what Dr. Yeon Woo Park has to say about this.

Hello. Hello, I'm Yeon Woo Park. So, other than completing the technology, what needs to be done in order to realize perfect self-driving? As you know, a self-driving car is a car with a system that lets the machine make decisions and move on its own. The car's operating system puts the passenger's safety first. For example, suppose a pedestrian is crossing the crosswalk, and if the car brakes suddenly to avoid the person, the passenger is very likely to die. In that case, the car may choose to crash into the pedestrian to save the passenger, because the passenger's safety is the first priority. To put it another way, in dangerous situations the car may risk the lives of pedestrians to protect the passenger's life.

That's a little scary. Since artificial intelligence is designed to follow rules programmed by humans, can the machine really make the right decisions? Yes, of course, it is programmed to put human safety first. But the problem is that our society contains countless circumstances that are difficult to program. Shall we take another example? Let's assume you are in a situation where you must hit either a 5-year-old child or a 60-year-old person. Whom would you choose? Can you make a choice? Even the humans who write the program cannot decide easily in such a complicated situation. I don't think it is possible for artificial intelligence to judge a situation like this, however refined the technology becomes. So isn't perfect self-driving impossible to realize? We need to assume the various circumstances that require value judgments, build up social agreement on them, and then program the car with the best and most ethical judgments. Technological development is welcome, but sufficient consultation about scientific ethics should take place beforehand.

Did you listen carefully? Shall we work out what it was about? Today's news invited a special guest for a conversation. What is the conversation about? What is its topic? The anchor says that companies are working hard to develop self-driving technology, that it will come onto the market in three to five years, and that a little more time is needed before the technology can be implemented perfectly. This shows that the self-driving car is the topic of the conversation. What position does the doctor take on the development of self-driving technology? Is it positive or negative? The doctor says that the development of self-driving technology is welcome; his view is positive.
But he also says that consultation on science ethics should reach an agreement, and that we should exchange enough opinions on the problems that arise when we use scientific technology. The doctor is positive about the development of the technology, but negative about the problems that arise when scientific technology is used. Overall, he takes a positive stance on the development of self-driving technology, although he adds the condition that an agreement on science ethics should be reached beforehand.

Why is it difficult to say that the artificial intelligence of a self-driving car made the right choice even if it followed the rules programmed by humans? A self-driving car is programmed so that the passenger's safety is the first priority. Therefore, as the doctor says, a self-driving car may cause accidents because it treats the passenger's safety as more important than the pedestrian's safety. The other problem is that it is impossible to write a computer program that covers every circumstance, because there are countless complicated and difficult situations. So it is difficult for artificial intelligence to weigh various considerations the way a human does in complicated and difficult situations. In short, self-driving cars treat the passenger's safety as more important than the pedestrian's, and artificial intelligence cannot be programmed to cover every complicated situation. That is why it is difficult to judge that the artificial intelligence made the right choice even when it decided exactly as humans programmed it to.

What should be done to realize perfect self-driving? In other words, what should be done so that we can let the car decide everything with peace of mind? First, we need to simulate situations in which it is difficult to determine what is more appropriate, and discuss the best, most ethical, and most proper decisions to make in those situations. One way is to collect the conclusions reached in that way and program the self-driving car based on them. What is most necessary is to discuss these issues together so that scientific technology does not run into ethical problems.

Finally, shall we summarize the whole lesson? Even though the technology is now almost complete, what is still difficult for the self-driving car, which is equipped with a system that allows it to decide and move on its own? Yes, perfect self-driving is still difficult. The following explains the reasons for the difficulty. For what reason does the self-driving car choose to crash into a pedestrian? Yes, to save the passenger. And what is the other reason? In the various situations that occur while driving, you have to decide what to do and what matters most. How do we say this in four syllables? Yes, value judgment. Value judgment is needed. Then what should be done to realize perfect self-driving? What does the value judgment assume? Yes, it assumes a variety of hypothetical situations. And what is needed here to make the best and most ethical judgment? What do we call the values and ethics that we agree on as a society? Yes, social agreement. And what do we do after accumulating enough social agreement? Yes, that's right: programming. We need to program it into the car. In this lesson, we discussed self-driving cars and issues of science ethics through a talk with an expert.
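To make the two ideas from the talk a little more concrete, here is a minimal sketch, assuming a toy Python program: a default rule that puts the passenger's safety first, and a small table of decisions that society has already agreed on for hard cases and that is then written into the program. The names Scenario, AGREED_RULES, and choose_action are invented for illustration only; no real self-driving system is anywhere near this simple.

```python
# A hypothetical sketch of the two ideas discussed in the talk:
# (1) the operating system puts the passenger's safety first, and
# (2) hard cases are resolved by rules agreed on socially in advance
#     and only then programmed into the car.
# All names here are invented for illustration.

from dataclasses import dataclass


@dataclass(frozen=True)
class Scenario:
    """A simplified description of an emergency situation."""
    name: str                        # e.g. "pedestrian_on_crosswalk"
    braking_endangers_passenger: bool


# Decisions accumulated through social agreement for hard cases.
# In the lesson's terms: simulate the situations, discuss the most
# ethical choice, and only then write the result into the program.
AGREED_RULES = {
    "pedestrian_on_crosswalk": "brake_and_protect_pedestrian",
}


def choose_action(scenario: Scenario) -> str:
    """Return the action this hypothetical car would take."""
    # First, apply any rule that society has already agreed on.
    if scenario.name in AGREED_RULES:
        return AGREED_RULES[scenario.name]
    # Otherwise, fall back to the default described in the talk:
    # protect the passenger first, even at the pedestrian's expense.
    if scenario.braking_endangers_passenger:
        return "swerve_and_protect_passenger"
    return "brake"


if __name__ == "__main__":
    crosswalk = Scenario("pedestrian_on_crosswalk", braking_endangers_passenger=True)
    unknown = Scenario("unprogrammed_situation", braking_endangers_passenger=True)
    print(choose_action(crosswalk))  # brake_and_protect_pedestrian (agreed rule)
    print(choose_action(unknown))    # swerve_and_protect_passenger (default priority)
```

The only point of the sketch is the order of the checks: socially agreed rules come first, and the passenger-first default is only a fallback, which mirrors the doctor's argument that social agreement has to come before the programming.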
In addition to self-driving cars, we need to discuss science ethics in various fields such as genetic manipulation, robot soldiers, and so on. Let's find out whether there is any recent science-ethics issue causing debate in your country. This is the end of today's lesson. See you later.