Lastly, we want to talk about infeasible linear programs. Suppose we have a problem like this: we are minimizing or maximizing something subject to inequality constraints Ax ≤ b. In that case the situation is simple, because we add slack variables. These slack variables naturally form an identity matrix in the constraint matrix and contribute nothing (zeros) to the objective function, so they are very good candidates for an initial basic feasible solution: we may always form a feasible basis by selecting all the slack variables. But if we have an equality or a greater-than-or-equal-to constraint, that is not the case. For a greater-than-or-equal-to constraint, we subtract a slack variable, and that column cannot be used as one of our basic columns, because it cannot be part of an identity matrix. We need some other consideration. Here comes an example. Suppose we have a linear program like this. To obtain a standard form, we subtract a slack here and add another slack there, and that's all. For this particular standard form we have six variables and three constraints, so we should pick three of the six columns and hope that they form an invertible basis. That is already difficult: if we randomly pick three columns, it is not easy to tell whether they are invertible before actually running Gaussian elimination. Moreover, even if the three columns we pick happen to be invertible, we have no way to tell whether the corresponding A_B^{-1} b is all nonnegative unless we really solve the system. So it is really not so easy to find an initial basic feasible solution. In fact, we do not even know whether the problem has any feasible solution, or any basic feasible solution, at all. All these questions are difficult to answer, so we really need some better way to do this.
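To make this concrete, here is a small numerical sketch (with made-up data) of the easy case: when every constraint is less-than-or-equal-to, the slack columns form an identity matrix, and choosing them as the basis immediately gives a basic feasible solution, as long as b is nonnegative.

```python
import numpy as np

# Hypothetical data: two "<=" constraints in two original variables.
A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])          # b >= 0 is assumed throughout

# Standard form: append one slack per row, giving [A | I].
A_std = np.hstack([A, np.eye(2)])

# Choosing the slack columns as the basis gives B = I, so the
# basic solution is x_B = B^{-1} b = b >= 0: a basic feasible solution.
basis = [2, 3]                    # indices of the slack columns
B = A_std[:, basis]
x = np.zeros(4)
x[basis] = np.linalg.solve(B, b)

print(x)                          # [0. 0. 4. 6.]
print(np.allclose(A_std @ x, b))  # True
```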
Fortunately, people have come up with a very smart idea. We are going to teach you one particular way, the two-phase implementation of the simplex method. We will try to find an initial basic feasible solution, or show that none exists, by applying this two-phase implementation. The idea is the following: given a standard-form linear program P, we construct a phase-1 linear program Q. First we get the standard form; then for each equation we artificially add a y-variable. We do not care whether the constraint was originally less-than-or-equal-to, greater-than-or-equal-to, or an equality; we simply add these artificial y-variables to create an initial identity matrix. We know these y-variables do not exist in the real problem, so let's see whether we may remove all of them by minimizing the sum of all the y-variables. That is our phase-1 linear program Q. So we create Q from P in this way: we add the artificial variables, and we change the objective to minimizing the sum of the y-variables. Let's see what happens. Naturally, Q has a trivial basic feasible solution, (x, y) = (0, b). We know this is true from what we did in the past: since we have an identity matrix there, all we need to do is set y to b and x to zero, and we are done. Then we may apply the simplex method to Q. But why do we want to do that? We want to see whether it is possible to find an optimal solution of Q in which no y-variable is positive. This gives us Proposition 2: P is feasible if and only if Q has an optimal basic feasible solution with y = 0. In that case, the remaining part x̄ is a basic feasible solution of P. This proposition is actually very simple to prove.
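As a sketch, here is how Q would be constructed from a small made-up standard-form P, together with a check that (x, y) = (0, b) is indeed feasible for Q:

```python
import numpy as np

# Hypothetical standard-form data for P: min c@x s.t. A@x = b, x >= 0,
# where no identity submatrix is readily available in A.
A = np.array([[1.0, 1.0, -1.0],
              [2.0, 1.0,  0.0]])
b = np.array([3.0, 4.0])          # assumed b >= 0 (flip row signs if not)
m, n = A.shape

# Phase-1 problem Q: min sum(y)  s.t.  A@x + I@y = b,  x, y >= 0.
A_Q = np.hstack([A, np.eye(m)])
c_Q = np.concatenate([np.zeros(n), np.ones(m)])

# Trivial basic feasible solution of Q: x = 0, y = b.
z = np.concatenate([np.zeros(n), b])
print(np.allclose(A_Q @ z, b))    # True: (0, b) is feasible for Q
print(c_Q @ z)                    # phase-1 objective starts at sum(b) = 7.0
```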
Let's take a look at these two problems again. For Q, (0, b) is a feasible solution. Suppose Q also has a feasible solution in which the y-variables are all zero and the x-part is something else; call it x̄. Then x̄ is feasible to P: x̄ must satisfy Ax̄ = b, because y is zero there, and since (x̄, 0) is feasible to Q, x̄ is also nonnegative. So x̄ is a feasible solution of P. Pretty much, we solve Q and try to see whether this is possible. Initially all the y-variables are positive, so we minimize their sum to see whether it is possible to drive them all to zero. Once all the y-variables are zero, we have an optimal solution of Q; we then stop and say that x̄ is an initial basic feasible solution for P. Then we go back to P and run the simplex method again for phase 2. In short, we run the simplex method twice: the first time we solve the phase-1 problem Q, and the second time we use x̄ as the initial point to solve P. One thing to remind you: it does not matter whether P has a maximization or a minimization objective; we always minimize the sum of the y-variables, because we want to eliminate all of them. That is the general idea of the method. After we solve Q, either we know P is infeasible, if the optimal solution still has some positive y, or we have a feasible basis for P. In the latter case, we recover the objective function of the original P and solve a phase-2 linear program. The phase-2 problem is nothing but the original problem, except that now we actually have an initial basic feasible solution for phase 2, which comes from an optimal solution of phase 1. Pretty much, phase 1 looks for a feasible solution, and phase 2 finds an actual optimal solution of the original problem. As for those newly added variables, we call them artificial variables.
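The whole procedure can be sketched in code. The following is a minimal, illustrative implementation only (it uses Dantzig's entering rule, has no anti-cycling safeguard, and assumes no artificial variable remains basic at zero after phase 1), applied to a small made-up problem:

```python
import numpy as np

def simplex(c, A, b, basis, tol=1e-9):
    """Minimal revised simplex for min c@x s.t. A@x = b, x >= 0,
    starting from a given feasible basis (column indices)."""
    m, n = A.shape
    basis = list(basis)
    while True:
        B = A[:, basis]
        x_B = np.linalg.solve(B, b)            # basic variable values
        duals = np.linalg.solve(B.T, c[basis])
        reduced = c - A.T @ duals              # reduced costs
        entering = next((j for j in range(n)
                         if j not in basis and reduced[j] < -tol), None)
        if entering is None:                   # no improving column: optimal
            x = np.zeros(n)
            x[basis] = x_B
            return x, basis
        d = np.linalg.solve(B, A[:, entering])
        ratios = [(x_B[i] / d[i], i) for i in range(m) if d[i] > tol]
        if not ratios:
            raise ValueError("problem is unbounded")
        _, leaving = min(ratios)               # ratio test
        basis[leaving] = entering

def two_phase(c, A, b):
    """Two-phase simplex sketch: returns an optimal x, or None if P
    is infeasible (phase-1 optimum leaves some y positive)."""
    A = A.astype(float).copy(); b = b.astype(float).copy()
    m, n = A.shape
    neg = b < 0                                # make b >= 0 first
    A[neg] *= -1.0; b[neg] *= -1.0
    # Phase 1: min sum(y) over [A | I] @ (x, y) = b, starting basis = y.
    A1 = np.hstack([A, np.eye(m)])
    c1 = np.concatenate([np.zeros(n), np.ones(m)])
    z, basis = simplex(c1, A1, b, range(n, n + m))
    if z[n:].sum() > 1e-7:
        return None                            # P is infeasible
    # Phase 2: restore the original objective, reuse the feasible basis.
    x, _ = simplex(np.asarray(c, float), A, b, basis)
    return x

# Example: min -x1 - 2*x2  s.t.  x1 + x2 = 4,  x1 + s = 3  (i.e. x1 <= 3)
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0]])
b = np.array([4.0, 3.0])
c = np.array([-1.0, -2.0, 0.0])
x = two_phase(c, A, b)
print(x, c @ x)                                # x = [0, 4, 3], objective -8
```

A full implementation would also drive any artificial variable that is still basic at value zero out of the basis before starting phase 2; that bookkeeping is omitted here to keep the sketch short.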
We don't call them slack variables in linear programming. When we say something is a slack variable, it has a physical meaning: the gap between the left-hand side and the right-hand side of a constraint. Artificial variables, on the other hand, are used in the two-phase method; they are created only so that we can obtain an initial basic feasible solution. That's how we use artificial variables. Also, if a constraint already has a variable that can be included in a trivial basis, then we don't need to add an artificial variable to that constraint. This happens, for example, with less-than-or-equal-to constraints, because there we already add a slack variable that can serve as part of the identity matrix. In those constraints we don't need an artificial variable; we can add one, but we don't need to, and skipping it reduces the size of the problem. We would then build the tableau according to the initial basis and continue applying the simplex method to the phase-2 linear program. Later, we will see an example of how all of this works.
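As a small sketch (with made-up constraint rows), here is the mixed case: the slack column of a less-than-or-equal-to row can join the phase-1 basis directly, so artificial variables are only added to the other two rows:

```python
import numpy as np

# Hypothetical rows: one "<=", one ">=", one "=" constraint.
# After adding +s1 to row 1 and subtracting s2 from row 2:
A = np.array([[1.0, 2.0, 1.0,  0.0],   # x1 + 2*x2 + s1      = 5
              [2.0, 1.0, 0.0, -1.0],   # 2*x1 + x2      - s2 = 4
              [1.0, 1.0, 0.0,  0.0]])  # x1 + x2             = 3

# Row 1 already has the +s1 column, so only rows 2 and 3 need
# artificial variables y2, y3 for the phase-1 basis.
art = np.zeros((3, 2))
art[1, 0] = 1.0                        # y2 appears in row 2
art[2, 1] = 1.0                        # y3 appears in row 3
A_Q = np.hstack([A, art])

basis = [2, 4, 5]                      # columns of s1, y2, y3
print(A_Q[:, basis])                   # the 3x3 identity matrix
```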