No, okay, so we gave an introduction last time, and let me put this back in full screen. So we've got an introduction, why we are doing this. And now the first topic we want to discuss is the basic calculus of variations. This is similar to when we did optimization: it feels like a bit of a tangent, out of the blue, why are we optimizing stuff? But it turns out a lot of the things we do with constrained dynamics are equivalent to an optimization process, and the calculus of variations is one of those tools. This is not just for getting spacecraft equations of motion. People use the calculus of variations in tons of places to come up with conditions for optimal solutions. Who has heard of transversality conditions, for example? Okay, did you know? In what context? >> [INAUDIBLE] Probably. >> It sounds familiar, good. Okay, you've seen any, sorry. >> Optimal. >> Optimal trajectories or control, those kinds of problems. You quickly see those kinds of things there. Has anybody done any structural modeling before, finite elements, flexible beams? You have boundary conditions, these other conditions that have to hold. Those are all part of this transversality stuff. But you will see it comes out of a mathematical framework that is very general. We assume we have this functional, and a functional is a scalar function just like a cost function you typically have in guidance or optimal trajectories, right? It's a scalar that depends on a whole series of states, and it's written as a time integral over these states because in this case it's a mechanical system that we're considering. So we have states and rates: you may want to approach at a slow speed, but get to this point. You could put different kinds of costs in there, it could be explicitly time dependent, and we're going to integrate over this. So what could this look like? You mentioned optimal control. So this could be your control effort, right, come up with the minimum effort to get from A to B. That applies to everything from flying UAVs to modern toasters: how do I get that toast just right with the least energy usage, or something. Energy could be your cost function; it could be anything in this case. Now, the key thing is going to be, there are some assumptions. x is some n-dimensional state, and it's part of C2. That means it's continuous and twice differentiable: the second derivative exists, but it doesn't have to be smooth, it could have kinks in it, that's fine. So that means a certain level of smoothness, and the same thing for this function F. So if we take two derivatives, which will happen, they always exist. The velocities don't do a step jump somewhere, basically, that's what we're looking for. So great, we are going to try to find extremals, so J could either be minimized or maximized, but basically we're finding an extremum. Which is similar to what we did with the optimization problems before, and we do that by seeking optimal paths. So what's the time history? If you think of a control problem: when do I fire this thruster? When do I do this? When do I actuate this wheel versus that wheel such that this functional is extremal? It could be a minimum, could be a maximum, or it could even be a saddle point, anything in between. And we'll see that today. So we seek paths which cause this to be extremal, and there may be one, there may be multiple. So that means at any instant I should be able to consider neighboring paths, well, there we go. So differentially neighboring paths, what does that mean?
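As a minimal sketch of the setup being described, written out in generic notation (the symbols here follow the lecture's wording, not a specific slide):

    J = \int_{t_0}^{t_f} F\bigl(\mathbf{x}(t), \dot{\mathbf{x}}(t), t\bigr)\, dt ,
    \qquad \mathbf{x}(t) \in \mathbb{R}^n ,\quad \mathbf{x} \in C^2

We seek the path x(t) that makes J stationary, a minimum, a maximum, or a saddle, subject to whatever boundary conditions the problem imposes.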
Well, you may have, for example, time open: I don't know when I'm going to get there. I just want to intercept that asteroid, but I might get there today, I may get there tomorrow, but I need to intercept. That's an open end-time problem. Or you don't care if you launch today or launch tomorrow or next month; that's an open t0 problem. And then you have to find the right trajectory in it. And if you find the extremum, you find the optimal launch trajectory: launching two weeks from now would be the best to get you there. So that's an open-time problem versus a fixed-time problem: I am launching tomorrow, how do I get there? But we need neighboring paths as well. So we have our optimal trajectory x and then x plus delta x; think of delta x almost like those virtual displacements, which we introduced purely for the sake of analysis: where else could I be, and what virtual work would such a displacement cause? And from there we were able to build a series of arguments to come up with equations of motion. But here we're talking about not just discrete states at an instant of time; I'm actually looking at correlated histories of variations, and we'll go more into that on the next slide. So your delta x is a continuous variation, and it has to satisfy any boundary conditions there. You could have variations at the beginning, variations at the end, or if it's a fixed endpoint problem, maybe there's no deflection. I'm starting right here, the rocket will launch from this height, and then who knows what happens, right? With that starting condition, delta x at t0 would be 0, you would pinch it off, but that's not always the case, so we want to be able to solve this in a general way. So our varied path is x tilde, and delta x is simply the difference between the varied path we're considering and x. And we want to find the path that produces the extremum of that functional we had, that's the goal. If you have this, you can take its derivative, because we saw the functional was a function of x and x dot as well, generally speaking. So if you have your true path, its derivative, its velocity, is here, and the varied path, taking its derivative, is here, and so for delta x dot you just put dots over everything. But the variation and the derivative operators are interchangeable. So you can take the derivative of the variation, or you can take the derivative of the states and then difference them to get the variation. Either way, taking the derivative of the difference versus reversing the order, it's the same. We'll use this a few times. Let me quickly, yep, no, it jumped ahead, so let me go back; there was one slide that was coming up. So if you plug in a varied path instead of x, I have x tilde, and putting that into the functional: x tilde was simply x plus delta x, and x tilde dot, sorry, was x dot plus delta x dot, right? And then time and everything else stays the same. But you can see I also now have t0 tilde and tf tilde, because I'm allowing for the very general case: maybe there's an open end time or an open initial time kind of problem. This becomes your functional. So the first variation of this, like we did before, you take your varied path minus the actual optimum, the extremal point that you found. It's just the classic variational math we've done before.
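A quick sketch of the varied path and the exchange property just mentioned, in the same generic notation:

    \tilde{\mathbf{x}}(t) = \mathbf{x}(t) + \delta\mathbf{x}(t), \qquad
    \delta\dot{\mathbf{x}} = \frac{d}{dt}\tilde{\mathbf{x}} - \frac{d}{dt}\mathbf{x}
    = \frac{d}{dt}\bigl(\delta\mathbf{x}\bigr)

so taking the variation and then differentiating, or differentiating and then taking the variation, gives the same thing. Plugging the varied path and the possibly varied end times into the functional gives

    \tilde{J} = \int_{\tilde{t}_0}^{\tilde{t}_f}
    F\bigl(\mathbf{x} + \delta\mathbf{x},\ \dot{\mathbf{x}} + \delta\dot{\mathbf{x}},\ t\bigr)\, dt .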
Well, if we do that, we're going to take J tilde minus J, where J is evaluated at the optimal path. Now, to get the first-order variation you basically take a Taylor series expansion about that path. And what's going to happen is you get the gradient of the F function with respect to the states times delta x, plus the gradient of F with respect to x dot times delta x dot. There would also be the terms at the reference, but those all cancel with the nominal J, so what you're left with are the variations. If delta x and delta x dot go to zero, then my delta J should go to zero again, right? So I've already subtracted out the nominal: usually the Taylor series is your reference plus the first variation, but the reference gets canceled, so we're left with the rest. So we've got this bracketed term, we go, cool, but then also time can vary. So at the end, if I'm going off by a small delta tf, to first order you multiply by delta tf; that's your Taylor series expansion of the integral limit. That boundary term is simply F evaluated at tf times delta tf, and then minus the corresponding one at the initial condition, F at t0 times delta t0, and that's it. Now again, like with virtual displacements, which were arbitrary purely for the sake of analysis, these path variations are also arbitrary. And if they're arbitrary, I should be able to factor things out and say: all these terms times something arbitrary must be zero, therefore each term itself must individually go to zero, right? Delta tf is arbitrary, delta t0 is arbitrary, so we can pick those off. But in here, the delta x and delta x dot are still coupled; they are not individually arbitrary. So to find the extremal we've already found one condition: this term must go to zero and this term must go to zero, because the delta t's at the beginning and end are arbitrary. But this integral term, we can't yet argue it goes to zero, because I could pick a variation whose rate is such that the sum under the integral goes to zero without each piece vanishing. Right, so we are going to have to rewrite this. In this chapter, this is where you have to become one with integration by parts. You've all done this back in calculus. Why do we use integration by parts, Anthony, just in plain words, what is it used for? >> [INAUDIBLE] >> Yeah, so here there's a term with delta x dot. We don't want delta x dot, what we want is delta x. If we can somehow rewrite this term in terms of delta x, then we have something times delta x plus something times delta x, and since delta x is arbitrary, those two somethings have to add up to zero to vanish at an extremal, right? So this is the term where we have to use integration by parts. So let me just do this once. How do you remember it? I don't remember the formula. What I do remember is the product rule for differentiation, and integration by parts is basically the opposite. So let me show you how, this is how Dr. Chavez's mind works. I don't remember integration by parts; I have u and v as a product and I'm differentiating. This could be a time derivative or it could be spatial, you can do integration by parts with respect to space or time. So depending on the coordinates, I'm just using a prime as a general differentiation. If you just apply the product rule, you get u prime v plus u v prime. Hopefully nothing too shocking so far.
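Written out, the first variation being described looks roughly like this (a sketch in the same generic notation, not copied from the slides):

    \delta J = \int_{t_0}^{t_f}\!\left[
    \frac{\partial F}{\partial \mathbf{x}}\,\delta\mathbf{x}
    + \frac{\partial F}{\partial \dot{\mathbf{x}}}\,\delta\dot{\mathbf{x}}
    \right] dt
    \;+\; F\big|_{t_f}\,\delta t_f \;-\; F\big|_{t_0}\,\delta t_0

The two boundary terms come from letting the end times vary; the integral still mixes delta x and delta x dot, which is why integration by parts comes next.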
Now when you integrate this on both sides, some expression differentiated and then integrated again just gives you uv evaluated at the two boundaries, right, the integral of x dot is going to give you x evaluated at the upper and lower boundary, that's all you're going to have. The other side is going to give you the integral of u prime v plus the integral of u v prime. Now, what I'm looking for is that I'm trying to get rid of a term like this: we had something times delta x dot, and I'm trying to trade the delta x dot for delta x. So there's a delta x here and here, tying it back to the expression we had. So if I solve for this, the integral of u v prime ends up being uv evaluated at the boundaries minus the integral of u prime v. Right, so you never have to re-derive it; to me the product rule is so ingrained that I know I can do this, you quickly do that, integrate, and you get this term. So that's integration by parts. You've all seen this; now let's apply it. We will have u and v, so this will be a delta x, this will be a delta x, but in our case I'm getting rid of delta x dot. So what kind of derivative do I have to take of u to trade a delta x dot for a delta x? We have delta x dots in there and we want to have delta x's. So to do that with integration by parts, this u term, right, we said there are two pieces, the u function and the v function, and you end up with u prime. Yes, but what do I have to do to that F to get u prime? What kind of derivative is this prime going to be in our problem? It's a time derivative, right, because we're getting rid of a time differentiation, so this has to be a time derivative. If this happens to be spatial, and you'll see later on when we do Euler-Bernoulli beams and Timoshenko beams and other stuff, there will be spatial derivatives, and then we have to take a derivative with respect to space. That's how we end up with partial differential equations, right? So for now it's very simple, this will be a time derivative. So, this was that second term, I've just taken it out. The first term was something times delta x; we liked that part, right? We just want delta x's in the end. But this was the partial of F with respect to x dot times delta x dot. So in essence this became the u, this is my v prime, and the prime is a time derivative. So this is going to be u times v evaluated at the boundaries, and this is an integration over time, so the boundaries are the initial time and final time. If you integrate over space, it's going to be over the spatial domain, you know, from x equal to 0 to x equal to L for the length of a beam or something, minus the integral of u prime v. Because we're getting rid of a temporal derivative, we end up with the time derivative of the partial of F with respect to x dot. Makes sense? It's simple, no magic here; hopefully it just looks complicated because of all the expressions and dependencies. This is uv at the boundaries minus u prime v integrated over that temporal domain. So good, now we can plug this in. This term goes under the time integral that we had earlier, and we already had del F del x times delta x minus the time derivative of del F del x dot times delta x. But then we also have the boundary piece, which gives us two other conditions.
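A sketch of the integration by parts being applied here, with the lecture's u and v (generic notation):

    \int_{t_0}^{t_f} u\,v'\,dt = \bigl[u\,v\bigr]_{t_0}^{t_f} - \int_{t_0}^{t_f} u'\,v\,dt ,
    \qquad u = \frac{\partial F}{\partial \dot{\mathbf{x}}},\quad v = \delta\mathbf{x}

so that the troublesome term becomes

    \int_{t_0}^{t_f} \frac{\partial F}{\partial \dot{\mathbf{x}}}\,\delta\dot{\mathbf{x}}\,dt
    = \left[\frac{\partial F}{\partial \dot{\mathbf{x}}}\cdot\delta\mathbf{x}\right]_{t_0}^{t_f}
    - \int_{t_0}^{t_f} \frac{d}{dt}\!\left(\frac{\partial F}{\partial \dot{\mathbf{x}}}\right)\delta\mathbf{x}\,dt .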
These are this partial at the final time dotted with delta x at the final time, minus this partial at the initial time dotted with delta x at the initial time. If it's a fixed endpoint problem, like if you think of a string and say, okay, whatever the deflections are, at the beginning and at the end there's a pinch point, there won't be any delta x's, right? In that case these boundary conditions would vanish naturally and be zero. But if there isn't, if this is like a beam that's fixed at one end and free at the other end, it can oscillate and do stuff, then there would be a condition at the end. If you did a spatial problem, you will see that later, right? And that basically means at the end of the beam there has to be no strain on it or something, because it's a free-end condition. So this is where we get the boundary conditions. Now looking at this, these delta t's are arbitrary, the delta x's at t0 and tf are arbitrary, and the path that I'm putting together, integrating in between, is arbitrary. All of these terms that depend on initial and final time, or initial and final space, are what's called the transversality conditions. And from that you will get all these boundary conditions on beams and disks and all kinds of systems, but also temporal dependencies, as we'll see in examples. So these can all be chosen arbitrarily in this problem. So that also means that at the extremal of this functional, F must satisfy this differential equation. Does that look familiar, Josh? It's essentially the Lagrange operator, right? Because with Lagrange's equations we did, okay, you take a Lagrangian, so the functional F is simply the Lagrangian, script L, right? You take the partial with respect to q dot, take the time derivative of that, and then you subtract the partial of L with respect to q. So instead of x we have q's and the sign is flipped, but that's about it. This is simply the Euler-Lagrange equation that we've rediscovered, without saying anything about dynamics. This is finding an extremal of a functional. So you can see these tools are actually useful well beyond finding dynamics. So now we have a whole series of things we can say: these are the Euler-Lagrange equations, and at the extremal this must be true. And we found before that our equations of motion ended up being an extremal of sorts of the virtual work principle, right? We're going to find similar things over again as we do this. But also we have these extra conditions that come out because we have the complete path, and these are the conditions that are needed if we're going to deal with infinite-degree-of-freedom problems like continua, because we have to deal with boundary conditions: how is this thing attached, and so forth, that has to be part of it. And then there's a continuum of possible deflections. Any questions, Josh? >> When we were looking at the Lagrange equations earlier, we only had them set equal to zero if there are no non-conservative forces. >> Yes, yep. >> They're looking analogous [INAUDIBLE] aspect of that until we- >> So we haven't quite made the leap to equations of motion yet, right? So for this to hold there would have to be no non-conservative, no working forces; otherwise it's slightly different. So this exact form will kind of hint at the equations of motion, but it's not the complete answer yet. We'll get there with Hamil-, Hamil-, I can't speak, Hamilton's extended principle, so I have to slow down or I'll start stuttering again. Okay, so we're not there yet.
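Pulling the pieces together, the conditions being described are, in the same generic notation (a sketch, not copied from the slides):

    \frac{\partial F}{\partial \mathbf{x}}
    - \frac{d}{dt}\!\left(\frac{\partial F}{\partial \dot{\mathbf{x}}}\right) = 0
    \qquad \text{for } t_0 \le t \le t_f \quad \text{(Euler-Lagrange equation)}

    F\big|_{t_f}\,\delta t_f - F\big|_{t_0}\,\delta t_0
    + \frac{\partial F}{\partial \dot{\mathbf{x}}}\bigg|_{t_f}\!\cdot\delta\mathbf{x}(t_f)
    - \frac{\partial F}{\partial \dot{\mathbf{x}}}\bigg|_{t_0}\!\cdot\delta\mathbf{x}(t_0) = 0
    \qquad \text{(transversality / boundary terms)}

With F taken to be a Lagrangian L(q, q dot) and with fixed end times and end states, the first line is exactly d/dt(∂L/∂q dot) − ∂L/∂q = 0, up to the overall sign flip mentioned in the lecture.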
You're absolutely correct, but you can see all of a sudden like, whoa, wait, something looks really familiar, right, we're going somewhere.