Let's look at how you get great outcomes with your test and your Ops functions. The way these functions work together with the rest of the team and the rest of the product pipeline to get better outcomes is often referred to as DevOps. It's really a domain-specific extension of agile, where we're asking essentially the same questions, but specifically about these two processes and how they relate to the rest of the product pipeline. And this is critically important: it used to be that test and deploy, or Ops, were specialized areas that somebody else dealt with. Now, if you can't release really fast, you're going to get outrun by your competitors. Back when I started doing software, releasing quarterly was pretty good. Now Amazon releases every 11.6 seconds, and a high-functioning team can probably release multiple times per day. So how do we get there? What are the jobs we need to do to make that happen, and how do we work with our team to actually get there?

Let's take a look at this term, little r sub F, the way we're going to frame it, (1 - r_F), and the economic significance of that term. Let's talk a little about what it's meant to capture and how you might measure it, both on its own and with regard to the overall calculation for big F. What it captures is how much overhead the release process creates for the whole team, where by overhead I mean any kind of manual testing and any kind of manual deployment steps. The idea with DevOps is that we're automating most of that work; the tool chains and techniques to do it are pretty readily available. So in this framing, we now consider that work overhead.

You may or may not want to actually calculate a value for F, either one time or periodically. But in case you do, and just to better understand the practical part of how we might calculate it, let's talk about how that works. The general idea is that we'd take this parameter and, realistically for most teams, estimate it: maybe it's 10%, maybe it's 20%. If your team does really detailed time tracking, which I don't see a lot of teams do (though some do, and it works for them), you might be able to figure it out from the buckets people are charging their time into. Otherwise, just estimate it. Then you get a truer value for f_E in the calculation spreadsheet by dividing by (1 - r_F). The idea is just to frame the calculation in a way that makes all the important things visible while keeping it practical to calculate: you probably have a value for f_E, and you're probably going to want to estimate this term. Ultimately, take that spreadsheet and modify it however it suits you, of course.
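In case it helps to see the arithmetic, here is a minimal sketch of one way that adjustment can work. The names (r_f for the overhead fraction, f_e for effort per item) follow the discussion above, but the exact place the term sits in your own F calculation is an assumption on my part; treat this as an illustration, not the definitive spreadsheet formula.

```python
# A minimal sketch of the release-overhead adjustment discussed above.
# Assumptions (not from the course spreadsheet itself):
#   r_f  -- estimated fraction of total team time spent on manual testing
#           and manual deployment steps (e.g., 0.10 or 0.20)
#   f_e  -- observed effort per work item, measured from feature work alone
# One common way to use the term: divide by (1 - r_f) to "gross up" the
# per-item effort so it includes the release-overhead tax.

def overhead_adjusted_effort(f_e: float, r_f: float) -> float:
    """Return effort per item once manual test/deploy overhead is included."""
    if not 0 <= r_f < 1:
        raise ValueError("r_f must be a fraction in [0, 1)")
    return f_e / (1 - r_f)

# Example: 8 hours of feature work per item, with roughly 20% of team time
# going to manual release steps, works out to about 10 hours of true cost.
print(overhead_adjusted_effort(8.0, 0.20))  # 10.0
```

However you wire it into your spreadsheet, the useful part is that the overhead becomes a visible number you can watch shrink as the team automates more of its testing and deployment.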
So how do we work with these teams and these individuals to get to these outcomes? Well, in the olden days, the way we did things (and I actually did a lot of this Ops work myself) was that the developers would get the requirements, or whatever the inputs were; the more modern version of that would be user stories and wireframes. They'd create software and hand it off to the test people, whose job was to test it. In practice, that meant manually testing it as best they could, because they obviously couldn't test everything, so they made their best judgment calls about what to cover. Then test would hand it over to Ops and say, all right, go deploy this and make sure it doesn't break; here are some notes from development and maybe some upgrade scripts, for example, to take the software from version X to version X plus one.

You can see there's a lot of natural tension in this and a lot of ways for it to break. Dev hands off to test with a "make sure it works," and it all rolls downhill to these poor Ops people (whom I'm sympathizing with, myself included, I guess, to a degree). This didn't work super well. These handoffs failed for reasons that are pretty predictable: it's hard to account for everything if you're developing one way, testing another way, and deploying a third way, with lots of little variations. There are a jillion geometrically compounding things that can go wrong.

So the idea with DevOps is basically to ask: how do we do this better? The answer really boils down to two main things. One is more collaboration and continuity in the way these things work. The second is automation: automating these processes. The tools are there, the tasks merit automation, the economics work out, and it works well for teams once they get over the hump of doing it.

What does that mean specifically? Well, on the test side, you may have heard of the test pyramid: all the different kinds of tests you might do, like unit tests, integration tests, system tests, and so forth. You may have heard these referred to by different terms; the nomenclature is kind of loose, which is fine. What often happens now is that the developers write their own unit tests, the lowest-level tests that exercise their code, and they do that as they go along. The test folks write the upper-level tests and additionally work on the infrastructure that helps everybody test their own code and keeps things running. That works a lot better. And what's beautiful about it is that once you've automated a test, you can run it as much as you want, essentially for free. So every single time a developer makes a change, they can push that button and make sure everything's okay, which also helps them do their job better.

With Ops, what this means is that instead of development making some automation scripts that will upgrade the software, Ops and Dev work together, maybe directly, maybe through a self-service platform that Ops makes available to them, to automate the steps of deploying, upgrading, configuring, and so forth. And then, kind of like with the unit tests, they're practicing, they're playing the whole time: the way they install, update, and deploy the software in their development environments, and in anything intermediate between development and release environments, is always done consistently, the exact same way. So you're going to catch problems earlier, and you're not going to have to execute a whole bunch of manual steps that are enormously prone to error. In a moment I'll show a tiny sketch of what each of these looks like in practice.
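To make the test side a little more concrete, here is a purely illustrative sketch: a made-up function and a couple of pytest-style unit tests for it. Nothing here comes from a real codebase; the point is just that once a test like this exists, running it is essentially free, so it can run on every single change.

```python
# Illustrative only: a hypothetical function and its unit tests (pytest style).
# Once tests like these are automated, a developer can run the whole suite
# on every change, locally or in CI, essentially for free.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after a percentage discount (hypothetical example)."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_basic():
    assert apply_discount(100.0, 20) == 80.0

def test_apply_discount_rejects_bad_percent():
    import pytest
    with pytest.raises(ValueError):
        apply_discount(100.0, 150)
```

You would typically run this with pytest and wire it into your CI system so the whole suite runs automatically on every commit; that's the "push the button" moment described above.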
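And to make the Ops side concrete, here is a minimal sketch of a scripted deploy step. The service name, commands, and file paths are all hypothetical, and real teams more often express this in their CI/CD tooling or a self-service platform that Ops provides; the point is that dev, staging, and production all run the exact same automated steps instead of a manual checklist.

```python
# Minimal sketch of a scripted deploy. The image name, manifest path, and
# environments are hypothetical placeholders; real teams usually put this
# logic in their CI/CD pipeline. The idea is that every environment runs
# the exact same automated steps, with no hand-executed checklist.
import subprocess
import sys

STEPS = [
    ["docker", "build", "-t", "myapp:latest", "."],   # build the image
    ["docker", "push", "myapp:latest"],               # publish it
    ["kubectl", "apply", "-f", "deploy/myapp.yaml"],  # roll it out
]

def deploy(environment: str) -> None:
    """Run the same deploy steps for any environment (dev, staging, prod)."""
    print(f"Deploying to {environment} ...")
    for step in STEPS:
        result = subprocess.run(step, check=False)
        if result.returncode != 0:
            sys.exit(f"Step failed in {environment}: {' '.join(step)}")
    print(f"Deployed to {environment}")

if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "dev")
```

Because the same script (or pipeline) runs everywhere, the team is effectively rehearsing the production deploy every time they deploy anywhere else, which is where the "catch problems earlier" benefit comes from.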
So this is where teams want to get to, and your job as a product manager is mostly to identify what these investments might be and make sure the team has access to the training and the tool chains to do these things. And then, and this is honestly the hard part, to create headroom where they can do those things by doing a better job of bringing fewer things into the pipeline. You can't drop everything and spend six months or a year automating everything, but you also can't not do this; otherwise you're just going to get out-experimented, out-released, and outperformed by the competition. So it's a matter of striking a balance, prioritizing, and then iteratively making these investments, seeing how they're going, and figuring out how to get the most out of them. That's how you get the best results from working with your test and your Ops functions, so you can keep your product pipeline healthy and give your customers the best possible experience.