So how does Ethernet actually work? Let's look at Ethernet in a little more detail. Suppose we have a network with a bunch of switches operating at layer two. This is a single local area network, and you have two end hosts with their own MAC addresses. They have IP addresses too, but that doesn't matter for layer two forwarding. Now suppose one end host wants to send traffic to the other end host as a destination. What it's going to do is construct a frame; we've talked about packets, and at layer two we call them frames. So it constructs a layer two frame with the source and destination MAC addresses and the payload, the data it wants to send, and it sends that frame out its interface.

Its upstream switch receives that frame, and that switch says, well, I don't know where the destination is, so it broadcasts the frame out all its other interfaces, because it hasn't learned anything yet. The other switches receive the traffic. Let's focus on the one at the top there: that one receives the frame, and since it hasn't learned anything yet either, it also broadcasts out its various links. Then the next switch receives it and broadcasts as well, and this process keeps going. You can see the frame just continues around and around, so there's a problem here, right? Do you see the problem? There's a loop. I'm following the algorithm I described, where each switch broadcasts the packet out all interfaces when it doesn't know where the destination is, but there's nothing stopping that process, so we get a loop.

In Ethernet networks, when packets get stuck in cycles like this and go around forever, these are called broadcast storms, because you have nodes broadcasting the same traffic over and over with nothing to stop it. This is something you have to be careful about when you design Ethernet networks, because there's no stopping condition.

So what can we do about broadcast storms? Is there some way to keep packets from going around these loops forever? Well, one thing we could do is make switches remember which data packets they've already forwarded. We've seen that approach in other protocols: a switch could say, I've seen that packet before, I'm not going to re-flood it. That would be correct. The problem is we've used those approaches for control packets, and doing it for data packets is not very scalable. Imagine a host sending megabytes and megabytes of data. Are you really going to store all of that in a switch, and for how long? The loop detection mechanisms we've talked about don't really work for data. So the question is: how can we do these floods if switches don't maintain per-packet state, or are limited in how much state they can maintain?
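To make that forwarding behavior concrete, here's a minimal sketch, in Python, of what a learning switch does with each frame: learn which port the source MAC address lives behind, forward directly if the destination is known, and otherwise flood out every other port. The Switch class, the frame-as-dictionary representation, and the send helper are all made up for illustration; the point is just that nothing in this logic stops a flooded frame from circulating forever once the topology has a loop.

```python
# Minimal flood-and-learn sketch. Switch, the frame dictionary, and send()
# are hypothetical stand-ins; real switches implement this in hardware.

class Switch:
    def __init__(self, name, ports):
        self.name = name
        self.ports = ports      # port identifiers on this switch
        self.mac_table = {}     # learned MAC address -> port

    def receive(self, frame, in_port):
        # Learn: the source MAC is reachable through the port it arrived on.
        self.mac_table[frame["src_mac"]] = in_port

        dst = frame["dst_mac"]
        if dst in self.mac_table:
            # Known destination: forward out exactly one port.
            self.send(frame, self.mac_table[dst])
        else:
            # Unknown destination: flood out every other port. There is no
            # stopping condition here, which is how broadcast storms happen.
            for port in self.ports:
                if port != in_port:
                    self.send(frame, port)

    def send(self, frame, port):
        print(f"{self.name}: {frame['src_mac']} -> {frame['dst_mac']} out port {port}")
```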
So this is a challenge, and there is a solution for it, an approach that has been around for a long time, and the idea is pretty simple and elegant in some sense. The idea is we're going to embed a spanning tree in the topology and just stop using all the other links. If you think about it, that works, right? You can't have a forwarding loop if the topology you actually forward over has no loops in it. It sounds kind of naive and silly: we have this topology, it's all nicely connected, and what we're going to do is just turn off links and throw them away. We're not literally going to physically disconnect the links; there's a protocol that shuts down ports. So it seems wasteful, but it's also a pretty simple approach, and simplicity is very valuable in network design, because it eliminates bugs, and it's really hard to develop distributed protocols that work in practice. There are a lot of benefits to doing simple things, and this is the approach that won: basically every Ethernet device today supports creating a spanning tree.

The way it works is there's a distributed leader election process where a root switch is elected, and then all other switches figure out their best paths to the root. They turn on the ports along those paths and turn off all the other ports. So if I'm a switch, I'm going to turn on all ports that point toward the root, and I'm also going to turn on the ports for other switches that use me to get to the root, and I'm going to turn off everything else. This is a distributed algorithm that is correct: if you run it, it's guaranteed to construct a spanning tree with no loops, and every switch is still connected.

To show this graphically, the way it works is we first elect a root switch. In the protocol this is done by having each node advertise an identifier, and the lowest-numbered identifier is selected as the root. Simple, straightforward approach. Then we build the spanning tree off of this: all switches figure out their best paths to the root, they turn on those links, and they disable all the other ports. So when a host sends a packet, it gets sent to its upstream switch, and then it's forwarded only along links in the spanning tree; those are the only links available for forwarding, so you're guaranteed not to get loops. It'll be broadcast and it'll reach all destinations.

So this is an approach that eliminates loops, and that's good, but it has some downsides. One downside is that you're disabling these extra links, but one thing to keep in mind is that this is a continuously running protocol, so it is actually failure resilient: if one of the links on the tree fails, the protocol runs again and selects another path to the root, so those additional links are still providing some benefit. Another issue is you might look at this and say, well, that's unfortunate, I paid for all these links and all my traffic is going over a small subset of them, so I'm going to get really uneven load balance across my network. That's true, but there's something else we'll talk about later called VLANs, which is the ability to run multiple virtual networks atop a single physical network.
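To give a rough idea of what the protocol computes, here's a simplified, centralized sketch in Python. Real spanning tree (IEEE 802.1D and its successors) arrives at this result through distributed BPDU exchanges and uses path costs and tie-breaking rules; this sketch just elects the lowest bridge ID as root and keeps each switch's shortest-hop path to it. The topology and bridge IDs are made-up examples.

```python
# Simplified, centralized sketch of the tree the distributed protocol
# converges to; the real protocol computes this via BPDU exchanges.

from collections import deque

def spanning_tree(links, bridge_ids):
    """Return the set of links kept active, given undirected links and
    a bridge ID per switch (lowest ID wins the root election)."""
    root = min(bridge_ids, key=bridge_ids.get)

    # Build adjacency lists from the link list.
    adj = {sw: [] for sw in bridge_ids}
    for a, b in links:
        adj[a].append(b)
        adj[b].append(a)

    # Breadth-first search from the root: each switch keeps the link on its
    # shortest path toward the root, and every other link is disabled.
    active, visited, queue = set(), {root}, deque([root])
    while queue:
        sw = queue.popleft()
        for neighbor in adj[sw]:
            if neighbor not in visited:
                visited.add(neighbor)
                active.add(frozenset((sw, neighbor)))
                queue.append(neighbor)
    return active

# Made-up four-switch example: s1 has the lowest ID, so it becomes root,
# and the s2-s3 link ends up disabled to break the loop.
links = [("s1", "s2"), ("s2", "s3"), ("s3", "s1"), ("s3", "s4")]
bridge_ids = {"s1": 10, "s2": 20, "s3": 30, "s4": 40}
print(sorted(tuple(sorted(l)) for l in spanning_tree(links, bridge_ids)))
```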
And the thing about VLANs is that you can have a different spanning tree for each VLAN. If you have per-VLAN spanning tree turned on, a separate spanning tree is computed for each VLAN, so traffic for some VLANs goes over some links while traffic for other VLANs goes over other links, and so on; there's a small sketch of this idea below. So yes, there are some downsides to using spanning trees, but some of those downsides have been mitigated by these additional mechanisms. What I've done here is give you an overview of basic Ethernet forwarding: you flood frames along the available paths, but before you do that, you first construct the spanning tree to limit the ability for loops to occur.
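To make that per-VLAN point concrete, here's a small sketch that reuses the spanning_tree function from the previous sketch. The idea is that each VLAN can be given different bridge priorities, so a different switch wins the root election per VLAN and a different subset of links stays active for that VLAN's traffic. The VLAN numbers and priorities here are made-up examples, not real configuration.

```python
# Per-VLAN spanning tree sketch, reusing spanning_tree() from above.
# Each VLAN gets its own bridge priorities, so each VLAN's root election
# can produce a different root and keep a different set of links active.

links = [("s1", "s2"), ("s2", "s3"), ("s3", "s1"), ("s3", "s4")]

# VLAN 10 prefers s1 as root, VLAN 20 prefers s3 as root (made-up values).
per_vlan_bridge_ids = {
    10: {"s1": 1, "s2": 20, "s3": 30, "s4": 40},
    20: {"s1": 30, "s2": 20, "s3": 1, "s4": 40},
}

for vlan, bridge_ids in per_vlan_bridge_ids.items():
    tree = spanning_tree(links, bridge_ids)
    print(f"VLAN {vlan} active links:", sorted(tuple(sorted(l)) for l in tree))
```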