Storm Control
We run a large number of LANs all over the country that are “controlled” by the particular business unit. We manage the gear, but, since they have the money and have to pay for anything we do, they make the final decision on what gets put in. Sometimes that gets out of hand, as you can well imagine.
A good terrible example came up a few months ago. It seems that, at some time in the past, one site needed some more LAN ports, but, instead of calling us and having us send them another switch, one of the “technical people” there brought in a hub from home. It really irks me to see a hub on the switched LAN, but we really have no control over those decisions. They plugged the hub into one of the existing drops somewhere in the building and plugged everyone in. It worked…until somebody moved one of the machines. The machine was at a desk near the hub, and the network cable, still with one end plugged into the hub, was just left lying there. A good Samaritan came by, saw that the hub was not plugged into the network (though it was through another path), and plugged it back in for us — providing a nice second link from the hub to the switch stack in the closet. Take one switch stack, add a hub, insert a switching loop, bake at 350F for a few milliseconds, and you have a broadcast storm. If you don’t know already, broadcast storms are bad and eat switch CPU like the yummy cookies we baked. In this case, several 3750s were taken completely down.
How does one prevent such a thing from happening again? Well, the first thing to do is to get the CTO to tell everyone that they can't plug hubs into the network. That works about 0% of the time, though, so we had to find a solution that was enforceable. One of my coworkers found the traffic storm control mechanism built into Cisco switches. It lets you set thresholds for broadcast, multicast, and unicast traffic and take action when those thresholds are reached.
Here are the gory details. I need to mention, though, that storm-control is configured very differently across platforms and IOS versions. I would say your mileage may vary, but it’s probably more accurate to say that this won’t work on your switch. A 6500 is configured differently than a 4500. A 2900XL is different from a 2950. This will get you going, but you’re going to have to do some research on your own to find out what works on your platform.
interface FastEthernet 0/1
 storm-control broadcast level 50
 storm-control action shutdown
What just happened? Good question to ask. If broadcast traffic on F0/1 reaches 50% of the available bandwidth, the port is shut down. That means that if broadcast traffic takes up 50Mbps on this 100Mbps port, the port is taken down just as if you had issued a shutdown on it (strictly speaking, it's put into the errdisable state). You should probably configure the same for multicast and unicast as well to make sure you don't get bitten by those. If you don't want to shut down the port, you can use the trap action instead to just send an SNMP trap with the port information, but that doesn't prevent very much; the storm will probably have wreaked its havoc before the email for the trap lands on your Crackberry.
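One middle-ground option, sketched here from the Catalyst-style errdisable recovery feature (check your platform before trusting the syntax): keep the shutdown action, but let the switch re-enable the port automatically after a cooldown, so a one-off burst doesn't mean a truck roll.

errdisable recovery cause storm-control
errdisable recovery interval 300

With that in place, a port disabled by storm control comes back on its own after 300 seconds; if the storm is still raging, it just gets disabled again.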
Here's another big disclaimer: finding a good level for a port can be very, very difficult. A Linux box is going to have a very different broadcast/multicast/unicast traffic profile than a Windows box, which is different again from a Mac. You may have to spend a lot of time analyzing SNMP counters to find out what a good level is. God help you if you have a hub like we did with mixed computer platforms on it.
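If you do go counter-hunting, the arithmetic is at least simple. This hypothetical helper (the function name and the average-frame-size assumption are mine, not Cisco's) turns two samples of an IF-MIB broadcast packet counter (e.g. ifHCInBroadcastPkts) into a rough percentage of link bandwidth, which is the unit the storm-control level uses:

```python
def broadcast_level(pkts_start, pkts_end, interval_s, link_bps, avg_frame_bytes=128):
    """Estimate broadcast utilization as a percentage of link bandwidth.

    pkts_start / pkts_end: two readings of a broadcast packet counter
    (e.g. IF-MIB ifHCInBroadcastPkts) taken interval_s seconds apart.
    avg_frame_bytes is an assumption; broadcast frames are typically small.
    """
    pps = (pkts_end - pkts_start) / interval_s   # broadcast packets per second
    bps = pps * avg_frame_bytes * 8              # approximate bits per second
    return 100.0 * bps / link_bps                # percent of link bandwidth

# 5,000 broadcast frames over 60 seconds on a 100Mbps port
# works out to well under 1% -- nowhere near a storm.
level = broadcast_level(0, 5000, 60, 100_000_000)
```

Poll a quiet day and a busy day, then set your threshold comfortably above the worst legitimate number you see.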
Hi Aaron,
in this specific situation you would get the same benefit with Spanning Tree BPDU Guard, assuming the ports attached to the hub were configured with portfast.
This way you do not need to worry about the correct levels of broadcast/unicast/multicast traffic. The port will just be disabled if it receives a BPDU, which would obviously happen in the situation you described above.
Absolutely right, Sebastian. That would fix the problem of hooking up random switches and ill-fated hubs to the network.
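For anyone who wants to try Sebastian's suggestion, the per-port configuration looks roughly like this on a Catalyst (as always, syntax varies by platform and IOS version):

interface FastEthernet 0/1
 spanning-tree portfast
 spanning-tree bpduguard enable

There's also a global knob, spanning-tree portfast bpduguard default, which turns on BPDU Guard for every portfast-enabled port in one shot.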
I wasn’t exactly clear on how the hub got connected twice to your switch fabric, but I guess somehow it was and that’s what caused the switching loops you’re talking about. Absent that second uplink to your switches, you’d probably be OK with that hub connected.
Bubba-jay: Right-o — hubs are alright as long as you don’t get any loops. BTW, I edited the article for a little more clarity.
I guess it doesn’t take much broadcast traffic in order for a lethal storm to get started. Taking down several newer switches like that is no joke!
No doubt. When this happened, I turned off spanning tree on a 2950, plugged a crossover cable from f0/1 to f0/2, and plugged in my Windows laptop to f0/3. I started to ping the IP of the switch, but, before five packets came back, the switch completely stopped responding. When I unplugged my laptop, it came back. It was pretty cool, actually. 🙂
Just reading a bit here. A really nasty issue is when a single miniswitch or hub is looped on itself and has only a single link to your switched fabric. BPDU Guard does not readily catch this, but a broadcast storm-control level of 20 will. We have also found that a unicast level (like in the description above) set to about 30 will suffice, with multicast levels at about 50 so that ghosting is still supported. If you have found more suitable levels, please forward them.
Thanks
Roger
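Roger's numbers translate into configuration something like this; treat them as a starting point to tune against your own traffic, not as gospel:

interface FastEthernet 0/1
 storm-control broadcast level 20
 storm-control unicast level 30
 storm-control multicast level 50
 storm-control action shutdown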
Hey guys,
We have a similar issue in our network.
We are using Dell PowerConnect switches and Avaya phones in our network.
Sometimes users connect both ports of an Avaya phone to an 8-port switch, which creates a loop in the network, and the whole site goes down. We already have STP running, but it doesn't help because the Avaya phone behaves like a hub. Is there any solution for this issue?