Stubby Post – What’s an IDB?
I posed a philosophical question on Twitter the other day: should a single trunk link be put in an EtherChannel bundle just in case you need to expand later? I didn’t really expect an answer, but the ever-verbose @WannabeCCIE pointed out (in not so many words) that you should watch your IDBs. What is that?
That’s an interface descriptor block. I admit that I’m not intimately familiar with them, but they’re data structures in IOS used to keep track of the interfaces on that device. They come in two flavors – hardware and software. HWIDBs usually represent a physical interface, but they also represent tunnels, SVIs, PortChannels, subinterfaces, and any other virtual interface that you can configure. The SWIDBs represent the layer-2 encapsulation of each HWIDB, so you’ll see entries talking about Ethernet, HDLC, PPP, etc. That means that every interface you have on a router consumes two IDBs (there are always exceptions). That’s important because each platform and IOS version combination has a limit to the number of IDBs that device supports.
If you check out one of Cisco’s pages on IDBs, you’ll see a pretty table showing the limits. The 3640 running 12.4(25b) that I run in my GNS3 lab has a limit of 800 IDBs, which means I can have 400 interfaces configured at most. That little 800 series router running 12.1T that you still have running at the VP’s house has an IDB limit of 300, or 150 interfaces. The 7200 in the data center running 12.3 can handle 20,000 IDBs, or 10,000 interfaces!
If you guessed that you can see your IDBs by typing show idb, then you guessed right. That will show you the IDB limit, how many are being used, a summary table, and a list of all the IDBs with their details. Remember that there may be more interfaces on your device than just the physical ones. You may have an SVI, a loopback interface, or even a null or two. These all count towards the limit.
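If you’d rather not hop onto every box by hand to check, a quick script can pull that output for you. Here’s a minimal sketch assuming the Netmiko library and a reachable IOS device; the hostname and credentials are placeholders, and since the output layout varies by platform and version, it just dumps the raw text rather than trying to parse it.

```python
# Rough sketch: pull "show idb" off a box with Netmiko instead of
# logging in by hand. Hostname and credentials are placeholders.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "router1.example.com",   # placeholder
    "username": "admin",             # placeholder
    "password": "changeme",          # placeholder
}

with ConnectHandler(**device) as conn:
    output = conn.send_command("show idb")

# Output format differs between platforms and IOS versions, so just
# print the raw text for a human to read.
print(output)
```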
Before you get freaked out and start checking the IDB limits on all your devices, take a breath. I’ve never run into the IDB limit on any device, and I’ve never heard of anyone who has. I’m sure someone has, but I don’t remember hearing about it. Think about it for a second. If I took my 3640 and filled it with 4 NM-16ESWs, I’d only have 128 IDBs used (16 ports * 4 modules * 2 IDBs for each port). Don’t forget the null interface and the VLAN 1 SVI you get by default (VLANs take 1; VLAN SVIs take 2 each). That brings the count to 133. Let’s add 100 more VLANs and SVIs on this guy. Now we’re up to 433. How about we put each interface into a channel group of its own? That adds another 128, which brings us to 561. Only 239 more to go.
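If you’d rather let the computer do that napkin math, here’s the same scenario worked out in a few lines of Python. The per-object IDB costs are the assumptions from the paragraph above, not numbers pulled from a real 3640.

```python
# Back-of-the-napkin IDB math for the 3640 scenario above.
# Assumed costs (from the paragraph, not from a real device):
# 2 IDBs per physical or virtual interface, 1 per VLAN, 2 per VLAN SVI.
IDB_LIMIT = 800

ports = 16 * 4                   # four NM-16ESW modules = 64 ports
used = ports * 2                 # 128: one HWIDB + one SWIDB per port
used += 2                        # the null interface
used += 1 + 2                    # VLAN 1 plus its SVI -> 133

used += 100 * (1 + 2)            # 100 more VLANs, each with an SVI -> 433
used += ports * 2                # a channel group per port -> 561

print(f"IDBs used: {used}, headroom: {IDB_LIMIT - used}")  # 561 used, 239 left
```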
Unless you’re doing something out of the ordinary, I don’t think the IDB limit will be a problem. Of course, that depends on your definition of “ordinary”.
Send any IDB questions my way.
I’ve hit the IDB limit on a 7206 doing PPPoE termination. Took a bit to figure that one out!
I’ve hit it on the 7100 platform while doing DSL aggregation.
Yes, DSL aggregation on a 7120. No, the 7100 is NOT a service-provider platform. We had to manually create a subinterface for every ATM PVC. Add to that all of the PPP sessions (using Virtual-Access interfaces), and it’s a recipe for hell and sleepless nights.
Thank $DEITY I was finally able to convince them to go to the 7200 platform for this stuff (AutoVC + aal5autoppp encapsulation == happy me!)
One of our core 7206s reached the limit as well; the same as Paul, it was terminating PPPoE. Not a problem any more, though, as we subsequently did a total overhaul of the network.
I have run into similar situations as the people above, usually when doing PPPoE termination. The IDB limit is often one of the two biggest limiting factors when acting as an LNS (bandwidth being the other major one). For example, you can use any 2800 ISR as an LNS, but you need to keep a really close eye on the bandwidth and the IDB count.
These days I poll the IDB values and get reports when usage reaches x% of the max on my LNS and other interface-dense locations (like L3 switches terminating lots of SVIs for cloud infrastructure).
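For what it’s worth, once the in-use and max values have been scraped out of show idb (or out of whatever does the polling), the reporting part can be a simple percentage check. A minimal sketch, with made-up numbers:

```python
# Minimal sketch of an IDB threshold check. The numbers below are made
# up; in practice you'd parse in_use and limit from "show idb" or pull
# them from your monitoring system.
def idb_over_threshold(in_use: int, limit: int, threshold_pct: float = 80.0) -> bool:
    """Return True when IDB usage is at or above threshold_pct of the limit."""
    return (in_use / limit) * 100 >= threshold_pct

if idb_over_threshold(in_use=680, limit=800):
    print("IDB usage is above 80% of the platform limit -- time to take a look")
```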
Having said all of that, I really prefer using Port Channels where possible, as they add a level of abstraction that comes in handy on border routers, for example, where you set the source interface. I like knowing that I can add a few more interfaces to the bundle to increase bandwidth or upgrade from 100M -> 1G -> 10G. The only problem I’ve had is that third parties like to point to your port channel as the reason why things are failing!
I believe the Cisco 2k and 3k switching platforms have hard limits on the number of Etherchannels – perhaps 16 on a 2960 and 64 on a 3750. Sounds like plenty until you stack three 48 port 3750s 🙂
Nice idea though. And I think you can migrate to a bundle with minimal downtime by choosing the correct mode so the links don’t drop? Time for me to hit the lab…
I came across a max of 4 VLANs on a Cisco 87x router.
When using an 1803 with c180x-advipservicesk9-mz.124-6.T3.bin, I could not add more than 8 VLANs unless I downgraded to c180x-advipservicesk9-mz.124-4.T4.bin or upgraded to c180x-adventerprisek9-mz.124-4.T4.bin.