Stubby Post – Final Tally of 3750 Failures
It’s pretty widely known that I hate Cisco 3750 switches. We’ve had so many hardware and software failures with them that I’ve got a seriously bad taste in my mouth. Since I’m leaving for a new company, I thought I’d publish some statistics while I still have access to the numbers.
Total TAC cases opened related to 3750s: 21
Number of 3750G-12S-S replaced: 21
Number of 3750G-24TS replaced: 7
Total number of RMAs issued: 28
Total number of 3750s in the company: ~120
Failure rate: 23.3%
I can accept a handful of failures, but 23%?!?!? That’s one fine platform you’ve developed there, Cisco. Keep up the good work.
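A quick sketch of that arithmetic, for anyone who wants to check it (Python; the ~120 fleet size is the rounded figure from above):

```python
# Quick check of the failure-rate figures quoted above.
replaced_12s_s = 21   # 3750G-12S-S units RMA'd
replaced_24ts = 7     # 3750G-24TS units RMA'd
fleet_size = 120      # approximate number of 3750s in the company

total_rmas = replaced_12s_s + replaced_24ts    # 28
failure_rate = total_rmas / fleet_size         # ~0.233

print(f"Total RMAs: {total_rmas}")
print(f"Failure rate: {failure_rate:.1%}")     # ~23.3%
```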
Every time I see posts about the 3750s I cringe. Is it a power supply issue?
Is there any difference in failure rate between the 3750G-12S-S and the 3750G-24TS? Which one fares better, the 12S-S or the 24TS?
Daniel: Our failures are usually with the Ethernet ports themselves. We start seeing errors, and that escalates into a state where the ports stop sending traffic. Cisco immediately RMAs them without asking any questions.
Roberto: The overall failure rate of the 24TS is better, but we’re still at about 10% failures. The 12S-S are just horrible devices and make up the bulk of our problems.
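A minimal sketch of the kind of check that surfaces the port-error symptom described above, assuming SSH access and the Netmiko library; the hostnames, credentials, and the exact show command used here are assumptions, not anything from the post:

```python
# Minimal sketch: poll a few 3750s for interface error counters so the
# "errors -> dead ports" pattern described above shows up before ports
# stop passing traffic entirely. Hostnames/credentials are placeholders.
from netmiko import ConnectHandler

switches = ["swi-3750-01.example.net", "swi-3750-02.example.net"]  # hypothetical names

for host in switches:
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username="netops",      # placeholder credentials
        password="changeme",
    )
    output = conn.send_command("show interfaces counters errors")
    conn.disconnect()

    # Print any line with a non-zero error counter (very rough filter).
    for line in output.splitlines()[2:]:
        fields = line.split()
        if fields and any(f.isdigit() and int(f) > 0 for f in fields[1:]):
            print(f"{host}: {line}")
```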
That’s odd. We have hundreds of 24- and 48-port units deployed in our customer base, like hospital campuses and metro WANs, and we see only a very rare failure here and there, maybe less than 1%. Certainly nothing like the 23% reported here. Perhaps there’s something particular to that extinct 12-port SFP model. The 3750X/3560X are pretty sweet. Are you maybe deploying the switches without power filtering or a UPS?
80-90% of our backbone at each location is 3750s (300+), so I have to stand up for them, since I haven’t seen the problems that you are experiencing. Most of my cringe-worthy moments have been with chassis-based systems. We only had one 12-SFP model, and most of the others are 48-port units in stacked configurations. I still love them.
I have several customers (Metro SPs, a national backbone, and hospitals) that rely completely on the 3750 for their networks, and as far as I know the only failure that has happened to these switches was when the headquarters of one of the Metro SPs burned to the ground, and I don’t think any switch would survive that.
What is your experience with the ME3400? That model has been causing a lot of problems for some of my customers.
That agrees with my experience of 3750 switches. Definitely to be avoided where possible.
Jason: The 3750s are always deployed along with 2950s/2960s and/or 4500s, and none of those fail (within reason). It’s just the 3750s every time, so the power and environment are fine.
Andy: We’ve had 3750s in the same rack with 4500s and 6500s just die randomly. Knock wood, we’ve had zero problems with the chassis switches.
Leo: I’ve never dealt with the MetroE gear, so I can’t comment on those.
Greg: I will never buy a 3750 ever again. I’m sure you and Ethan are with me on that.
Strange… we’ve rolled out well over a thousand 3750s over the past few years, and I can only recall two or three RMAs. It’s the 4500s that leave a bad taste in my mouth. I don’t have any hard numbers, but the failure rate on those Sup engines seems pretty bad to me.
I just joined a company that recently installed 30 3750Gs, and lo and behold, in a one-week period 7 of them suffered power supply failures. I don’t have a lot of confidence in them!
We have a raft of 3750s throughout the range, from G to X, PoE and non-PoE, 24-48 ports. I hate them all with a passion. Everything is a chore with this platform. Hardware failures and software failures relating to stacking are the particular banes of my life. Most of my recent weekends have been spent dealing with stacks of 3750s which have either been upgraded or replaced due to faulty hardware. The single-integrated-PSU Gs with their horrific RPS system drive me mad.
Yesterday I had to fly to an island to swap out a bust G which was running on RPS after blowing its internal PSU. The first thing I did (after prep, of course) was to switch the RPS to standby to kick the failed G over – either back to its internal PSU if it wasn’t fried, or dead due to lack of power. I wasn’t counting on this knocking off 2 other Gs in the same stack. A 5-member stack down to 2. WTF?! They can’t have been running on RPS, as the stupid thing only supplies one device at a time, so why the hell did they power themselves off when it was disabled?! Frantically ripping the RPS cables out of the now-dead switches (the RPS was in standby, remember…), then removing and reapplying the power cable, had no effect. Removing the stack cables completely and reapplying power DID, however, revive them. So after I repeated this for the not-really-failed switches and swapped out the one with the blown supply, I went to change my underwear.
Other stacking woes? Removing a stack member for an upgrade (replacing a G with an X): we disconnected both stack cables (the to-be-replaced switch was powered off), but ‘show switch stack-ports’ showed 3 of the 4 necessary cable endpoints as DOWN. How the f… It turns out the stack thought this member was still present when it wasn’t (it showed up in ‘show switch’ as Ready), and forwarding was getting screwed up for some reason (maybe it was punting traffic down a stack cable that wasn’t really there? :-/). We had to reattach and then detach the switch again in order to get it to properly disappear.
Added an X to a stack a few months ago; it functioned perfectly, except that it didn’t apply any QoS marking on any of its interfaces. No other hallmarks of any kind of problem, just incorrectly dropped traffic, which was pretty tricky to isolate. A reboot fixed that.
StackPower whinging about unbalanced supplies in an identically populated stack with no hardware or power issues. Reboot fixed that.
Spontaneous stack split during the addition of an X to a stack of other Xs. A stack cable in the middle of the stack suddenly appeared disconnected and caused a complete stack split while the ring was open at the bottom for the new member. That one could’ve been the fault of the guy doing the work, but he denies it. I have no reason to trust him on this, however: I’m fairly certain he’s the one who keeps bending fibres very tightly (to the point of the insulating sheath going white with stress…) to make them fit in cable management arms.
When we finally replace our X installations with Nexus, we’ll be replacing the G installs with the Xs, which at least have dual modular power supplies. A shitload of work, and still a massive potential for stack issues to bite us all, though. And then I’m going to go Office Space on those fucking RPS.
Check serial numbers. The issue with 3750s – StackWise port issues – is a capacitor problem inside the device. Not sure it’s widely known, but we’ve seen it a lot in our organization. Serial numbers usually begin with (but are not limited to) cat08XXXXXXX.
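If you want to sweep an inventory export for that serial range, a trivial check along these lines works; the file name, file format, and any prefixes beyond cat08 are assumptions for the example:

```python
# Flag switch serial numbers that fall in the suspect range mentioned
# above (prefix "cat08", case-insensitive). Assumes a plain text file
# with one serial per line; the file name is made up for the example.
SUSPECT_PREFIXES = ("CAT08",)   # extend as needed, e.g. ("CAT08", "CAT09", "CAT10")

with open("3750_serials.txt") as f:
    for serial in (line.strip() for line in f):
        if serial.upper().startswith(SUSPECT_PREFIXES):
            print(f"Possible affected unit: {serial}")
```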
I have done 8 RMAs this year on 3750s. The issue with every one was the power supply.
Last year I replaced about 6 with the same problem: power supply.
Yeah, I know, dirty power, right… even through the UPS, eh? I don’t think so. It’s a quality control issue.
Does anyone know which capacitor(s) inside the 3750 power supplies are failing? I would love to try to fix the dead power supply that I have here, before paying for a replacement.
Just looking at the posts here, and the rant about power supplies dying… even when on RPS.
Do you folks realize that when the RPS takes over, the switch (except E-series 3750s) will fault the p/s, and a hard boot is required to get the internal p/s to go green again? A reload will not fail back to AC power. Even on the E series you must manually switch back to the internal p/s, though without the reboot.
I had nine 3750Gs with a p/s fault, but you must physically remove the AC cord, put the RPS in standby, and reconnect AC for a reboot, which clears the p/s fault… these switches are designed this way; it’s not a bug. Although my RMA was accepted, I am sending the “new” units back rather than swapping them in…
cat09 and cat10 as well
Hello,
I’m sorry to bore you with this. I plugged in a C3560-48TS recently, and strangely enough the device blew smoke in my face as soon as I switched it on at the wall socket.
I don’t know what the problem was, since it’s rated for 110-240V, though I am in a 240V environment.
Tell me your thoughts, please…
We have been using Cisco 3750-X-48PF stacked switches and are often facing hardware problems. A 3750-X goes faulty even though its power supply is still working. We replaced the faulty switch with a new 3750-X-48PF, but within a month the new switch went faulty as well. Yesterday I replaced two more 3750-X switches due to hardware problems. If anyone knows a solution, please let me know.