Please note: This site is in no way affiliated with Cisco and is intended only to make light of the differences between the various aspects of their product range that cause frustration amongst some of their user-base.
Fuck you, 3750.
The Cisco Catalyst 3750 is a piece of shit switch. Sure, it switches packets and has CEF and QoS and PoE if you want it. Sure, the later versions have dual power supply options and the former had that oh so fucking wonderful RPS. A Networkers presentation (subsequently forwarded to me by TAC when investigating an issue with performance) contains a slide that proudly proclaims that the 3750 will forward across a stack even when being swung around in the air by said stack cable. Oddly, I don't think that many 3750 administrators are hoping that the sod is going to keep forwarding while they're swinging it by the stack cable: they're just trying to get enough speed to smash the fucking thing into oblivion. Or at least, that's my experience of them. Yours might differ, but probably not by much.
Know Your Enemy
This is the stuff of nightmares. That single integrated and infuriatingly unreliable power supply you see there is 25% of the problem with the platform. Another 25% goes to the RPS. The remaining 50% of the blame is owned by the StackWise setup on the left there. There's some overlap, as you'll see. The other image there is of the range of the 3750G models. These are the nastiest of the whole family (Es aren't too bad, and Xs are pretty good now in homogeneous stacks), but they have an ugly step-sister: the dreaded 1.5RU monstrosities. Stupid, awkward-sized switches (if it's going to be 1.5RU, make it 2RU, eh? At least then it'll fit neatly in racks) with all the same problems as their 1RU equivalents, if not more so on the hardware-reliability front.
3750 RPS - what a shit idea.
The RPS used on the 3750G and E models is shit. There's not a lot more to it than that. It's just properly shit. 6 output ports, and a single input for itself. Of those 6 output ports, only one can be fed at a time. Without a low switch-to-RPS ratio, your risk is ridiculous when you consider the potential for 2 devices blowing supplies at once, or the second failure occurring some time after the first if you didn't notice the first switch go over to RPS. We've had RPSes spontaneously switch modes, seen live switches running off their internal PSUs turn off when the RPS was kicked into standby for maintenance, and watched switches randomly fail over to the RPS when it becomes available despite their internal supplies working just fine. On the up-side, it does generally give you the opportunity to kill the config and flash of a dead box before returning it to Cisco under RMA (which is almost certainly because of the blown power supply it has).
Stack of 3 Xs becomes stack of 4: one missing QoS.
Added a new 3750X to a stack of 3 other 3750Xs to increase capacity, and all appeared to go smoothly. Knowing the 3750s, this was too good to be true. Yep, turns out that the new box in the stack wasn't applying QoS marking on any of its interfaces, despite being configured to do so. Nothing else indicated any kind of problem with the stack or any individual member; just some of our high-priority traffic getting dropped and being discovered to have no QoS marking set. TAC couldn't fathom it; a reload resolved it. Stupid 3750s.
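If you suspect you've hit the same thing, the quickest sanity check is to compare what's configured on the new member's ports against what the hardware says it's actually doing. A rough sketch of the sort of checks involved (standard 3750 IOS show commands; the interface number here is made up, substitute one of the new member's ports):

```
! Confirm the new member actually joined the stack and agrees on its role
show switch

! What QoS you *configured* on the suspect member's interface
show running-config interface GigabitEthernet4/0/1

! What QoS state the port is *actually* in, per the hardware
show mls qos interface GigabitEthernet4/0/1

! Is MLS QoS globally enabled as far as this box is concerned?
show mls qos
```

In our case none of this showed anything wrong, which is rather the point: config and hardware state looked consistent while the marking silently wasn't happening.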
Remove G from stack: 3 of 4 stack cables DOWN.
Trying to remove a G from a stack to replace it with an X, the switch didn't go down cleanly. The stack saw the switch as still present, with 1 stack port UP on a neighbouring device, while it was powered off and physically disconnected from the stack. Reconnected it to the stack, powered it up, then powered it off and disconnected it again, and all was well with the world. What a ridiculous platform.
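For anyone chasing the same ghost member, the stale state shows up in the per-member stack port status. A sketch of where to look (standard 3750 IOS commands):

```
! Per-member stack port status - this is where the neighbour still
! showed a stack port UP towards the switch we'd already unplugged
show switch stack-ports

! Per-member detail, including each member's view of its two stack ports
show switch detail

! Overall membership, priorities and member states
show switch
```

If the outputs disagree with physical reality, as they did for us, reconnecting the offending member and removing it again cleanly is about the only fix short of reloading things.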