Out-sourced Call Center recovers from catastrophic outage

This became a surprisingly full day. An earthquake near Taiwan around 1 PM local time disrupted the communications lines to our out-sourced call center in Manila, the Philippines. This was certainly a surprise to our Customer Care group, who were under the apparently mistaken impression that the out-source company provided diverse, redundant voice routes.

Regardless, it caused the Engineering department a serious headache. Because the assumption had been that redundancy was built in, no business recovery plan had ever been created. A scramble was required to cobble together a work-around to reduce the impact on our customers, whose calls in the meantime were being dumped on the floor.

In the heat of the moment, and I must say that at that moment there was far more heat than light, the discussions were very dynamic and free-ranging. Similar discussions, in the context of international diplomacy, might be called “frank and forthright.” Simple folk like you and me would probably call them “painful.” (The Reader is encouraged to insert here an anecdote about what your Customer Care department considers a crisis and how such events have impacted the lives of working Engineers like us.)

So we’re discussing how to stop spilling all calls to Care onto the floor. Engineering first described the normal flow of these calls from our switch through the IVR and on to the Philippines, routed using the service provider’s toll-free number. Helpfully, there was talk of finding an alternate toll-free number, or even a toll number, just anything that might allow calls to be delivered without being dropped.
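
For readers who like to see the flow in concrete terms, here is a minimal sketch, in Python, of the normal path and of what the outage did to it. The hop names are placeholders of my own, not our actual network elements.

    # Illustrative only: hop names are placeholders, not real network elements.
    def route_call(hops, failed=frozenset()):
        """Hand the call to each hop in order; a failed hop drops the call on the floor."""
        for hop in hops:
            if hop in failed:
                return f"call dropped at {hop}"
        return "call answered by a CSR"

    normal_path = ["mobile switch", "IVR", "provider toll-free route", "Manila call center"]

    print(route_call(normal_path))
    # -> call answered by a CSR
    print(route_call(normal_path, failed={"provider toll-free route"}))
    # -> call dropped at provider toll-free route  (the earthquake scenario)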

The alternate toll-free approach was quickly eliminated because the service provider had no such “plan B” in place. (Note to Customer Care: that contract ought to be revisited.)

We discussed routing calls to a cellular voice-mail box after first playing a sympathetic, not-our-fault recording. This quickly morphed into routing calls to one of our cellular switch voice-mail boxes, with the sympathetic recording serving as the outgoing announcement. Engineering liked this solution as it required no tweaks to the IVR, only switch translations. Unfortunately, the switch gamely clung to the notion of delivering ANI, and so every mobile-originated test call using this work-around routed the caller to their own voice mailbox. Not exactly the desired result.
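
A rough sketch of why this backfired, assuming (as the test calls suggested) that the voice-mail platform keys on the delivered ANI whenever the caller is one of our own subscribers. The numbers and the subscriber check below are invented for illustration.

    # Hedged sketch: assumes the voice-mail platform selects a mailbox by delivered ANI
    # for its own subscribers. Numbers and the subscriber test are placeholders.
    CARE_MAILBOX = "5550100"  # mailbox holding the sympathetic outage greeting

    def is_our_subscriber(number):
        return number.startswith("555")  # placeholder check

    def select_mailbox(ani, dialed):
        # The switch kept delivering ANI, so our own mobile callers matched here...
        if is_our_subscriber(ani):
            return ani       # ...and landed in their own mailbox
        return dialed        # only off-net callers would ever reach the greeting

    print(select_mailbox(ani="5550177", dialed=CARE_MAILBOX))  # -> 5550177, not 5550100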

Next we hit on the idea of routing these calls to a DID line on our corporate PBX, where the outgoing voice-mail message described the problem and encouraged the caller to leave a message that a CSR would respond to shortly. This got us past the ANI issue. So the new PBX extension was created, the verbiage for the outgoing message was written and recorded, and the switch translations were modified to re-route the calls.
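
The same sketch, adjusted for the PBX route, shows why this variant sidesteps the ANI problem: the corporate PBX picks the mailbox by the dialed DID extension and never consults ANI. The extension number and greeting text are placeholders, not our actual configuration.

    # Hedged sketch: the PBX selects the mailbox by dialed DID, ignoring ANI.
    # Extension and greeting are placeholders.
    CARE_DID = "7300"  # hypothetical new PBX extension for the re-routed Care calls

    GREETINGS = {
        CARE_DID: "We are experiencing a service interruption; please leave a message "
                  "and a CSR will get back to you shortly.",
    }

    def pbx_answer(ani, dialed_extension):
        # ANI is accepted but never used for mailbox selection.
        return GREETINGS.get(dialed_extension, "unassigned extension")

    print(pbx_answer(ani="5550177", dialed_extension=CARE_DID))  # -> the outage greeting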

And wouldn’t you know it. Just as all the pieces of this work-around fell into place, the out-sourced Customer Care center called to say they were back online and no work-around would be required. The good news is that, yes, no work-around was needed. But more importantly, we learned that such a failure was a) possible and b) something we could bypass if needed. However, the Engineers are still shaking their heads at the furor it caused. They are happy to return to crises of the routine, day-to-day type.

Just another day in the life.