This morning, a project I’ve been working on with a small team scattered around Research was scheduled to be presented to a VP. We wanted to give a demo, and some of my colleagues back East worked through the weekend to integrate their technology into the system. All was well.
So this morning, when it came time to demonstrate, Things Went Awry. The person giving the demo couldn’t get connectivity in the meeting room; she solved that problem by switching from wireless to wired, but then the system didn’t work well, returning “500 Internal Server Error” messages. Of course, we had backup slides to show, and we made light of the problems, but it was still infuriating. Even restarting the server, and then the whole machine, didn’t help.
Once I got to the office, I was determined to find out what was wrong and fix it, because I hadn’t changed a thing since Friday, when all was well (the integration all happened on another system on the other coast). Nothing made sense, so eventually I searched the corporate directory for an expert, and found the right guy in one shot. He listened to my description, and then asked me if I’d pinged his servers to see if I had connectivity.
I hadn’t, so I tried. Name resolution took an awfully long time, and sometimes it failed outright. That was a strong hint that the problem really wasn’t in my code: it was somewhere in the bowels of the nameservers. So I got a nameserver expert involved, and, eventually, all was well.
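If you ever want to run that same check yourself, here is a minimal sketch of the idea (the hostname is a made-up placeholder, and plain `ping` or `nslookup` from the command line works just as well): it times a handful of name lookups, which is usually enough to tell a slow or flaky resolver apart from a broken application.

```python
import socket
import time

# Hypothetical hostname standing in for the demo server; substitute your own.
HOSTNAME = "demo-server.example.com"

def time_lookups(hostname, attempts=5):
    """Time a few DNS lookups to see whether name resolution is slow or failing."""
    for i in range(1, attempts + 1):
        start = time.monotonic()
        try:
            address = socket.gethostbyname(hostname)
            elapsed = time.monotonic() - start
            print(f"attempt {i}: resolved to {address} in {elapsed:.2f}s")
        except socket.gaierror as err:
            elapsed = time.monotonic() - start
            print(f"attempt {i}: lookup failed after {elapsed:.2f}s ({err})")

if __name__ == "__main__":
    time_lookups(HOSTNAME)
```

If the lookups take seconds instead of milliseconds, or fail intermittently, the trouble is in name resolution rather than in your own server.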
But why did this have to happen this morning?