I once worked with a team that designed network diagnostic tools. These guys were sharp. They could lecture nonstop for a month about network performance and only tell you a tenth of what they knew. Some of them literally wrote the original specs for the network protocols we use today and could tell you exactly what bits were traveling on the wire at each step of a connection. But every time a piece of code took more than a few seconds to run, someone would say, "Must be a slow network today that's causing this bottleneck."
Now, our building was equipped with one of the fastest LANs money could buy in those days: high-end routers, a state-of-the-art backbone, the works. Even in the worst cases, it never took more than half a second to send data from any machine in the company to any other. And everyone on the team knew this! They could recite the most minuscule details of the TCP/IP protocol suite off the top of their heads, but even the best of them could occasionally fool themselves into thinking that poor performance was caused by unnamed factors on the network rather than by their own code.
Don't be like that. Always assume the problem is in your own code until you have overwhelming evidence to the contrary.
On my first job out of college, the development manager was an un-PC guy with a scruffy beard and a ponytail who laid down a rule for the entire team: No development tools could be installed on the official test servers unless those tools shipped with our product (i.e., no third-party debuggers). Any bugs discovered on the test servers would be debugged the same way we'd have to debug problems in a customer's environment. I didn't understand the point of this at the time. We had a 14-person team developing a 1.0 product from the ground up, so naturally there were a huge number of bugs, and on countless occasions we'd plead, "Just let us install a debugger on that machine! It would make finding this bug SO much easier!" But my manager wouldn't hear of it.
Looking back on it now, I realize he was exactly right. It's OK to install debugging tools on most of your computers, but make sure you always have a few on which debugging tools are strictly forbidden. Let me repeat that: Debugging tools are OK on most of your computers, but just make sure you have a few computers on which they're forbidden. Most people hear this rule and only the second half sinks in: "What? You're saying I shouldn't install debugging tools and I should do everything the hard way?!?" No, of course not. Debugging tools are lifesavers; you should use them for most computers whenever you can.
But in your test lab, you should also have at least a couple of computers that are exactly like a typical customer's environment, and (unless you're writing tools for software developers) most of your customers don't have debuggers installed on their computers. So you want to have your test environment match that condition.
Let's examine the advantages of this policy:
Verifying your code works on a non-developer environment
Ensuring your product can be debugged at a customer site
Giving your team practice in other forms of debugging
Installing a debugger often updates various system files with the latest libraries, or even installs new system libraries that weren't on the machine at all. For instance, watch the installation screens of Microsoft Visual Studio .NET carefully and you'll see it updates far more than just the compiler, editor, and debugger. It also gives you the latest version of Microsoft Internet Explorer, the .NET runtime, the latest Windows Service Pack, and a hundred other components.
Now what happens when you write code that depends on some system library that comes with Visual Studio .NET? Remember, those files aren't guaranteed to be installed on every computer. Maybe you don't even realize your code depends on them, because it always Just Plain Works on every computer you try it on (because every computer you try it on has development tools installed). If you allow all your test machines to have debuggers installed, you run the risk of shipping your product without ever realizing it won't work unless the customer installs a new version of system file X.
One day, I came up with an idea for a great in-house utility for my team. I used Visual Studio .NET to write the utility in a mix of C++ and C#, tested it thoroughly, and then proudly sent my team an e-mail announcing this great new tool I'd written. .NET had only recently come out, though, so most of my coworkers didn't yet have VS .NET installed. I told them it would be easy—just install the lightweight .NET runtime!
But I had tested only on machines with VS .NET installed. I hadn't tried it on a computer with only the .NET runtime. Since I had compiled my C++ code with the VS .NET compiler, my code was trying to dynamically link to the latest version of the Microsoft C runtime library, which was only installed on my machine because VS .NET had put it there. Oops. I built a new version that statically linked to the runtime library, and sent out an e-mail to my team saying to try again with the new version.
That bug was fixed, but my coworkers reported the utility was now throwing an exception when reading from a database. That's when I discovered the .NET SqlConnection class requires a database library called MDAC 2.6. MDAC is installed by default on Windows XP or later, and it also gets installed with VS .NET. That's why I didn't notice the dependency. But none of my coworkers had it on their Windows 2000 machines, so I had to send out yet another e-mail telling everyone to download MDAC before running my utility. My team teased me about my double mistake for months.
Moral #1: If your .NET application uses a database and must run on Windows 2000, make sure your installation program checks if MDAC is present. The .NET runtime by itself is not enough.
Moral #2: Test your program on a machine that doesn't have development tools installed.
No matter how thoroughly you test your code, some customer will report a serious showstopper bug after you ship. Maybe the bug only occurs on that customer's highly nonstandard configuration, or maybe it was simply something you forgot to test for; either way, you're going to need to be able to debug your product after it's in the field. Now, if your development staff relies on a debugger for fixing bugs, how will you find the cause of a bug on a customer's computer that's behind a firewall thousands of miles away?
There are lots of things you can do: Your code could output a trace log of important events so you know where the program choked (see Chapter 5). You could provide great debugging error messages and hope the user will accurately report those messages to you. You could write a snazzy system to e-mail you the state of the computer's memory and stack trace when a crash occurs. You could write a utility to scan the computer for various known problems. Whatever.
But how do you know those efforts are good enough? If you're using a log, how do you know whether your log is detailed enough? If you're using error messages, how do you know whether they correctly identify the problem? That's why you refuse to install debugging tools on some of your test computers. Anytime you discover a bug on those machines, you won't have the advantages of a debugger. You'll be forced to debug them exactly the same way you'll debug a customer's problem. That way, if you're not logging enough or if your snazzy reporting system isn't snazzy enough, then you'll discover this during product development—which is way better than discovering this flaw after the product ships.
I hope this book is successful in convincing you that debugging involves more than just brute-force stepping over code in the debugger. I hope you come away having learned other methods and when they are appropriate. If so, why not give yourself the chance to practice? The debugger is a powerful tool, and you'd be foolish not to use it—but it might be nice to have at least one computer in your test lab that forces you to occasionally explore other avenues. It'll keep you sharp.
Your testers should test the program on computers configured as closely as possible to those of a "typical" user of your software. Do your typical users install development tools on their machines? No? Then at least some of your testing should be done the same way: on machines that don't have development tools installed.