
Searching and Reporting

A database does more than simply list data, of course. It also accepts queries so that the data can quickly be searched, filtered, sliced, diced, and reported on. You can instantly narrow down gigabytes of data into small, manageable chunks. Without this basic ability, the sheer volume of information would make large databases nearly useless.

Since every defect tracking system is built atop a database, your tracking system will almost certainly allow queries for searching and reporting. Many developers underestimate this feature; after all, if you already know which PRs are assigned to you, why search for any others? But it's amazing how many ways a simple keyword search on the PR system can aid a development organization. Project leaders will find these features indispensable for estimating project progress, but even rank-and-file developers will occasionally find the key to solving a bug in a search.

Duplicate PRs

Often, the same bug is reported more than once. Two testers may independently discover a bug and each write a PR about it. If the two PRs are assigned to two different developers, then both developers will have to go through the work of trying to reproduce the bug. And if the first engineer fixes the problem before the second starts debugging, then the second engineer will be chasing a phantom—a surefire recipe for wasted time. Duplicate PRs can be created in less obvious ways, too.

  • A known bug is deferred to a later release. A customer then reports the bug; the tech support rep, not realizing the bug is already known, creates a duplicate PR.

  • Two features appear unrelated, but the developer knows they share the same code. A tester reports bugs in both features, even though the magic of code reuse means fixing one will fix the other, too.

  • One bug is actually a side effect of another. For instance, because the application isn't persisting the user's settings from one run of the program to the next (bug #1), the program expects a certain setting to be defined and crashes when the setting can't be found (bug #2).

  • The GUI may have a usability flaw, prompting people to expect it will do X even though it won't. A usability PR is created to address this, but someone else creates a PR reporting that feature X isn't working.

If the description of a bug clearly indicates the issue is caused by code you maintain, then there's usually no reason to believe there might be a duplicate PR for this issue. Since you maintain the code, if there were a duplicate PR, you probably would have already seen it. But if an issue appears to involve interaction between your code and someone else's, it may help to briefly search that person's PRs to see if she is already working on something similar. If so, then that frees you up for another task. Most of the time, you won't find a match, but it only takes a few seconds, and think of the payoff if it works. Besides, searching often helps in other ways even if it doesn't turn up any duplicates. We'll discuss that in a moment.

In addition to searching for duplicate PRs yourself, your team should encourage everyone to search for duplicates before filing a PR. Preventing duplicate PRs saves everyone time. Inevitably, some duplicates will still occur, because no search is perfect. Two users may describe the exact same bug using very different terms, and in that case, it's unlikely that either user will find the other's PR in a simple search. But searches in PRs do work out often enough to be worth a try. Make sure your defect tracking system supports keyword searches, and encourage your team to spend a few seconds using that search feature before filing new PRs.
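A keyword search of this kind doesn't need to be sophisticated; a case-insensitive substring match over PR summaries and descriptions catches most obvious duplicates. Here is a minimal sketch, assuming a hypothetical SQLite table named `prs` with `summary` and `description` columns (real defect trackers have their own schemas and query interfaces, so this is illustrative only):

```python
import sqlite3

# Hypothetical schema; real defect trackers vary widely.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE prs (id INTEGER PRIMARY KEY, summary TEXT, description TEXT)"
)
conn.executemany(
    "INSERT INTO prs (summary, description) VALUES (?, ?)",
    [
        ("Crash on startup", "App crashes when the settings file is missing"),
        ("Settings not saved", "User settings are not persisted between runs"),
        ("Toolbar icons blurry", "Icons look fuzzy at high DPI"),
    ],
)

def find_possible_duplicates(keyword):
    """Case-insensitive keyword match over summary and description."""
    pattern = f"%{keyword}%"
    return conn.execute(
        "SELECT id, summary FROM prs "
        "WHERE summary LIKE ? OR description LIKE ?",
        (pattern, pattern),
    ).fetchall()

# A few seconds spent here can save two developers chasing the same bug.
print(find_possible_duplicates("settings"))
```

Note that even a search this crude surfaces the first two PRs as possible duplicates of each other, which is exactly the kind of hint a tester filing a new settings bug would want before writing it up.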

Searching Past Bugs for Clues About the Present

One day, a customer reported a bug against a project I had recently inherited. I checked the debugging logs, and saw one error message coming from a certain function, so I pulled up VS .NET to quickly browse that code. Since I was new to this codebase, I was hoping I might see something obviously wrong with the function. Sadly, no such luck—the function seemed OK at first glance. I resigned myself to the fact that I would have to actually work to solve this bug. But something still nagged me.

The error message in the log seemed familiar—the text was "The server is unwilling to process the request", and I knew I'd seen that message before, but I couldn't remember any details. So I ran a search on the PR database from my previous project, looking for the text of that error. Lucky for me, I found the PR from a year earlier describing a very similar problem, and that PR included my comments on how I fixed the issue. It was the password policy bug we discussed in Chapter 7. As soon as I saw my comments on that PR, all the details came flooding back to memory and the fix was a snap.

Finding that password policy bug the first time was annoying enough. Tracking it down a second time would have been just as bad. You can't expect to remember all the details from every bug you've fixed over the past few years. Nor can you remember every bug that your coworkers fixed. But a defect tracking system can remember everything, and if the system is searchable, then it can serve as an institutional memory. Anytime you see a bug with a searchable term, such as an unusual error message or the name of an infrequently used feature, search to see if the bug has been encountered before. You might not find anything, but if you do, then you may get a powerful hint about the nature of the bug.
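Treating old PR databases as institutional memory works even across projects, as in the story above. As a sketch of the idea, assuming a hypothetical archive of past PRs exported as plain records (the record fields and the sample PRs here are invented for illustration; a real system would expose this through its own search feature):

```python
# Hypothetical archive of closed PRs from earlier projects.
pr_archive = [
    {"id": 1042, "project": "old-project",
     "text": 'Login fails: "The server is unwilling to process the request"',
     "resolution": "Caused by the domain password policy; see fix notes."},
    {"id": 1187, "project": "old-project",
     "text": "Report window paints slowly on large datasets",
     "resolution": "Cached the layout calculations."},
]

def search_institutional_memory(error_text):
    """Return past PRs whose text mentions the given error message."""
    needle = error_text.lower()
    return [pr for pr in pr_archive if needle in pr["text"].lower()]

# An unusual error message is an ideal search term: specific and rare.
for pr in search_institutional_memory("unwilling to process"):
    print(pr["id"], "->", pr["resolution"])
```

The design point is that distinctive strings, such as unusual error messages or the names of rarely used features, make far better search keys than generic terms like "crash", because they narrow years of history down to a handful of candidate PRs.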

