Most high-quality software companies have a priority system for bugs: each bug is rated on a scale of 1 to 5. Not only does this let developers focus first on the most important issues, but it also lets the company set a policy that all bugs above a certain priority level must be fixed before shipping, while lower-priority bugs are considered nice to fix but not crucial. What counts as a "must-fix" bug will depend on your product; aerospace software, for instance, must be held to higher standards than game software. But even then, some bugs are still more important than others.
At an aerospace company, a level 1 bug might be defined as any error that could potentially cause a plane to crash. Shipping software with a known bug like that is inconceivable. A level 5 bug, on the other hand, might be this: when there is extreme turbulence above a certain altitude, on days when the temperature is below a certain level and the sun is at a certain angle, the in-flight movie may occasionally skip frames if more than 50 passengers are listening to the soundtrack. A bug that rare and that minor can reasonably be deferred when you're behind schedule. A game software company would, of course, have completely different standards of acceptability.
Developers often feel uncomfortable deferring known bugs. But since you'll never have time to fix all the bugs, come up with a priority system for deciding what's important and what's minor enough that the user won't seriously mind. Assign a priority to each bug based on that scale, and set a policy that you will never release software with any known bugs above a certain priority, but may consider releasing with bugs of lower priority. That lets you balance the engineer's desire to produce bug-free code against the company's need to start selling the software so it can make a profit.
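As a concrete sketch, the "never release above a certain priority" policy can be a single query against the tracking database. The SQLite schema, sample bugs, and threshold below are all invented for illustration; a real defect tracking system has its own schema and reporting interface.

```python
import sqlite3

# Hypothetical defect-tracking schema: a "bugs" table with an integer
# priority column (1 = most severe, 5 = most minor).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bugs (id INTEGER PRIMARY KEY, summary TEXT, "
    "priority INTEGER, status TEXT)"
)
conn.executemany(
    "INSERT INTO bugs (summary, priority, status) VALUES (?, ?, ?)",
    [
        ("Crash on save", 1, "open"),
        ("Movie skips frames in turbulence", 5, "open"),
        ("Typo in About dialog", 4, "fixed"),
    ],
)

SHIP_THRESHOLD = 2  # policy: never ship with open bugs at priority 1 or 2

def ready_to_ship(conn, threshold=SHIP_THRESHOLD):
    """Return True only if no open bug is at or above the must-fix priority."""
    (count,) = conn.execute(
        "SELECT COUNT(*) FROM bugs WHERE status = 'open' AND priority <= ?",
        (threshold,),
    ).fetchone()
    return count == 0

print(ready_to_ship(conn))  # False: the open priority-1 crash blocks the release
```

Running a check like this before cutting a release candidate gives everyone the same yes/no answer, instead of a fresh argument over each individual bug.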
Other statistics are useful, as well. For instance, how many bugs has a particular customer reported? At my company, we track whenever a customer reports more than a certain number of bugs in a certain time period, and our project manager then sends personal notes to those customers to make sure they understand we appreciate their patience and apologize for the trouble they've had. You'd be amazed how often techniques like that can prevent a customer from becoming angry. And this can be handled automatically with a simple query to the defect tracking system!
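That "simple query" might look something like the sketch below. The reports table, customer names, and three-bugs-in-thirty-days threshold are all hypothetical, chosen just to make the example run.

```python
import sqlite3
from datetime import date, timedelta

# Hypothetical schema: a "reports" table linking customers to the bugs
# they filed, with a report date stored as an ISO string.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reports (customer TEXT, bug_id INTEGER, reported TEXT)")
today = date(2024, 6, 1)  # fixed date so the example is reproducible
conn.executemany(
    "INSERT INTO reports VALUES (?, ?, ?)",
    [
        ("acme.com", 101, str(today - timedelta(days=3))),
        ("acme.com", 102, str(today - timedelta(days=10))),
        ("acme.com", 103, str(today - timedelta(days=20))),
        ("acme.com", 104, str(today - timedelta(days=25))),
        ("globex.com", 201, str(today - timedelta(days=5))),
    ],
)

def frequent_reporters(conn, since, min_reports=3):
    """Customers who filed at least min_reports bugs since the given date."""
    return [
        customer
        for (customer, n) in conn.execute(
            "SELECT customer, COUNT(*) FROM reports "
            "WHERE reported >= ? GROUP BY customer HAVING COUNT(*) >= ?",
            (str(since), min_reports),
        )
    ]

print(frequent_reporters(conn, today - timedelta(days=30)))  # ['acme.com']
```

A nightly run of that query could feed the project manager a short list of customers who deserve a personal note.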
If you set up your defect tracking system to record which feature each PR involves, then you can see statistics about which features have the most bugs. That can be very helpful information when your team falls behind schedule and the program manager asks whether deferring a particular feature would help ship on time. Or you might want to know how many PRs each developer is currently assigned so you can balance the workload fairly. Getting this information from the tracking system is easy, too.
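Both questions reduce to a GROUP BY over the PR table. The schema, feature names, and developer names below are made up for illustration; the two queries are the point.

```python
import sqlite3

# Hypothetical "prs" (problem reports) table recording, for each PR,
# the feature it involves and the developer it is assigned to.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE prs (id INTEGER PRIMARY KEY, feature TEXT, "
    "assignee TEXT, status TEXT)"
)
conn.executemany(
    "INSERT INTO prs (feature, assignee, status) VALUES (?, ?, ?)",
    [
        ("printing", "dana", "open"),
        ("printing", "dana", "open"),
        ("printing", "lee", "open"),
        ("search", "lee", "open"),
        ("search", "dana", "fixed"),
    ],
)

# Which features have the most open bugs? A deferral candidate floats to the top.
bugs_per_feature = conn.execute(
    "SELECT feature, COUNT(*) AS n FROM prs "
    "WHERE status = 'open' GROUP BY feature ORDER BY n DESC"
).fetchall()
print(bugs_per_feature)  # [('printing', 3), ('search', 1)]

# How many open PRs does each developer carry? Useful for balancing workload.
load = dict(conn.execute(
    "SELECT assignee, COUNT(*) FROM prs "
    "WHERE status = 'open' GROUP BY assignee ORDER BY assignee"
))
print(load)  # {'dana': 2, 'lee': 2}
```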
Reports from a tracking system are also one way for project leaders to estimate (and document) which programmers are most productive. Who fixes the most PRs? Who has the fewest bounce-backs? Who has the fewest bugs logged against their code? Of course, these metrics tell only part of the story: maybe one programmer fixes the most PRs merely because she works only on the easy ones. So these numbers can't be used without placing them in context. But they're better than nothing when justifying why a certain programmer deserves a raise.
Another great use of the reporting feature is to create "top ten" lists. A simple query of the tracking system can show all the open bugs sorted by priority, all the recently fixed bugs that are candidates for inclusion in the next hotfix, or the most commonly reported complaints from users. Take the first ten items from that list and you have a helpful report. Most professional defect tracking systems not only let you generate your own reports but also include several predefined templates. These reports make great summaries for senior management. When the vice president of engineering asks for a presentation on the status of the project, include a couple of top ten lists and you'll look like you're fully in control of the project.
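A top-ten list is just an ordered query with a row limit. The sketch below assumes the same sort of hypothetical bugs table as earlier examples, populated with synthetic data.

```python
import sqlite3

# Hypothetical bugs table, seeded with 25 synthetic open bugs spread
# evenly across priorities 1 through 5.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE bugs (id INTEGER PRIMARY KEY, summary TEXT, "
    "priority INTEGER, status TEXT)"
)
conn.executemany(
    "INSERT INTO bugs (summary, priority, status) VALUES (?, ?, ?)",
    [(f"Bug {n}", (n % 5) + 1, "open") for n in range(25)],
)

# All open bugs, most severe first; the first ten rows are the report.
top_ten = conn.execute(
    "SELECT id, priority, summary FROM bugs "
    "WHERE status = 'open' ORDER BY priority, id LIMIT 10"
).fetchall()
for bug_id, priority, summary in top_ten:
    print(f"P{priority}  #{bug_id}  {summary}")
```

Swap the ORDER BY clause (fix date, report count, and so on) and the same ten-row pattern produces each of the lists mentioned above.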
The more advanced tracking systems will even produce long-term graphs (or at least export data to a spreadsheet) showing the trend of your bug information over time. At what rate is the total bug count going down? That can help you estimate a timetable for when all the known bugs will be fixed. Which feature has the most bugs? That points to a likely deferral candidate. Is the average time needed to fix each bug increasing? That might signal a problem you should look into.
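As a rough illustration of the first question, here is the kind of back-of-the-envelope projection you might do in a spreadsheet over weekly open-bug snapshots. The counts are invented, and the linear extrapolation is deliberately naive.

```python
# Invented weekly snapshots of the open-bug count, one per week.
weekly_open_bugs = [120, 104, 91, 77, 66, 52]

# Average weekly decline over the observed period.
drops = [a - b for a, b in zip(weekly_open_bugs, weekly_open_bugs[1:])]
rate = sum(drops) / len(drops)

# Naive linear estimate of weeks remaining until zero open bugs. Treat it
# as a rough timetable, not a promise: fix and find rates rarely stay linear.
weeks_left = weekly_open_bugs[-1] / rate
print(f"Fixing ~{rate:.1f} bugs/week; roughly {weeks_left:.1f} weeks to zero")
```

Even this crude trend line is enough to start a scheduling conversation; a real tracking system's graphs just make the same arithmetic visible at a glance.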
During the month before Microsoft shipped Outlook 2000, our project leader made a daily ritual of posting a chart showing the number of known priority 1 bugs in each component. It became a friendly competition among the different component teams to stay on top of the rankings by striving for the fewest open bugs. I saw one programmer who was about to go home at 7 P.M. realize that his team was only three bugs away from the top spot, so he stayed and worked several more hours to fix those three bugs, just so he could have bragging rights the next day. Of course, the team would have worked hard regardless, but a little competition now and then can provide extra oomph during the last days of a release cycle. The charts produced by the tracking system are perfect for this.