Like comments in code, notes on a PR can remind a developer why he did something six months ago. But developers often forget that other people need to read and search those notes, too. Defect tracking systems are an excellent place for collaboration among individuals and departments: the documentation, tech support, and quality assurance departments can all gain value from the tracking system.
Even though each PR has a single owner, there's nothing wrong with multiple people reading the PR and adding suggestions. This form of collaboration is a great strength for a team. On my team, several of the senior developers make a point of at least glancing at every new PR. If they suspect they know the cause of the problem, they'll add a brief note, which often gives the junior developer who owns the PR the benefit of the senior's experience. On the other hand, sometimes when reading another person's PR, a developer realizes this bug is a mere side effect of one of her own bugs, and will make sure it gets marked as such. The final benefit is cross-pollination: It's good to have members of your team who are familiar with the entire product rather than just one small area.
Get in the habit of posting notes about your theories and assumptions on each PR so that your teammates can briefly check your logic (and you can check theirs).
My team makes a habit of documenting our ideas on each non-trivial PR ("I know the problem's not such-and-such because of X. I'm now investigating Y.") That way, if someone starts to implement an unnecessarily complex solution to a bug, a teammate might be able to notice and suggest an easier way.
Of course, carefully studying everyone else's PRs leaves less time for you to solve your own. I'm not suggesting you read other people's PRs as carefully as you would study those assigned to you. But glance at them once in a while and read the ones that sound related to your area. It'll give you a better appreciation for the workings of the system as a whole.
On my first job out of college, I was assigned to write server-side code. Being new and inexperienced, I didn't know how the other areas of the product worked. So I started skimming all the new PRs and gradually learned which areas of the product were generating the most trouble. I realized the GUI team was having tremendous problems fixing a GUI limitation, and this had generated multiple PRs. But the amazing thing was that my server-side component already contained an undocumented method that easily worked around that GUI limitation. I had written the method solely for testing purposes, and that's why no one else knew about it.
It had never occurred to the GUI team that my server-side component might already have the function they needed, and it had never occurred to me that my testing method would be useful to them. Skimming over all the PRs is one way for a team lead (or an ambitious developer) to stay on top of the project and spot opportunities for cross-team communication.
It's unfortunate that many developers see their relationship with testers as adversarial. Testers and developers are on the same side, after all—both are trying to ship the best possible product. But some developers subconsciously see the testers' job as trying to stop the release of the product by constantly pointing out flaws. That attitude is just plain wrong. A side effect of this mentality is that testers sometimes don't get enough information to test the product as well as they could. The more information you can provide your testers, the better they can help you.
Often when a PR is created, the tester doesn't yet fully understand what the problem is. This is especially true of bugs that occur only sporadically. It's also true when the repro steps involve several conditions and the tester isn't quite able to narrow down the exact problem. Suppose you're testing a word processor and find that the program sometimes (not always) fails to save your files. You write a detailed bug report describing the repro steps as best you can, and you send the program logs to the developer, who writes back: "Aha! I figured it out. It'll be fixed in tonight's build."
Now how are you going to verify whether the fix works? You weren't 100 percent positive about the exact cause of the bug in the first place. So if you can't repro the problem after the fix, does that mean that the bug really is gone, or does it just mean you're not running the right repro steps? You don't know. On the other hand, what if the developer had instead written: "Found it: The bug was caused when the filename was longer than 50 characters *and* you saved to a non-default directory. Otherwise, things worked fine. Fixed in tonight's build."
That comment provides far more help and tells exactly what test case should be constructed to verify the fix. In addition, this comment also tells a good tester something very useful about the implementation—50 is (or at least was) a "magic number" in the product. A good tester would immediately type 51-character-long strings into every other place in the product that accepts input. Developers tend to make the same mistakes over and over again, and you'd be amazed how many additional bugs a tiny bit of implementation knowledge can find.
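That boundary-probing habit can be sketched in code. The function names and the save API below are hypothetical stand-ins—the point is only how a tester turns a developer's note ("longer than 50 characters *and* a non-default directory") into a systematic set of test cases around the suspected magic number:

```python
def make_name(length):
    """Build a filename of exactly `length` characters (illustrative)."""
    return "a" * length

def boundary_lengths(limit=50):
    """Lengths straddling a suspected magic number: just under, at,
    just over, and far past the limit."""
    return [limit - 1, limit, limit + 1, 2 * limit]

# The bug required BOTH conditions, so a thorough tester crosses the
# boundary lengths with both the default and a non-default directory.
cases = [(make_name(n), directory)
         for n in boundary_lengths()
         for directory in ("default", "other")]
```

Each resulting `(filename, directory)` pair would then be fed through the product's save path; the 51-character cases in particular verify the fix, while the 49- and 50-character cases confirm nothing else broke.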
A good tester will never entirely take the developer's word about the cause of a bug, of course. Just because the developer says the bug occurred only with filenames longer than 50 characters doesn't mean the tester shouldn't test the fix with shorter names, too. Testers should throw in a bit of judgment, common sense, and randomness when writing test cases.
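One simple way to add that randomness, sketched below under assumed details (the 50-character threshold comes from the developer's note; the bias ratio and ranges are arbitrary choices): weight most random picks toward the reported boundary, but occasionally pick a length far from it, in case the developer's diagnosis was incomplete.

```python
import random

def random_name_length(rng, limit=50):
    """Pick a filename length: usually near the suspected boundary,
    sometimes anywhere in a much wider range."""
    if rng.random() < 0.7:
        return rng.randint(limit - 2, limit + 2)  # probe the boundary
    return rng.randint(1, 4 * limit)              # probe everywhere else

rng = random.Random(42)  # seeded, so a failing run can be reproduced
lengths = [random_name_length(rng) for _ in range(20)]
```

Seeding the generator matters: a random test case that finds a bug is useless unless the tester can rerun the exact same sequence.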
Try to help your testers whenever you can by posting more information on the PR. If you can think of additional test cases you'd like QA to focus on ("I'm confident it handles situation X, but I'm not positive about situation Y"), then note that on the PR. Any magic numbers or heuristics you use should be described, too. Basically, write down anything that could help the tester focus on the correct problem areas.