Asserts can also warn you when code performance falls below an acceptable threshold. Suppose you have some speed-critical code in an inner loop. You might add an assert that fires whenever that section takes longer than X milliseconds, notifying you that your code is running slower than expected. That gives you a head start on deciding where to optimize. The point isn't to replace a traditional performance profiler; it's to make sure the developer is instantly informed if unexpectedly bad performance ever occurs.
using System.Diagnostics;

class TimingTest
{
    public static void Main()
    {
#if DEBUG
        const int maxExpectedTime = 500; // 0.5 seconds
        System.DateTime startTime = System.DateTime.Now;
#endif

        // First speed-critical section of code here ...

#if DEBUG
        System.DateTime endTime = System.DateTime.Now;
        System.TimeSpan elapsedTime = endTime.Subtract(startTime);
        // Use TotalMilliseconds, not Milliseconds: Milliseconds is only the
        // fractional-second component of the span, so a 1.5-second interval
        // would report 500 and wrongly pass the check.
        Debug.Assert(elapsedTime.TotalMilliseconds <= maxExpectedTime,
            "First section took too long");
        startTime = System.DateTime.Now;
#endif

        // Second speed-critical section of code here ...

#if DEBUG
        endTime = System.DateTime.Now;
        // Recompute the elapsed time for the second section before asserting.
        elapsedTime = endTime.Subtract(startTime);
        Debug.Assert(elapsedTime.TotalMilliseconds <= maxExpectedTime,
            "Second section took too long");
#endif
    }
}
The preceding code snippet has two pieces of speed-critical code; a real-world example might have four or five such "hot spots." Suppose that when you run this program, the feature feels slow, and you suspect the bottleneck is in one of the known speed-critical sections; without running a performance profiler, though, you can't tell which one. These asserts will fire if any section takes longer than expected, pinpointing the section that's taking too long. And what if none of the asserts fire? Then the bottleneck isn't in any of the known hot spots, so get out that profiler to identify the real culprit.
You probably don't have a firm estimate for the number of milliseconds a piece of code should require; developers tend to be very bad at predicting such things. But you can make good, conservative estimates. Just ask how long a typical user would be willing to wait for a feature to run. Two seconds? Thirty seconds? Put an assert around the feature stating it should never take longer than your chosen value. Don't forget, though, that various factors make it difficult to reliably time intervals of less than about 0.2 seconds. Better to assert on longer intervals.
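If you're working with .NET 2.0 or later, the System.Diagnostics.Stopwatch class offers higher-resolution timing than DateTime.Now. Here's a minimal sketch of a feature-level timing assert; the GenerateReport method and the two-second budget are hypothetical stand-ins for your own feature and threshold:

using System.Diagnostics;

class FeatureTiming
{
    // 2 seconds: a conservative guess at what a user will tolerate
    const long MaxExpectedMilliseconds = 2000;

    public static void RunFeature()
    {
        Stopwatch timer = Stopwatch.StartNew();
        GenerateReport(); // the speed-critical feature (hypothetical)
        timer.Stop();
        Debug.Assert(timer.ElapsedMilliseconds <= MaxExpectedMilliseconds,
            "GenerateReport took longer than a user should have to wait");
    }

    static void GenerateReport() { /* feature code here */ }
}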
I once inherited the code of a management tool for Microsoft Exchange. The authors intended to support both Exchange 5.5 and Exchange 2000, but time pressures forced them to defer support of Exchange 2000. APIs for the two versions were similar, but there were a few substantial differences. So when I was assigned to take over this code base and add support for Exchange 2000, I feared I'd have to spend weeks searching to find every single bit of Exchange 5.5-specific code.
Imagine my pleasant surprise when I discovered the previous authors had made heavy use of asserts to identify those sections.
Debug.Assert(version == Exch55, "This won't handle Exchange 2K yet");
So I did a first pass through the code to add support for Exchange 2000. Naturally, I missed a few places that needed to be changed. But when I ran my first test against an Exchange 2000 server, asserts popped up at all the places I had missed. I didn't have to spend hours debugging to figure out the problems; all I had to do was run the code, and the asserts instantly led me there. Times like that make you want to send a dozen roses to the people who maintained the code before you.
Prepare for the future. Anytime you have code that works fine now but will require changes if somebody does such-and-such later, add asserts that fire when such-and-such happens. That way, the person who implements such-and-such is guaranteed to notice that he needs to change your code, too. He'll thank you for it.
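For instance, suppose your code handles every value of an enum that exists today, but somebody may add new values later. A default case containing an assert guarantees that whoever adds a value finds your switch statement on their very first test run. This is a minimal sketch; the FileFormat enum and the Save methods are hypothetical:

using System.Diagnostics;

enum FileFormat { Xml, Csv } // somebody may add new formats later

class Exporter
{
    public static void Save(string data, FileFormat format)
    {
        switch (format)
        {
            case FileFormat.Xml:
                SaveAsXml(data);
                break;
            case FileFormat.Csv:
                SaveAsCsv(data);
                break;
            default:
                // Fires the moment someone adds a format without updating
                // this switch, so they'll notice immediately.
                Debug.Assert(false, "Save doesn't handle this new FileFormat yet");
                break;
        }
    }

    static void SaveAsXml(string data) { /* ... */ }
    static void SaveAsCsv(string data) { /* ... */ }
}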