
High-Performance Code: Best Practices

Unfortunately, many people assume that their code is running as fast as it can simply because it runs fine in the environment in which it is tested. This fosters an attitude that ignores performance during coding: because the application shows no noticeable slowdown on the developer's machine, it is assumed to be fast regardless of how it is written.

The next section will show you some ways to write code that preempts a large number of the performance problems that you might experience, but without having to run a lot of complex performance analysis tests.

Using Exceptions

If you've been programming with .NET for a while, you're familiar with the sound of your hard drive grinding and screeching when an unhandled exception is thrown. In fact, the exception sometimes consumes so many resources that you know it's about to happen even before the error dialog appears. The bottom line is that throwing exceptions is extremely expensive for any .NET application, whether it is an ASP.NET web application, a Windows application, or a Windows service.

Keep in mind that the cost involved with exceptions occurs only when exceptions are thrown, not every time you enclose code in a try/catch block. Exceptions should be used to handle only unexpected conditions. Never use exceptions for routine tasks such as user input validation, function parameter checking, and so forth. Exceptions should be reserved for only those circumstances where something went so wrong that the current context could no longer complete its task properly.
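As a minimal sketch of this guideline, the following code contrasts validating user input by catching an exception with validating it through a method that reports failure in its return value (the input string here is just a placeholder):

```csharp
using System;

class ValidationExample
{
    static void Main()
    {
        string input = "not a number"; // placeholder for user input

        // Expensive: using an exception to validate input
        int value;
        try
        {
            value = int.Parse(input);
        }
        catch (FormatException)
        {
            value = 0; // throwing and catching costs far more than a test
        }

        // Cheap: TryParse reports failure through its return value
        if (!int.TryParse(input, out value))
        {
            value = 0;
        }

        Console.WriteLine(value);
    }
}
```

Both paths produce the same result, but the TryParse version never pays the cost of a thrown exception on bad input.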

To get a good idea of how many exceptions your application generates, you can look at your application in the Performance Monitor (also called perfmon). One of the counters that you can examine is the number of exceptions generated. The Framework itself could be throwing exceptions without your knowledge, so it is always a good idea to examine your application's exception performance.

Chunky API Calls

There are two different schools of thought with regard to designing an API. One style is called chatty, which refers to very small and frequently invoked methods. The other style is chunky, which refers to less frequent, larger method calls. When a method call is referred to as large, the word large typically indicates the amount of work performed and the time it takes to complete the task.

The reason for the debate about chatty versus chunky APIs is that some method calls incur performance overhead simply by invoking them. Such calls include COM Interop, Platform Invoke (P/Invoke), web service, Remoting, and any other call that crosses a process boundary or requires additional effort to marshal information to the called method.


If you have to choose between whether to perform a task via COM Interop or via Platform Invoke, consider this: The overhead for creating a P/Invoke call can be as few as 31 instructions, whereas the overhead for making a COM method call can be more than 65 instructions.

There is a vanishing point at which the overhead cost becomes greater than the cost of performing the method call itself. This is when you know that your API has become far too chatty, and you need to combine tasks to create method calls where the overhead cost is minimal compared to the tasks performed by those methods.
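The contrast can be sketched with a hypothetical cross-boundary service interface (the names `IOrderService`, `AddItem`, and `AddItems` are illustrative only, not a real API):

```csharp
// Hypothetical service that lives across a process boundary
// (for example, a web service or Remoting object).
interface IOrderService
{
    // Chatty: one cross-boundary call per item
    void AddItem(int orderId, string sku);

    // Chunky: one call carries the whole batch
    void AddItems(int orderId, string[] skus);
}

class Client
{
    static void Submit(IOrderService service, int orderId, string[] skus)
    {
        // The chatty version pays the marshaling overhead once per SKU:
        // foreach (string sku in skus)
        //     service.AddItem(orderId, sku);

        // The chunky version pays it once for the entire batch:
        service.AddItems(orderId, skus);
    }
}
```

With a hundred SKUs, the chatty version incurs the invocation overhead a hundred times; the chunky version incurs it once.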

Value Versus Reference Types

As you saw earlier in the chapter, value types that are treated as object types incur a boxing performance penalty. Value types are typically allocated on the stack rather than on the managed heap. These facts mean that, by default, value types perform slightly better than reference types so long as the value type is small enough.
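As a quick reminder of what that boxing penalty looks like in code:

```csharp
int i = 42;

// Boxing: the int is copied from the stack into a new
// object allocated on the managed heap.
object boxed = i;

// Unboxing: the cast is checked and the value is copied back out.
int j = (int)boxed;
```

Every boxing operation means a heap allocation that the garbage collector must eventually clean up, which is why treating value types as objects in a tight loop can hurt performance.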

One way in which you can easily gain some advantage in regard to memory and speed is by examining your code for classes that are nothing more than property containers. You could be using a class to contain some information, but the class doesn't have very many methods (if any at all). If the memory size of the class is also pretty small, you might want to consider turning that class into a struct.

Consider the following class:

class MyClass
{
    public MyClass() { MyData = 21; }

    public int MyData;
    public int OtherData;
    public string SomeMoreData;
}

If you are passing instances of this class as parameters to method calls (reference type), you can definitely get some performance improvement by converting the class into a struct, as shown here:

struct MyClass
{
    // Note: a struct cannot declare a parameterless constructor in C#;
    // its fields are automatically initialized to zero/null.
    public int MyData;
    public int OtherData;
    public string SomeMoreData;
}

Tip: Using AddRange on Collections

When you need to add items to a collection, you should consider using AddRange instead of Add. The reason for doing so is that when you're adding multiple items, using Add within a loop is considerably slower than using AddRange. The AddRange method enables you to add a collection of items to an existing collection. If you find yourself in a situation in which you are looping through a collection, adding values to another collection, it is an ideal time to switch methods and use AddRange.
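The difference can be seen in a short sketch using ArrayList (the collection type used elsewhere in this chapter):

```csharp
using System.Collections;

class AddRangeExample
{
    static void Main()
    {
        ArrayList source = new ArrayList();
        source.Add("One");
        source.Add("Two");
        source.Add("Three");

        // Slower: one Add call per item inside a loop
        ArrayList slow = new ArrayList();
        foreach (object item in source)
            slow.Add(item);

        // Faster: AddRange copies the entire collection in one operation
        ArrayList fast = new ArrayList();
        fast.AddRange(source);
    }
}
```

AddRange accepts any ICollection, so it works with arrays and other collections as well as with another ArrayList.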

Jagged Versus Rectangular Arrays

A jagged array is slightly different from a multidimensional array. You can think of a standard multidimensional array as a rectangular array. Jagged arrays are arrays of arrays. When you provide an index into a jagged array, you are referencing an array. But with a multidimensional array, you are referencing one dimension. To make it obvious, the following code shows how you would declare a jagged array as compared to a rectangular array:

static void Main(string[] args)
{
  // declare a jagged array
  string[][] jaggedArray = {
    new string[] { "One", "Two", "Three" },
    new string[] { "One", "Two" },
    new string[] { "One", "Two", "Three", "Four" } };

  // declare a two-dimensional array
  string[,] twoDArray = {  { "One", "Two" },
    { "One", "Two" },
    { "One", "Two" } };
}

The reason the comparison between rectangular and jagged arrays is mentioned is that the Common Language Runtime can optimize access to jagged arrays much better than it can optimize access to rectangular arrays. If you can figure out a way to accomplish your task with a jagged array, the code's performance will be better if you implement the jagged array to start with, and you won't have to worry about optimizing arrays when your application is complete.
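Continuing with the two declarations shown above, a short sketch makes the access difference concrete: indexing a jagged array returns an inner array, while a rectangular array is indexed with a single bracket pair.

```csharp
// Jagged: each row is its own array and can have its own length.
for (int i = 0; i < jaggedArray.Length; i++)
    for (int j = 0; j < jaggedArray[i].Length; j++)
        Console.WriteLine(jaggedArray[i][j]);

// Rectangular: one object with fixed dimensions,
// indexed as [row, column].
for (int i = 0; i < twoDArray.GetLength(0); i++)
    for (int j = 0; j < twoDArray.GetLength(1); j++)
        Console.WriteLine(twoDArray[i, j]);
```

Element access on a jagged array compiles down to ordinary single-dimension array loads, which is what the runtime optimizes best.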

For Versus Foreach

It was discussed earlier that using a foreach loop typically results in slower code than using a for loop. This is because the code created to deal with the foreach loop uses generalized objects, whereas you have more specific control over the for loop.

When you use a foreach loop, the .NET Framework automatically builds a try/finally block and obtains an IEnumerator from the collection's IEnumerable interface. The following code iterates through an ArrayList of the days of the week in C# using both a foreach loop and a for loop:

foreach (string x in al)
    Console.WriteLine(x);

for (int i = 0; i < al.Count; i++)
    Console.WriteLine(al[i].ToString());

Here is the IL code generated by a foreach loop (word wrap in the code is for clarity only):

    IL_0061:  br.s       IL_0075
    IL_0063:  ldloc.3
    IL_0064:  callvirt
      instance object [mscorlib]System.Collections.IEnumerator::get_Current()
    IL_0069:  castclass  [mscorlib]System.String
    IL_006e:  stloc.1
    IL_006f:  ldloc.1
    IL_0070:  call       void [mscorlib]System.Console::WriteLine(string)
    IL_0075:  ldloc.3
    IL_0076:  callvirt
      instance bool [mscorlib]System.Collections.IEnumerator::MoveNext()
    IL_007b:  brtrue.s   IL_0063
    IL_007d:  leave.s    IL_0093
  }  // end .try
    IL_007f:  ldloc.3
    IL_0080:  isinst     [mscorlib]System.IDisposable
    IL_0085:  stloc.s    CS$00000002$00000001
    IL_0087:  ldloc.s    CS$00000002$00000001
    IL_0089:  brfalse.s  IL_0092
    IL_008b:  ldloc.s    CS$00000002$00000001
    IL_008d:  callvirt   instance void [mscorlib]System.IDisposable::Dispose()
    IL_0092:  endfinally
  }  // end handler

Here is the code generated by a simple for loop:

IL_0093:  ldc.i4.0
  IL_0094:  stloc.2
  IL_0095:  br.s       IL_00ac
  IL_0097:  ldloc.0
  IL_0098:  ldloc.2
  IL_0099:  callvirt
    instance object [mscorlib]System.Collections.ArrayList::get_Item(int32)
  IL_009e:  callvirt   instance string [mscorlib]System.Object::ToString()
  IL_00a3:  call       void [mscorlib]System.Console::WriteLine(string)
  IL_00a8:  ldloc.2
  IL_00a9:  ldc.i4.1
  IL_00aa:  add
  IL_00ab:  stloc.2
  IL_00ac:  ldloc.2
  IL_00ad:  ldloc.0
  IL_00ae:  callvirt
    instance int32 [mscorlib]System.Collections.ArrayList::get_Count()
  IL_00b3:  blt.s      IL_0097

You don't have to be able to understand much MSIL to understand that the code for a foreach iteration is going to be slower and more time-consuming than the code for a regular for loop. If you are running through a potentially large collection, you should consider using a standard for loop instead of a foreach loop. In the future, the code for foreach might be as optimized as the code for a regular for loop. However, for now, you should definitely consider the speed of a for loop as an advantage over the readability of a foreach loop.

Utilizing Asynchronous I/O

Another best practice that you can adopt is that of asynchronous I/O. In most cases, you can get away with synchronous I/O whether you are reading from a file on disk or a file indicated by a URL. However, there are times when the information being read from disk (or any other location) is so time-consuming, or the processing of the information being read is so time-consuming, that you can't perform the operation without blocking the user.

The key to asynchronous I/O is the use of BeginRead, EndRead, BeginWrite, and EndWrite. You will see plenty of I/O code throughout this book, and a great deal of asynchronous code.
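As a minimal sketch of the BeginRead/EndRead pattern, the following code starts a file read and handles the result in a callback while the main thread stays free (the file name "data.bin" is a placeholder for this example):

```csharp
using System;
using System.IO;

class AsyncReadExample
{
    static byte[] buffer = new byte[4096];
    static FileStream stream;

    static void Main()
    {
        // Final 'true' argument opens the stream for asynchronous I/O.
        stream = new FileStream("data.bin", FileMode.Open, FileAccess.Read,
            FileShare.Read, 4096, true);

        // Kick off the read; OnReadComplete fires when data is available.
        stream.BeginRead(buffer, 0, buffer.Length,
            new AsyncCallback(OnReadComplete), null);

        // The main thread is free to do other work (or service the UI) here.
        Console.ReadLine();
    }

    static void OnReadComplete(IAsyncResult result)
    {
        // EndRead blocks only if the operation has not yet finished.
        int bytesRead = stream.EndRead(result);
        Console.WriteLine("Read {0} bytes in the background.", bytesRead);
        stream.Close();
    }
}
```

The same pattern applies to BeginWrite/EndWrite, and to network streams as well as files.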

The purpose of this section is to get you to design your application around the concept of asynchronous I/O. Instead of thinking of reading to or writing from a file as something that the user has to wait for, think of it as something that can happen behind the scenes. If you are working on a WinForms application, you can think of using progress bars and status bars to indicate the status of asynchronous operations. You could use some kind of iconographic treatment, such as green, red, and yellow lights in the status bar, to indicate file I/O operations.

The bottom line is that if a user has to sit, unproductive, and wait for an application to do anything, whether it is I/O or anything else, that user will be displeased. Anytime you find yourself waiting for your application to finish a task, consider making that task asynchronous so that it can complete in the background while the user can interact with the application.
