
.NET Security

It may seem odd to discuss the .NET security model in a chapter about debugging remote customer issues. At first glance, the two appear unrelated. But in some sense the new security model is a configuration issue: Customers frequently find software works on one machine but not on another, and it won't be obvious what the problem is. You will have to help them play the game of "What is different about the two machines?" to discover that security is the culprit. A full treatment of .NET security is well beyond the scope of this book, but fortunately, most developers don't need to be experts in this subject. Even a modest understanding of .NET security will be enough to prevent most of those security-related bugs from ever happening, or help you quickly pinpoint the bugs that do happen.

Before .NET, there was little granularity in what the security model allowed code to do. When you downloaded an ActiveX control from a web page, you saw a dialog box asking if you trusted the control. If you didn't trust it, then the code couldn't run at all; but if you did trust it, then the code could do anything your logged-on account could do—it might harmlessly display dancing hamsters on the screen, or it might delete your personal files. This "all or nothing" security model had a huge flaw because you never knew in advance whether the code would be friendly or malicious. Therefore, you never had enough information to know whether you could trust a piece of code or not.

Limiting Access

Of course, you could always limit access to certain resources. For instance, you might set the permissions on a file directory so that only administrators could access it. That directory would then be safely inaccessible to malicious code running under a normal user's account. But then the directory would be inaccessible to the user, as well. Every time the user had a legitimate reason to access those files, you would have to grant him the appropriate permissions over the directory so he could do his job. But as soon as you grant permissions to the user for legitimate business reasons, then you've also implicitly granted the same permissions to any virus the user might accidentally run.

.NET addresses this problem with Code Access Security (CAS). Instead of deciding security based on the user who is running the code, CAS decides security based on the code itself. For example, any code installed on the local hard drive might have one set of permissions, but any code that is downloaded from the web might have a much more limited set of permissions. Or you could configure CAS so that a completely different third set of permissions applied to any code that was digitally signed by a particular vendor (such as Microsoft).

An example makes this easier to understand. Assuming you haven't changed the default .NET security policy on your computer, type in this C# program and run it from your local computer. Watch it create a file called C:\test.txt on your hard drive.

using System.IO;

class SecurityTest {
   static void Main(string[] args) {
      // This line throws a SecurityException if the code lacks file permissions
      FileStream fs = new FileStream(@"C:\test.txt", FileMode.Create);
      fs.Close();
   }
}

Did running that program from your local computer work? Great! Now here's the part that surprises most new .NET developers. First copy that executable you just built to a network share on a different computer. Delete the C:\test.txt file that you just created, and run the program from the network share. You would expect that to work, wouldn't you? You're still running the program with the same user credentials as before, and your credentials clearly have the necessary permissions to create files on the hard drive. But running the program from a network share yields the error shown in Figure 7-1.

What's going on here? CAS comes configured with several default code groups that control what permissions are granted to various code. By default, any program installed on the local machine can access the file system, but programs run from a network share cannot. Instead, the network share version will throw a SecurityException. That's why this program worked when run locally but not remotely. Now, native Windows security is still respected. If a user doesn't have authority to access a particular file, then running code as that user from a different machine won't make a difference. If a certain Windows API requires administrator rights to invoke but you are not an administrator, then you won't be able to call that API simply by logging on at a different computer with a more lenient CAS policy.

Figure 7-1: A security exception from running code remotely.

How CAS Works

CAS merely controls whether code can access a particular system resource at all. Once that access has been granted, then it's up to the resource to decide how to handle security for the particular user. For example, CAS might be configured to give a piece of code full access to the Active Directory. But if the logged on user hasn't been granted permissions over the Active Directory through normal Windows methods, then the CAS permission is useless. Alternatively, the logged on user might have full permissions over the Active Directory, but if CAS has been configured to deny access to your piece of code, then your permissions won't matter.

Understand that subtle difference. You might get a SecurityException because your code lacks CAS permissions, or you might get it because the logged on user lacks regular Windows permissions. Those are two entirely unrelated issues. Developers have been dealing with Windows permissions for years, so they're generally familiar with the second type of problem. It's the CAS permissions that will lead to the most initial confusion for .NET developers.

CAS is completely configurable, of course: You can examine or modify the permissions assigned to each CAS code group with the .NET Framework Configuration tool (see Figure 7-2). You can also configure entirely different settings for all users in a domain, all users on a particular computer, or even for one individual user. Search for the file mscorcfg.msc on your computer (it's usually in a directory like "C:\winnt\Microsoft.NET\Framework\v1.0.3328" or some such name, depending on which version of the .NET runtime is installed). Double-click that program, and then select the Runtime Security Policy heading.

You can create your own code groups through this tool, or edit the permissions assigned to an existing one. Alternatively, you can edit the permissions through the command line tool caspol.exe. Sooner or later, you will need to learn these tools because you'll write a program that needs non-default permissions. You'll have to give your customer a configuration file to change the default permissions. The CAS code groups are very flexible, so you could, for instance, give only web pages within your site an extra permission without having to grant those permissions to every web page in the world. Once the user has manually installed your configuration file, then she will be able to use your component.
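
To give a feel for the command-line route, here is a hypothetical caspol.exe session (the site URL and group name are invented placeholders, and the exact numeric label you pass as the parent group depends on your existing policy tree, which the list command displays):

```
rem Show the current machine-level code group tree and its labels (1., 1.1, ...)
caspol -machine -listgroups

rem Add a child group under the root that grants FullTrust to code
rem downloaded from one specific site
caspol -machine -addgroup 1. -url "http://www.example.com/*" FullTrust -name "ExampleSite"

rem If an experiment goes wrong, restore the default security policy
caspol -machine -reset
```

Anything you can do through the graphical tool can also be scripted this way, which is handy when you need customers to apply a policy change without walking them through the GUI.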

Figure 7-2: The .NET Framework Configuration tool

If you were evil, you could have your product create a code group to explicitly deny all permissions to any product made by your competitor. That way, your competitor's product wouldn't even run on any computer your software was installed on! Hopefully no reputable software company would ever be that devious, but such shenanigans are theoretically possible.

Now do you see why .NET security was included in a chapter about debugging remote customer issues? Your program may run fine on your development machine with the default permissions. But did you test if your code works when run across a network share? And what if someone changed his default permissions? Some CAS policies will refuse to run any code that is not digitally signed— did you test your code against that? Will your program gracefully notify the user about the CAS problem? Or will your program merely crash without warning? Better test to be sure.

Once you know what to look for, recognizing and avoiding CAS-related bugs isn't difficult. But there are a few subtleties that you might not expect.

Handling SecurityExceptions

Naturally, we don't want to show that ugly "Exception thrown: Do you want to debug?" error message to users. We want to give users something more friendly. You might think the security problem is as simple as putting a try/catch block around every operation that might access system resources. That way you could deal with the lack of permissions gracefully. Indeed, that is an excellent start:

using System.Security;
using System.Windows.Forms;
using System.IO;

class MySecurityTest {
    static void Main(string[] args) {
        try {
            // code that may throw a SecurityException
            FileStream fs = new FileStream(@"C:\test.txt", FileMode.Create);
            fs.Close();
        }
        catch (SecurityException) {
            MessageBox.Show("Permissions error");
        }
    }
}

Any code with the necessary permissions will run fine, and any code without permissions will display a friendly MessageBox describing exactly what the problem is. And this will work fine with the default permissions on a network share. But would it surprise you to learn that this program could still crash due to a lack of permissions? Would it surprise you that the ability to display a MessageBox on the screen is part of the User Interface permission, which an administrator might have removed from the code group your program is in?

After all, you don't want to give a virus the ability to display a dialog box saying "Please enter your password" because users may unthinkingly do it. In the preceding code snippet, the function will safely catch the SecurityException and attempt to gracefully display an error message—but if the code doesn't have the User Interface permission, then the attempt to display the MessageBox will throw a second SecurityException, and that one will go uncaught. That means the user sees the ugly "Do you want to debug?" dialog box again.

So you could add a second level of try/catch blocks around the MessageBox.Show function to safely catch the exception. That would at least allow you to terminate the program cleanly. Or maybe you could just assume that few of your customers will modify the default permissions, and you're willing to have tech support work out the problem for those few customers who do change their permissions. That decision is probably fine for most applications—I've made that decision myself dozens of times. But if you do cut that corner, make sure you're making a conscious decision rather than letting the decision be made for you through lack of planning.
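
That second safety net might be sketched like this (a hedged example; it simply exits with an error code if even displaying the MessageBox is forbidden):

```
using System;
using System.IO;
using System.Security;
using System.Windows.Forms;

class SafeSecurityTest {
    static void Main(string[] args) {
        try {
            FileStream fs = new FileStream(@"C:\test.txt", FileMode.Create);
            fs.Close();
        }
        catch (SecurityException) {
            try {
                // Displaying this requires the User Interface permission...
                MessageBox.Show("Permissions error");
            }
            catch (SecurityException) {
                // ...so if even that is denied, at least terminate cleanly
                // instead of crashing with an unhandled exception.
                Environment.Exit(1);
            }
        }
    }
}
```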

On the other hand, you're never quite sure when a SecurityException might show up. Obviously, anytime you access system resources like the file system or registry, then you know there's the potential for an exception. But what if you invoke a function that someone else wrote? Who knows what resources that function might try to access?

Permissions Are Granted on a Per-Assembly Basis

The problem gets especially interesting when you invoke functions in a different assembly because of a fact that surprises almost every .NET developer. CAS permissions are granted on a per-assembly basis, not on a per-process basis, as you'd probably expect. Anytime you try to access a system resource, the .NET CLR will check the permissions on your assembly, but it will also examine every single assembly in your call stack to make sure all of them have the necessary permissions, too.


Remember, a .NET assembly is basically just a DLL or EXE. It also has built-in metadata to describe itself to the .NET CLR, but as far as the casual developer is concerned, a .NET assembly and a .NET DLL are essentially the same thing. Anytime you create a DLL or EXE with C# or VB .NET, you've created an assembly.

Yes, that stack walk is a significant performance hit, and yes, you can configure CAS to disable that runtime checking if you want (either programmatically with the CodeAccessPermission.Assert() function, or through the .NET Framework Configuration tool by opening the Security permission and checking the "Assert any permission that has been granted" checkbox). But no, you shouldn't. Bizarre as that behavior of CAS may sound, there is a good reason for it. Do not disable this runtime checking except in very special cases.

Imagine your computer contains an assembly for reading/writing/deleting files called FileManagement.dll that has permissions over the file system. Imagine an attacker knows that assembly is installed on your computer, so he writes a program that will pass any runtime security checks because it doesn't access any system resources... except that it calls the functions of FileManagement.dll. Since FileManagement.dll has the permissions to write to the hard drive, the attacker component could circumvent the security model by tricking the trusted assemblies to do the damage. So CAS avoids this problem by checking the permissions on all assemblies in the entire call stack unless you specifically turn that feature off. My assembly cannot force your assembly to do anything bad unless both of us have the required permissions.
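
If a trusted library really did need to vouch for its callers in one narrow case, the Assert mechanism exists for exactly that. A hedged sketch (the class, method, and log file path are invented for illustration; in the .NET 1.x Framework the relevant types live in System.Security.Permissions):

```
using System.IO;
using System.Security;
using System.Security.Permissions;

class FileManagement {
    // Hypothetical trusted helper: asserts its own write permission so the
    // stack walk stops here instead of checking every caller. Use this very
    // sparingly -- the full stack walk is precisely what blocks the luring
    // attack described above.
    public static void WriteLog(string text) {
        FileIOPermission perm = new FileIOPermission(
            FileIOPermissionAccess.Write, @"C:\app.log");
        perm.Assert();
        try {
            using (StreamWriter sw = File.AppendText(@"C:\app.log")) {
                sw.WriteLine(text);
            }
        }
        finally {
            // Always undo the assert so it can't leak into other code paths.
            CodeAccessPermission.RevertAssert();
        }
    }
}
```

Note that the assert is scoped as tightly as possible and reverted in a finally block; asserting broad permissions for the lifetime of a process would reopen the very hole the stack walk is meant to close.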

You might think identifying security model–related bugs ought to be easy in .NET because these bugs consistently behave the same way. As long as you remember to include try/catch blocks for SecurityExceptions, then detecting the cause of the bug is easy because your catch block presumably logs the exact error. But that argument assumes every developer will be careful about their error handling. Not only will some developers forget to catch the SecurityException, but many will be lazy and commit an even bigger crime:

Imports System.Windows.Forms
Imports System.IO

Module VbSecurityTest
    Sub Main()
        Try
            Dim sr As StreamReader
            sr = New StreamReader("C:\someFile.txt")
            sr.Close()
        Catch ex As Exception
            'If that fails, it must mean the file does not exist.
            MessageBox.Show("Error: File does not exist")
        End Try
    End Sub
End Module

Can you spot the bug in the preceding function? That code assumes that the only possible exception that might be thrown is a FileNotFoundException, so it blindly catches all possible exceptions and treats them the same. Unfortunately, many VB and C++ developers don't really understand exceptions. I once found the preceding bug in a coworker's code. Presumably, he had originally written the code without a try/catch block, but then he saw an unhandled exception when the file didn't exist, and fixed it by catching the exception and handling the error. The only problem was that he incorrectly assumed a non-existent file was the only possible reason the code in the try block could fail.

Sure enough, a customer soon reported that our program incorrectly claimed the file didn't exist when in fact it did. The problem, of course, was that the customer ran the program from a network share that didn't have CAS permissions to access the local hard drive—a SecurityException was thrown, but since all types of exceptions were being funneled through this one catch block, an incorrect error message was displayed. Don't do that. Make sure you're explicitly catching each type of exception you expect to handle: one for FileNotFoundException, another for SecurityException, etc.
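
A corrected version of that logic might look like this in C# (a sketch; the file name is a placeholder):

```
using System.IO;
using System.Security;
using System.Windows.Forms;

class ReadFileTest {
    static void Main(string[] args) {
        try {
            StreamReader sr = new StreamReader(@"C:\someFile.txt");
            sr.Close();
        }
        catch (FileNotFoundException) {
            // Only this catch actually means the file is missing.
            MessageBox.Show("Error: File does not exist");
        }
        catch (SecurityException) {
            // A CAS problem is a completely different failure and
            // deserves a different message.
            MessageBox.Show("Permissions error: the program lacks " +
                "the security permissions to read this file");
        }
    }
}
```

Now a customer running from a network share sees a message that points at the real problem, instead of a misleading claim that the file doesn't exist.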

Once you have a modest understanding of the .NET security model, then these bugs become easy to detect and avoid. But as one of the most unfamiliar parts of .NET, security will cause many developer headaches while programmers learn how it works.
