Secure Coding Guidelines

Evidence-based security policy and code access security provide very powerful, explicit mechanisms to implement security. Most application code can simply use the infrastructure implemented by the .NET Framework. In some cases, additional application-specific security is required, built either by extending the security system or by using new ad hoc methods.

Using the .NET Framework-enforced permissions, and other enforcement in your code, you should erect barriers to prevent malicious code from obtaining information that you do not want it to have or performing other undesirable actions. Additionally, you must strike a balance between security and usability in all the expected scenarios using trusted code.

Secure Coding Overview

Security-neutral code does nothing explicit with the security system. It runs with whatever permissions it receives. Although applications that fail to catch security exceptions associated with protected operations (such as using files, networking, and so on) can terminate with an unhandled exception, security-neutral code still takes advantage of the .NET Framework security technologies.

A security-neutral library has special characteristics that you should understand. Suppose your library provides API elements that use files or call unmanaged code; if your code does not have the corresponding permission, it will not run as described. However, even if the code has the permission, any application code that calls it must have the same permission in order to work. If the calling code does not have the right permission, a SecurityException appears as a result of the code access security stack walk.
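
For example, here is a minimal sketch of a security-neutral library method (the class, method, and file path are hypothetical). File.ReadAllText demands FileIOPermission, and the demand walks the stack, so the library and every caller must hold that permission:

    using System;
    using System.IO;
    using System.Security;

    public static class LogReader
    {
        // Security-neutral: no explicit security code. File.ReadAllText
        // demands FileIOPermission, and the demand walks the stack, so
        // every caller must also hold that permission.
        public static string ReadLog(string path)
        {
            return File.ReadAllText(path);
        }
    }

    public static class Program
    {
        public static void Main()
        {
            try
            {
                Console.WriteLine(LogReader.ReadLog(@"C:\logs\app.log"));
            }
            catch (SecurityException ex)
            {
                // Raised by the stack walk when any caller lacks the permission.
                Console.Error.WriteLine("Missing FileIOPermission: " + ex.Message);
            }
        }
    }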

Application Code that is Not a Reusable Component

If your code is part of an application that will not be called by other code, security is simple and special coding might not be required. However, remember that malicious code can call your code. While code access security might stop malicious code from accessing resources, such code could still read values of your fields or properties that might contain sensitive information.

Additionally, if your code accepts user input from the Internet or other unreliable sources, you must be careful about malicious input.
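
A minimal sketch of both precautions (the class and its members are hypothetical): sensitive state is kept in a private field rather than a public member, and input from unreliable sources is validated before use:

    using System;

    public class OrderService
    {
        // Keep sensitive values in private fields; public fields and
        // properties can be read by any code that can reference the object.
        private readonly string apiKey;

        public OrderService(string apiKey)
        {
            if (string.IsNullOrEmpty(apiKey))
                throw new ArgumentException("An API key is required.", "apiKey");
            this.apiKey = apiKey;
        }

        // Validate input from unreliable sources before acting on it.
        public void SetQuantity(int quantity)
        {
            if (quantity < 1 || quantity > 1000)
                throw new ArgumentOutOfRangeException("quantity");
            // ... proceed with a known-good value ...
        }
    }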

Managed Wrapper to Native Code Implementation

Typically in this scenario, some useful functionality is implemented in native code that you want to make available to managed code. Managed wrappers are easy to write using either platform invoke or COM interop. However, if you do this, callers of your wrappers must have unmanaged code rights in order to succeed. Under default policy, this means that code downloaded from an intranet or the Internet will not work with the wrappers.

Rather than giving all applications that use these wrappers unmanaged code rights, it is better to give these rights only to the wrapper code. If the underlying functionality exposes no resources and the implementation is likewise “safe,” the wrapper only needs to assert its rights, which enables any code to call through it. When resources are involved, security coding should be the same as the library code case described in the next section. Because the wrapper is potentially exposing callers to these resources, careful verification of the safety of the native code is necessary and is the wrapper’s responsibility.
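
A minimal sketch of this pattern, assuming the wrapper assembly itself has been granted unmanaged code rights (the wrapper class and method names are hypothetical; GetTickCount is a Windows API that exposes no protected resource):

    using System;
    using System.Runtime.InteropServices;
    using System.Security;
    using System.Security.Permissions;

    public static class TickCountWrapper
    {
        [DllImport("kernel32.dll")]
        private static extern uint GetTickCount();

        public static uint GetMillisecondsSinceBoot()
        {
            // Assert stops the stack walk here, so callers without
            // unmanaged code rights can still use this safe functionality.
            new SecurityPermission(SecurityPermissionFlag.UnmanagedCode).Assert();
            try
            {
                return GetTickCount();
            }
            finally
            {
                // Always revert so the assertion cannot leak to later calls.
                CodeAccessPermission.RevertAssert();
            }
        }
    }

The assert succeeds only because the wrapper assembly itself holds the permission; verifying that the native call is actually safe to expose remains the wrapper's responsibility.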

Library Code that Exposes Protected Resources

This is the most powerful and hence potentially dangerous (if done incorrectly) approach for security coding: Your library serves as an interface for other code to access certain resources that are not otherwise available, just as the classes of the .NET Framework enforce permissions for the resources they use. Wherever you expose a resource, your code must first demand the permission appropriate to the resource (that is, it must perform a security check) and then typically assert its rights to perform the actual operation.
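
A minimal sketch of the demand-then-assert sequence (the settings store, its path, and the choice of EnvironmentPermission as the resource-appropriate permission are all assumptions for illustration; a custom permission is also common here):

    using System.IO;
    using System.Security;
    using System.Security.Permissions;

    public static class SettingsStore
    {
        // Hypothetical backing file for the resource this library exposes.
        private const string StorePath = @"C:\ProgramData\MyApp\settings.dat";

        public static string ReadAllSettings()
        {
            // Security check: demand a permission appropriate to the
            // exposed resource so that unauthorized callers are rejected.
            new EnvironmentPermission(
                EnvironmentPermissionAccess.Read, "MYAPP_SETTINGS").Demand();

            // Having vetted all callers, assert the right the implementation
            // itself needs so the underlying file access can proceed.
            new FileIOPermission(FileIOPermissionAccess.Read, StorePath).Assert();
            try
            {
                return File.ReadAllText(StorePath);
            }
            finally
            {
                CodeAccessPermission.RevertAssert();
            }
        }
    }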


Securing State Data

Applications that handle sensitive data or make any kind of security decisions need to keep that data under their own control and must not allow other, potentially malicious code to access it directly. The best way to protect data in memory is to declare it in private or internal variables (with scope limited to the same assembly). However, even this data is subject to access you should be aware of:

  • Using reflection mechanisms, highly trusted code that can reference your object can get and set private members.
  • Using serialization, highly trusted code can effectively get and set private members if it can access the corresponding data in the serialized form of the object.
  • Under debugging, this data can be read.

Make sure none of your own methods or properties exposes these values unintentionally.
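
To illustrate the first point in the list above, a minimal sketch (the class, field, and value are hypothetical) of highly trusted code using reflection to read a private field despite its accessibility modifier:

    using System;
    using System.Reflection;

    public class Account
    {
        // Private, but still reachable by code with reflection rights.
        private string connectionString = "Server=db01;Password=placeholder";
    }

    public static class ReflectionDemo
    {
        public static void Main()
        {
            var account = new Account();

            // Highly trusted code can obtain and read the private field.
            FieldInfo field = typeof(Account).GetField(
                "connectionString", BindingFlags.NonPublic | BindingFlags.Instance);
            Console.WriteLine(field.GetValue(account));
        }
    }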

In some cases, data can be declared as “protected,” with access limited to the class and its derivatives. However, this added exposure calls for the following additional precautions:

  • Control what code is allowed to derive from your class by restricting it to the same assembly or by using declarative security, described in Securing Method Access, to require some identity or permissions in order for code to derive from your class.
  • Ensure that all derived classes implement similar protection or are sealed.

Securing Method Access

Some methods are not suitable for arbitrary untrusted code to call. Such methods pose several risks: the method might provide some restricted information; it might trust whatever information is passed to it; it might not do error checking on the parameters; or, with the wrong parameters, it might malfunction or do something harmful. You should be aware of these cases and take action to help protect the method.

In some cases, you might need to restrict methods that are not intended for public use but still must be public. For example, you might have an interface that needs to be called across your own DLLs and hence must be public, but you do not want it used from outside, either to keep customers from depending on it or to keep malicious code from exploiting this entry point into your component. Another common reason to restrict a method not intended for public use (but that must be public) is to avoid having to document and support what might be a very internal interface.

Managed code offers several ways to restrict method access:

  • Limit the scope of accessibility to the class, assembly, or derived classes, if they can be trusted. This is the simplest way to limit method access. Note that, in general, derived classes can be less trustworthy than the class they derive from, though in some cases they share the parent class’s identity. In particular, do not infer trust from the keyword protected, which is not necessarily used in the security context.
  • Limit the method access to callers of a specified identity: essentially, any particular evidence (strong name, publisher, zone, and so on) you choose. (An identity-based restriction is sketched after this list.)
  • Limit the method access to callers having whatever permissions you select.
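
As a sketch of the identity-based option (the class and method are hypothetical; the public key value is a truncated placeholder, not a real key), a LinkDemand for a strong-name identity restricts a public method to assemblies signed with a particular key:

    using System.Security.Permissions;

    public class Parser
    {
        // Only assemblies signed with the matching public key can link to
        // this method. The key shown is a truncated placeholder.
        [StrongNameIdentityPermission(SecurityAction.LinkDemand,
            PublicKey = "00240000048000009400000006020000")]
        public void ParseInternalFormat(string data)
        {
            // Shared across our own DLLs, but not for outside consumption.
        }
    }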

Similarly, declarative security allows you to control inheritance of classes. You can use InheritanceDemand to do the following (a sketch follows the list):

  • Require derived classes to have a specified identity or permission.
  • Require derived classes that override specific methods to have a specified identity or permission.
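
A minimal sketch of both uses (class and method names are hypothetical; the public key is again a truncated placeholder):

    using System.Security.Permissions;

    // Only assemblies signed with the matching key may derive from this class.
    [StrongNameIdentityPermission(SecurityAction.InheritanceDemand,
        PublicKey = "00240000048000009400000006020000")]
    public class ProtectedBase
    {
        // Overriding this method additionally requires the specified permission.
        [SecurityPermission(SecurityAction.InheritanceDemand,
            Flags = SecurityPermissionFlag.UnmanagedCode)]
        public virtual void SensitiveOperation()
        {
        }
    }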

Securing Wrapper Code

Wrapper code, especially where the wrapper has higher trust than code that uses it, can open a unique set of security weaknesses. Anything done on behalf of a caller, where the caller’s limited permissions are not included in the appropriate security check, is a potential weakness to be exploited.

Never enable something through the wrapper that the caller could not do itself. This is a special danger when doing something that involves a limited security check, as opposed to a full stack walk demand. When single-level checks are involved, interposing the wrapper code between the real caller and the API element in question can easily cause the security check to succeed when it should not, thereby weakening security.
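
A minimal sketch of the problem and one mitigation (both classes are hypothetical): the underlying API is protected only by a LinkDemand, which examines just its immediate caller, so the wrapper repeats the check as a full Demand before calling through:

    using System.Security.Permissions;

    // Hypothetical API protected only by a LinkDemand: the single-level
    // check sees the trusted wrapper as the caller, not the real caller.
    public static class TrustedRegistryApi
    {
        [RegistryPermission(SecurityAction.LinkDemand,
            Read = @"HKEY_LOCAL_MACHINE\Software")]
        public static string Read(string keyPath)
        {
            return "...";
        }
    }

    public class RegistryWrapper
    {
        public string ReadKey(string keyPath)
        {
            // Repeat the check as a full Demand so the real caller's grant
            // set is included before the LinkDemand is satisfied by us.
            new RegistryPermission(RegistryPermissionAccess.Read, keyPath).Demand();
            return TrustedRegistryApi.Read(keyPath);
        }
    }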

Security and Public Read-only Array Fields

Never use read-only public array fields from managed libraries to define the boundary behavior or security of your applications, because the contents of such arrays can be modified.

Some .NET Framework classes include read-only public fields that contain platform-specific boundary parameters. For example, the InvalidPathChars field is an array that describes the characters that are not allowed in a file path string. Many similar fields are present throughout the .NET Framework.

The values of public read-only fields like InvalidPathChars can be modified by your code or code that shares your code’s application domain. You should not use read-only public array fields like this to define the boundary behavior of your applications. If you do, malicious code can alter the boundary definitions and use your code in unexpected ways.

In version 2.0 and later of the .NET Framework, you should use methods that return a new array instead of using public array fields. For example, instead of using the InvalidPathChars field, you should use the GetInvalidPathChars method.
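
A minimal sketch of the safer pattern (the helper class is hypothetical; Path.InvalidPathChars and Path.GetInvalidPathChars are the real .NET Framework members):

    using System.IO;

    public static class PathCheck
    {
        public static bool ContainsInvalidChars(string path)
        {
            // Avoid Path.InvalidPathChars: it is a shared array whose
            // elements any code in the application domain can overwrite.
            // Path.GetInvalidPathChars returns a fresh copy on every call.
            return path.IndexOfAny(Path.GetInvalidPathChars()) >= 0;
        }
    }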

Note that the .NET Framework types do not use the public fields to define boundary types internally. Instead, the .NET Framework uses separate private fields. Changing the values of these public fields does not alter the behavior of .NET Framework types.


Security and User Input

User data, which is any kind of input (data from a Web request or URL, input to controls of a Microsoft Windows Forms application, and so on), can adversely influence code because often that data is used directly as parameters to call other code. This situation is analogous to malicious code calling your code with strange parameters, and the same precautions should be taken. User input is actually harder to make safe because there is no stack frame to trace the presence of the potentially untrusted data.

These are among the subtlest and hardest security bugs to find because, although they can exist in code that is seemingly unrelated to security, they are a gateway to pass bad data through to other code. To look for these bugs, follow any kind of input data, imagine what the range of possible values might be, and consider whether the code seeing this data can handle all those cases. You can fix these bugs through range checking and rejecting any input the code cannot handle.
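
A minimal sketch of such range checking (the method and its bounds are hypothetical): convert the raw value explicitly, then reject anything the rest of the code is not prepared to handle:

    using System;

    public static class RequestValidation
    {
        public static int ParseRetryCount(string rawValue)
        {
            int retries;
            // The raw string can hold any value; convert it explicitly.
            if (!int.TryParse(rawValue, out retries))
                throw new FormatException("Retry count must be a whole number.");

            // Reject everything outside the range the code can handle.
            if (retries < 0 || retries > 10)
                throw new ArgumentOutOfRangeException("rawValue");
            return retries;
        }
    }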

Some important considerations involving user data include the following:

  • Any user data in a server response runs in the context of the server’s site on the client. If your Web server takes user data and inserts it into the returned Web page, it might, for example, include a <script> tag and run as if from the server.
  • Remember that the client can request any URL.
  • Consider tricky or invalid paths (a canonicalization sketch follows this list):
      • ..\ sequences and extremely long paths.
      • Use of wildcard characters (*).
      • Token expansion (%token%).
      • Strange forms of paths with special meaning.
      • Alternate file system stream names such as filename::$DATA.
      • Short versions of file names such as longfi~1 for longfilename.
  • Remember that Eval(userdata) can do anything.
  • Be wary of late binding to a name that includes some user data.
  • If you are dealing with Web data, consider the various forms of escapes that are permissible, including:
      • Hexadecimal escapes (%nn).
      • Unicode escapes (%nnn).
      • Overlong UTF-8 escapes (%nn%nn).
      • Double escapes (%nn becomes %mmnn, where %mm is the escape for ‘%’).
  • Be wary of user names that might have more than one canonical format. For example, in Microsoft Windows 2000, you can often use either the MYDOMAIN\username form or the username@mydomain.example.com form.
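
As referenced in the path item above, a minimal sketch of canonicalizing a user-supplied path before use (the content root is a hypothetical location): Path.GetFullPath resolves ..\ sequences, and the prefix check then rejects anything that escapes the allowed directory. Other forms in the list, such as wildcards and alternate stream names, still need their own explicit rejection:

    using System;
    using System.IO;

    public static class PathGuard
    {
        // Hypothetical root under which all user-visible files live.
        private const string ContentRoot = @"C:\inetpub\content\";

        public static string ResolveUserPath(string userPath)
        {
            // Canonicalize first: resolves ..\ and relative segments.
            string fullPath = Path.GetFullPath(Path.Combine(ContentRoot, userPath));

            // Then verify the result is still under the allowed root.
            if (!fullPath.StartsWith(ContentRoot, StringComparison.OrdinalIgnoreCase))
                throw new UnauthorizedAccessException("Path escapes the content root.");
            return fullPath;
        }
    }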

January 13, 2010 | Secure Coding Guidelines, Technology