Why do we need memory management
Everyone knows that an application takes memory to load and run, and that it accesses memory for every statement it executes. Some of that memory is physical processing memory; the rest is backed by the system hard disk during execution. The data transfer rate between the hard disk and the processor is slow, and that slowness shows up as reduced performance. If you want to see the difference, copy a file within the hard disk and then copy one to an external device such as a pen drive; you can see the time difference. In the same manner, data transfer within RAM is fast, but when the system has to access data from anywhere else it is slow, and because of this factor the system gives poor performance.
How can we know how much memory my application is taking
Windows systems include Task Manager, where you can see how much memory your application is using. In general a small application uses 3-10 MB of memory, a medium-sized application 10-30 MB, and a huge application up to around 100 MB. Check how much memory your application is consuming: the more swapping that happens, the worse the performance, because memory is paged to and from disk in small fixed-size blocks. Think about how many times memory needs to be transferred for your application.
If you are familiar with NUnit, you can use it for tracking which component is taking more memory; you can even see how much each variable and function consumes in your application, and you can test web applications too. Many developers have the mistaken assumption that memory consumption cannot be tracked for a web application, which is not correct. IIS provides information about each request it processes for a client, and it is through these IIS configurations that NUnit works and gets its information.
You can use Log Parser to identify the size of each client request and its result. To do this you first have to enable logging in IIS. The log contains the requested page name, request type, requesting address (IP), response time, bytes transferred, and so on; you can enable these fields on the log settings page. The raw logs are very hard to read, so you have to use a log parser to convert them to CSV format.
You can also go for memory profilers and tools such as CodeIt.Right, which help you identify problem areas and what needs to be done to resolve them. Don't expect a solution for every issue listed, but they will provide possible solutions for most. CodeIt.Right is very user friendly and installs itself as a menu option in Visual Studio, so you can find all the problem areas and resolve them in the same place. As a developer, everyone appreciates a tool that is compatible with VS.
I have not used the CLR Profiler much myself, but from what I have read it can be more useful than other memory profilers.
How to reduce memory usage while coding
There are many ways to reduce memory usage. If your application is a console or Windows application, then memory is not a big deal and you will find very little variation in performance. But when you are developing a web application, you have to think about execution time, packet size, and memory usage. I am a developer, so most of the time I think like a programmer and give suggestions in that manner rather than generic advice.
- People love to write lengthy programs: huge functions and huge amounts of functionality in a single class file or single component. They think this makes the code very easy to understand and maintain, but for some other person who is not aware of your logic and wants to review it, it is very hard to understand.
Keep the components you develop small. Always keep similar functionality in one place, so you can reuse it any number of times, and reduce the number of lines of code in a single function. These are normal OOP concepts, I know. Do you really think they will work out? Yes, they will: as component size is reduced, load time is minimized, and as load time is minimized, execution time is reduced.
- In many cases we declare variables at the beginning of a function and then forget to use them, while also having the habit of creating variables wherever they seem to be required; we create a new variable and forget about the ones we already created. These kinds of mistakes need to be cleaned up, and there are plenty of software tools to track them, e.g. CodeIt.Right.
Know the purpose of each variable and create it with the right data type. It is basic advice you can get from anyone.
- Use strict data types. For example, when you have a requirement to store a small number, use a short.
In VB we can assign different types of data to the same variable, and an exception is raised only if the value cannot be converted internally. At the time of assignment, the runtime has to identify the type of the variable we are assigning to and convert the assigned value using the standard data type conversions. So the number of operations executed per assignment is higher than with a strictly typed variable.
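To see the cost in miniature, here is a small C# sketch of the same idea (in VB the equivalent contrast is Option Strict On versus Off); the variable names are just for illustration:

```csharp
// Strictly typed: size and layout are known at compile time, no runtime
// conversion is needed when the value is used.
short smallCounter = 100;              // 2 bytes, checked by the compiler

// Loosely typed: the value is boxed into an object, and every later use
// pays a runtime type check plus a conversion.
object looseCounter = 100;             // boxed Int32 on the heap
int doubled = (int)looseCounter * 2;   // unboxed at runtime before use
```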
- Use structures instead of classes when the type does not carry business logic.
The basic advantage of a structure is that it is a value type, so it avoids heap allocation and does not require the boxing and unboxing that loosely typed code incurs for its properties and variables. In boxing, a value is wrapped into an object on the heap; in unboxing, the object is converted back to the value type. Each conversion takes only a little time, but we really don't need it most of the time.
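As a minimal C# sketch of the difference (the PointStruct and PointClass names are just examples):

```csharp
// A data-only struct (value type) vs. an equivalent class (reference type).
// The struct lives on the stack or inline in its container, so there is no
// heap allocation and no garbage collection cost for it.
struct PointStruct { public int X; public int Y; }
class  PointClass  { public int X; public int Y; }

PointStruct ps = new PointStruct { X = 1, Y = 2 }; // no heap allocation
PointClass  pc = new PointClass  { X = 1, Y = 2 }; // heap allocation, GC later

// Boxing happens only when the value type is treated as an object:
object boxed = ps;                      // copies the struct onto the heap
PointStruct back = (PointStruct)boxed;  // unboxing copies it back out
```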
- Don't use hash tables or dictionaries for storing and retrieving small amounts of data. We normally reach for these types to store values and search them in a keyed collection; reserve them for cases where you hold a large amount of data and have to search it.
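A hedged C# sketch of the trade-off (the collection contents are just examples):

```csharp
using System;
using System.Collections.Generic;

// For a handful of values, a plain array or List<T> is smaller and usually
// faster than a Hashtable or Dictionary, which pays for hash buckets and a
// hash computation on every lookup.
string[] few = { "red", "green", "blue" };
bool hasGreen = Array.IndexOf(few, "green") >= 0;  // a linear scan is fine here

// Reserve Dictionary<TKey,TValue> for large collections searched frequently:
var many = new Dictionary<string, int>(10000);
many["page-hits"] = 1;
```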
- Reduce the number of server controls in your web page. When your application is web based, reduce the execution time spent at the server: whenever you use a server control, know why you are using it and what operation it performs on the server side. A client control reduces the load on the server, because the server does not need to maintain its view state. If you would like to compare the performance, develop a page entirely with server controls and verify the load time at the client end, then change a few server controls to client controls and verify the load time again. There will be a huge difference.
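As a minimal illustration of the same label rendered both ways (the control IDs are just examples): the first line is processed on the server and tracked in view state on every postback, while the second is plain markup the server streams through untouched.

```aspx
<asp:Label ID="lblName" runat="server" Text="Name" />  <%-- server control --%>
<label>Name</label>                                    <%-- client-side markup --%>
```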
- In many cases we wrap our complete logic in exception blocks in order to handle errors at runtime. I know we rarely show much interest in handling the runtime errors we can predict, but in order to reduce load you have to handle known failure cases explicitly. A few kinds of exceptions, such as database and file errors, are genuinely hard to anticipate, and those can be handled with an exception handler. There are two advantages to this: 1. it reduces execution time, and 2. we end up with proper error handling for our code.
You probably have a doubt here; I had the same one. How do exception handling blocks affect memory and performance? When I had this doubt, I read articles about exception handling. Code inside an exception block is treated differently: the runtime sets up additional bookkeeping so it can unwind to the handler, and actually throwing and catching an exception costs far more than normal execution. So the regular work runs alongside that additional monitoring, and processing slows down. You can try it with a simple example: write a small function that does some minor manipulation inside an exception handling block, measure how long it takes to complete, then remove the exception handling block and execute the function again. You can see the time difference.
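The contrast can be sketched in C# like this; parsing user input is just an example of a failure case you can check for up front:

```csharp
// Handling a known failure case by catching the exception: when bad input is
// common, the throw/catch machinery costs far more than the parse itself.
int ParseWithCatch(string s)
{
    try { return int.Parse(s); }
    catch (FormatException) { return 0; }
}

// Validating up front keeps the hot path exception-free:
int ParseWithTryParse(string s)
{
    return int.TryParse(s, out int value) ? value : 0;
}
```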
How to use the tools
CodeIt.Right: CodeIt.Right is a software tool that identifies critical issues and warnings in your code. After installing this tool you will see a new menu item in your Visual Studio IDE (CodeIt.Right), containing "Start Analysis", "Profile Editor", "Options", etc. In order to analyze your code you have to load your project and then click "Start Analysis"; it takes a few minutes to generate a report. The report contains the items below:
- Item contains the file name, class name, variable name, or other object that caused the violation.
- Ln# contains the line number where the violation occurred.
- Type represents the kind of item, such as file, class, or method.
- Violation contains what the error was.
- Category represents the type of violation; in general it contains General, Naming, Design, Exception Handling, etc.
- Severity contains how serious the issue is.
- Correct contains True or False, representing whether the violation has been resolved or not.
- Action contains what needs to be done to resolve the violation.
Each row acts like a link: you can go directly to the code where the violation occurred, or correct the violation directly using Apply Changes.
The tool is very simple to use, and you have an option to list the violations by severity, so instead of going through each and every violation to judge how serious it is, you can directly filter the list.
Log Parser: In order to use Log Parser you have to enable logging in your IIS. This tool can be used to track the execution time of each request. In IIS Manager, right-click the Default Web Site, click Properties, and you will get the screen below.
If you check the Enable Logging check box, the drop-down below it will be enabled. Click Properties and give your destination path and the time period for each log file; a log file will be created for each period you specify. Figure 2 displays how the property window looks. In this property window, under the Extended Properties tab, the list of items that can be tracked at the server is shown. By default the system tracks only a few fields; you can enable additional information by checking those check boxes.
Now your task is to use these logs to trace the response time of each request. If you are good at reading each line against the field sequence, you can extract the information by hand, but how many requests are you really going to read that way? It is a time-killing process. To solve this problem, the Log Parser software provides a command-line query language, similar to regular SQL or Oracle queries, for extracting data from these log files.
I don't remember the exact queries at this moment, but you can search the net to find them. Using those queries you can extract the information into a CSV file, which can be used like a spreadsheet. From there you can find the most valuable pages based on hits and the most complex pages based on execution time. Once you have this data, you can work on those pages to gain more performance.
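For the flavor of it, a typical Log Parser 2.2 command looks roughly like this; the log file pattern (ex*.log) and output name (report.csv) are assumptions you would adjust to your site, and the time-taken field must be enabled in the Extended Properties tab for it to appear:

```
LogParser -i:IISW3C -o:CSV "SELECT cs-uri-stem, COUNT(*) AS Hits, AVG(time-taken) AS AvgTimeMs INTO report.csv FROM ex*.log GROUP BY cs-uri-stem ORDER BY AvgTimeMs DESC"
```

This writes one row per page with its hit count and average response time in milliseconds, which answers both the "most valuable" and "most complex" page questions at once.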