
It’s OK to Be Lazy

At least when it comes to instantiating objects.

Even in today’s environment, when the typical amount of RAM on each server is in the gigabytes, it’s still wise to pay attention to memory usage. As a developer or architect, you need to be aware of the trade-offs between eager instantiation and lazy instantiation. Yes, it’s rather pointless to consider an Int16 versus an Int32 for a variable if it’s just going to be created and used a few times in the lifetime of your application. However, if that same variable is instantiated thousands of times or more, then the potential improvement in either memory usage or performance (whichever is more important to you) is definitely worth a look.

Eager/Lazy Instantiation Defined

With eager instantiation, the object is created as soon as possible:

Example – Eager Instantiation

public class Customer
{
    // eager instantiation
    private Address homeAddress = new Address();
    public Address HomeAddress
    {
        get
        {
             return homeAddress;
        }
    }
}

With lazy instantiation, the object is created as late as possible:

Example – Lazy Instantiation

public class Customer
{
    private Address homeAddress;
    public Address HomeAddress
    {
        get
        {
            // Create homeAddress if it’s not already created
            if (homeAddress == null)
            {
                homeAddress = new Address();
            }
            return homeAddress;
        }
    }
}
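One caveat worth noting: the lazy getter above is not thread-safe. If two threads hit the property at the same time, both can see null and each create its own Address. If your Customer objects are shared across threads, a minimal sketch of a safe variant looks like this (the syncRoot field is my own addition, not part of the example above; a double-checked lock or a volatile field are common refinements if the lock itself becomes a hot spot):

public class Customer
{
    private Address homeAddress;
    private readonly object syncRoot = new object();

    public Address HomeAddress
    {
        get
        {
            // Serialize access so only one thread ever creates the Address
            lock (syncRoot)
            {
                if (homeAddress == null)
                {
                    homeAddress = new Address();
                }
                return homeAddress;
            }
        }
    }
}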

Eager/lazy instantiation also applies to classes, singletons, etc. The principles and potential advantages/disadvantages are similar. For this article, I am only discussing the instantiation of class members.

CPU Cycles vs. Memory Usage

Eager vs. lazy instantiation is the classic performance/memory trade-off. With eager instantiation, you gain some performance improvement at the cost of system memory. Exactly what kind of performance/memory trade-off are we talking about? The answer depends mostly on the objects themselves:

  • How many instances of the parent object do you need?
  • What is the memory footprint of the member object?
  • How much time does it take to instantiate the member object?
  • How often will the parent object/member object be accessed?

Calculating the Memory Footprint of an Object

According to my own experiments (using DevPartner Studio and .NET Memory Profiler), each reference-type object (class) has a minimum memory footprint of 12 bytes. To calculate the total memory footprint of a reference-type object, add up the memory used by its members. To get the exact footprint you would also need to account for alignment “boundaries,” but for our purposes that level of precision isn’t important.

The memory footprint of an object can be closely approximated using the following table (from MSDN Magazine):

Type             Managed Size (bytes)
System.Boolean   1
System.Byte      1
System.Char      2
System.Decimal   16
System.Double    8
System.Single    4
System.Int16     2
System.Int32     4
System.Int64     8
System.SByte     1
System.UInt16    2
System.UInt32    4
System.UInt64    8

Using the example Customer class above, let’s say that each Address object takes up 1 KB, and my application frequently needs to instantiate up to 10,000 Customer objects. Just by creating 10,000 Customer objects, we would allocate about 10 megabytes for addresses alone. Now suppose the HomeAddress member is only needed when the user drills down into the details of a Customer: by lazily instantiating HomeAddress, we are looking at a potential saving of up to 10 megabytes of memory.
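If you want to sanity-check numbers like these yourself without a profiler, GC.GetTotalMemory offers a rough way to measure allocation cost. Here is a minimal sketch, assuming an Address class like the one in the example (the result is approximate and will vary by runtime):

// Force a full collection, allocate a batch of objects, and compare heap sizes.
const int count = 10000;
Address[] addresses = new Address[count];

long before = GC.GetTotalMemory(true); // true forces a full collection first
for (int i = 0; i < count; i++)
{
    addresses[i] = new Address();
}
long after = GC.GetTotalMemory(true);

Console.WriteLine("Approximate bytes per Address: {0}", (after - before) / count);
GC.KeepAlive(addresses); // keep the objects alive through the second measurement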

Memory Usage Can Also Impact Performance

Another important consideration with .NET managed code is garbage collection. Memory usage has a hidden performance cost in the work the garbage collector must do to reclaim memory: the more memory you allocate and throw away, the more CPU cycles the garbage collector consumes.

Recommendations

  • Pay closer attention to classes that get instantiated multiple times, such as Orders, OrderItems, etc.
  • For lightweight objects, or if you are not sure, use lazy instantiation.
  • If a member object is only used some of the time, use lazy instantiation.

Additional Reading

Rediscover the Lost Art of Memory Optimization in Your Managed Code

A New Way to Measure Lines of Code

Is Lines of Code a good way to measure programmer output?

Background

First, some background: several studies (Sackman, Erikson, and Grant, 1968; Curtis, 1981) have shown large variations in productivity between the best and worst programmers. While the numbers from those studies are controversial, I agree with the basic premise: a super programmer can significantly outperform the average programmer. In my own real-world projects, I estimate the variation has ranged as high as 5:1.

As a manager or technical lead of a project, it’s important to have a good idea of how productive your programmers are. With a good idea of productivity levels, you can make better estimates for time and resources, and you can manage the individual developers better. Knowing that Programmer A has relatively lower productivity than his teammates, you can assign him smaller features and save the more complex ones for more productive/better programmers. Or, in the case of the negative-productivity programmer, you can identify him quickly and react appropriately instead of letting him continue to negatively impact your project.

So, is Lines of Code (LOC) per Day by itself a good way to measure productivity? I think the answer is a resounding no for many reasons:

  • A good programmer is able to implement the same feature with much less code than the average programmer.
  • Code quality is not taken into account. If you can write a thousand more lines of code than the average programmer, but your code is twice as buggy, that’s not really desirable.
  • Deleting and changing code, activities associated with important tasks such as refactoring and bug-fixing, are not counted, or are even counted negatively.

A New Method to Measure LOC

If LOC is not a good way to measure productivity, why am I writing about it? Because it’s still a good metric to have at your disposal, if you use it correctly, carefully, and in conjunction with other data. I also propose a revised method to calculate LOC that can better correlate with productivity. This “new-and-improved” LOC, in conjunction with other data (such as a Tech Lead’s intimate knowledge of his programmers’ style, efficiency, and skill level), may allow us to gain a better picture of programmer productivity.

The traditional way of calculating LOC has always been to count the lines of source code added. There are variations, such as not counting comments or counting statements instead of lines, but the general concept is the same: only lines or statements of code that are added are counted. The problems with the old method are:

  • Buggy code counts as much as correct code.
  • Deleting or changing code is not counted, yet deleting and changing code is often done when you are refactoring or fixing bugs.
  • Optimizing a 20,000-line module down to 10,000 lines actually counts against you.

At a conceptual level, my new method to calculate LOC (let’s call it “Lines of Correct Code” or LOCC) counts only correct lines of code, plus code that is deleted or changed. Short of reviewing each line manually, how does a program know whether a line of code is correct? My answer: if it remains in the code base at the end of the product cycle, then for our purposes it is “correct” code.

Algorithm for Counting Lines of Correct Code

Below is the proposed algorithm for calculating LOCC; a sketch of how it might look in code follows the list. It should be possible to automate all of these steps using a modern source control system.

  • Analyze the source code at the end of the product cycle and keep a snapshot of the code that exists at that point. This is our baseline “correct” code.
  • Go back to the beginning of the project cycle and examine each check-in. For each check-in, count the lines of code that are added or changed and remain until the end. Lines of code that are deleted are also counted.
  • Auto-generated code is not counted, or is weighted appropriately (after all, some work is involved).
  • Duplicate files are only counted once. In many applications, some files are mirrored (“shared” in SourceSafe-speak) in multiple locations; it’s only fair to count each of them once.
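To make the counting pass concrete, here is a hypothetical sketch. The ChangedLine type and its fields are made up for illustration; they stand in for whatever your source control system’s API actually exposes:

using System.Collections.Generic;

// Hypothetical representation of one line-level change from a check-in
public class ChangedLine
{
    public string Author;
    public bool IsDeletion;        // deleted lines count toward LOCC
    public bool SurvivesToRelease; // true if this line exists in the final baseline
    public bool IsAutoGenerated;   // skipped, or weighted appropriately
}

public static class LoccCounter
{
    public static Dictionary<string, int> CountLocc(IEnumerable<ChangedLine> history)
    {
        Dictionary<string, int> loccByAuthor = new Dictionary<string, int>();
        foreach (ChangedLine line in history)
        {
            if (line.IsAutoGenerated)
                continue; // or apply a fractional weight instead of skipping

            // Deletions count, and so do added/changed lines that survive to the baseline
            if (line.IsDeletion || line.SurvivesToRelease)
            {
                int current;
                loccByAuthor.TryGetValue(line.Author, out current);
                loccByAuthor[line.Author] = current + 1;
            }
        }
        return loccByAuthor;
    }
}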

Ways to Use Lines of Correct Code

Here are a few ways I am planning to use LOCC in my projects:

  • Look at the LOCC per day (week/month) of the same developer over time.
  • Compare the LOCC per day between different programmers of equal efficiency and skill level.
  • Compare the total LOCC between different projects to get an idea of their relative size.
  • Correlate the LOCC of a programmer against his/her bug rate.
  • If a programmer writes code that is often deleted or changed later on, try to find out why.

Tell me what you think. Is this LOCC metric something that you would consider using in your project? I am writing a utility to calculate LOCC automatically from SourceSafe and if there’s sufficient interest, I will consider making it available.

System.Transactions: New and Improved Transaction Management Model for .NET 2.0

Ok, so System.Transactions is not new to many .NET developers, but it’s new to me. We have just started to research transaction management in my current .NET 2.0 project at work, and System.Transactions is looking beautiful. It gives us exactly what we need without the overhead of the old EnterpriseServices model (registering with COM+, having a strong name key, etc.) or the high maintenance of manual transaction management.

System.Transactions is more lightweight than EnterpriseServices, and the programming model is relatively simple. You use the TransactionScope class to wrap your code inside a transaction. TransactionScopes can be nested and can be assigned different transaction options, such as Required or RequiresNew, similar to the transaction attributes in EnterpriseServices.

using (TransactionScope scope = new TransactionScope())
{
    // do some work here (like executing SQL, calling a method, etc.)

    // The Complete method commits the transaction. If an exception has been thrown,
    // Complete is not called and the transaction is rolled back when the scope is disposed.
    scope.Complete();
}
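Nesting works the same way. In this sketch, the inner scope uses TransactionScopeOption.RequiresNew, so it commits or rolls back independently of the outer transaction (the audit-logging scenario is just an illustration):

using (TransactionScope outer = new TransactionScope())
{
    // work here runs in the ambient (outer) transaction

    using (TransactionScope inner = new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        // work here runs in its own, independent transaction, e.g. writing
        // an audit record that must persist even if the outer transaction rolls back
        inner.Complete();
    }

    outer.Complete();
}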

In my brief testing with Enterprise Library 2.0 and an Oracle database via the Oracle Data Provider, I found that System.Transactions always enlists my transactions in MSDTC (Microsoft Distributed Transaction Coordinator). I am sure there is some overhead associated with this; I will do more performance testing later to find out exactly how much. I was hoping that for a transaction involving a single database/connection string, MSDTC would not be needed, but further research indicated that the non-MSDTC/lightweight transaction only works with SQL Server 2005.

If your transaction management needs extend beyond databases, you can even write your own resource managers so that operations such as copying a file can be wrapped inside a transaction as well. Look into the IEnlistmentNotification interface.
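To give you an idea of what that looks like, here is a bare-bones volatile resource manager. The file-copy scenario and class name are only an illustration; a real implementation needs actual file I/O plus failure and recovery handling:

using System.Transactions;

public class FileCopyResourceManager : IEnlistmentNotification
{
    public void Enlist()
    {
        // Join the ambient transaction as a volatile (non-durable) resource
        Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        // Do the copy to a temporary location, then vote to commit
        preparingEnlistment.Prepared();
    }

    public void Commit(Enlistment enlistment)
    {
        // Move the temporary file into its final place
        enlistment.Done();
    }

    public void Rollback(Enlistment enlistment)
    {
        // Delete the temporary file
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}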

Here are some good articles about System.Transactions for your further reading:

StringBuilder is not always faster – Part 1 of 2

How often have you been told to use StringBuilder to concatenate strings in .NET? My guess is: often enough. Here is something you may not know about string concatenation: StringBuilder is not always faster. There are already many articles out there that explain why, so I am not going to do that here. But I do have some test data for you.

When concatenating three values or fewer, traditional concatenation is faster (by a very small margin)

This block of code took 1484 milliseconds to run on my PC:

for (int i = 0; i <= 1000000; i++) 
{ 
    // Concat strings 3 times using StringBuilder 
    StringBuilder s = new StringBuilder(); 
    s.Append(i.ToString()); 
    s.Append(i.ToString()); 
    s.Append(i.ToString()); 
}

And this one, using traditional concatenation, took slightly less time (1344 milliseconds):

for (int i = 0; i <= 1000000; i++) 
{ 
    // Concat strings 3 times using traditional concatenation 
    string s = i.ToString(); 
    s = s + i.ToString(); 
    s = s + i.ToString(); 
}

The above data suggests that StringBuilder only starts to pay off once the number of concatenations exceeds three.
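If you want to reproduce these numbers on your own machine, a minimal harness along these lines will do (I am assuming a release build run outside the debugger, or the timings will skew):

// Stopwatch lives in System.Diagnostics and is new in .NET 2.0
Stopwatch timer = Stopwatch.StartNew();
for (int i = 0; i <= 1000000; i++)
{
    StringBuilder s = new StringBuilder();
    s.Append(i.ToString());
    s.Append(i.ToString());
    s.Append(i.ToString());
}
timer.Stop();
Console.WriteLine("StringBuilder version: {0} ms", timer.ElapsedMilliseconds);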

Building strings from literals

When building a large string from several string literals (such as a SQL block or a client-side JavaScript block), use neither run-time concatenation of variables nor StringBuilder. Instead, choose one of the methods below:

+ operator

// Build script block 
string s = "<script>" 
       + "function test() {" 
       + "  alert('this is a test');" 
       + "  return 0;" 
       + "}";

The compiler performs this concatenation at compile time, so at run time it is as fast as one big string literal.

@ string literal

I sometimes use the @ (verbatim) string literal, which allows embedded newlines, though I find this syntax harder to maintain, formatting-wise, than the + operator:

string s = @"<script> 
        function test() { 
        alert('this is a test'); 
        return 0; 
        }";

Both methods above run about 40 times faster than using StringBuilder or traditional string concatenation.

Rules of Thumb

  • When concatenating three dynamic string values or fewer, use traditional string concatenation.
  • When concatenating more than three dynamic string values, use StringBuilder.
  • When building a big string from several string literals, use either the @ string literal or the inline + operator.

Updated 2007-09-29

I have posted a follow-up article to provide more detailed analysis and to answer some of the questions asked by readers.

Google Co-op

Google Co-op allows you to create customized search engines to fit your search interests. .NET developers, you may want to give the following pre-customized search engines a try:

For me, Co-op is a good idea, but for now I will stick with regular Google, at least for .NET-related searches. I gave SearchDotNet.com a try and was initially underwhelmed by the number of matches. For example, searching for “EntryAssembly” using the DotNetSearch engine above yielded two pages of matches; regular Google returned 387 pages.

Test Your Web Site in Internet Explorer 6 Using Microsoft IE6 Virtual Test Image

We all know that it’s a good idea to test our web pages in all popular browsers (for me, that’s the current versions of Internet Explorer and Firefox). It’s also a good idea to back up your data frequently, to listen to your elders, to not take a bath right after dinner, etc. Ok, so I admit I didn’t test my latest WordPress theme in IE6. While browsing the web on a friend’s PC, I discovered that my blog did not render correctly in IE6. It was broken in a major way: the right sidebar completely disappeared. Ouch!

I needed a way to test my site in Internet Explorer 6 at home, and since I have upgraded all of my PCs at home to IE7, it seemed like a good time to give “virtual appliances” a try.

It took me about an hour to download the Internet Explorer 6 Application Compatibility VPC Image, install it, and get the virtual machine up and running. It helped that I already had Virtual Server installed; yes, Virtual Server works with Virtual PC disk images just fine. The current IE6 image expires on 4/1/2007, but I am sure Microsoft will replace it with another one when the time comes. There is a blog entry on IEBlog about the image here.

Now I can reproduce the problem on my PC using IE6. That’s going to be the easy part, I’m afraid.


Update (2/8/2007): I fixed the IE6 problem. Apparently I had some sort of error in my sidebar.php file which messed up the HTML code, causing problems with IE6.

Stop Your .NET Managed Memory from Taking a Leak

It’s been a while since my last post. I have been very busy wrapping up the latest development project at work, and I also took a three-week vacation to Vietnam in December! That was nice. I am still having post-vacation depression syndrome. When I have more time, I will write more about the trip and maybe post some pictures.

Anyway, this post is about .NET memory leaks. If you suspect a memory leak in your .NET managed code, I recommend downloading a tool called .NET Memory Profiler. In my last project, we had a huge managed-memory leak in our app due to the incorrect use of static events. This tool was a lifesaver because it helped us track down the leak.
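To illustrate the kind of bug I mean: a static event holds a reference to every subscriber, so short-lived objects that subscribe but never unsubscribe are kept alive for the life of the application. A simplified sketch of the pattern (the ThemeManager and CustomerForm names are made up for illustration; this is not our actual code):

using System;

public static class ThemeManager
{
    // Static event: subscribers are referenced for the life of the app domain
    public static event EventHandler ThemeChanged;
}

public class CustomerForm
{
    public CustomerForm()
    {
        // Leak: this instance can never be collected while it stays subscribed
        ThemeManager.ThemeChanged += OnThemeChanged;
    }

    private void OnThemeChanged(object sender, EventArgs e) { /* ... */ }

    // Fix: unsubscribe when the form is done
    public void Close()
    {
        ThemeManager.ThemeChanged -= OnThemeChanged;
    }
}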
