
Samsung SCH-i760

It looks like Samsung is finally coming out with an upgrade for the SCH-i730 Windows Mobile phone, in the form of the SCH-i760. Major new features include Windows Mobile 6, a side-sliding keyboard, and a built-in camera. Currently the i760 is not yet officially listed on Verizon Wireless’ web site.

Don’t know about you but I think this one looks a bit on the ugly side. Still, I really like my i730 and I will certainly consider upgrading to the i760 when the time comes.

Samsung SCH-i760

More info:

Related Posts

Watch Live TV on Your Windows Mobile Phone/Pocket PC

News junkie that I am, I love being able to listen to the news while driving around, or even while at the beach. I have found that my Samsung i730 Windows Mobile phone, in conjunction with the unlimited EVDO data plan from Verizon Wireless, lets me watch the news in glorious color, virtually anywhere. Yes, there is just something neat about being able to watch BBC World News while sitting alone on an empty beach at night. Perhaps it’s the combination of being far away from civilization, yet at the same time connected to the world.

BBC News

iPhone owners, sorry… no streaming videos for you. But you’re probably too busy looking at the phone itself, admiring its beauty, to have time for anything else anyway.

(Oops, my iPhone envy is showing.)

Below are my favorite video streams for your enjoyment (updated Jan 18, 2008).

News

Weather

  • Weather+ – US and Canada weather forecasts.

For more channels, go to:

I have only tested the above links on my i730, but they should work on other Windows Mobile devices such as the Verizon XV6700 or the new MOTO Q 9m.

WARNING – After further research, I found that Verizon’s EVDO policy officially prohibits video streaming, so there is a chance that if you use too much bandwidth, they may decide to terminate your account. So, use video streaming at your own risk! The limit seems to be about 10GB. You can monitor your data usage by logging into the Verizon Wireless’ My Account service. There is apparently also a way to monitor your data usage with a Verizon software application called VZAccess Manager.

Update 10/25/2007

Verizon has just settled with New York regarding its practice of terminating users who exceed the “unlimited” bandwidth cap.

Update 1/7/2009

Refreshed links. Fixed link to Streaming PDA.

Best Quad Core CPU for the Money (Intel Core 2 Quad Q6600)

I just can’t get enough cores! My current main PC is an AMD X2 3800+ dual core, and while it’s still working great… I have suddenly developed the urge to build a new PC with a quad-core CPU! Do you get that feeling sometimes?

Ok, the real story is that I’ve turned the current PC into a home theater PC and now it’s sitting in the bedroom alongside a shiny new 1080p Philips 42″ LCD that I bought for the wife as her birthday present (evil smile :-)). While it’s a great setup, sometimes I just need to work at a desk and not on the bed… therefore the need for the replacement PC.

It looks like currently the best value quad core CPU is the Intel Core 2 Quad Q6600, priced at a relatively affordable $289 from Newegg.

Intel Core 2 Quad Q6600

Optimizing/Building Your Baseline Win XP Virtual PC Image

I am getting ready to check out all the neat tools in Scott Hanselman’s 2007 Ultimate Developer and Power Users Tool List for Windows. It’s a nice and comprehensive list of tools that covers just about all the tools a developer or power user may ever need. There are quite a few tools I have not used before.

Since I don’t necessarily want to install all of these tools to my everyday Windows installation, I am preparing a baseline Virtual PC image to install these tools into. My experience with Windows has taught me to try to keep it as clean as possible.

I found a great guide to optimize your baseline Windows XP virtual image from Dan’s Archive. It took about 30 minutes for me to go through the guide to build my optimized XP image (that’s not counting the time needed to install Windows XP Pro). If anyone is listening out there, I think this would be a good candidate for another “tool”.

With my baseline image ready, now I can make a backup copy of it and install away!! I feel like a kid at Christmas. Notepad2, Notepad++, Lutz Reflector, SlickRun, FireBug, ZoomIt, WinSnap, CodeRush, Refactor, FolderShare… so many new toys to play with, so little time.

Virtual PC

It’s OK to Be Lazy

At least when it comes to instantiating objects.

Even in today’s environment, when the typical amount of RAM on each server is in the gigabytes, it’s still wise to pay attention to memory usage. As a developer or architect, you need to be aware of the trade-offs between eager instantiation and lazy instantiation. Yes, it’s rather pointless to consider an Int16 versus an Int32 for a variable if it’s just going to be created and used a few times in the lifetime of your application. However, if that same variable is instantiated thousands of times or more, then the potential improvement in either memory usage or performance (whichever is more important to you) is definitely worth a look.

Eager/Lazy Instantiation Defined

With eager instantiation, the object is created as soon as possible:

Example – Eager Instantiation

public class Customer
{
    // eager instantiation
    private Address homeAddress = new Address();
    public Address HomeAddress
    {
        get
        {
             return homeAddress;
        }
    }
}

With lazy instantiation, the object is created as late as possible:

Example – Lazy Instantiation

public class Customer
{
    private Address homeAddress;
    public Address HomeAddress
    {
        get
        {
            // Create homeAddress if it's not already created
            if (homeAddress == null)
            {
                homeAddress = new Address();
            }
            return homeAddress;
        }
    }
}

Eager/lazy instantiation also applies to classes, singletons, etc. The principles and potential advantages/disadvantages are similar. For this article, I am only discussing the instantiation of class members.
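One caveat worth noting: the lazy getter above is not thread-safe. If two threads read HomeAddress at the same time, both can see null and create separate Address instances. When the parent object may be shared across threads, one common pattern is double-checked locking. The sketch below assumes a private lock object; strictly correct double-checked locking also wants the field marked volatile.

```csharp
public class Customer
{
    private readonly object syncRoot = new object();
    private Address homeAddress;

    public Address HomeAddress
    {
        get
        {
            // First check avoids taking the lock once initialized;
            // second check prevents double creation under contention.
            if (homeAddress == null)
            {
                lock (syncRoot)
                {
                    if (homeAddress == null)
                    {
                        homeAddress = new Address();
                    }
                }
            }
            return homeAddress;
        }
    }
}
```

If thread safety is not a concern (e.g., objects confined to a single request), the simple null check is all you need.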

CPU Cycles vs. Memory Usage

Eager vs. lazy instantiation is the classic performance/memory trade-off. With eager instantiation, you gain some performance improvement at the cost of system memory. Exactly what kind of performance/memory trade-off are we talking about? The answer depends mostly on the objects themselves:

  • How many instances of the parent object do you need?
  • What is the memory footprint of the member object?
  • How much time does it take to instantiate the member object?
  • How often will the parent object/member object be accessed?

Calculating the Memory Footprint of an Object

According to my own experiments (using DevPartner Studio and .NET Memory Profiler), each reference-type object (class) has a minimum memory footprint of 12 bytes. To calculate the total memory footprint of each reference-type object, add up any other memory used by members in the object. To get the exact memory footprint, you also need to take alignment “boundaries” into consideration, but for our purpose that’s probably not important.

The memory footprint of an object can be closely approximated using the following table (from MSDN Magazine):

Type Managed Size in Bytes
System.Boolean 1
System.Byte 1
System.Char 2
System.Decimal 16
System.Double 8
System.Single 4
System.Int16 2
System.Int32 4
System.Int64 8
System.SByte 1
System.UInt16 2
System.UInt32 4
System.UInt64 8

Using the example Customer class above, let’s say that each Address object takes up 1 KB, and my application frequently needs to instantiate up to 10,000 Customer objects. Just by creating 10,000 Customer objects, we would need about 10 megabytes of memory for the addresses alone. Now let’s say that the HomeAddress member is only needed when the user drills down into the details of a Customer; by using lazy instantiation on HomeAddress, we are looking at a potential saving of nearly 10 megabytes of memory.
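Short of running a full profiler, one rough way to sanity-check footprint numbers like these is to allocate a large batch of instances and divide the change in heap size by the count. This is only an approximation (GC timing and allocator overhead skew it), and Address here is the example class from above:

```csharp
// Rough per-instance footprint: allocate many, divide the delta.
const int Count = 100000;
object[] keep = new object[Count];

long before = GC.GetTotalMemory(true);   // force a collection first
for (int i = 0; i < Count; i++)
{
    keep[i] = new Address();             // class under measurement
}
long after = GC.GetTotalMemory(true);

Console.WriteLine("~{0} bytes per instance", (after - before) / Count);
GC.KeepAlive(keep);                      // keep the array live until measured
```

A dedicated profiler will give you exact numbers, but this is handy for a quick comparison between two class designs.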

Memory Usage Can Also Impact Performance

Another important consideration with .NET managed code is garbage collection. In .NET managed code, memory usage has a hidden impact on performance in terms of the work the garbage collector has to perform to recover memory. The more memory you allocate and throw away, the more CPU cycles the garbage collector has to go through.

Recommendations

  • Pay closer attention to classes that get instantiated multiple times, such as Orders, OrderItems, etc.
  • For light-weight objects, or if you are not sure, use lazy instantiation.
  • If a member object is only used some of the times, use lazy instantiation.

Additional Reading

Rediscover the Lost Art of Memory Optimization in Your Managed Code

A New Way to Measure Lines of Code

Is Lines of Code a good way to measure programmer output?

Background

First, some background: several studies (Sackman, Erikson, and Grant – 1968; Curtis – 1981) have shown that there are large variations in productivity levels between the best and worst programmers. While the numbers from the studies are controversial, I tend to agree with the basic premise that a super programmer can significantly outperform the average programmer. In my real-world projects, I estimate that variations have ranged up to 5 to 1.

As a manager or technical lead of a project, it’s important to have a good idea of how productive your programmers are. With a good idea of productivity levels, you can make better estimates for time and resources, and you can manage the individual developers better. Knowing that Programmer A has relatively lower productivity than his teammates, you can assign him smaller features and save the more complex ones for more productive/better programmers. Or, in the case of the negative-productivity programmer, you can identify him quickly and react appropriately instead of letting him continue to negatively impact your project.

So, is Lines of Code (LOC) per Day by itself a good way to measure productivity? I think the answer is a resounding no for many reasons:

  • A good programmer is able to implement the same feature with much less code than the average programmer.
  • Code quality is not taken into account. If you can write a thousand lines of code more than the average programmer, but your code is twice as buggy, that’s not really desirable.
  • Deleting and changing code, activities that are associated with important tasks such as re-factoring and bug-fixing, are not counted, or even counted negatively.

A New Method to Measure LOC

If LOC is not a good way to measure productivity, why am I writing about it? Because it’s still a good metric to have at your disposal, if you use it correctly, carefully, and in conjunction with other data. I also propose a revised method to calculate LOC that can better correlate with productivity. This “new-and-improved” LOC, in conjunction with other data (such as a Tech Lead’s intimate knowledge of his programmers’ style, efficiency, and skill level), may allow us to gain a better picture of programmer productivity.

The traditional way of calculating LOC has always been to count the lines of source code added. There are variations, such as not counting comments or counting statements instead of lines, but the general concept is the same: only lines or statements of code that are added are counted. The problems with the old method are:

  • Buggy code counts as much as correct code.
  • Deleting or changing code is not counted. Deleting/changing code is often done when you are re-factoring, or fixing bugs.
  • Optimizing a 20,000-line module to make it 10,000 lines actually impacts the LOC negatively.

At a conceptual level, my new method to calculate LOC (let’s call it “Lines of Correct Code” or LOCC) only counts correct lines of code, plus code that is deleted or changed. Short of reviewing each line of code manually, how does a program know if a line of code is correct? My answer: if it remains in the code base at the end of the product cycle, then for our purpose, it is “correct” code.

Algorithm for Counting Lines of Correct Code

Below is the proposed algorithm for calculating the LOCC. It should be possible to automate every one of the steps described here using a modern source control system.

  • Analyze the source code at the end of the product cycle and keep a picture of the code that exists at the end. This is our base-line “correct” code.
  • Go back to the beginning of the project cycle and examine each check-in. For each check-in, count the lines of code that are added or changed and remain until the end. Lines of code that are deleted are also counted.
  • Auto-generated code is not counted or is weighted appropriately (after all, some work is involved).
  • Duplicate files are only counted once. In many applications, some files are mirrored (shared in SourceSafe-speak) in multiple locations. It’s only fair to count these files only once.
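The steps above can be sketched in code. Note that SourceControl, Checkin, and Line below are hypothetical types standing in for whatever your source control system’s API actually exposes; they are not part of any real library.

```csharp
// Sketch of the LOCC algorithm against a hypothetical source control API.
int CountLinesOfCorrectCode(SourceControl repo, string developer)
{
    // 1. Snapshot of the code base at the end of the product cycle:
    //    these surviving lines are our baseline "correct" code.
    HashSet<Line> finalLines = repo.GetSnapshot(repo.EndOfCycle);

    int locc = 0;

    // 2. Replay each check-in from the beginning of the cycle.
    foreach (Checkin checkin in repo.GetCheckins(developer))
    {
        foreach (Line line in checkin.AddedOrChangedLines)
        {
            // Count added/changed lines only if they survive to the end,
            // skipping auto-generated and duplicated (shared) files.
            if (finalLines.Contains(line) && !line.IsGenerated && !line.IsDuplicate)
            {
                locc++;
            }
        }

        // 3. Deleted lines represent real work (refactoring, bug fixing),
        //    so they are counted as well.
        locc += checkin.DeletedLines.Count;
    }

    return locc;
}
```

The hard part in practice is line identity: deciding that a line in an early check-in is “the same line” that survives to the end, which is essentially what an annotate/blame feature computes.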

Ways to Use Lines of Correct Code

Here are a few ways I am planning to use LOCC in my projects:

  • Look at the LOCC per day (week/month) of the same developer over time.
  • Compare the LOCC per day between different programmers of equal efficiency and skill level.
  • Compare the total LOCC between different projects to get an idea of their relative size.
  • Correlate the LOCC of a programmer against his/her bug rate.
  • If a programmer writes code that is often deleted or changed later on, try to find out why.

Tell me what you think. Is this LOCC metric something that you would consider using in your project? I am writing a utility to calculate LOCC automatically from SourceSafe and if there’s sufficient interest, I will consider making it available.

I found a bug in IE7

Looks like I found a bug in IE7. Sometimes the tab title “sticks” and stays the same regardless of which page/site you navigate to.

I was reviewing a blog post I had just made, so the tab title correctly said “Chinh Do”:

IE Bug 1

I then clicked on a link to navigate to Technorati, which normally has a title of “Technorati: Home”, but here the tab title still said “Chinh Do”.

IE Bug

I tried clicking on various links to navigate to different web sites, but the title remained “Chinh Do”.

System.Transactions: New and Improved Transaction Management Model for .NET 2.0

Ok, so System.Transactions is not new to many .NET developers but it’s new to me. We have just started to research transaction management in my current .NET 2.0 project at work, and System.Transactions is looking beautiful. It gives us exactly what we need without the overhead of the old EnterpriseServices model (registering with COM+, having a strong name key, etc.) or the high maintenance of manual transaction management.

System.Transactions is more light-weight compared to EnterpriseServices. The programming model is relatively simple. You use the TransactionScope class to wrap your code inside a transaction. TransactionScope can be nested and can be assigned different transaction options such as Required or RequiresNew, similar to the various transaction attributes for EnterpriseServices.

using (TransactionScope scope = new TransactionScope())
{
    // do some work here (like executing a SQL, calling a method, etc.)

    // The Complete method commits the transaction. If an exception has been thrown,
    // Complete is not called and the transaction is rolled back.
    scope.Complete();
}
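Nesting looks like this: the outer scope joins (or creates) the ambient transaction, while an inner scope created with TransactionScopeOption.RequiresNew runs in its own independent transaction. A minimal sketch:

```csharp
using (TransactionScope outer = new TransactionScope(TransactionScopeOption.Required))
{
    // work enlisted in the outer (ambient) transaction...

    using (TransactionScope inner =
        new TransactionScope(TransactionScopeOption.RequiresNew))
    {
        // work here commits or rolls back independently of the outer scope
        inner.Complete();
    }

    outer.Complete();
}
```

With TransactionScopeOption.Required (the default), the inner scope would instead join the outer transaction, and failing to call Complete on it would doom the whole thing.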

In my brief testing with Enterprise Library 2.0 and an Oracle database via the Oracle Data Provider, I found that System.Transactions always enlists my transactions in MSDTC (Microsoft Distributed Transaction Coordinator). I am sure there is some overhead associated with this. I will do some more performance tests later to find out exactly what the overhead is. I was hoping that a transaction involving a single database/connection string would not need MSDTC, but further research indicated that the non-MSDTC/lightweight transaction only works with SQL Server 2005.

If your transaction management needs extend beyond databases, you can even write your own resource managers so that operations such as copying a file can be wrapped inside a transaction as well. Look into the IEnlistmentNotification interface.
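Here is a minimal sketch of what such a volatile resource manager might look like. TransactionalFileCopy is a hypothetical example class, not a real library type: Prepare performs the copy and votes to commit, and Rollback undoes it.

```csharp
using System.Transactions;

public class TransactionalFileCopy : IEnlistmentNotification
{
    private readonly string source;
    private readonly string destination;

    public TransactionalFileCopy(string source, string destination)
    {
        this.source = source;
        this.destination = destination;
        // Join the ambient transaction as a volatile resource.
        Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Prepare(PreparingEnlistment preparingEnlistment)
    {
        System.IO.File.Copy(source, destination);
        preparingEnlistment.Prepared();      // vote to commit
    }

    public void Commit(Enlistment enlistment)
    {
        enlistment.Done();                   // the copy stays in place
    }

    public void Rollback(Enlistment enlistment)
    {
        System.IO.File.Delete(destination);  // undo the copy
        enlistment.Done();
    }

    public void InDoubt(Enlistment enlistment)
    {
        enlistment.Done();
    }
}
```

Construct one of these inside a TransactionScope and the file copy will commit or roll back along with the rest of the transaction. A production version would need to handle partial failures in Prepare, of course.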

Here are some good articles about System.Transactions for your further reading:

StringBuilder is not always faster – Part 1 of 2

How often have you been told to use StringBuilder to concatenate strings in .NET? My guess is often enough. Here is something you may not know about string concatenation: StringBuilder is not always faster. There are already many articles out there that explain the why’s, I am not going to do that here. But I do have some test data for you.

When concatenating three values or less, traditional concatenation is faster (by a very small margin)

This block of code took 1484 milliseconds to run on my PC:

for (int i = 0; i <= 1000000; i++) 
{ 
    // Concat strings 3 times using StringBuilder 
    StringBuilder s = new StringBuilder(); 
    s.Append(i.ToString()); 
    s.Append(i.ToString()); 
    s.Append(i.ToString()); 
}

And this one, using traditional concatenation, took slightly less time (1344 milliseconds):

for (int i = 0; i <= 1000000; i++) 
{ 
    // Concat strings 3 times using traditional concatenation 
    string s = i.ToString(); 
    s = s + i.ToString(); 
    s = s + i.ToString(); 
}

The above data suggests that StringBuilder only starts to pay off once the number of concatenations exceeds three.
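For reference, timings like these can be reproduced with a simple Stopwatch harness. This is a sketch of how I measure such loops; your absolute numbers will differ by machine:

```csharp
using System.Diagnostics;
using System.Text;

// Time one million iterations of the three-Append version.
Stopwatch timer = Stopwatch.StartNew();
for (int i = 0; i <= 1000000; i++)
{
    StringBuilder s = new StringBuilder();
    s.Append(i.ToString());
    s.Append(i.ToString());
    s.Append(i.ToString());
}
timer.Stop();
Console.WriteLine("StringBuilder: {0} ms", timer.ElapsedMilliseconds);
```

Swap the loop body for the traditional-concatenation version to compare the two; run each test a few times and in a Release build to get stable numbers.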

Building strings from literals

When building a large string from several string literals (such as building a SQL block, or a client side javascript block), use neither traditional concatenation nor StringBuilder. Instead, choose one of the methods below:

+ operator

// Build script block 
string s = "<script>" 
       + "function test() {" 
       + "  alert('this is a test');" 
       + "  return 0;" 
       + "}";

The compiler concatenates that at compile time. At run-time, that works as fast as a big string literal.

@ string literal

I sometimes use the @ string literal which allows for newlines (I find this syntax is harder to maintain, formatting-wise, than using the + operator):

string s = @"<script> 
        function test() { 
        alert('this is a test'); 
        return 0; 
        }";

Both methods above run about 40 times faster than using StringBuilder or traditional string concatenation.

Rules of Thumb

  • When concatenating three dynamic string values or less, use traditional string concatenation.
  • When concatenating more than three dynamic string values, use StringBuilder.
  • When building a big string from several string literals, use either the @ string literal or the inline + operator.

Updated 2007-09-29

I have posted a follow-up article to provide more detailed analysis and to answer some of the questions asked by readers.
