
SQL Server Implicit String Conversion/Concatenation (XML parsing: line <x>, character <y>, unexpected end of input)

If you are getting this error in your SQL Server T-SQL script, you may be running into an issue with implicit string conversion in SQL Server:

declare @xml varchar(max), @doc XML, @stringVariable varchar(256)
set @stringVariable = 'a string value'

-- @doc is set by concatenating multiple string literals
-- and string variables, with all the variables having less than
-- or equal to 8000 characters
set @xml = '<root>' +
... + @stringVariable +
...
'</root>'

print len(@xml)

set @doc = @xml

Output

8000
Msg 9400, Level 16, State 1, Line 4
XML parsing: line 64, character 74, unexpected end of input

As you can see in the output, the @xml variable was truncated to 8000 characters, resulting in invalid XML. This is due to the way SQL Server performs implicit string conversions when concatenating strings: when every string literal/variable involved in the concatenation is 8000 characters or less, the result is evaluated as VARCHAR(8000), so anything beyond 8000 characters is silently truncated.

The same issue occurs with the NVARCHAR data type; there the limit is 4000 characters instead of 8000.

A simple fix is to make sure at least one of the values involved in the concatenation is of type VARCHAR(MAX):

declare @xml varchar(max), @doc XML, @stringVariable varchar(256)
declare @x varchar(max)

set @stringVariable = 'a string value'
set @x = ''

set @xml = @x + '<root>' +
... + @stringVariable +
...
'</root>'

print len(@xml)

set @doc = @xml
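
An alternative to introducing an extra variable (a sketch, not from the original example) is to cast one of the operands in the concatenation to VARCHAR(MAX), which promotes the whole expression to VARCHAR(MAX):

declare @xml varchar(max), @doc XML, @chunk varchar(4000)
set @chunk = replicate('x', 4000)

-- casting the leading literal to varchar(max) makes the entire
-- concatenation evaluate as varchar(max), so nothing is cut off at 8000
set @xml = cast('<root>' as varchar(max)) + @chunk + @chunk + @chunk + '</root>'

print len(@xml) -- 12013, not 8000

set @doc = @xml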


Using a Different Configured Binding in WCF Client

To switch bindings programmatically on the fly, pass the endpoint name to the constructor of the generated client:

var client = new WeatherClient("MyEndpoint");

"MyEndpoint" is the name of the endpoint defined in your config file:

<client>
    <endpoint address="" binding="basicHttpBinding" bindingConfiguration="http1" contract="MyContract" name="MyEndpoint" />
</client>
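
For example, if the config defined a second endpoint for the same contract (the second endpoint name and its binding below are hypothetical, added only for illustration), you could pick either one at runtime by passing its name to the constructor:

<client>
    <endpoint address="" binding="basicHttpBinding" bindingConfiguration="http1" contract="MyContract" name="MyEndpoint" />
    <endpoint address="" binding="wsHttpBinding" bindingConfiguration="ws1" contract="MyContract" name="MyEndpointSecure" />
</client>

var client = new WeatherClient("MyEndpointSecure");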

WCF Client Error “The connection was closed unexpectedly” Calling Java/WebSphere 7 Web Service

If you get the following exception calling a WebSphere web service from your .NET WCF Client (service reference):

System.ServiceModel.CommunicationException: The underlying connection was closed: The connection was closed unexpectedly. ---> System.Net.WebException: The underlying connection was closed: The connection was closed unexpectedly.

Try adding this code before the service call:

System.Net.ServicePointManager.Expect100Continue = false;
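
If you prefer configuration over code, the same setting can also be applied in the client's app.config/web.config (a sketch; note that it affects all outgoing HTTP requests in the process):

<configuration>
  <system.net>
    <settings>
      <servicePointManager expect100Continue="false" />
    </settings>
  </system.net>
</configuration>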

More info on the 100-Continue behavior from MSDN.

Give The Power of Speech and Sound to Your PowerShell Scripts

Do you ever have the problem where you start a long-running script (such as a code build), multi-task on something else on another monitor while waiting for the script to finish, and then totally forget about it until half an hour later? Well, here’s a solution to your problem: have your script holler at you when it’s done.

Hello sir! I am done running your command!

In my library script file, I have the following functions to play sound files and to speak any text:

function PlayMp3($path) {
    # Use the default player to play. Hide the window.
    $si = new-object System.Diagnostics.ProcessStartInfo
    $si.fileName = $path
    $si.windowStyle = [System.Diagnostics.ProcessWindowStyle]::Hidden
    $process = New-Object System.Diagnostics.Process
    $process.startInfo=$si
    $process.start() 
}

function PlayWav($path) {
    $sound = new-Object System.Media.SoundPlayer;
    $sound.SoundLocation="$path";
    $sound.Play();
}

function Say($msg) {
    $Voice = new-object -com SAPI.SpVoice
    $Voice.Speak($msg, 1 )
}
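
For example, at the end of a long-running build script you might call the functions like this (a minimal usage sketch; the .wav path just points at a stock Windows sound for illustration):

PlayWav "${env:windir}\Media\tada.wav"
Say "Hello sir! I am done running your command!"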

If you like the text-to-speech feature but find Windows’ speech engine lacking, check out Ivona. It’s a commercial text-to-speech engine, but you are allowed to generate and download short speech files for free personal use. Now my script can nicely interrupt me to tell me when it’s done. Other online text-to-speech engines: vozMe, SpokenText.

If Making Noise Is Not Your Thing

If making noise is not your thing, consider displaying a “Build complete!” message in the Notification Area instead. Here’s the code (courtesy of Microsoft TechNet):
function Get-ScriptName {
    $MyInvocation.ScriptName
} 

function DisplayNotificationInfo($msg, $title, $type) {
    # $type - "info" or "error"
    if ($type -eq $null) {$type = "info"}
    if ($title -eq $null) {$title = Get-ScriptName}

    [void] [System.Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")
    $objNotifyIcon = New-Object System.Windows.Forms.NotifyIcon 
    # Specify your own icon below
    $objNotifyIcon.Icon = "C:\CdoScripts\Folder.ico"
    $objNotifyIcon.BalloonTipIcon = "Info"  
    $objNotifyIcon.BalloonTipTitle = $title
    $objNotifyIcon.BalloonTipText = $msg

    $objNotifyIcon.Visible = $True 
    $objNotifyIcon.ShowBalloonTip(10000)
}
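
Calling it with just a message uses the defaults, so the balloon title falls back to the script name (a minimal usage sketch; first point $objNotifyIcon.Icon at an icon file that exists on your machine):

DisplayNotificationInfo "Build complete!"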

Automating Maven with PowerShell

I found out the hard way that mvn (Maven) on Windows always returns a success code, which means you cannot use the return status ($?) to check whether the mvn command succeeded or failed. Why they decided to break this seemingly basic program contract is a mystery. A workaround is to scan the mvn output and look for specific strings such as “BUILD SUCCESSFUL”.
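
A quick way to see this for yourself (a sketch; the goal name is just a placeholder, and the result relies on the behavior described above):

mvn some-goal-that-fails
Write-Host $?    # reports True even though the Maven invocation failed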

Here’s how:

function InvokeAndCheckStdOut($cmd, $successString, $failString) {
    Write-Host "====> InvokeAndCheckStdOut"
    $fullCmd = "$cmd|Tee-Object -variable result" 

    Invoke-Expression $fullCmd
    $found = $false
    $success = $false
    foreach ($line in $result) {
      if ($line -match $failString) {
       $found = $true
       $success = $false
       break
      }
      else {
       if ($line -match $successString) {
        $found = $true
        $success = $true
        #"[InvokeAndCheckStdOut] FOUND MATCH: $line"
        break
       }
       else {
        #"[InvokeAndCheckStdOut] $line"
       }
      }
    }

    if (! $success) {
      PlayWav "${env:windir}\Media\ding.wav"
      throw "Mvn command failed."
    }

    Write-Host "InvokeAndCheckStdOut <===="
}

function InvokeMvn($cmd) {
    InvokeAndCheckStdOut $cmd "BUILD SUCCESSFUL" "BUILD FAILED"
}


InvokeMvn "mvn clean install"


Tee-Object and Invoke-Expression in PowerShell

The PowerShell Tee-Object Cmdlet allows you to send command output to a file or a variable, and display it in the console at the same time. This is very useful for those instances where you need to parse the text output of a command. I had a hard time getting it to work with Invoke-Expression. After trying different things, I finally found the solution. To get Tee-Object to work with Invoke-Expression in PowerShell 1, include the Tee command in the Invoke-Expression command like this:

Invoke-Expression "mvn clean install | Tee -variable result"

The following, which I guess is what most people try first, doesn’t work (at least in PowerShell V1), presumably because you are storing the output of the “Invoke-Expression” command itself in the variable instead of the output of “mvn clean install”.

    Invoke-Expression "mvn clean install" | Tee -variable result

Wrapping Invoke-Expression in parentheses (see below) works, but has a drawback: the output is not written to standard out until the whole command finishes.

    (Invoke-Expression "mvn clean install") | Tee -variable result
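
Once the working form finishes, $result holds the captured output lines, which you can then scan. For example (a sketch):

Invoke-Expression "mvn clean install | Tee -variable result"
$result | Select-String "BUILD SUCCESSFUL"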

Convert List<T>/IEnumerable to DataTable/DataView

Greetings visitor from the year 2023! You can get the source code for this from my Github repo here. Thanks for visiting.

Here’s a method to convert a generic List<T> to a DataTable. This can be used with ObjectDataSource so you get automatic sorting, etc.

using statements: using System.Data; using System.Reflection;
...

/// <summary>
/// Convert a List{T} to a DataTable.
/// </summary>
public DataTable ToDataTable<T>(List<T> items)
{
    var tb = new DataTable(typeof (T).Name);

    PropertyInfo[] props = typeof (T).GetProperties(BindingFlags.Public | BindingFlags.Instance);

    foreach (PropertyInfo prop in props)
    {
        Type t = GetCoreType(prop.PropertyType);
        tb.Columns.Add(prop.Name, t);
    }

    foreach (T item in items)
    {
        var values = new object[props.Length];

        for (int i = 0; i < props.Length; i++)
        {
            values[i] = props[i].GetValue(item, null);
        }

        tb.Rows.Add(values);
    }

    return tb;
}

/// <summary>
/// Determine whether the specified type is nullable
/// </summary>
public static bool IsNullable(Type t)
{
    return !t.IsValueType || (t.IsGenericType && t.GetGenericTypeDefinition() == typeof(Nullable<>));
}

/// <summary>
/// Return the underlying type if the type is Nullable; otherwise return the type
/// </summary>
public static Type GetCoreType(Type t)
{
    if (t != null && IsNullable(t))
    {
        if (!t.IsValueType)
        {
            return t;
        }
        else
        {
            return Nullable.GetUnderlyingType(t);
        }
    }
    else
    {
        return t;
    }
}
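
For example (a minimal sketch; the Person class and its properties are hypothetical, used here only to show how a nullable column is handled by GetCoreType):

public class Person
{
    public string Name { get; set; }
    public int? Age { get; set; }
}

var people = new List<Person>
{
    new Person { Name = "Alice", Age = 30 },
    new Person { Name = "Bob", Age = null }   // nullable int: the row value becomes DBNull
};

DataTable table = ToDataTable(people);   // columns: Name (string), Age (int)
var view = new DataView(table);          // wrap in a DataView for sorting/binding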

Problem with jqModal/jQuery JavaScript Intellisense and Workaround

Is anyone else experiencing a problem with Visual Studio 2008 Intellisense with jQuery and jqModal?

It seems that jqModal (a jQuery plugin for modal dialogs) is breaking jQuery Intellisense on my Visual Studio 2008 setup. Without a reference to jqModal.js, jQuery Intellisense works fine:

[Screenshot: jQuery Intellisense working in Visual Studio 2008]

As soon as I add the reference to jqModal.js, the Intellisense stops working with this error:

Warning    1    Error updating JScript IntelliSense: …\scripts\jquery-1.3.2-vsdoc.js: ‘jQuery.support.htmlSerialize’ is null or not an object @ 1430:4

Workaround

To work around the problem, generate the script include tag dynamically so that the JavaScript Intellisense engine doesn’t see the jqModal reference at design time.

<head runat="server">
    <title>My Web</title>
    <script src="../scripts/jquery-1.3.2.js" type="text/javascript"></script>
    <asp:Literal ID="LitJqModalScript" runat="server"></asp:Literal>
</head>

 

protected override void OnLoad(EventArgs e)
{
    base.OnLoad(e);
 
    LitJqModalScript.Text = @"<script src=""../scripts/jqModal.js"" type=""text/javascript""></script>";
}

My Multiple-Monitor Programming Setup

In my opinion, the top three hardware items that help maximize programmer productivity are sufficient RAM, multiple monitors, and a fast multi-core CPU. For RAM, try to have at least 2GB for Visual Studio development, especially if you run additional applications such as ReSharper, a local SQL Server instance, IIS, etc. Remember that on 32-bit Windows XP or Vista, the maximum usable RAM is limited to about 3.2GB.

My current company-provided laptop, on which I must perform most programming activities, is a Core 2 Duo 2.4 GHz. It is satisfactory for what I am doing. Given the choice, however, I would go for a quad-core desktop. Desktops usually have faster drives, better video cards, etc., and the additional cores allow for smoother multitasking and better performance when using virtual machines.


Plenty of RAM and multiple fast CPU cores will keep the waiting down to a minimum. Now you need to give yourself the ability to see all of those things you have going on. Research has demonstrated that multiple monitors increase productivity. For most programmers, I would recommend two monitors. Adding that second monitor is a relatively inexpensive affair: most laptop and desktop video cards have built-in support for a second monitor, so all you have to do is get that second LCD, which costs very little these days compared to three years ago. The third monitor and beyond is where things start to get complicated. You either have to install a second video card, or use a second PC/laptop in conjunction with a software application named MaxiVista. Unless you have spare monitors sitting around, I think the return on investment diminishes greatly beyond two monitors.

Another way to use that third or fourth monitor is to simply attach them to additional computers. Sure, you won’t be able to control all those monitors with the same mouse and keyboard but this setup is not without advantages:

  • When the first computer is rebooting or not responding, you still have a fully functioning computer.
  • The screens on the second computer are not using CPU/memory resources on the first computer.

My 5-Monitor Setup

I work from home most of the time, and on my desk there are five monitors, hooked up to three laptops. I know… it’s over the top, but it’s not like I bought all of these laptops/monitors specifically for this setup. The second laptop is my personal laptop. The third one is another old personal laptop that would otherwise be sitting around gathering dust. I might as well put them all to use. Here’s how the monitors are arranged:

[Diagram: monitor arrangement]

The main laptop is hooked up to monitor #1, an ASUS 24″ running at 1920×1080 resolution, and monitor #2, the main laptop’s built-in 15.4-inch LCD running at 1680×1050. I spend most of my time on these two monitors. The ASUS 24″ is great for programming/debugging and is where I normally park Visual Studio 2008 or Rational Software Architect. If you can get one with more vertical resolution (such as 1920×1200), that’s even better.

Visual Studio

The extra width on the main monitor allows me to permanently open supporting panes like Solution Explorer that I otherwise would configure to “auto hide”. Note where I have my Start menu: on the right edge of the main monitor. This gives me more vertical space so I can see more code without scrolling. The additional benefits are that more open windows can fit on the taskbar, and that I don’t have to move the mouse as much to reach the Start menu.

The built-in laptop monitor (#2) is where I have miscellaneous supporting windows such as rolling logs, instant messaging client, email client, documents, etc.

For taking notes, I use Microsoft OneNote and since it’s not installed on my company PC, I use my own laptop for it. This laptop is monitor #3, sitting to the left of the main monitor. Monitors #4 and #5 are used once in a while to display server logs, various server telnet and remote desktop connections, and anything I don’t need to control often.

Update 2/25/2008 – Synergy

Rohan kindly pointed out to me a free keyboard/mouse sharing utility called Synergy. You should give this a try if you have multiple computers on your desk. Synergy lets you use a single keyboard/mouse to control multiple computers, running multiple operating systems. Additional features include clipboard sharing, screen saver, and single password login. The configuration is not the most intuitive but once you have everything set up correctly, it works like a charm.

I can now control all three laptops and five monitors with one mouse/keyboard combo! Very nice.

Synergy animation
