Running Windows 7 Beta

With everyone reporting that the Windows 7 beta was very stable, I upgraded both my wife’s Dell Studio and my Dell 64-bit XPS to Windows 7.

Both setups went through without a hitch, although the upgrade route took a couple of extra hours to migrate all the application and user settings.

The good news is that I’ve been using both for over a week without any major hitches. The only problems I’ve had so far are with the 64-bit install. The first is Windows Live Mesh, which stays forever at the ‘Live Mesh is starting’ step (there doesn’t seem to be any kind of fix out for this yet). The second is Google Chrome, which flat out doesn’t work unless you add a flag to its startup. That workaround covers most cases, but it still fails in a few places, like the new Gmail offline access using Gears.

Migrating from BlogEngine.NET to WordPress

Yet another migration post; it feels like just yesterday that I migrated from dasBlog to BlogEngine.NET.

My justification was that I didn’t want to be manually updating my blog with each new release.

With Aneef graciously offering to host my site I’d be cutting down on my hosting bill as well! How sweet is that?

So here’s how I made the move.

  • First I set up a new instance of WordPress on my hosting account. Now this is the absolute most fun part: with Fantastico it takes just two clicks and literally two seconds to install WP, plus upgrading to the latest version of WordPress is going to be a breeze as well!
fantastico-wordpress-install
  • Next we’ll export our posts to BlogML from the BlogEngine.NET admin site.
  • You’ll need to tweak this file a bit before the import. Search for .axd files and you’ll see the first problem: BlogEngine seems to pull down images through an .axd handler, and a simple search and replace should fix this (see the example after this list). You might want to move your images to the wp-content/uploads folder as well, since it will be easier to back up your stuff.
  • In the BlogML file the categories seem to have a GUID, which I think I inherited from dasBlog. If you come across this problem you can either do a search and replace on the GUIDs or rename the categories from the admin site after importing into WordPress.
  • Finally you’ll need to add the BlogML Import plugin to your WP install. Just grab the zip file from Kavinda, unzip it and upload the contents to the wp-admin/imports folder. Check his post and the related posts if you have trouble setting it up.
  • From the WordPress admin menu click Import, supply your file, and you’re done!
  • You’ll of course need to upload your images as well.
  • If you’re using FeedBurner, remember to change the source address in FeedBurner and also use the FeedBurner plugin from Google so that new users will get the FeedBurner URL when they subscribe to your blog.
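To give you an idea of the image fix mentioned above (the domains and file names here are purely illustrative), the search and replace turns BlogEngine’s image.axd references in the BlogML export into direct links to your WordPress uploads folder. Something like

<img src="http://www.oldblog.com/image.axd?picture=screenshot.png" alt="" />

becomes

<img src="http://www.newblog.com/wp-content/uploads/screenshot.png" alt="" />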

Enjoy! Let me know if you have any issues migrating.

Deleting millions of records from a table without blowing the transaction log

Over the weekend I realized that deleting 80% of the records in a table with 87 million rows is not quite as easy as issuing a DELETE statement with a WHERE clause.

The main reason is that the transaction log quickly grows as it needs to keep track of all the uncommitted rows that are being deleted. In my case the database file was 60GB, and the log file quickly grew from a mere 40MB to 32GB before I ran out of disk space and the database went into recovery mode.

Browsing through blogs, I narrowed it down to two options.

  1. Move the records you want to keep over to a temporary table and issue a TRUNCATE on the original table.
  2. Issue the deletes in batches so that the log file doesn’t fill up.

The first option seems to be the faster of the two, but I didn’t want to go through the effort of removing and re-adding foreign keys and renaming tables. So I went with the second option. Here’s the script I used on SQL Server; it deletes 10,000 rows at a time and commits after each batch.

DECLARE @continue INT
DECLARE @rowcount INT
 
SET @continue = 1
WHILE @continue = 1
BEGIN
    PRINT GETDATE()
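    -- Limit the next DELETE to 10,000 rows so each transaction stays small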
    SET ROWCOUNT 10000
    BEGIN TRANSACTION
    DELETE FROM Transactions WHERE  TradeDate IS NULL
    SET @rowcount = @@rowcount 
    COMMIT
    PRINT GETDATE()
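    -- Stop looping once a batch deletes no more rows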
    IF @rowcount = 0
    BEGIN
        SET @continue = 0
    END
END

The print statements help to show you how things are moving along.

(10000 row(s) affected)
Jan  1 2009 11:54PM
Jan  1 2009 11:54PM

(10000 row(s) affected)
Jan  1 2009 11:55PM
Jan  1 2009 11:55PM

Seven hours later the script is still executing with 39 million rows deleted so far and the log file currently at 700MB.
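A note for anyone on SQL Server 2005 or later: SET ROWCOUNT is deprecated for affecting DELETE, INSERT and UPDATE statements, and Microsoft recommends using TOP instead. A minimal sketch of the equivalent batched delete (same table and criteria as above):

DECLARE @rowcount INT
SET @rowcount = 1
 
WHILE @rowcount > 0
BEGIN
    BEGIN TRANSACTION
    -- Delete the next batch of 10,000 rows and remember how many were removed
    DELETE TOP (10000) FROM Transactions WHERE TradeDate IS NULL
    SET @rowcount = @@ROWCOUNT
    COMMIT
    PRINT GETDATE()
END

Also keep in mind that the log space only gets reused between batches if the database is in the SIMPLE recovery model or you take regular log backups.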

Viewing the disk space usage (size) of all the tables in a SQL Server 2005/2008 database

I spent a couple of hours searching for a script that returns the size of all the tables in a database. Most of the ones that you find use sp_msforeachtable (typically calling sp_spaceused for each table), which gives you one result per table, so you then need temporary tables to hold the values.
 
This article on Database Journal by Ananth Kumar (aka MAK) has this handy script that works by simply copy-pasting it as is. Read the linked article for details on viewing the size by schema.
 
BEGIN TRY 
SELECT 
    (row_number() over(order by a3.name, a2.name))%2 as l1,
    a3.name AS [schemaname],
    a2.name AS [tablename],
    a1.rows as row_count,
    (a1.reserved + ISNULL(a4.reserved,0))* 8 AS reserved, 
    a1.data * 8 AS data,
    (CASE WHEN (a1.used + ISNULL(a4.used,0)) > a1.data THEN 
    (a1.used + ISNULL(a4.used,0)) - a1.data ELSE 0 END) * 8 AS index_size,
    (CASE WHEN (a1.reserved + ISNULL(a4.reserved,0)) > a1.used THEN 
    (a1.reserved + ISNULL(a4.reserved,0)) - a1.used ELSE 0 END) * 8 AS unused
FROM
 (SELECT 
  ps.object_id,
  SUM (
   CASE
    WHEN (ps.index_id < 2) THEN row_count
    ELSE 0
   END
   ) AS [rows],
  SUM (ps.reserved_page_count) AS reserved,
  SUM (
   CASE
     WHEN (ps.index_id < 2) THEN 
   (ps.in_row_data_page_count + ps.lob_used_page_count + ps.row_overflow_used_page_count)
     ELSE (ps.lob_used_page_count + ps.row_overflow_used_page_count)
   END
   ) AS data,
  SUM (ps.used_page_count) AS used
 FROM sys.dm_db_partition_stats ps
 GROUP BY ps.object_id) AS a1
LEFT OUTER JOIN 
 (SELECT 
  it.parent_id,
  SUM(ps.reserved_page_count) AS reserved,
  SUM(ps.used_page_count) AS used
  FROM sys.dm_db_partition_stats ps
  INNER JOIN sys.internal_tables it ON (it.object_id = ps.object_id)
  WHERE it.internal_type IN (202,204)
  GROUP BY it.parent_id) AS a4 ON (a4.parent_id = a1.object_id)
INNER JOIN sys.all_objects a2  ON ( a1.object_id = a2.object_id ) 
INNER JOIN sys.schemas a3 ON (a2.schema_id = a3.schema_id)
WHERE a2.type <> 'S' and a2.type <> 'IT'
ORDER BY a3.name, a2.name
END TRY 
BEGIN CATCH 
SELECT 
    -100 AS l1,
    1 AS schemaname,
    ERROR_NUMBER() AS tablename,
    ERROR_SEVERITY() AS row_count,
    ERROR_STATE() AS reserved,
    ERROR_MESSAGE() AS data,
    1 AS index_size,
    1 AS unused
END CATCH

 

Quick extra tip: if you export the result set to Excel you can do additional sorts to figure out more stuff. For example, the first table on this list was taking up 47GB of disk space in a 63GB database, with the indexes taking more space than the actual data! But this is more of an OLAP database with just one batch update daily, optimized for fast retrieval, so I guess that's okay.

sql-disk-space-usage

 

Went shooting

beretta

So we went clay shooting yesterday with Adam Tyler (an Athens Olympics bronze medalist and two-time Commonwealth gold medalist) showing us how it’s done.

After playing Counter-Strike with the guys at the office it felt good to hold a real live Beretta shotgun. I got four of the first five shots in the first round.

Here’s one where I hit the clay.

good-shot

The Melbourne Unique World team!

UniqueWorld Team

Melbourne

It’s been nearly four months since I moved to Melbourne, and having spent the six months prior in Sydney, Melbourne was indeed a welcome change.

It’s very easy to forget how beautiful it is here, though. My favorite part of the daily commute is the ten-minute walk through Fawkner Park (named after Melbourne’s co-founder).

Here’s a bird’s-eye view of Fawkner Park.

Fawkner-Park

A few from today’s morning walk.

IMG_0662

 

IMG_0663

 

On sunny days we grab our lunch and head downstairs to have it at the park. It’s no wonder Melbourne has consistently been at the top of the World’s Most Livable Cities list over the years.

Could it be true? Is Live Search (Kumo) really getting better than the great Goog?

So I wanted to find a tool that would do fast folder comparisons. Searching on Google gave me a lot of useless stuff, most of it related to fast folder access or folder comparison but not ‘fast folder comparison’.

On a whim I switched to live.com and, lo and behold, I was presented with UltraCompare in the search results, with this to say:

Save time by performing an extremely fast folder comparison using just the sizes and timestamps of corresponding files.

Even a quoted search for “fast folder comparison” returns 8 results in Live but just 2 in Google.

Will I switch to Live search? No.

But something to keep in mind when I’m not getting the results I want on Google.

BTW: The Google Search Wiki is certainly intriguing. I totally dig neat Ajax stuff.

GooglePromote

A Personal Blog: Spaces vs Blogger

I’ve been wanting to start a personal blog for some time and finally went ahead with it.

This post though is not about my personal blog but rather about which platform I chose. I’ve been blogging for nearly five years now and went the geek way, hosting the platform myself. But I’m now fed up with having to maintain the stuff myself whenever new versions are released.

I really wanted to use MSN Live Spaces since it had the nice integration with Messenger, and the new Live bits that were going to be released excited me. But after spending a couple of hours tinkering with it I was finally put off by the obnoxious advertising. Not only were the ads flashy images, there were two of them: a big banner as well as one on the right-hand side.

I was debating between Blogger and WordPress and then went ahead with Blogger as I could host it on my own domain. Other things that pushed me towards Google were that I already had my email hosted at Google and the FeedBurner integration.

On a side note, for those interested, my personal blog is at http://merill.blogspot.com until I get Custom Domains with Yahoo sorted out. But hey, Blogger has permanent redirects so there’s nothing to worry about. Really.

Automate Build for a ClickOnce Application Hosted on CodePlex using MSBuild

This is what I wanted my automated build to do:

  1. Get the latest version from CodePlex
  2. Update the version number in AssemblyInfo.cs
  3. Build the project
  4. Check-in the updated AssemblyInfo.cs
  5. Label the project with the version number
  6. Publish the ClickOnce package to my webserver

In order to achieve this I used the CodePlex Source Control Client (cpc.exe) to perform the get-latest and check-ins. I was not able to complete #5 as the cpc client does not support labelling. Maybe once SvnBridge supports it I can update this guide to use a Subversion client.

I also wrote a command-line utility, SetVersion.exe, that updates the version number in an AssemblyInfo.cs or .vb file. The source for this is published as SetVersion on the MSDN Code Gallery.

So without further ado this is the MSBuild project file that performs the tasks.

<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" DefaultTargets="Build">
  <PropertyGroup>
    <CodePlexProjectName>mycodeplexprojectname</CodePlexProjectName>
    <Version>1.$(CCNetNumericLabel).0.0</Version>
    <BuildFolder>C:\MyBuilds\$(Version)\</BuildFolder>
    <IISPublishFolder>C:\MyInstallLocation\</IISPublishFolder>
    <ProjectFolder>MySolution\Client\</ProjectFolder>
    <BuildPublishFolder>$(ProjectFolder)bin\Release\app.publish</BuildPublishFolder>
    <BuildProject>$(ProjectFolder)Client.vbproj</BuildProject>
    <ToolsFolder>C:\Projects\SolutionFolder\Build\</ToolsFolder>
    <CodePlexClient>$(ToolsFolder)cpc.exe</CodePlexClient>
    <SetVersion>$(ToolsFolder)SetVersion.exe</SetVersion>
    <CodePlexUser>codeplexusername</CodePlexUser>
    <CodePlexPassword>codeplexpassword</CodePlexPassword>
  </PropertyGroup>
 
  <Target Name="Build">
    <Message Text="##Building version: $(Version)" Importance="high"/>
    <Message Text="##Cleaning folder..." Importance="high"/>
    <Exec Command="md $(BuildFolder)"/>
 
    <Message Text="##Getting latest version" Importance="high"/>
    <Exec Command="$(CodePlexClient) checkout $(CodePlexProjectName)" WorkingDirectory="$(BuildFolder)"/>
 
    <Message Text="##Updating build number $(SetVersion) $(BuildFolder)$(ProjectFolder)AssemblyVersionInfo.vb $(Version)" Importance="high"/>
    <Exec Command="$(SetVersion) $(BuildFolder)$(ProjectFolder)AssemblyVersionInfo.vb $(Version)"/>
 
    <Message Text="##Building" Importance="high"/>
    <MSBuild Projects="$(BuildFolder)$(BuildProject)" Targets="Publish" Properties="Configuration=Release;ApplicationVersion=$(Version)" />
 
    <Message Text="##Commiting version change" Importance="high"/>
    <Exec Command="$(CodePlexClient) commit /username $(CodePlexUser) /password $(CodePlexPassword) /message ---Automated-Build---$(Version)" WorkingDirectory="$(BuildFolder)"/>
 
    <Message Text="##Publishing" Importance="high"/>
    <Exec Command="rd $(IISPublishFolder) /s /q"/>
    <Exec Command="xcopy $(BuildFolder)$(BuildPublishFolder)\*.* $(IISPublishFolder) /e /i /y"/>
    <Exec Command="xcopy $(IISPublishFolder)..\default.htm $(IISPublishFolder)"/>
  </Target>
</Project>

Although I’ve worked with MSBuild files in the past this was the very first time I wrote one, so a few things to remember. Items defined in the <PropertyGroup> node are like declared variables. You can then use them anywhere in your script in the format $(VariableName).

What's even better is that you can override the default values you provide in the build file with values from the command line. For example, to force a particular version number I can perform the build like this:

MSBuild ReleaseBuild.proj /p:Version=1.6.0.0

When getting the latest version I opted to create a new folder for each build and pull the files into that folder. This way I didn’t have to worry about cleaning the bin folders. Every new build would start from an empty folder.

In my case, rather than building the solution file I opted to build just the Client project since that would compile all the dependent projects. The key part that helped me create the necessary files for the ClickOnce publishing was the Targets="Publish" parameter; that, along with the ability to set the ClickOnce version using the properties, provided an elegant solution to the tricky problem of keeping the AssemblyVersion and the ClickOnce application version in sync.

<MSBuild Projects="$(BuildFolder)$(BuildProject)" Targets="Publish" Properties="Configuration=Release;ApplicationVersion=$(Version)" />

I use CruiseControl to kick off the build process as well as drive the version numbering. CruiseControl is not absolutely necessary though, as the build file can be run from the command line, as long as a version number is specified.

I’ve packaged the build file along with the SetVersion.exe, cpc.exe (codeplex client) and the ccnet.config for download here.

WCF Performance Optimization Tips

I recently wrapped up work on my last project and thought I’d share some performance challenges we faced when the product went live.

Keep in mind though that optimization options heavily rely on your application design and its usage scenarios.

Usage Scenario

The usage scenario for which the following optimizations worked is as follows. The WCF services are hosted under IIS and currently only serve requests coming from the desktop client application over the intranet. Roughly 300 instances of the client application are used concurrently.


Long-running calls
The application design used a BackgroundWorker when making calls to the server, but the actual WCF call was synchronous. Synchronous calls work well in most scenarios, but when your service performs a long-running task (more than a second) it blocks other clients from connecting to the server. Asynchronous calls make better sense in this scenario.
Ref: Synchronous and Asynchronous Operations

Bindings
When designing your application pick the binding carefully as this also affects performance. The WCF services were initially designed to use WSHttpBinding. This binding though affects performance especially when options such as security, reliable sessions and transaction flow are enabled.

As part of the performance tuning we switched to using BasicHttpBinding as none of the WS features were actually being used by the application. This dramatically improved performance because we cut down on all the acknowledgements.
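As a rough sketch of what that change looks like in config (the service and contract names below are placeholders for our actual ones), switching bindings comes down to changing the binding attribute on the endpoint:

<system.serviceModel>
  <services>
    <!-- Service and contract names are placeholders -->
    <service name="MyApp.TradeService">
      <!-- Previously: binding="wsHttpBinding" -->
      <endpoint address=""
                binding="basicHttpBinding"
                contract="MyApp.ITradeService" />
    </service>
  </services>
</system.serviceModel>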

 

Default WSHttpBinding with Reliable Sessions enabled (9 messages)
WsHttpBinding-Messages

BasicHttpBinding (2 messages)
BasicBinding-Messages

The security we lost when switching over to BasicHttpBinding will be enforced at the network level by locking down the machines that are allowed to make calls to the service.

In our case we could have squeezed out more performance if we had used NetTcpBinding but that would have required IIS7 and WAS.

Ref:
-    WCF Binding Comparison
-    BasicHttpBinding compared to WSHttpBinding at SOAP packet level [Note: Although this author recommends WS over Basic, Microsoft specifies that the secure and reliable sessions be disabled for Load Balanced environments which basically brings it down to Basic].
-    Load Balancing

Service Model Throttling
The service throttling parameters are another key element to look at when performance tuning services. The name is a little misleading though, as it tends to imply that you want to throttle your service, when in fact these are the default settings that the Microsoft engineers have put in place to prevent DoS attacks against your service.

In our case, due to the nature of our application usage, these settings caused the server to queue new requests once the default throttling level of 10 concurrent sessions was reached. What effectively happened was that once long-running queries were being processed, other requests started getting queued up even though the server’s memory and processor usage were very low.

The resolution for this was to increase the default values for these settings (shown below) to a few thousand.

<behaviors> 
  <serviceBehaviors> 
    <behavior name="DefaultThrottlingBehavior"> 
      <serviceThrottling maxConcurrentCalls="16" 
                         maxConcurrentSessions="10" 
                         maxConcurrentInstances="[Int32.MaxValue]" /> 
    </behavior> 
  </serviceBehaviors> 
</behaviors> 
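For example, the adjusted behavior ended up looking something like the sketch below (the numbers are purely illustrative; pick values based on your own load testing). Remember that a named behavior only takes effect when the service element references it through its behaviorConfiguration attribute.

<behaviors>
  <serviceBehaviors>
    <behavior name="IncreasedThrottlingBehavior">
      <!-- Illustrative values only; tune these against your own load tests -->
      <serviceThrottling maxConcurrentCalls="2000"
                         maxConcurrentSessions="2000"
                         maxConcurrentInstances="2000" />
    </behavior>
  </serviceBehaviors>
</behaviors>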


Ref:
-    <serviceThrottling>
-    Using ServiceThrottlingBehavior to Control WCF Service Performance

General guidelines when tuning performance

Benchmarks
When tasked with tuning for performance, the first requirement is to establish benchmarks for the current service levels and the expected service levels once the tuning is complete.

Identify the bottleneck
The next key point is to identify the area that is causing the bottleneck. For the WCF services we tracked the time taken on the client side against the actual execution time of the web service to rule out the network as the bottleneck.
We then worked backwards from the database calls to ensure that each layer completed within the expected time.

Having your application instrumented is a key part of the initial design. The Enterprise Library provides instrumentation, and open-source tools such as log4net are lightweight and effective as well.

Replicate in your dev environment
Effectively testing out any tuning options can only be done if you can repro the issue in your development environment. The built-in load tester in Visual Studio will be a key part of your load testing armoury.

Code Profiling
If the bottleneck points to your code, code profiling tools will help you isolate the problem areas. At the time of this writing Red Gate ANTS Profiler and JetBrains dotTrace are capable solutions. The code profiler built into Visual Studio 2008 is not as effective as these tools.

Finally a shout-out to WCF MVP Buddhike who saved my day by quickly pointing me towards the binding and security modes as the perf culprits.