
Code Blog


Simple client side breadcrumbs

posted 21 Nov 2011 03:34 by Michael Barry   [ updated 21 Nov 2011 03:36 ]

A web application I began working on was suffering from terrible bloat caused by the usual evolution an enterprise-level application is subjected to. New requirements constantly being added without the time to refactor the original code meant the application was highly functional but unfortunately highly unusable at the same time, particularly for new users.


One of the initial requirements was to re-brand the masthead, and at the same time we added in a little bit of jQuery magic to help the users navigate through the application with a breadcrumb trail. The structure of the application under the covers (JSP, Struts 1) didn't lend itself to doing something simple on the server side to create the breadcrumb, so the following very simple method was used instead to provide the users with breadcrumb navigation. Once the app is in a more settled state we'll move the breadcrumb generation into the server-side code, but for now this seemed a useful interim solution.

The term breadcrumb trail comes from the fairy tale in which two children, Hansel and Gretel, drop breadcrumbs to form a trail back to their home. The idea here is to create a bit of client-side script that can be put in a header.jsp or similar file that is included in all pages in the site, giving the users a trail on every page so they can find their way back to the home page.

The first step is to define our site-map. The site-map itself will be a straightforward div element, with the hierarchy and links created using nested lists and link elements:

<div style="display:none" id="sitemap">
<a class="breadcrumblink" href="homeurl.htm">Home</a>
<ul>
    <li><a class="breadcrumblink" href="level1-1url">Level 1.1</a>
        <ul>
            <li><a class="breadcrumblink" href="level1-1-1url">Level 1.1.1</a></li>
            <li><a class="breadcrumblink" href="level1-1-2url">Level 1.1.2</a>
                <ul>
                    <li><a class="breadcrumblink" href="level1-1-2-1url">Level 1.1.2.1</a></li>
                </ul>
            </li>
        </ul>
    </li>
    <li><a class="breadcrumblink" href="level1-2url">Level 1.2</a></li>
    <li><a class="breadcrumblink" href="level1-3url">Level 1.3</a>
        <ul>
            <li><a class="breadcrumblink" href="level1-3-1url">Level 1.3.1</a></li>
        </ul>
    </li>
    <li><a class="breadcrumblink" href="level1-4url">Level 1.4</a>
        <ul>
            <li><a class="breadcrumblink" href="level1-4-1url">Level 1.4.1</a>
                <ul>
                    <li><a class="breadcrumblink" href="level1-4-1-1url">Level 1.4.1.1</a></li>
                </ul>
            </li>
        </ul>
    </li>
</ul>
</div>

We need to have an element in the page where we can put our final breadcrumb trail so we create a simple span as follows:

<span id="breadcrumb"></span>


As a page loads we want to figure out what the current URL is:

      var fullurl = window.location.href;


There may be a query string or hash fragment on the end of the URL, so we strip those off first:

      fullurl = fullurl.replace(/[?#].*$/,"");


And then remove the domain name and possible app name from the URL so we're just left with the page location
     
      var currentloc = fullurl.match(/\/AppName\/(.*)/)[1];
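
One thing to watch: match() returns null when the URL doesn't contain /AppName/, so the line above will throw on any page outside the app. A guarded version, as a sketch:

      var match = fullurl.match(/\/AppName\/(.*)/);
      var currentloc = match ? match[1] : "";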


And then use jQuery selectors to find this URL in our sitemap and work out the route through the sitemap to that link:

      var selector = '#sitemap a[href="' + currentloc + '"]';
      var list = $(selector).parents().children("a").map(function ()  {            
                return this;                 
      }).get().reverse();

     
And finally just add these links to our breadcrumb span.

      $.each(list, function(){$('#breadcrumb').append(this);});
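
That drops the links into the span back to back; if you want a separator between the crumbs, a small variation along these lines (just a sketch, using a plain text separator) does the job:

      $.each(list, function(i){
          if (i > 0) $('#breadcrumb').append(' > '); //separator between crumbs
          $('#breadcrumb').append(this);
      });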


All the code together in one page below:

<html>
<head>
 <script src="http://code.jquery.com/jquery-1.7rc2.js">
 </script>
 <title>Bread crumbs example</title>
</head>
<body>
<div style="display:none" id="sitemap">
<a class="breadcrumblink" href="homeurl.htm">Home</a>
<ul>
    <li><a class="breadcrumblink" href="level1-1url">Level 1.1</a>
        <ul>
            <li><a class="breadcrumblink" href="level1-1-1url">Level 1.1.1</a></li>
            <li><a class="breadcrumblink" href="level1-1-2url">Level 1.1.2</a>
                <ul>
                    <li><a class="breadcrumblink" href="level1-1-2-1url">Level 1.1.2.1</a></li>
                </ul>
            </li>
        </ul>
    </li>
    <li><a class="breadcrumblink" href="level1-2url">Level 1.2</a></li>
    <li><a class="breadcrumblink" href="level1-3url">Level 1.3</a>
        <ul>
            <li><a class="breadcrumblink" href="level1-3-1url">Level 1.3.1</a></li>
        </ul>
    </li>
    <li><a class="breadcrumblink" href="level1-4url">Level 1.4</a>
        <ul>
            <li><a class="breadcrumblink" href="level1-4-1url">Level 1.4.1</a>
                <ul>
                    <li><a class="breadcrumblink" href="level1-4-1-1url">Level 1.4.1.1</a></li>
                </ul>
            </li>
        </ul>
    </li>
</ul>
</div>
<span id="breadcrumb"></span>
 <script>
 
      //Get the current location and strip off any
      //trailing characters like # and query string variables
      var fullurl = window.location.href;

      fullurl = fullurl.replace(/[?#].*$/,"");

      //Remove the hostname/app name from the URL
      var currentloc = fullurl.match(/\/AppName\/(.*)/)[1];

      //Now that we have determined our location, work out the
      //navigation back through the sitemap tree to create our breadcrumb
      var selector = '#sitemap a[href="' + currentloc + '"]';
      var list = $(selector).parents().children("a").map(function ()  {            
                return this;                 
      }).get().reverse();
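      //Point the last crumb (the current page's link) at the full original URL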
      $(list).last().attr("href",window.location.href);

      //Move the links to the breadcrumb span
      $.each(list, function(){$('#breadcrumb').append(this);});

</script>
</body>
</html>

Multi Config with ClickOnce deployments

posted 30 Sep 2011 12:30 by Michael Barry   [ updated 21 Nov 2011 03:54 ]

Writing lots of lovely code is nice, but at some point the application you're working on is going to need to be deployed somewhere. If you're in a large organisation, chances are the final build of your application will need to go through a few stages as it strolls confidently towards production. This means you need a process to deploy the same application to multiple environments, e.g. DEV, SIT, UAT, PROD, but you don't want to have to compile, build and package the application at every stage.


This tutorial walks through the creation of a generic MSBuild script that can be used to take a set of .NET assemblies and create a full ClickOnce deployment for those assemblies, merging in the appropriate environment settings along the way. Coupled with a good automated build process, this provides a config-driven mechanism for deploying any set of .NET binaries to any environment within your organisation.


Structure

I want to have a folder structure containing the necessary configuration files for each environment. The name of the folder will be key to identifying that environment, and the folder itself will contain the app.config file for the environment as well as an MSBuild target file containing the environment-specific properties needed by the packaging process.


Example Folder Structure:


\scripts\package.proj
\envs\dev\app.config
\envs\dev\properties.target
\envs\sit\app.config
\envs\sit\properties.target


Package Script

So now to the script itself (package.proj). The script will need to take in two arguments: the path to the binaries we want to package and the environment we want to deploy to. You don't strictly need a PropertyGroup for the properties we're going to pass in, but I like adding one at the top of the script as it makes it easier to keep track of what you've named them.


   <Project ToolsVersion="4.0" DefaultTargets="Package" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

   <PropertyGroup>
      <PathToBinaries></PathToBinaries>
      <Environment></Environment>
   </PropertyGroup>
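
Both values are then supplied on the command line when the script is run, along these lines (with C:\temp standing in for wherever your build drops the binaries):

      msbuild package.proj /p:Environment=dev;PathToBinaries=C:\temp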



Whoever calls the script will pass in the Environment property, so we use this to work our way back to the properties file and app.config file for that environment. The properties.target file for each environment will look like this:


<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <PropertyGroup>
    <ApplicationName>MyApp</ApplicationName>
    <MainExecutableName>MyApp.MyRegion</MainExecutableName>
    <TeamName>MyDivision.MyTeam</TeamName>
    <MagePath>"C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin\NETFX 4.0 Tools\mage.exe"</MagePath>
    <Certificate>C:\temp\MyApp_TemporaryKey.pfx</Certificate>
    <DeployToUrl>http://MyWebServer/MyAppFolder/</DeployToUrl>
    <SupportUrl>http://MyWiki/MyApp/support.htm</SupportUrl>
    <Description>MyApp Dev Deployment</Description>
    <PackageVersion>1.0.0.1</PackageVersion>
  </PropertyGroup>

</Project>


If this is the config for our DEV package then we'll use these settings to create a ClickOnce package for the development environment using the version number, deployment URL, name etc. as set in the file. We add the following line to the package.proj script to import the properties for the specified environment:


   <Import Project="..\envs\$(Environment)\properties.target" />

 

The key part of the packaging is identifying the assemblies and other files that are to be included as part of the build so that they can be listed in the application manifest we will be generating. We use the ItemGroup element to get the lists of assemblies and other files that we want to include:

   

 <ItemGroup>
     <EntryPoint Include="$(PathToBinaries)\$(MainExecutableName).exe" />
     <Dependency Include="$(PathToBinaries)\*.dll">
         <AssemblyType>Managed</AssemblyType>
         <DependencyType>Install</DependencyType>
     </Dependency>
     <IconFile Include="$(PathToBinaries)\$(ApplicationName).ico"/>
     <ConfigFile Include="$(PathToBinaries)\$(MainExecutableName).exe.config">
         <TargetPath>$(MainExecutableName).exe.config</TargetPath>
     </ConfigFile>
 </ItemGroup>


Although the name of the executable file will probably be the same for all environments, the version number of the main executable will not, so rather than add this to the properties file and have to update it all the time, I've written an MSBuild task using inline C# to pull the version number out of the executable file and use it in the build process:


  <UsingTask
    TaskName="GetAssemblyVersion"
    TaskFactory="CodeTaskFactory"
    AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll" >

    <ParameterGroup>
      <FilePath ParameterType="System.String" Required="true" />
      <VersionNumber ParameterType="System.String" Output="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System.Reflection"/>
      <Code Type="Fragment" Language="cs">
      <![CDATA[
        this.VersionNumber = AssemblyName.GetAssemblyName(this.FilePath).Version.ToString();
      ]]>
      </Code>
    </Task>
  </UsingTask>

 

This task will dig out the version number of the assembly and then return the version as a string in an output parameter. To use this in the script we call the task as follows:


   <GetAssemblyVersion FilePath="$(PathToBinaries)\$(MainExecutableName).exe">
     <Output PropertyName="AssemblyVersion" TaskParameter="VersionNumber"/>
   </GetAssemblyVersion>


This gives us a property that we can use later in the script. Next we want to copy across the app.config for our package, again using the environment parameter to determine the path of the app.config file to use.

 

<Copy SourceFiles="..\envs\$(Environment)\app.config" DestinationFiles="$(PathToBinaries)\$(MainExecutableName).exe.config" />


And now to the meat of the script: generating the manifests. The GenerateApplicationManifest task is used, funnily enough, to generate the application manifest. The properties and parameters we talked about earlier are used to fill out the attributes of this task. The AssemblyName is really the name of the package rather than the actual name of the main executable, and making sure this is unique for each environment is important in ensuring the different packages can be installed and run on the one machine. If the application identity is the same for different packages you'll get conflicts when you try to run them on the same PC.


<GenerateApplicationManifest
            AssemblyName="$(ApplicationName).$(Environment)"
            AssemblyVersion="$(AssemblyVersion)"
            ConfigFile="@(ConfigFile)"
            Dependencies="@(Dependency)"
            Description="$(Description)"
            EntryPoint="@(EntryPoint)"
            IconFile="@(IconFile)"
            InputManifest="@(BaseManifest)"
            OutputManifest="$(PathToBinaries)\$(MainExecutableName).exe.manifest">
            <Output
                ItemName="ApplicationManifest"
                TaskParameter="OutputManifest"/>
            </GenerateApplicationManifest>



Mage is then used to sign this manifest with a signing certificate. This gives us some security that once a package is created, its contents cannot be changed without corrupting it and rendering the package invalid, unless it is signed again.


<Exec Command="$(MagePath) -Sign @(ApplicationManifest) -cf $(Certificate)"/>


A deployment manifest is created to facilitate the ClickOnce install and, just like the application manifest, the assembly name should be unique across deployment manifests on different environments to ensure there's no conflict at install time.


     <GenerateDeploymentManifest
            AssemblyName="$(ApplicationName).$(Environment)"
            AssemblyVersion="$(PackageVersion)"
            DeploymentUrl="$(DeployToUrl)$(MainExecutableName).application"
            Description="$(Description)"
            EntryPoint="@(ApplicationManifest)"
            Install="true"
            TargetFrameworkMoniker=".NETFramework,Version=v4.0"
            OutputManifest="$(PathToBinaries)\$(MainExecutableName).application"
            Product="$(ApplicationName).$(Environment)"
            Publisher="$(TeamName)"
            SupportUrl="$(SupportUrl)"
            UpdateEnabled="true"
            UpdateMode="Foreground">
            <Output
                ItemName="DeployManifest"
                TaskParameter="OutputManifest"/>
        </GenerateDeploymentManifest>


Once the deployment manifest is created we add a few tweaks: we force the install of the latest package using the minimum version switch (-mv), and we set the start menu folder the application will appear under using the -Publisher switch.


    <Exec Command="$(MagePath) -u @(DeployManifest) -Publisher $(TeamName).$(Environment) -v $(PackageVersion) -mv $(PackageVersion)" />



We then sign the deployment manifest and the package is complete.


  <Exec Command="$(MagePath) -Sign @(DeployManifest) -cf $(Certificate)"/>


The package is tightly coupled to the deployment URL specified in the property file, so you will only be able to install it from that location. The final step then is to transfer the contents of the PathToBinaries folder to the location you specified in the DeployToUrl property. The ClickOnce package is then installed by accessing the deployment manifest from this location in Internet Explorer (there is a Firefox plugin that allows ClickOnce apps to work in that browser too). For the example above you would install the application from here:

http://MyWebServer/MyAppFolder/MyApp.MyRegion.application
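
How the files get there is up to your build process; as one option, here's a minimal sketch using MSBuild's Copy task, assuming the folder behind DeployToUrl is also reachable as a file share (\\MyWebServer\MyAppFolder below is a placeholder for that share):

   <Target Name="Publish" DependsOnTargets="Package">
     <ItemGroup>
       <PackageFiles Include="$(PathToBinaries)\**\*" />
     </ItemGroup>
     <!-- \\MyWebServer\MyAppFolder is a stand-in for the share behind $(DeployToUrl) -->
     <Copy SourceFiles="@(PackageFiles)" DestinationFolder="\\MyWebServer\MyAppFolder\%(RecursiveDir)" />
   </Target>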


Full Script:


<Project ToolsVersion="4.0" DefaultTargets="Package" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <!--
    How to call it: msbuild package.proj /p:Environment=dev;PathToBinaries=C:\temp
  -->

  <!-- These are the command line parameters required to run the package -->
  <PropertyGroup>
    <PathToBinaries></PathToBinaries>
    <Environment></Environment>
  </PropertyGroup>

  <!-- This import statement brings in the environment specific properties required for the packaging -->
  <Import Project="..\envs\$(Environment)\properties.target" />

  <!--
    This ItemGroup element builds up the sets of files that are listed in the application manifest
    for the deployment. Note that an icon file is expected in the PathToBinaries location called
    ApplicationName.ico where ApplicationName is a property in the properties.target file.
  -->
  <ItemGroup>
    <EntryPoint Include="$(PathToBinaries)\$(MainExecutableName).exe" />
    <Dependency Include="$(PathToBinaries)\*.dll">
      <AssemblyType>Managed</AssemblyType>
      <DependencyType>Install</DependencyType>
    </Dependency>
    <IconFile Include="$(PathToBinaries)\$(ApplicationName).ico"/>
    <ConfigFile Include="$(PathToBinaries)\$(MainExecutableName).exe.config">
      <TargetPath>$(MainExecutableName).exe.config</TargetPath>
    </ConfigFile>
  </ItemGroup>

  <Target Name="Package">
    <Message Text="******************************************" />
    <Message Text="** Target Environment: $(Environment)" />
    <Message Text="** Binaries Location: $(PathToBinaries)" />
    <Message Text="******************************************" />

    <!-- Get the version of the application's executable -->
    <GetAssemblyVersion FilePath="$(PathToBinaries)\$(MainExecutableName).exe">
      <Output PropertyName="AssemblyVersion" TaskParameter="VersionNumber"/>
    </GetAssemblyVersion>
    <Message Text="$(AssemblyVersion)" />

    <!-- Copy across the app.config file for this environment -->
    <Message Text="Getting App.Config File: ..\envs\$(Environment)\app.config" />
    <Copy SourceFiles="..\envs\$(Environment)\app.config" DestinationFiles="$(PathToBinaries)\$(MainExecutableName).exe.config" />

    <!--
      An application manifest is now generated called ApplicationName.Environment
      e.g. MyApp.Dev. It's important to have a unique application identity for each
      environment so that you can run dev/sit/uat and prod versions of the same app
      on the same machine without conflicts.
    -->
    <Message Text="Creating Application Manifest: $(PathToBinaries)\$(MainExecutableName).exe.manifest" />
    <GenerateApplicationManifest
        AssemblyName="$(ApplicationName).$(Environment)"
        AssemblyVersion="$(AssemblyVersion)"
        ConfigFile="@(ConfigFile)"
        Dependencies="@(Dependency)"
        Description="$(Description)"
        EntryPoint="@(EntryPoint)"
        IconFile="@(IconFile)"
        InputManifest="@(BaseManifest)"
        OutputManifest="$(PathToBinaries)\$(MainExecutableName).exe.manifest">
        <Output
            ItemName="ApplicationManifest"
            TaskParameter="OutputManifest"/>
    </GenerateApplicationManifest>

    <Message Text="Signing Application Manifest: @(ApplicationManifest)" />

    <!--
      The application manifest is signed at this point with the certificate specified
      in the imported properties file
    -->
    <Exec Command="$(MagePath) -Sign @(ApplicationManifest) -cf $(Certificate)"/>

    <!--
      A deployment manifest is now generated called ApplicationName.Environment
      e.g. MyApp.Dev. Again it's important to have a unique application identity for each
      environment so that you can run dev/sit/uat and prod versions of the same app
      on the same machine without conflicts.
    -->
    <Message Text="Creating Deployment Manifest: $(PathToBinaries)\$(MainExecutableName).application" />
    <GenerateDeploymentManifest
        AssemblyName="$(ApplicationName).$(Environment)"
        AssemblyVersion="$(PackageVersion)"
        DeploymentUrl="$(DeployToUrl)$(MainExecutableName).application"
        Description="$(Description)"
        EntryPoint="@(ApplicationManifest)"
        Install="true"
        TargetFrameworkMoniker=".NETFramework,Version=v4.0"
        OutputManifest="$(PathToBinaries)\$(MainExecutableName).application"
        Product="$(ApplicationName).$(Environment)"
        Publisher="$(TeamName)"
        SupportUrl="$(SupportUrl)"
        UpdateEnabled="true"
        UpdateMode="Foreground">
        <Output
            ItemName="DeployManifest"
            TaskParameter="OutputManifest"/>
    </GenerateDeploymentManifest>

    <Message Text="Setting Package Version: $(PackageVersion)" />

    <!--
      We add versions to the package. Adding a matching minimum version forces
      the user's machine to take the update whenever we build a new package.
      Touching the deployment manifest resets the Publisher property of the
      deployment manifest so we need to add this again so that the start menu
      folder the app is installed under is something meaningful.
    -->
    <Exec Command="$(MagePath) -u @(DeployManifest) -Publisher $(TeamName).$(Environment) -v $(PackageVersion) -mv $(PackageVersion)" />

    <Message Text="Signing Deployment Manifest: @(DeployManifest)" />
    <Exec Command="$(MagePath) -Sign @(DeployManifest) -cf $(Certificate)"/>

  </Target>

  <!--
    The task below uses inline C# to get the assembly version of the executable file.
  -->
  <UsingTask
    TaskName="GetAssemblyVersion"
    TaskFactory="CodeTaskFactory"
    AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll" >

    <ParameterGroup>
      <FilePath ParameterType="System.String" Required="true" />
      <VersionNumber ParameterType="System.String" Output="true" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System.Reflection"/>
      <Code Type="Fragment" Language="cs">
      <![CDATA[
        this.VersionNumber = AssemblyName.GetAssemblyName(this.FilePath).Version.ToString();
      ]]>
      </Code>
    </Task>
  </UsingTask>

</Project>

Separating Unit Tests and Integration Tests in your .NET build process

posted 28 Sep 2011 13:56 by Michael Barry   [ updated 8 Dec 2011 12:46 ]

I had a look at different ways we can separate out tests within a .NET project so that they can be run by different Jenkins/CruiseControl/<insert your CI server of choice> jobs.

The two easiest options are:

·         Test Containers
·         Categories

I’ve added explanations of each below with some extracts from the Microsoft documentation. Both can easily be adopted into an automated build process.

Test Containers
The test container is the assembly file containing the tests. Our current build runs the tests using MSTest and the /testcontainer: switch to specify the assembly to run the tests against. Using this method we could just put the integration tests into a separate assembly and then call this using the /testcontainer switch when we want to run it (e.g. during the nightly build but not during the CI build). There's also a /testmetadata switch that can be used instead if you want to run tests in multiple test containers.
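
In practice the two jobs would then just differ in which container they run (the assembly names here are hypothetical):

rem CI build: run just the unit tests
mstest /testcontainer:MyProject.UnitTests.dll

rem Nightly build: run the integration tests too
mstest /testcontainer:MyProject.IntegrationTests.dll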

/testcontainer:[ file name ]
The test container is a file that contains the tests you want to run. For example, for ordered tests, the test container is the .orderedtest file that defines the ordered test. For unit tests, it is the assembly built from the test project that contains the unit test source files.
/testmetadata:[ file name ]
You can use the /testmetadata option to run tests in multiple test containers.
The test metadata file is created for your solution when you create test lists using the Test List Editor window. This file contains information about all the tests listed in the Test List Editor window. These are all the tests that exist in all test projects in your solution.
The test metadata file is an XML file that is created in the solution folder. This file is shown in Solution Explorer under the Solution Items node. A test metadata file has the extension .vsmdi, and is associated with the Test List Editor window. That is, if you double-click a .vsmdi file in Windows Explorer, the file opens in Visual Studio and its contents are displayed in the Test List Editor window. All the tests in a solution's test projects are displayed in the Test List Editor window.
You can change the test metadata file only by making changes that are reflected in the Test List Editor window, such as creating or deleting tests, or changing a test's properties.

Categories
We can add attributes above the test methods to distinguish what categories they belong to. The advantage here is that all test classes can be in the same assembly file, and a test can belong to multiple categories, making it easier to define the test runs for the different build processes.

Sample Attributes:

[TestCategory("Nightly"), TestCategory("Unit"), TestCategory("Integration"), TestMethod()]
public void MyTest()
{
}

The tests for a category can then be run by MSTest using the category switch (/category).

You can only use the /category option one time per command line, but you can specify multiple test categories with the test category filter. The test category filter consists of one or more test category names separated by the logical operators '&', '|', '!', '&!'. The logical operators '&' and '|' cannot be used together to create a test category filter.
For Example:
·         /category:group1 runs tests in the test category "group1".
·         /category:"group1&group2" runs tests that are in both test categories "group1" and "group2." Tests that are only in one of the specified test categories will not be run.
·         /category:"group1|group2" runs tests that are in test category "group1" or "group2". Tests that are in both test categories will also be run.
·         /category:"group1&!group2" runs tests from the test category "group1" that are not in the test category "group2." A test that is in both test category "group1" and "group2" will not be run.
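
So for the sample attributes above, the CI and nightly jobs could filter the same assembly differently (MyProject.Tests.dll is a hypothetical name):

rem CI build: anything tagged Unit
mstest /testcontainer:MyProject.Tests.dll /category:Unit

rem Nightly build: anything tagged Nightly
mstest /testcontainer:MyProject.Tests.dll /category:Nightly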

Jenkins for .NET - Part 2 - Continuous Integration Job

posted 15 Jul 2011 13:38 by Michael Barry   [ updated 15 Jul 2011 13:43 ]

Following on from Jenkins for .NET - Part 1, where I described the tools to be used in our approach, this post details how those tools are implemented to create the following:
  1. The CI build, which attempts to run all unit tests and compile the application every time there's a check-in of the source code
  2. The Nightly build, which will be, as the name suggests, a nightly build that runs heavier sets of unit and integration tests along with looking for any FxCop violations before compiling, packaging and deploying the application.

CI Build
First off you need to get Jenkins up and running on your Windows build machine. The Jenkins guys have provided a nice wiki post about it so there's no need for me to go through that here. To ensure you can use MSTest successfully within your build process you may need to install Visual Studio 2010 on the machine you're using for Jenkins. This became a necessary evil for us in order to get up and running with MSTest in the build process. When I come up with a workaround I'll update this post accordingly.

Once your Jenkins installation is up and running, go to the 'Manage Jenkins' section and click the 'Manage Plugins' link. On the Available tab click the checkboxes for the MSBuild Plugin and the MSTest Plugin and then click the install button at the bottom right of the page. This allows us to use MSBuild and MSTest easily within our CI job. Once the plugins are installed, on the Manage Jenkins page click the 'Configure System' link and scroll down to the MSBuild builder section that should now be available. Click the Add button and then configure MSBuild as needed for your build, e.g. to set up MSBuild for the .NET 4.0 framework so it can be used in our build job, Name would be msbuild (4.0) and Path to msbuild.exe is C:\WINDOWS\Microsoft.NET\Framework\v4.0.30319\msbuild.exe

Once these plugins are installed go back to the 'Manage Jenkins' page and click on the 'New Job' link on the left-hand menu. On the subsequent screen enter a name for the job, e.g. MyProject CI Build, select the 'Build a free-style software project' option and then click OK. This will bring you to the main configuration page for setting up our build job.

In the Source Code Management section select the 'Subversion' option, which then displays the controls for configuring how the job behaves with Subversion. In the Repository URL textbox put in (you guessed it) the URL of the Subversion repo for your project. Leave the local module directory as is, with just a . in it. In the Check-out strategy select 'Use 'svn update' as much as possible' because this is our CI build and is going to be running constantly, so we don't want it doing fresh checkouts all the time. Repository browser can remain as (Auto).

In the Build Triggers section tick the 'Poll SCM' checkbox and then in the schedule put in the expression that determines how often this polling should be done. For example, to poll every 10 minutes enter */10 * * * *

Add MSBuild Task
We're now ready to add our build tasks. In the 'Build' section click Add Build Step and select 'Build a Visual Studio project or solution using MSBuild'. A drop-down and two textboxes are now available to configure the build. Select the MSBuild Version with the name corresponding to what we set up earlier, e.g. msbuild (4.0), and then in MSBuild file put in the name of the project (.csproj) or solution (.sln) file that we want to build. This is based on the relative path to where the project is checked out to. For example, if the solution file is in the root of the repo that we're checking out then you would just put in myproject.sln. If it's a folder or two down then it would be more like myprojectfolder/myproject.sln. The Command line arguments box allows us to add any switches onto the build command that we might want, e.g. /nologo /property:Configuration=Release

Add MSTest task
Unfortunately MSTest isn't as nicely integrated as MSBuild, so getting it to run our unit tests is a little bit trickier. Click the 'Add build step' button and select 'Execute Windows batch command'. In the large text area provided input the following:

"%VS100COMNTOOLS%\..\IDE\mstest.exe" /resultsfile:"%WORKSPACE%\myprojectfolder\bin\Release\MyTests.Results.xml" /testcontainer:"%WORKSPACE%\myprojectfolder\bin\Release\\bin\Release\MyUnitTests.dll" /nologo

This will run MSTest from the Visual Studio folder, create an XML output file of the test results called MyTests.Results.xml in the bin\Release folder associated with the project, and specify which assemblies in the project contain the unit tests to run. I found that by adding the results XML file to the bin\Release folder I could assume it was always being cleared out for every build, and I wasn't getting 'file already exists' errors every time it built. As you might guess, %WORKSPACE% is Jenkins' environment variable for the current job's local workspace, i.e. it corresponds to the . in the local module directory field in Source Code Management.

Post-build Actions
Once you've got the above tasks created, in the Post-build Actions section click the Publish MSTest test result report checkbox and in the Test report TRX file textbox put in .\myprojectfolder\bin\Release\MyTests.Results.xml or whatever corresponds to the output location and file of our MSTest command. This gives you a nice graphical output on the main page of your job.

And that's it: a Jenkins job to build and test a .NET project on every check-in of your project.

Jenkins for .NET Part 1 - Tooling Up

posted 5 Jul 2011 17:30 by Michael Barry   [ updated 6 Jul 2011 16:54 ]

We're building a new .NET 4.0 client application and I want to use a continuous integration server for nightly builds, continuous integration and release deployment. Although this project focuses on a .NET client, there are Java backend components, and as they are already using Jenkins and it's the more widely used CI server within this organisation, we'll use Jenkins for our purposes too.



What is Continuous Integration (CI)?
On software development projects consisting of more than one developer there will frequently be times when the developers need to integrate their work to ensure the project still functions correctly. As more developers are involved, stopping to integrate can hamper progress, but leaving the integration process until much later can result in errors or problems not being identified until late in the development lifecycle.

Continuous Integration is a software development practice designed to address this, where the developers integrate their work frequently and it's verified by an automated build process to detect any integration errors as quickly as possible. Unit tests and integration tests are usually completed as part of this process. This approach contributes to significantly reduced integration problems and earlier detection of any potential issues.

What are the requirements?
For my build process I want to be able to...

  • Regularly check the source code repository for any code changes
  • Run unit tests
  • Run QA tests
  • Ensure the application can be built successfully
  • Deploy a nightly build of the application
And the tools I've chosen to complete each of these tasks:

  • Regularly check the source code repository for any code changes
    Subversion is the organisation's standard for source code control, so anywhere in these articles that I mention source code repositories I mean Subversion repos. Jenkins provides built-in functionality for checking repos for changes and I'll explain how in Part 2.
  • Run unit tests
    When the prototype of our project was created the initial developer used MSTest for the unit tests and there's no real incentive to switch to another unit testing framework. NUnit is the other contender, but at this point both frameworks are mature enough to be considered equal, so just pick one that works and use it. No point wasting time drawing a big massive comparison matrix for what is essentially a minor decision. MSTest it is then!
  • Run QA tests
    Sticking with our 'All things Microsoft' approach I'm going to use FxCop for analysis of the code and practices and use that to enforce any rules on the code.
  • Ensure the application can be built successfully
    MSBuild will be at the heart of the build and compile of the application. Custom MSBuild scripts can (and most probably will) be written to perform any cheeky little bits and pieces that are outside the norm of a basic vanilla application.
  • Deploy a nightly build of the application
    Microsoft's ClickOnce deployment process will allow us to package our compiled application for easy deployment every night. The minimum required version feature of the ClickOnce deployment allows me to force the latest version of the development build onto a user so they always have the latest client installed. This becomes important in tracking the progress within a build cycle.

Build Jobs
There'll initially be two build jobs for my process:
  1. The CI build, which attempts to run all unit tests and compile the application every time there's a check-in of the source code
  2. The Nightly build, which will be, as the name suggests, a nightly build that runs heavier sets of unit and integration tests along with looking for any FxCop violations before compiling, packaging and deploying the application.

In Part 2 I'll cover how each of the items above are implemented.

Creating your own MSBuild Tasks

posted 5 Jul 2011 17:28 by Michael Barry   [ updated 6 Jul 2011 16:57 ]

I wanted to add a few specific build tasks onto our build server outside of the normal functionality available with the standard MSBuild tasks, so I decided to have a play at creating my own. Turns out it's very easy.

What I wanted was a simple task that I could call in an MSBuild script to increment the version number of the current build. This would be used by our automated build server when doing releases on one of the Subversion branches.

In Visual Studio I created a new Class Library project and referenced the Microsoft.Build.Framework.dll and Microsoft.Build.Utilities.v4.0.dll assemblies. The project consists of one class named after what you want your task to be called. This class subclasses the Microsoft.Build.Utilities.Task class and has one public method, Execute, which is what gets run when you call your task from an MSBuild script. I wanted to pass the name of the file that contains the version into the task so that it gets incremented and then saved. This is done by creating a property with an appropriate getter and setter (see the FilePath string in the code below).

Once the task is run, the current assembly version is extracted from the file using regular expressions and some string parsing (a bit messy in its current form) and then the rightmost number (we're using that as our build number) is incremented by one. The file is then saved and the task completes. Full code is below:

using System;
using System.IO;
using System.Text.RegularExpressions;
using Microsoft.Build.Utilities;

namespace MyTasks.Build
{
  public class UpVersion : Task
  {
    public string FilePath { get; set; }
       
    public override bool Execute()
    {
        try
        {
            StreamReader reader = File.OpenText(FilePath);
            string contents = reader.ReadToEnd();
            reader.Close();

            MatchCollection match = Regex.Matches(contents, @"\[assembly: AssemblyVersion\("".*""\)\]",
                                                    RegexOptions.IgnoreCase);

            string versionNumber = match[0].Value;

            int startpoint = versionNumber.IndexOf("\"");
            int firstdot = versionNumber.IndexOf(".");
            int seconddot = versionNumber.IndexOf(".",firstdot+1);
            int thirddot = versionNumber.LastIndexOf(".");
            int endpoint = versionNumber.LastIndexOf("\")");


            string majVersion = versionNumber.Substring(startpoint + 1, firstdot - (startpoint + 1));
            string minVersion = versionNumber.Substring(firstdot + 1, seconddot - (firstdot + 1));
            string relVersion = versionNumber.Substring(seconddot + 1, thirddot - (seconddot+1));
            string buildNumber = versionNumber.Substring(thirddot + 1, endpoint - (thirddot+1));

            buildNumber = (Convert.ToInt32(buildNumber) + 1).ToString();
            string version = String.Format("{0}.{1}.{2}.{3}", majVersion, minVersion, relVersion, buildNumber);

            string replaceWithText = String.Format("[assembly: AssemblyVersion(\"{0}\")]", version);
          
            string newText = Regex.Replace(contents, @"\[assembly: AssemblyVersion\("".*""\)\]", replaceWithText);
            var writer = new StreamWriter(FilePath, false);
            writer.Write(newText);
            writer.Close();
           
            return true;
        }
        catch (Exception ex)
        {
            Console.Out.Write(ex.Message);
            return false;
        }
    }
  }
}
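
As an aside, the string slicing in the middle of Execute could be replaced with a capturing regex; a sketch of the same increment, reusing the contents variable from above (just a cleaner shape, not what's running on our build server):

            // Capture the four version parts directly instead of slicing strings
            Match m = Regex.Match(contents,
                @"\[assembly: AssemblyVersion\(""(\d+)\.(\d+)\.(\d+)\.(\d+)""\)\]");
            int buildNo = Convert.ToInt32(m.Groups[4].Value) + 1;
            string version = String.Format("{0}.{1}.{2}.{3}",
                m.Groups[1].Value, m.Groups[2].Value, m.Groups[3].Value, buildNo);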



Save the class and then compile it to an appropriately named assembly (MyTasks.Build.dll in this case). To use the task in an MSBuild script simply add a 'UsingTask' element to reference the required assembly and task name and then just call it. See below for an example:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Increment" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">

  <UsingTask AssemblyFile="MyTasks.Build.dll" TaskName="UpVersion"/>

  <Target Name="Increment">
    <UpVersion FilePath="Version.cs"/>
  </Target>

</Project>

Latency

posted 5 Jul 2011 17:24 by Michael Barry   [ updated 6 Jul 2011 16:58 ]

In a recent meeting there was a discussion around latency within trading systems, and the following analogy was used, which I thought quite cool:

Two men are in Africa when they see a cheetah in the distance charging straight for them. The first man starts to run, but the other stops to put on his trainers.

"What are you doing" cried the first man, "You'll never outrun a Cheetah even with your trainers on"

"I don't need to outrun the Cheetah" replied the first man, " I just need to outrun you!"

How does this relate to trading systems? Well, why invest tens (hundreds?) of millions of dollars ensuring a latency of 1 millisecond between submitting a trade and it appearing on your blotter when your nearest competitor can only do it in 50 milliseconds? To win, you just need to be faster than the other guy.
