Synchronization With Interval Tree Clocks

Sync Problems

I've been working with mobile devices for a long time, and inevitably the most painful piece of the development process is getting data to be consistent across all replicas.
For years, I’ve been trying to find a consistent means of taking care of this in a way which is OS and repository agnostic for all replicas. It isn’t 100% clear to me why this isn’t a solved problem, but I have a feeling there are several contributing factors:

  1. Internecine conflict between all relevant parties.
  2. Rapidly changing means and standards for data storage and transmission.
  3. Figuring out causal relationships between data on different replicas is really, really difficult.

It seems to me that numbers 1 and 2 have become somewhat better lately because of ubiquitous JavaScript.  I'm not saying it's trivial, but you can make an app that works just about everywhere now if you write it in HTML and JavaScript.

When dealing with data, browser-based apps are still likely to struggle with large data sets and long periods without connectivity, but it might be worth exploring the possibilities again.

To this end, I've been looking at solving the causality problem with Interval Tree Clocks (ITCs) lately.  They are interesting in the way that licking battery terminals is interesting.  They are painfully tedious, but if you can stick with it, you may eventually power a solution (or be brain damaged).

For a long time, the standard way to track causal relationships has been vector clocks, but they have well-documented limitations around space usage: a vector clock keeps an entry for every replica that has ever participated, which hurts when replicas are created and retired dynamically.  Interval Tree Clocks let replicas fork and join their identities, so those limitations do not apply.

Also, you can make pretty diagrams with ITCs.

ITC Node Diagram

So I've been trying to rewrite the ITC algorithm in C#.  This may seem ironic since I just told you that JavaScript seems to be one solution to some of the industry's synchronization problems, but the reality is, I'm much better at exploring ideas with type-safe code.

I’ve gotten most of the C# working, and I’ve created tests.  My intent is to use those to safely port the C# over to JavaScript.

You can check the code out here.
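
To give a flavor of the data structures involved (and since the eventual target is JavaScript anyway), here is a rough sketch of the ITC id tree and its fork-time split rule.  To be clear, this is just an illustration written from the rules in the original paper, not code lifted from my repository, and the names are made up.

// An ITC id is either 0, 1, or a pair { left: id, right: id }.
function node(left, right) {
    return { left: left, right: right };
}

// Split an id into two disjoint ids; this is what happens when a replica forks.
function split(id) {
    if (id === 0) {
        return [0, 0];                                    // split(0) = (0, 0)
    }
    if (id === 1) {
        return [node(1, 0), node(0, 1)];                  // split(1) = ((1,0), (0,1))
    }
    if (id.left === 0) {
        var r = split(id.right);                          // split((0,i)) = ((0,i1), (0,i2))
        return [node(0, r[0]), node(0, r[1])];
    }
    if (id.right === 0) {
        var l = split(id.left);                           // split((i,0)) = ((i1,0), (i2,0))
        return [node(l[0], 0), node(l[1], 0)];
    }
    return [node(id.left, 0), node(0, id.right)];         // split((i1,i2)) = ((i1,0), (0,i2))
}

// Example: fork the seed id twice.
var halves = split(1);            // [(1,0), (0,1)]
var quarters = split(halves[0]);  // [((1,0),0), ((0,1),0)]

The event trees and the fork/event/join operations layer on top of this, but the id-splitting rule is the part that lets replicas be created and retired without the clock growing forever.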

If you prefer Java, Erlang, or C, there is a repository from the original designers of the algorithm here.  A word of warning: if you try to use that repository to follow along with my code, it will be rough going.  Conceptually the two are somewhat similar, but my implementation is almost entirely different.

Getting Started with RavenDB Using Pure JavaScript

You might ask: "Why in the world would you create a pure JS app with RavenDB?"  I'm so glad we're interested in the same things!  I've been toying around for a little while with CouchApp, which is a way to host applications completely within a CouchDB NoSQL database.  The idea is to greatly speed up development and improve performance for certain application use cases by avoiding (most of) a server-side middle layer.

Use Case

Let's say you are building an application for internal use.  Assuming you are responsible on the client side, do you really need server-side data validation?  I suppose you could find a few arguments in favor of it, but do they actually outweigh the cost of implementing a middle tier for this use case?  Really??

I’m going to pretend like you said, “No, Dave – by golly, you’re right.”

In looking at CouchApp, the first problem I ran into is that it's hard.  Like, really hard.  I mean these guys are probably all into modded-out Linux distros and neckbeards and shit.  Which is cool, but the problem is that they are NOT into creating canonical, orderly, convention-based documentation/tests/examples that explain how the hell to do anything.  They are too "relaxed" I guess.  Instead you can do whatever you want, man.  For instance, they have all these different ways to do HTML rendering.  Half of them are outdated, and the other half are poorly demonstrated.  You get the feeling they are too smart and excited to let stuff mature for 2 months before moving on to the next shiny byte.

(I’ll admit that last paragraph is probably unfair, but give CouchApp a few hours and see if you don’t feel the same way.)

The other problem I ran into is the sneaking suspicion that the whole thing is dead.  If you look at all the docs, posts and hubbub, it seems to center pretty tightly around 2010 and then tail off after that.  I tried to get some people from the community to give me some feedback about my last post, and all I heard were crickets.

The final straw for me on the whole CouchApp thing was that there is no easy way TO ACCESS THE DATABASE CROSS-DOMAIN.  Are you kidding me?  What the hell is the use of having a database that faces HTTP if you can't actually reach it over HTTP from another origin?  The solution is to install a proxy on your Apache server.  WHAT!?  I'm done.

Quoth the Raven

Enter RavenDB.  If you compare www.RavenDB.net to www.CouchApp.org, it's pretty glaringly obvious who has their shit together and who doesn't.  I can hear my imaginary friend say, "Hey Dave, that's not fair.  CouchApp is like, a side project, dude.  You should be comparing it to http://couchdb.apache.org/."  And my friend would be right.  So go look at http://couchdb.apache.org/ then.  I guess it is better than CouchApp.org ….

And when that same friend then says –

“But Dave, RavenDB Costs Money and Shit”

– if he is truly concerned about RavenDB costing money, he should use CouchDB, MongoDB or Cassandra or some crap like that … (freeloader).  He should have fun with that.  I’m trying to get things done.  But hey … if my buddy really feels software should be free, then he should probably open source his own project.  Then he could use RavenDB for free.

Ok, maybe that's a little harsh.  Maybe I'm being too hard on my outspoken pal.  But you know … the tone that I used was EXACTLY HOW I MEANT IT TO BE.

Brass Tacks

… As in it’s time to get down to them.  How the heck do you get going with RavenDB anyway?  Well the first thing to do is drive your browser over to Mr. Ayende’s shop and get yourself a build.

Ok, on another side note, does this guy Ayende or Oren or Auryn or whatever his name is kick ass or what?  I mean, I know he’s been putting out awesomeness for something like a decade now, but who decides one day that, “Hey, I think I’ll build a NoSQL database by myself.  Oh, and while I’m at it, I’ll make it the best one available on the market.  I’ll actually make it work well, have good documentation, be a (C#) developer’s dream to use, be easily distributable and you know what else?  If somebody sends a message to my mailing list, I’ll respond in less than 5 minutes even if I don’t know them AT ALL.”  Too bad he has a problem with run-on sentences.  Oh wait, that’s me.

Go over to http://ravendb.net/download and pick your poison.  Usually I prefer all things NuGet, but in this case, I didn't want to find out whether or not the server is in there.  I just downloaded the zipped build.  Unfortunately, the most current build I found (960) had a bug with posting new documents using $.ajax.  This struck me as so egregious that I nearly skipped writing all those nice things above about Ayende and stumbled in despair back to Couchappland.  Fortunately, the "unstable" build 2063 works … even if it does have some weird ass shit going on with a system database being the default, and it goes completely paisley if you try to do the advanced database creation stuff …. It is labeled "unstable" after all, but I digress … again.

Once you’ve done the dance of unblocking the zip file and extracting it and all that, you can go to the Server subdirectory and type

raven.server /install

Congratulations.  You now have a running RavenDB service.  Beats the hell out of installing SQL Server doesn’t it?  You might also want to compare this process to that of CouchApp in my last post.

Oh wait, do I need to install a management studio?  No, it’s there already.  Just go to http://localhost:8080 if you don’t believe me.  Oh ok, do I need to install some configuration app?  No, you can just hack the config file.  And while we’re talking about it, why don’t we do some configuration file hacking?

Configuring RavenDB

You will find raven.server.exe.config in that same Server directory.  Open it with your favorite text editor and you will see something like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="Raven/Port" value="8080"/>
    <add key="Raven/DataDir" value="~\Data"/>
    <add key="Raven/AnonymousAccess" value="Get"/>
  </appSettings>
  <runtime>
    <loadFromRemoteSources enabled="true"/>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="Analyzers"/>
    </assemblyBinding>
  </runtime>
</configuration>

Ok, I was actually a little surprised that port 8080 was available on my machine, so I changed that right away.  Also, I don’t want to fiddle with security right now.  Because I’m behind my firewall, I’m going to enable anon access on all interactions, and I’m going to leave Cross Domain Access wide open.  So now I have:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="Raven/Port" value="49589"/>
    <add key="Raven/DataDir" value="~\Data"/>
    <add key="Raven/AnonymousAccess" value="All"/>
    <add key="Raven/AccessControlAllowOrigin" value="*" />
  </appSettings>
  <runtime>
    <loadFromRemoteSources enabled="true"/>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="Analyzers"/>
    </assemblyBinding>
  </runtime>
</configuration>

Restart your service.  It's in the Windows services.msc app, or you can just type Raven.Server /restart from the command line in the Server directory.

Let’s Write Some JavaScript Already

Break open your favorite IDE/text editor, and because they all support NuGet, get yourself the QUnit-MVC package.  Or maybe they don’t, and you can get QUnit at www.qunitjs.com.  It’s hidden away down there at the bottom of the page for some stupid reason.

Now we need a test page.  Create an html file, and put this in it:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>QUnit Test Page</title>
    <link rel="stylesheet" href="qunit.css" type="text/css" />
    <script src="jquery-1.7.1.min.js" type="text/javascript"></script>
    <script src="qunit.js" type="text/javascript"></script>
    <!-- App code goes here -->
    <script src="app.js" type="text/javascript"></script>
    <!-- Unit test code goes here -->
    <script src="appTests.js" type="text/javascript"></script>
</head>
<body>
    <h1 id="qunit-header">Intertwyne QUnit Test</h1>
    <h2 id="qunit-banner"></h2>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
</body>
</html>

In a nutshell, this is what QUnit wants in order to display your test results.  Obviously it might be better to put these scripts into special directories according to whatever conventions to which you subscribe.  If you actually did get QUnit from NuGet, then you'll need to square up your script and CSS URLs to match the Visual Studio conventions (duh).

Ok, now open up a new file called app.js.  In it, you’ll need to put something like this:

story = window.story || {};

story.url = "http://localhost:49589/docs";

story.basicInsert = function (insertData, requestorCallback) {
    $.ajax({
        cache: false,
        type: 'POST',
        url: story.url,
        dataType: 'json',
        contentType: "application/json",
        data: JSON.stringify(insertData),
        success: function (data) {
            requestorCallback(data);
        }
    });
};

story.basicGet = function (collectionAndKey, requestorCallback) {
    $.ajax({
        url: story.url + '/' + collectionAndKey,
        dataType: 'jsonp',
        jsonp: 'jsonp',
        success: function (data, textStatus, jqxhr) {
            requestorCallback(data, textStatus);
        }
    });
};

These are a couple of JavaScript functions to write some JSON in and out of your RavenDB database.  I am using a POST rather than a PUT because I didn't feel like finding a time-sequential UUID generator for my IDs.  RavenDB will do that for me if I POST, sending back the results as JSON.

The GET is requesting the results as JSONP so that my browser doesn’t freak out about cross-domain request results.  If you don’t know what that means, then Google it or ignore it because I took care of it for you.  This blog is already getting epic in length.

The other thing I do for both of these is pass in a callback parameter so our consuming functions can get the asynchronous results.  If you don’t know what a callback is, then consider a different vocation/hobby.
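
To make the callback plumbing concrete, here is a quick sketch of how a page might consume these two functions together.  It just mirrors what the test in the next section does, and the Key property on the POST response is the same one that test relies on.

story.basicInsert({ name: "hello", body: "world" }, function (saved) {
    // RavenDB generated the document id for us; saved.Key holds the new key.
    story.basicGet(saved.Key, function (doc, textStatus) {
        console.log("round-tripped document:", doc, textStatus);
    });
});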

Ok, now onto the tests!!

Open yourself an appTests.js file and put something like this in it:

module("TheRedCircuit's tests for to show the good people", {
    setup: function () {
        // you can do some setup type stuff here
    }
});

test("basicInsert testStory insertsIt", 1, function () {
    stop(1000);
    var insertData = { name: "some title", body: "some test body" };
    story.basicInsert(insertData, function (insertedData) {
        var key = insertedData.Key;
        story.basicGet(key, function (results, textStatus) {
            equal(textStatus, "success");
            start();
        });
    });
});

Ok, so I'm cheating pretty badly here on the unit testing front.  I'm testing two functions at once, but seriously, how would you do it?  The chicken has to come before the omelet, right?

When testing asynchronous functions, you have to tell QUnit to hold its horses while you go off across HTTP land and do your thing.  That's what the stop() call (with its 1000-millisecond timeout) is for.

Then I’ve nested all the calls so that we can only pass the equal function assertion if everything behaves nicely and gives us a “success” result.  Then the start function tells QUnit it can have the reins back.

Summary

I’ve done some whining about how hard CouchApp is.  I’ve verbally abused my imaginary friend.  Then I told you RavenDB is a lot easier because it is.  Then I showed you how brain-dead easy it is to get a RavenDB server going.  Lastly I did some POST and GET data access using jQuery.  Oh, and I showed you how to do some JavaScript unit testing with QUnit.

Because I know you are just falling all over yourself to know more, I’ll probably post a more complete version of this application next time, exploring RavenDB’s HTTP API some more … kind of like I did with CouchApp.

Building a CouchApp using Node.js on Windows

In this post, I'd like to help out Windows users who wish to create a Node.js application on a Windows box and are interested in an easy alternative to traditional web server hosting.

I love relational database applications, and have been writing them for (literally) decades now.  Recently I've been trying to explore what NoSQL has to offer, and I discovered an interesting use for it: hosting.

CouchDB offers a capability called CouchApp which allows your Couch to host data and act as a web server simultaneously.  Evidently, this scales surprisingly well, and because of CouchDB's built-in replication capabilities, it offers a pretty flexible solution in a dead-simple package.

Quite a few of the people who pioneer and describe this tech are UNIX/Mac users.  I don’t have a UNIX or Mac development machine set up right now, so I thought I’d give it a try on Windows.

Most of the information in this post was adapted from Max Ogden’s video post.

You'll need the following installs:

  • Node.js (which includes NPM)
  • CouchDB

You can use NPM to install CouchApp.

Create a directory for your application.  I’ve put mine in C:\Temp\MomsTattoos.

If you've installed Node.js in the normal place, you should be able to open up a command prompt in your new directory and type something like this to get CouchApp installed:

%PROGRAMFILES%\nodejs\npm install couchapp

My Node.js is in the system path, so this is what I did:

[Screenshot: running npm install couchapp]

Take a look around

You can see your new CouchApp options by typing: node_modules\.bin\couchapp

[Screenshot: couchapp command options]

The boiler option creates a generic version of a CouchApp, but there may be a small problem when you try it.

[Screenshot: error from the boiler command]

This can be fixed by opening the main.js file in node_modules\.bin\couchapp and editing line 2 from

, sys = require('sys')

to

, sys = require('util')

[Screenshot: main.js after the edit]

If you attempt the boiler command again you will get:

[Screenshot: the boiler command reporting that the app already exists]

This is because the command actually did generate the app the first time it was used (despite the error).  Your project should now look something like this:

[Screenshot: the generated project directory tree]

Now you can open up the starting point of your application, app.js.  This file contains code similar to the routing constructs in ASP.NET MVC (if you are familiar with that).  Go ahead and change the _id property in the ddoc variable to match your application.  Here's what mine looks like:

var couchapp = require('couchapp')
  , path = require('path')
  ;

ddoc =
  { _id:'_design/MomsTattoos'
  , rewrites :
    [ {from:"/", to:'index.html'}
    , {from:"/api", to:'../../'}
    , {from:"/api/*", to:'../../*'}
    , {from:"/*", to:'*'}
    ]
  }
  ;

ddoc.views = {};

ddoc.validate_doc_update = function (newDoc, oldDoc, userCtx) {
  if (newDoc._deleted === true && userCtx.roles.indexOf('_admin') === -1) {
    throw "Only admin can delete documents on this database.";
  }
}

couchapp.loadAttachments(ddoc, path.join(__dirname, 'attachments'));

module.exports = ddoc;

Open up index.html and alter it slightly to show some content in the body.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
    <meta name="keywords" content="" />
    <meta name="description" content="" />

    <link rel="shortcut icon" type="image/x-icon" href="favicon.ico" />
    <link rel="shortcut icon" type="image/png" href="favicon.png" />

    <title></title>
    <link href="layout.css" rel="stylesheet" type="text/css" />
    <script language="javascript" type="text/javascript" src="jquery-1.4.4.min.js"></script>
    <script language="javascript" type="text/javascript" src="sammy/sammy.js"></script>
    <script language="javascript" type="text/javascript" src="site.js"></script>
  </head>

  <body>
    <h1>Hello World</h1>
  </body>
</html>

You will need a database in the couch to store the application.  Open a browser and go to

http://127.0.0.1:5984/_utils/

Create a database and name it momstattoos (or whatever your app is called).

[Screenshot: creating the database in Futon]

Now save yourself some hassle by copying the app files (app.js and the attachments folder) one level down (to the first MomsTattoos directory) and deleting the inner MomsTattoos directory.

Now your directory tree should look more like this:

[Screenshot: directory tree after combining]

Deploy your application

From the MomsTattoos directory, use this command:

node_modules\.bin\couchapp push %CD%\app.js http://localhost:5984/momstattoos

[Screenshot: couchapp push output]

Now you're hosting your first CouchApp. To check it out, go to http://127.0.0.1:5984/momstattoos/_design/MomsTattoos/index.html.

In Summary

In this post, I pulled together the basics for building a hello-world CouchApp on a Windows box.  In my next post, I'll try to get a more complete application running, with the goal of accessing NoSQL data from a more complex client application.

TFS Build With Octopus Deploy (Part 2)

In my previous post, I talked about Paul Stovell's Octopus tool.  In this one I'll just add a little bit of code and instruction for those of you using Team Foundation Server for your builds.

I'll begin by saying that there are probably better ways to do almost anything I show here.  The intent is to give you a starting point if you would like to use Octopus Deploy in conjunction with TFS.  I know I'm not the Iron Chef of build coding.

We've designed our environment to run QA deploys, acceptance tests and functional tests nightly.  To that end, we have two TFS builds: the continuous integration build, using the DefaultTemplate.xaml, and the nightly build derived from the LabDefaultTemplate.xaml.  I won't go into great detail about how to set up your build environment using TFS.  The subject is covered much better in other places.  If you are a book kind of person, I like Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build, Second Edition (despite the ridiculously long title).  If you prefer videos, the MSDN "Using and Managing Team Foundation Server" series is actually a pretty good resource.

There is nothing special about our CI build, but our lab build has some nice little additions.  I’ve added a sequence to the workflow that looks like this:

 

<Sequence DisplayName="Deploy Using Octopus" sap:VirtualizedContainerService.HintSize="860,1441">
  <sap:WorkflowViewStateService.ViewState>
    <scg:Dictionary x:TypeArguments="x:String, x:Object">
      <x:Boolean x:Key="IsExpanded">True</x:Boolean>
    </scg:Dictionary>
  </sap:WorkflowViewStateService.ViewState>
  <sba:CreateNuget DisplayName="Create Nuget for Database" sap:VirtualizedContainerService.HintSize="464,22" NugetFailed="[NugetFailed]" NuspecPath="[PrimaryDatabaseNuspecPath]" OutputDirectory="[NugetRepositoryDirectory]" TimeoutMilliseconds="[Timeout]" VerboseLogging="False" />
  <sba:CreateNuget DisplayName="Create Nuget for Site" sap:VirtualizedContainerService.HintSize="464,22" NugetFailed="[NugetFailed]" NuspecPath="[PrimarySiteNuspecPath]" OutputDirectory="[NugetRepositoryDirectory]" TimeoutMilliseconds="[Timeout]" VerboseLogging="False" />
  <sba:CreateNuget DisplayName="Create Nuget for MTA Scheduling Service" sap:VirtualizedContainerService.HintSize="464,22" NugetFailed="[NugetFailed]" NuspecPath="[MtaSchedulingServiceNuspecPath]" OutputDirectory="[NugetRepositoryDirectory]" TimeoutMilliseconds="[Timeout]" VerboseLogging="False" />
  <If Condition="[NugetFailed]" sap:VirtualizedContainerService.HintSize="464,309">
    <If.Then>
      <Sequence DisplayName="Nuget failed" sap:VirtualizedContainerService.HintSize="279,208">
        <sap:WorkflowViewStateService.ViewState>
          <scg:Dictionary x:TypeArguments="x:String, x:Object">
            <x:Boolean x:Key="IsExpanded">True</x:Boolean>
          </scg:Dictionary>
        </sap:WorkflowViewStateService.ViewState>
        <mtbwa1:SetBuildProperties DisplayName="Set build status failed" sap:VirtualizedContainerService.HintSize="200,22" PropertiesToSet="Status" Status="[Microsoft.TeamFoundation.Build.Client.BuildStatus.Failed]" />
        <TerminateWorkflow sap:VirtualizedContainerService.HintSize="200,22" Reason="Nuget creation failure" />
      </Sequence>
    </If.Then>
  </If>
  <sba:CreateOctopusRelease DeployEnvironment="[DeployEnvironment]" DeployFailed="[OctopusReleaseCreationFailed]" sap:VirtualizedContainerService.HintSize="464,22" OctopusServerUrl="[OctopusServerUrl]" Project="[Project]" TimeoutMilliseconds="[OctopusTimeout]" />
  <If Condition="[OctopusReleaseCreationFailed]" sap:VirtualizedContainerService.HintSize="464,309">
    <If.Then>
      <Sequence DisplayName="Deploy failed" sap:VirtualizedContainerService.HintSize="279,208">
        <sap:WorkflowViewStateService.ViewState>
          <scg:Dictionary x:TypeArguments="x:String, x:Object">
            <x:Boolean x:Key="IsExpanded">True</x:Boolean>
          </scg:Dictionary>
        </sap:WorkflowViewStateService.ViewState>
        <mtbwa1:SetBuildProperties DisplayName="Set build status failed" sap:VirtualizedContainerService.HintSize="200,22" PropertiesToSet="Status" Status="[Microsoft.TeamFoundation.Build.Client.BuildStatus.Failed]" />
        <TerminateWorkflow sap:VirtualizedContainerService.HintSize="200,22" Reason="Octopus release creation failure" />
      </Sequence>
    </If.Then>
  </If>
  <sba:DetermineDeploymentState ConnectionString="[ConnectionString]" sap:VirtualizedContainerService.HintSize="464,22" OctopusProject="[Project]" PingInterval="[OctopusDeploymentStatePingInterval]" Succeeded="[OctopusDeploySucceeded]" TimeOut="[OctopusTimeout]" />
  <If Condition="[Not OctopusDeploySucceeded]" sap:VirtualizedContainerService.HintSize="464,309">
    <If.Then>
      <Sequence DisplayName="Octopus Deploy failed" sap:VirtualizedContainerService.HintSize="279,208">
        <sap:WorkflowViewStateService.ViewState>
          <scg:Dictionary x:TypeArguments="x:String, x:Object">
            <x:Boolean x:Key="IsExpanded">True</x:Boolean>
          </scg:Dictionary>
        </sap:WorkflowViewStateService.ViewState>
        <mtbwa1:SetBuildProperties DisplayName="Set build status failed" sap:VirtualizedContainerService.HintSize="200,22" PropertiesToSet="Status" Status="[Microsoft.TeamFoundation.Build.Client.BuildStatus.Failed]" />
        <TerminateWorkflow sap:VirtualizedContainerService.HintSize="200,22" Reason="Octopus deploy failure" />
      </Sequence>
    </If.Then>
  </If>
</Sequence>

The idea here is to:

  • Create three Nuget units of deployment (using the “CreateNuget” activity).
  • Create an Octopus release and deploy it (using the “CreateOctopusRelease” activity).
  • Wait for the Octopus release to finish and add any errors to the build log (using the “DetermineDeploymentState” activity).

If any of the activities fail, the build will report the failure and exit.  You will feel sadness, but at least you'll know what happened.

I've positioned this new sequence between the portion of the lab build responsible for creating the lab environment (look for DisplayName="If Restore Snapshot") and the portion of the lab build traditionally responsible for deployment (look for DisplayName="If deployment needed").

Now to talk a little about the activities themselves.

All of the activities in this sequence are created using the Code Activity workflow template found in Visual Studio.

You might want to set up a solution that looks something like this:

If you set up your builds as links (as is shown above), you can edit them in the context of your solution.  This allows you to more easily use source control and see your custom activities in the toolbox while you are working on your build workflow.  To save you some time looking, I found the Microsoft.TeamFoundation.Build.Workflow DLL on my machine here: C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies.

To save hassles, the build server has a utilities directory which contains the executables called by these activities (Nuget and Octopus API).  The directory has been added to the machine path.  If you don’t like this, obviously you can alter these activities to pass in the path of the executables.

CreateNuget Workflow Activity

using System;
using System.Activities;
using System.Diagnostics;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Workflow.Activities;

namespace SST.BuildTasks.Activities
{
    [BuildActivity(HostEnvironmentOption.All)]
    public sealed class CreateNuget : CodeActivity
    {
        public InArgument<string> NuspecPath { get; set; }
        public InArgument<int> TimeoutMilliseconds { get; set; }
        public InArgument<bool> VerboseLogging { get; set; }
        public InOutArgument<bool> NugetFailed { get; set; }
        public InArgument<string> OutputDirectory { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            string nuspecPath = context.GetValue(NuspecPath);
            string outputDirectory = context.GetValue(OutputDirectory);
            var buildDetail = context.GetExtension<IBuildDetail>();
            string buildNumber = buildDetail.BuildNumber;
            string logging = context.GetValue(VerboseLogging) ? "-Verbose " : "";
            string args = string.Format("pack \"{0}\" {2}-Version {1} -OutputDirectory \"{3}\"", nuspecPath, buildNumber, logging, outputDirectory);

            try
            {
                // Run Nuget.exe pack, capture its output into the build log, and
                // treat a timeout or a non-zero exit code as a failure.
                using (Process nugetProcess = new Process())
                {
                    nugetProcess.StartInfo.FileName = "Nuget.exe";
                    nugetProcess.StartInfo.Arguments = args;
                    nugetProcess.StartInfo.RedirectStandardError = true;
                    nugetProcess.StartInfo.RedirectStandardOutput = true;
                    nugetProcess.StartInfo.UseShellExecute = false;
                    nugetProcess.StartInfo.CreateNoWindow = true;
                    nugetProcess.Start();
                    nugetProcess.WaitForExit(context.GetValue(TimeoutMilliseconds));
                    context.TrackBuildMessage(nugetProcess.StandardOutput.ReadToEnd());
                    if (!nugetProcess.HasExited)
                    {
                        throw new Exception(string.Format("Nuget creation for {0} timed out.", nuspecPath));
                    }
                    if (nugetProcess.ExitCode != 0)
                    {
                        throw new Exception(nugetProcess.StandardError.ReadToEnd());
                    }
                }
            }
            catch (Exception ex)
            {
                // Flag the failure for the workflow and record what we tried to run.
                context.SetValue(NugetFailed, true);
                context.TrackBuildMessage(string.Format("Nuget args: {0}", args));
                context.TrackBuildError(ex.ToString());
            }
        }
    }
}

Essentially, this kicks off Nuget with a set of arguments passed into the activity from the workflow (an example of the resulting command line follows the list):

  • NuspecPath: the path to the .nuspec xml file used to configure your Nuget unit of deployment.
  • OutputDirectory:  we made this the shared server path where Octopus pulls Nuget files.
  • VerboseLogging: probably want to keep this false most of the time.
  • NugetFailed: set by this activity to inform the workflow if the activity has failed.
  • TimeoutMilliseconds:  what it sounds like.
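
To make that concrete, these inputs end up producing a command line along these lines (the nuspec path, build number and output share here are made-up examples, not our real values):

Nuget.exe pack "C:\Builds\1\MySite\Sources\MySite.nuspec" -Version 1.0.0.1234 -OutputDirectory "\\buildserver\NugetRepository"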

CreateOctopusRelease Workflow Activity

using System;
using System.Activities;
using System.Diagnostics;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Workflow.Activities;

namespace SST.BuildTasks.Activities
{
    public sealed class CreateOctopusRelease : CodeActivity
    {
        public InOutArgument<bool> DeployFailed { get; set; }
        public InArgument<int> TimeoutMilliseconds { get; set; }
        public InArgument<string> OctopusServerUrl { get; set; }
        public InArgument<string> Project { get; set; }
        public InArgument<string> DeployEnvironment { get; set; }

        protected override void Execute(CodeActivityContext context)
        {
            string octopusServerUrl = context.GetValue(OctopusServerUrl);
            string project = context.GetValue(Project);
            string deployEnvironment = context.GetValue(DeployEnvironment);

            var buildDetail = context.GetExtension<IBuildDetail>();
            string buildNumber = buildDetail.BuildNumber;
            string args = string.Format("create-release --server={0} --project={1} --version={2} --deployto={3}",
                                        octopusServerUrl, project, buildNumber, deployEnvironment);

            try
            {
                // Run Octo.exe, capture its output into the build log, and
                // treat a timeout or a non-zero exit code as a failure.
                using (Process octoProcess = new Process())
                {
                    octoProcess.StartInfo.FileName = "octo";
                    octoProcess.StartInfo.Arguments = args;
                    octoProcess.StartInfo.RedirectStandardError = true;
                    octoProcess.StartInfo.RedirectStandardOutput = true;
                    octoProcess.StartInfo.UseShellExecute = false;
                    octoProcess.StartInfo.CreateNoWindow = true;
                    octoProcess.Start();
                    octoProcess.WaitForExit(context.GetValue(TimeoutMilliseconds));
                    context.TrackBuildMessage(octoProcess.StandardOutput.ReadToEnd());
                    if (!octoProcess.HasExited)
                    {
                        throw new Exception(string.Format("Octopus release creation for {0} timed out.", project));
                    }
                    if (octoProcess.ExitCode != 0)
                    {
                        throw new Exception(octoProcess.StandardError.ReadToEnd());
                    }
                }
            }
            catch (Exception ex)
            {
                // Flag the failure for the workflow and record what we tried to run.
                context.SetValue(DeployFailed, true);
                context.TrackBuildMessage(string.Format("Octo args: {0}", args));
                context.TrackBuildError(ex.ToString());
            }
        }
    }
}

This code is nearly identical to the Nuget code.  It kicks off Octo.exe (the Octopus command line tool) with a set of arguments passed in from the workflow (again, an example command line follows the list):

  • DeployFailed: set by this activity to inform the workflow if the activity has failed.
  • TimeoutMilliseconds: what it sounds like.
  • OctopusServerUrl: what it sounds like.  Url is usually in the format: http://YourOctopusServer:8081.
  • Project: The project you are deploying as defined in Octopus.
  • DeployEnvironment: The environment as defined in Octopus (for instance, Dev, QA or Staging) that will receive the deployment.
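
Plugging in sample values (the server, project and environment names below are placeholders, not our real ones), the resulting call looks something like:

octo create-release --server=http://YourOctopusServer:8081 --project=MyProject --version=1.0.0.1234 --deployto=QA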

DetermineDeploymentState Workflow Activity

using System;
using System.Activities;
using System.Data.SqlClient;
using System.Threading;
using Microsoft.TeamFoundation.Build.Client;
using Microsoft.TeamFoundation.Build.Workflow.Activities;

namespace SST.BuildTasks.Activities
{
    public sealed class DetermineDeploymentState : CodeActivity
    {
        // Arguments set from the build workflow.
        public InArgument<int> PingInterval { get; set; }
        public InArgument<string> ConnectionString { get; set; }
        public InArgument<int> TimeOut { get; set; }
        public InOutArgument<bool> Succeeded { get; set; }
        public InArgument<string> OctopusProject { get; set; }

        private const string Sql =
            @"SELECT [State]
              FROM [Octopus].[Octopus].Project P
              INNER JOIN [Octopus].[Octopus].[Release] R ON P.Id = R.Project_Id
              INNER JOIN [Octopus].[Octopus].Deployment D ON R.Id = D.Release_Id
              INNER JOIN [Octopus].[Octopus].Task T ON D.Task_Id = T.Id
              WHERE Version = @Version
              AND P.Name = @ProjectName";

        private const string ErrorSql =
            @"SELECT [ErrorMessage]
              FROM [Octopus].[Octopus].Project P
              INNER JOIN [Octopus].[Octopus].[Release] R ON P.Id = R.Project_Id
              INNER JOIN [Octopus].[Octopus].Deployment D ON R.Id = D.Release_Id
              INNER JOIN [Octopus].[Octopus].Task T ON D.Task_Id = T.Id
              WHERE Version = @Version
              AND P.Name = @ProjectName";

        protected override void Execute(CodeActivityContext context)
        {
            var buildDetail = context.GetExtension<IBuildDetail>();
            string buildNumber = buildDetail.BuildNumber;
            int pingInterval = context.GetValue(PingInterval);
            string connectionString = context.GetValue(ConnectionString);
            int timeOut = context.GetValue(TimeOut);
            string project = context.GetValue(OctopusProject);

            try
            {
                using (var cnnc = new SqlConnection(connectionString))
                {
                    cnnc.Open();
                    using (var cmd = new SqlCommand(Sql, cnnc))
                    {
                        cmd.Parameters.AddWithValue("@Version", buildNumber);
                        cmd.Parameters.AddWithValue("@ProjectName", project);
                        int count = 0;
                        bool keepPinging = true;
                        // Poll the Octopus database until the deployment task reports
                        // Success or Failed, or until the timeout expires.
                        while (keepPinging && count < timeOut)
                        {
                            object result = cmd.ExecuteScalar();
                            if (result != null)
                            {
                                string state = result.ToString();
                                switch (state)
                                {
                                    case "Success":
                                        context.SetValue(Succeeded, true);
                                        keepPinging = false;
                                        context.TrackBuildMessage(string.Format("Octopus deploy {0} for {1} successful.", buildNumber, project));
                                        break;
                                    case "Failed":
                                        context.SetValue(Succeeded, false);
                                        keepPinging = false;
                                        cmd.CommandText = ErrorSql;
                                        result = cmd.ExecuteScalar();
                                        context.TrackBuildError(result.ToString());
                                        break;
                                }
                            }
                            // Wait one interval before asking again.
                            Thread.Sleep(pingInterval);
                            count += pingInterval;
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                context.SetValue(Succeeded, false);
                context.TrackBuildError(ex.ToString());
            }
        }
    }
}

This code is a little bit of a hack.  It polls the Octopus tables to determine whether the deployment has finished.  If the deployment fails, it sends the error to the build log.  My guess is that this will be replaced by an Octopus API call in the future (see the update below).  If the table structure used by Octopus changes in future releases, obviously the SQL in the activity will need to be altered accordingly.

Arguments passed in by workflow:

  • PingInterval:  How often the code polls the database.  I am impatient so I’ve set this to 1000 (milliseconds).
  • ConnectionString: This is an ADO connection string used to access your Octopus database on your SQL Server.
  • Timeout: in milliseconds.
  • Succeeded: set by this activity to inform the workflow if the deployment has succeeded or not.
  • OctopusProject:  The project as defined in Octopus.

I think that should wrap it up for now.  To summarize, this post is intended to aid teams who wish to integrate Octopus deployments into their TFS builds.  It covers the workflow changes needed in the LabDefaultTemplate.xaml, and the code for several workflow activities which enable Octopus integration into the build.

Update: Paul confirmed that the Octopus API will be updated:

“This (build status) information is available via the API – I’ll extend Octo.exe to let you pass a –waitForComplete flag. It can even return an error code if the deployment fails.”

Thanks Paul!

TFS Build With Octopus Deploy (Part 1)

I'm using Paul Stovell's shiny new deployment tool called Octopus on my current work project. I think it's safe to say that, for deploying a site to IIS, it beats the crud out of anything else I know. If you haven't heard of it, the basic idea is to use Nuget packages as units of deployment. The Octopus server (a site hosted in IIS) receives instructions to deploy releases from either a web page or an API.  Each machine receiving the deployment (a tentacle) receives a push of the Nuget package(s) containing your application.  The tentacle runs the Nuget package(s), deploying your functionality.

For proponents of Continuous Delivery, this is one of the missing pieces for getting a TFS shop to one touch deployment.

I've been using this technology in conjunction with the TFS continuous integration server, and I have to say that I'm very excited about the results so far.  To be clear, from the moment we brought TFS in house I've had a love/hate relationship with it.  I love that EVERYTHING is integrated and can be audited and automated when using TFS.  I can trace every work item to the code that resulted, to the tests that were created and to the build that fixed it. … I hate just about everything else.

I'm definitely not a big fan of the Team Build process.  The way I see it, Microsoft took MSBuild, which doesn't totally suck, and wrapped it with Workflow Foundation, which does.  I'm sure many would disagree, but they probably haven't been scrambling for the past week to pull together bits and pieces of Team Build instruction from all over the internet.  And they might not have had their IDE crashing constantly and inexplicably when trying to edit their build workflow.  They also might not be big fans of TDD which, as far as I can tell, is close to impossible when writing Team Build code.

In the interest of saving others a bit of that heartache, I’ll post some helpful tidbits of my own in the next blog post.  I’ve created a Team Build code activity to initiate an Octopus release creation and deployment.  I’ve created another which will monitor the deployment and incorporate the results in your build.

Sound helpful?  Read on: next blog post.

Mobility is a Mess

Lately, I’ve been trying to learn mobile development for both professional and open source coding.  The irony is that I’ve been doing mobile development for 8 years.  Unfortunately, mobile enterprise data acquisition has been tied closely to Windows CE/Windows Mobile for a long time.

Recently Microsoft created chaos in the market by abandoning support for Windows CE in Visual Studio 2010.  They also decided not to release any new version of the .NET Compact Framework as a development platform.  When they realized how stupid this was, there appeared to be some sort of internal mess as MS tried to make up lost ground with the vendors that had sold bazillions of Windows CE units for them in the past.  After stumbling over themselves for a while, they've consolidated their embedded offering with their handheld offering to create Windows Embedded CE and pulled together some sort of backward-compatible offering called Windows Embedded Handheld (at least that's what they are calling it this week). At this point, I really don't care very much.  I get the feeling that the market has been burned and will be going to Android.

Motorola Solutions is coming out with its first Android offerings now, and this means a lot of choices are ahead for the company I work for.  Should we attempt to write Java when we have never deployed a line of Java code in the past?  Should we use an offering like Mono for Android?   What if we want to offer applications in iOS as well?  Should we learn Objective C too?

Motorola Solutions is offering Rho Elements to allow developers to create out-of-the-browser applications on their devices using JavaScript and HTML, but they are asking users to pay a steep per-device license fee.  It seems like such an obvious losing strategy, I have no idea why they have even considered it.  Perhaps they aren't aware of this widely used tool called PhoneGap that does (by all appearances) the exact same thing … for free.  Oh, and PhoneGap is affiliated with Adobe, so PhoneGap gets first-class support in Dreamweaver and a high probability of continued adoption from the community.

PhoneGap seems like the obvious choice in this scenario, but there are a couple of other tools to consider.  Appcelerator is really starting to make a name in the mobile market by doing much the same thing as PhoneGap.  The differences appear to be that Appcelerator has a per-developer license fee associated with it, compiles to native code (making it speedier), leaks a lot of memory if not handled correctly, and has relatively horrible documentation.  Also, it uses JavaScript only; I haven't figured out yet how the UI is created without HTML.  I don't really feel like forcing our designer to use another IDE, or paying for the additional license for that matter.

The last option I’ve seriously considered is Mono for Android.  Any of the options in this article could probably fill an entire blog post with pros and cons, but Mono in particular is an interesting beast to me because I’ve come from a .NET background.  The main problem with Mono is that it is such a niche technology and is pretty much owned by Xamarin.  Although Xamarin has some very smart people running it, the company is less than a year old and I haven’t seen wide adoption of their technologies.  In the development blogs I read, there is rarely discussion of them at all.  It just feels like a big risk at this point.  The $800 per developer (to get both iOS and Android) is negligible relative to the potential development losses.

On the other hand, JavaScript and HTML are free and being adopted by everybody.  In case you've been living under a rock for the past few years, JS isn't a toy language anymore.  There are plenty of serious frameworks aimed at creating professional, maintainable code in JavaScript.  Windows 8 is favoring JS and HTML as the primary mode of delivering Windows out-of-the-browser apps.  Adobe is killing off mobile Flash in favor of JS and HTML, Google uses little else, and Microsoft probably will not be offering a Silverlight version after (the current) v5.

How can you really go wrong?  If PhoneGap miraculously disappears, you can (obviously) run your app's hardware-agnostic code in the browser.  You can consider Rho Elements if you have to port your code as quickly as possible, or you can reuse your business logic in Appcelerator.  You can even reuse your client-side business logic on the server if you like (using Node.js).

Of course, the mobile development community changes so quickly that I could be laughing ruefully at this post next year.

A New Application Effort from VeeCollective (and Me)

My friends at VeeCollective proposed that we begin work together on something fresh and sparkly.  We all feel the need to rumble in the mobile dev bandwagon.

It seems, though, that the most difficult part of writing something that smells like rainbows is, in fact, thinking of something fresh and lively.

At first, we felt certain that we should develop a real estate app.  We wanted to create code that would allow us to circle a spot on a map and find all the desperate souls trapped there in potential foreclosures.  Their tormented need would bubble up through iOS and Android portals everywhere to help our users take advantage of their misfortunes.

But then we decided, no … too Dante.  What we needed was something that would make people happy on both ends of the interface.  To that end, we decided on building Six Degrees … or Integrity … or ….  Turns out the second most difficult part of creating this app is thinking of something to call it.

The idea is to open source an app that is a little bit like DropBox in that it would provide users with their information anywhere.  The major difference is that the application would integrate directly with other apps that provide the information, reach in and (at user discretion) change the structure of the information to be consistent across platforms and applications.

So for instance, if you had a task hierarchy set up in Outlook, the app would replicate that structure in iCloud or Google or Remember the Milk or whatever app somebody feels like writing an interface for.  The intent is to write the application backbone in a way that encourages other open source (or private) developers to create their own application interfaces.

Oh, and did I mention we want to write the synchronization to be full mesh?  No central repository that forces users to pay for storage.

Oh yeah, and we would like to provide a complete indexing of the content, so that in our spiffy hierarchical interface, when looking at a contact (for instance) the most relevant, related data would bubble up and cling to that contact with little gossamer threads.

I’d say we’ve got our work cut out for us.