node-gyp (or bcrypt) will not build on Windows

node-gyp often fails to build or rebuild modules on Windows.  This happens to me every time I rebuild my Windows machine (and sometimes when I switch Node versions).

Usually I attempt `npm install bcrypt` or similar, and the log throws one of many errors related to node-gyp:

  • error: Can't find "msbuild.exe". Do you have Microsoft Visual Studio C++ 2008 installed?
  • LINK : fatal error LNK1181: cannot open input file 'kernel32.lib'
  • `msbuild.exe` failed with exit code: 1 (accompanied by something about `node-gyp rebuild`)
  • Failed to locate: "CL.exe". The system cannot find the file specified.
  • and others…

node-gyp is a way to abstract building native modules for Node.js.  It is supposed to make cross-platform development easier when using modules that have C or C++ code underlying them.

The problem is that Windows does not really have a simple tool chain for building native stuff.  On *nix systems, acquiring all the necessary tools is usually a single command line entry.  Windows is more a la carte, due largely to the number of C++ runtimes that exist.

After a long and hideous season of pain, some folks tried to make some of this easier by creating the npm `windows-build-tools` package (installed from an elevated prompt with `npm install --global windows-build-tools`).  This will do quite a lot of the hard work of getting dependencies installed.  An alternative manual process is described here.

If this does not work, you may be forced to install Visual Studio 2013 Express.  This is pretty hard to swallow for those of us who have already gone through the rather long download-and-install process of getting a newer version of Visual Studio running.

One thing you might try before going there is deleting your node cache.  This may seem like the nuclear option, but my experience is that dependencies can be hanging out in there, messing up the build stream.  My current guess is that this is caused by an install of node-gyp or a node-gyp dependency built against a different version of the C++ runtime or a different version of Node.js.  Unfortunately, doing a global uninstall of node-gyp did not fix the problem… so cache delete it was.

To blow away the cache, go to the %appdata% path (just drop that alias in your Windows Explorer path or `cd` to it in your command line), then delete the `npm` and `npm-cache` directories.
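From a cmd prompt, that boils down to something like this (a sketch assuming the default npm locations; note it also removes your globally installed packages, so expect to reinstall those):

```shell
:: nuke the npm cache and global modules (Windows)
cd /d %APPDATA%
rmdir /s /q npm
rmdir /s /q npm-cache
```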

If the problem seems isolated to one project, consider blowing away node_modules and doing a fresh npm install.

Sync: Operational Transformation vs. Conflict-free Replicated Data Types (CRDTs)

I need a solution for data sync/replication of offline data that doesn’t require my team to read whitepapers and understand theoretical mathematics.

There is an argument going on right now as to whether Operational Transformation (OT) or Conflict-free Replicated Data Types (CRDTs) are the way to go here.  Both technologies are intended to solve the thorny problem of handling (or removing the potential for) conflicts when multiple parties are working on the same data without direct awareness of one another's efforts (perhaps because of temporal or location differences).


I really like the idea of CRDTs, but there isn't really a practical (or at least popular) implementation of full-document CRDTs (think JSON) that I know of right now.  There also seems to be the (old) problem where the CRDTs are replicating correctly, but we are asking them to do the wrong thing.  To be a little more clear: we are having difficulty expressing the intent of the users in a data structure that prevents conflicts.  We can get eventual consistency between the two bodies of data, but is that what the two (or more) parties would have created if they did it together side by side?
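For a taste of why people like CRDTs anyway, here is a minimal sketch (my own toy code, not any real library) of a grow-only counter, one of the simplest CRDTs.  Each replica only increments its own slot, and merge takes the element-wise max, so replicas can merge in any order and still converge:

```javascript
// Toy grow-only counter (G-Counter) CRDT -- illustrative only.
// Each replica increments only its own slot; merge takes the element-wise max.
// Because max is commutative, associative, and idempotent, replicas can merge
// in any order, any number of times, and still converge.
function gCounter(replicaId) {
    return {
        id: replicaId,
        counts: {},                                  // replicaId -> count
        increment: function () {
            this.counts[this.id] = (this.counts[this.id] || 0) + 1;
        },
        value: function () {
            var total = 0;
            for (var id in this.counts) total += this.counts[id];
            return total;
        },
        merge: function (other) {
            for (var id in other.counts) {
                this.counts[id] = Math.max(this.counts[id] || 0, other.counts[id]);
            }
        }
    };
}

// Two replicas work offline, then sync in both directions:
var a = gCounter('a'), b = gCounter('b');
a.increment(); a.increment();                        // a saw 2 local events
b.increment();                                       // b saw 1
a.merge(b); b.merge(a);
// Both now report a value of 3, regardless of merge order.
```

The catch is exactly what I describe above: this works beautifully for counters and sets, but composing it into a full JSON document that captures user intent is the hard part.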

This is a problem that has been explored a little more in the world of Operational Transformation, so the solutions (that I am aware of) are a little more mature.

Sync via peer to peer?

The primary downside of OT (that I can see) is that it really needs a single source of truth (think server), whereas CRDTs allow full-mesh or peer-to-peer (P2P) sync.

Because P2P communication is almost as difficult right now as sync itself, it may just be practical to work with OT.
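For contrast, the heart of OT is a transform function that shifts one operation past a concurrent one so both sides converge.  A toy sketch for plain-text inserts (my own illustration, nowhere near a production OT implementation, which also has to handle deletes, tie-breaking across many sites, and server ordering):

```javascript
// Toy OT transform for plain-text inserts -- illustrative only.
// transformInsert(a, b) rewrites operation a so it can be applied to a
// document that has ALREADY had b applied to it.
function transformInsert(a, b) {
    // If b inserted at an earlier position (or the same position, using
    // the site id as a simple tie-break), a's position slides right.
    if (b.pos < a.pos || (b.pos === a.pos && b.site < a.site)) {
        return { pos: a.pos + b.text.length, text: a.text, site: a.site };
    }
    return a;
}

function applyInsert(doc, op) {
    return doc.slice(0, op.pos) + op.text + doc.slice(op.pos);
}

// Two sites edit "helo" concurrently:
var doc = 'helo';
var opA = { pos: 3, text: 'l', site: 1 };   // fix the typo
var opB = { pos: 4, text: '!', site: 2 };   // add punctuation

// Each site applies its own op first, then the transformed remote op:
var site1 = applyInsert(applyInsert(doc, opA), transformInsert(opB, opA));
var site2 = applyInsert(applyInsert(doc, opB), transformInsert(opA, opB));
// Both sites converge on "hello!".
```

Transforming correctly against arbitrary interleavings from many peers is exactly where the whitepaper reading starts, which is why OT systems usually lean on a central server to order operations.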

Some libraries to look at

I have been playing a bit with ShareDB.  It seems to be the best OT work going on in JavaScript right now.  That said, there isn't a huge community around the library, and the owners (though amazing and brilliant people with other real jobs to hold down) do not appear to be super responsive to pull requests and issues.

If you are looking at doing the P2P thing, it seems like Scuttlebutt is a protocol/replication technology that is getting a bit of traction.  I believe it is inherently duplex though… so YMMV.  Here is a JavaScript implementation that might interest you.


Where is my template in karma-ng-html2js-preprocessor?

karma-ng-html2js-preprocessor is great when it works, but it can be fiddly to line up your template URL with the key it stores in $templateCache.

karma-ng-html2js-preprocessor is used for Karma testing when you want an Angular html template to be loaded in the cache.  This is helpful because it will prevent $httpBackend from complaining when your directive tries to use $http to get your template html.  Instead, the directive will find the template in the cache and skip the $http call.

But… the problem is that it can be difficult to figure out what is wrong when things are not set up just right.  The biggest problem I've found is actually seeing what is inside the $templateCache.  Often, the key in the cache (which will be the URL for your template) will be missing a leading forward slash ("/"), or its case will differ from what your directive requests.

The problem with examining $templateCache is that it doesn’t expose its contents as a property.  You have to ALREADY KNOW the key to see if it is inside the cache.
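One generic way around that is to wrap the cache's put() so every key gets recorded as it goes in.  This is a framework-agnostic sketch of the idea (the names here are mine, not Angular's; in an Angular test you would apply the same wrapper to $templateCache, e.g. via a $provide decorator):

```javascript
// Framework-agnostic sketch of the debugging idea: wrap the cache's put()
// so every key is recorded as it goes in. The names here are hypothetical;
// in an Angular test you would wrap $templateCache the same way.
function spyOnCache(cache) {
    var originalPut = cache.put;
    cache.seenKeys = [];
    cache.put = function (key, value) {
        cache.seenKeys.push(key);                 // or console.log(key)
        return originalPut.call(cache, key, value);
    };
    return cache;
}

// Minimal stand-in for $templateCache:
var templateCache = spyOnCache({
    store: {},
    put: function (k, v) { this.store[k] = v; },
    get: function (k) { return this.store[k]; }
});

templateCache.put('/templates/widget.html', '<div></div>');
// templateCache.seenKeys now shows the exact key that was cached,
// leading slash, casing, and all.
```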

Fortunately, if you are using WebStorm, you can get past this.  WebStorm will show you the function scope of an object constructor.  So you can do this:


If you do not use WebStorm, you can place a hack in your karma config:

preprocessors: {
    "**/*.html": ['ng-html2js']
},

ngHtml2JsPreprocessor: {
    cacheIdFromPath: function (filepath) {
        console.log("ng-html2js filepath: " + filepath);
        return filepath;
    },
    prependPrefix: '/',
    moduleName: 'my.templates'
},
Notice the “moduleName” property in the config above.  Another thing I often forget is to call my template module in the “beforeEach” method for Jasmine/Mocha:
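The snippet I mean looks roughly like this (the module names are placeholders, except that 'my.templates' should match the moduleName from the karma config; this only runs inside a Karma/Jasmine harness):

```javascript
describe('myDirective', function () {
    beforeEach(module('myApp'));          // your app module (placeholder name)
    beforeEach(module('my.templates'));   // the generated template module

    // ...specs that compile the directive go here...
});
```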


Occasionally, I might also forget to tell Karma (in my karma config file) it can serve the html files:

files: [
    { pattern: 'bower_components/angular/angular.js', watched: false, included: true, served: true },
    { pattern: '**/*.html', watched: true, included: false, served: true }
],

DerbyJS or Racer on Windows

You could argue this just isn’t meant to be … and you might be right.  Unfortunately for me, flailing around in my Ubuntu VM is just a slow way for me to develop.  I know that makes me a terrible person, but sometimes the tools you use every day belong to the dark side.

In any case, I set out to get DerbyJS and Racer working on my Windows machine.

DerbyJS and Racer are created by pretty much the same group of people.  They are JavaScript frameworks that run on Node.js and use Operational Transformation to synchronize data in real time across clients.

The tweaks

The first smack in the face is Redis.  You'll need to install the Windows version of that here, but that's not remotely the hard part.  The difficulty comes when you try to do your npm install for the example repositories of DerbyJS or Racer.  Then all hell breaks loose, and you run into the "we don't support Windows" contingent with `npm install redis`.

Turns out the lovely people at hiredis do support Windows though.  So here's the trick(s).  Install hiredis globally FIRST (`npm install -g hiredis`).  Then go into your global npm directory and copy everything from %appdata%\npm\node_modules\hiredis\build\Release up one directory to %appdata%\npm\node_modules\hiredis\build (because REASONS).

Now npm install type things will magically start working — if you’ve put %appdata%\npm\node_modules into your system environment variables as NODE_PATH.  NPM doesn’t do that on installation because… it’s fun to google?
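Collected in one place, and assuming the default npm path under %APPDATA%, the whole dance from a cmd prompt looks like this:

```shell
:: 1. install hiredis globally first
npm install -g hiredis

:: 2. copy the built binaries up one level (because REASONS)
xcopy /s /y "%APPDATA%\npm\node_modules\hiredis\build\Release\*" "%APPDATA%\npm\node_modules\hiredis\build\"

:: 3. let node resolve global modules (takes effect in new prompts only)
setx NODE_PATH "%APPDATA%\npm\node_modules"
```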

System Node Environment Variable
How to set your npm cache path


Now after all this effort  (at least at the time of this writing)  nothing will work.  You’ve got a couple more tweaks to make.

For Racer

You need to go in and remove all the "release kind of like this one" stuff from your package.json.  (See the JSON below with the red things to remove.)  This is because the current release of Racer will not work with the example code.  No idea why, because the problem seems to be somewhere in the Operational Transformation code, which is fiddly stuff.

"dependencies": {
    "coffeeify": "~0.6.0",
    "express": "~3.4.8",
    "handlebars": "^1.3.0",
    "livedb-mongo": "~0.4.0",
    "racer": "^0.6.0-alpha32",
    "racer-browserchannel": "^0.3.0",
    "racer-bundle": "~0.1.1",
    "redis": "^2.4.2"
}

BTW, below is, I think, my favorite bit of code ever.  When running the "Pad" example of Racer, it is fired over and over by a dependency of browserify called syntax-error:

module.exports = function (src, file) {
    if (typeof src !== 'string') src = String(src);
    try {
        eval('throw "STOP"; (function () { ' + src + '})()');
    }
    catch (err) {
        if (err === 'STOP') return undefined;
        if ( !== 'SyntaxError') throw err;
        return errorInfo(src, file);
    }
};

I’m sure there is a good reason for it, but I can’t fathom it myself.  I had to comment it out.

Now for DerbyJS

There is some sort of issue with how it creates the paths for your views.  To get it to work, there is a patch you'll need to add to your package.json.

After the patch is installed, you'll need to go into the index.js of each of the examples and add it before the require('derby'):

var app = module.exports = require('derby').createApp('hello', __filename);

I think that should do it.  I was able to get things running fairly well after that.

One last thing: if you happen to be using Visual Studio, I can recommend the Node.js Tools for Visual Studio with some confidence now.  During the beta stage they were pretty bad about crashing my IDE, but (except when running unit tests) I actually prefer them to WebStorm now.  I know… sacrilege.


Running cucumber.js in WebStorm

If you are like me, the instructions for running Cucumber.js in WebStorm did not yield good results.  The configuration dialog expects an executable path, but the instructions on the GitHub page only show the .js file in the command-line execution.  I assumed this meant that I needed to run the command line directly in node, but that didn't seem to work either.  Ugh.

In a nutshell, you need to specify node as your executable in the configuration dialog and pass your cucumber.js file as an argument.  Like so:

WebStorm Cucumber.js Configuration

Turns out this will occasionally fail when WebStorm inserts an argument between your cucumber.js path and node.  Grrrrrr.

To make it work anyway, create a batch file (cucumber.bat) with the following contents:

CALL "<insert your node path here>\node.exe" "<insert your cucumber bin path here>\cucumber.js" %*

Then change your configuration to look like so:

Cucumber Configuration With Batch

I hope this saves you some pain.


Configuring MongoDB with YAML

I just went through the process of configuring the MongoDB server on Windows as a service now that they’ve switched the configuration to YAML.

What a pain in the ass.

All I wanted to do was run it from a directory I typically use on this machine for database files and enable the HTTP interface (including REST).

A couple of pointers that might save you about an hour of hassle.  If you are using Notepad++, tell it to use the YAML configuration for tabs.  I have no idea what it does differently, but that stopped an error where it complained about an invalid value at a position right in the middle of a YAML reserved word.

Also, if you are specifying that you would like to store your log as a file, you damn well better specify `destination: file`.  I guess specifying the path as a file name wasn't a big enough hint.

Here is my configuration:

net:
  http:
    enabled: true
    RESTInterfaceEnabled: true
storage:
  dbPath: "c:\\Data\\mongodb"
systemLog:
  destination: file
  path: "c:\\Data\\mongodb\\log\\mongo.log"

Configuration Reference

Personal Continuous Delivery

I am fairly convinced that Continuous Delivery is the best way to build an application, but I’ve felt for a while that it is a daunting process for an individual to use the same tools that are needed for an enterprise.

There are several offerings that attempt to make this easier by consolidating your build and delivery process as SaaS or into one package.  CodeShip, Microsoft Team Foundation Server and TeamCity are a few I've considered.

I’m going to try TeamCity today because I’ve had good luck with JetBrains tools in the past and because it supports all (or most) of the technologies I develop with.

The download is rather large at half a gig.  At the time of this writing, the link is

I’m installing this (for now) on a virtual machine on my dev laptop.  I have about 12 gigs of RAM, so I think this might work out in the short term.

I begin by accepting all of the defaults in the installer.  This puts the application just off the root directory, but I don’t care because this is a VM with nothing on it.

Lots of Java getting extracted…


After the extraction, I am asked to select a server port.  Oddly, the default was port 80.  I select 8080 in case I want my default HTTP port later.

I accepted all the default build agent properties and allowed it to run both the build agent and the TeamCity server service under the Windows SYSTEM account, because I didn't want to set up a service user right now.

First start up screen

The installer allowed me to open the UI on completion:

Choose a database server
Next, the UI asks me how I’d like to store my data.  I elect the “Internal” HSQLDB because I don’t feel like installing a database server on the VM or using the servers I have on other machines.  I have a feeling I will regret this later.  The configuration does offer all the major relational database servers.


Eula and then Create Admin Account:

On the profile page which follows, I'm guessing the Version Control Username Settings should match my GitHub profile:

Next I'm going to try clicking on "Projects" in the navigation header and then clicking "Create project from URL."

I'm going to point at a GitHub repository.

Next, TeamCity tries to guess what steps I need in my build, but not much is going on in that empty repository so….


I suppose I better add some code and return to this endeavor in a bit.

So far, pretty straightforward….

Synchronization Using Interval Tree Clocks in JavaScript

As a follow-up to my previous post, I've implemented the Interval Tree Clock code in JavaScript, with tests.  I've also begun a synchronization framework to go with it.

GitHub: ITC in JavaScript

The framework would be for synchronizing documents in full mesh mode — so peer to peer.

Everything has tests, so you can easily see the direction and progress by just reviewing the tests.

I stab the Synchronization with big knife

Synchronization With Interval Tree Clocks

I've been working with mobile devices for a long time, and inevitably the most painful piece of the development process is getting data to be consistent across all replicas.

For years, I've been trying to find a consistent means of taking care of this in a way which is OS and repository agnostic for all replicas. It isn't 100% clear to me why this isn't a solved problem, but I have a feeling there are several contributing factors:

  1. Internecine conflict between all relevant parties.
  2. Rapidly changing means and standards for data storage and transmission.
  3. Figuring out causal relationships between data on different replicas is really, really difficult.

It seems to me that numbers 1 and 2 have become somewhat better lately because of ubiquitous JavaScript.  I'm not saying it's trivial, but you can make an app that works just about everywhere now if you write it in HTML and JavaScript.

When dealing with data, browser-based apps are still likely to struggle with large data sets and long periods without connectivity, but it might be worth exploring the possibilities again.

To this end, I’ve been looking at solving the causal problem with Interval Tree Clocks (ITCs) lately.  They are interesting in the way that licking battery terminals is interesting.  They are painfully tedious, but if you can stick with it, you may eventually power a solution (or be brain damaged).

For a long time, I think the standard way to handle the problem of causal relationships has been vector clocks, but they have well-documented limitations around space usage which do not apply to Interval Tree Clocks.
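For contrast, a vector clock is easy to sketch.  Here is a toy version of my own (not from any library): one counter per replica, with a happened-before check.  Note that the clock carries an entry for every replica that ever touched the data, forever, which is exactly the space problem ITCs set out to fix:

```javascript
// Toy vector clock -- one counter per replica id. Illustrative only.
function increment(clock, id) {
    var next = Object.assign({}, clock);
    next[id] = (next[id] || 0) + 1;
    return next;
}

// true if event a happened strictly before event b (causal order)
function happenedBefore(a, b) {
    var strictlyLess = false;
    var ids = Object.keys(a).concat(Object.keys(b));
    for (var i = 0; i < ids.length; i++) {
        var id = ids[i];
        if ((a[id] || 0) > (b[id] || 0)) return false;
        if ((a[id] || 0) < (b[id] || 0)) strictlyLess = true;
    }
    return strictlyLess;
}

var base = increment({}, 'phone');            // { phone: 1 }
var laterOnPhone = increment(base, 'phone');  // { phone: 2 }
var onLaptop = increment(base, 'laptop');     // { phone: 1, laptop: 1 }

happenedBefore(base, laterOnPhone);     // true  -- causally ordered
happenedBefore(laterOnPhone, onLaptop); // false -- neither precedes the other:
happenedBefore(onLaptop, laterOnPhone); // false    concurrent, i.e. a conflict
```

When neither direction holds, the edits are concurrent and something (a CRDT, OT, or a human) has to resolve them; ITCs answer the same question while letting replicas come and go without growing the clock forever.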

Also, you can make pretty diagrams with ITCs.

ITC Node Diagram

So I've been trying to rewrite the ITC algorithm in C#.  This may seem ironic, since I just told you that JavaScript seems to be one solution to some of the industry's synchronization problems, but the reality is, I'm much better at exploring ideas with type-safe code.

I’ve gotten most of the C# working, and I’ve created tests.  My intent is to use those to safely port the C# over to JavaScript.

You can check the code out here.

If you prefer Java, Erlang or C, there is a repository from the original designers of the algorithm here.  A word of warning: if you try to use that repository to follow along with my code, it will be very difficult.  Conceptually, the code is somewhat similar to what I have written, but my implementation is almost entirely different.

Getting Started with RavenDB Using Pure JavaScript

You might ask: "Why in the world would you create a pure JS app with RavenDB?"  I'm so glad we're interested in the same things!  I've been toying around for a little while with CouchApp – which is a way to host applications completely within a CouchDB NoSQL database.  The idea is to greatly speed development and performance for certain application use cases by avoiding (most of) a server-side middle layer.

Use Case

Let’s say you are building an application for internal use.  Assuming you are responsible on the client side, do you really need to have server side data validation?  I suppose you could find a few arguments in favor of it, but do they actually outweigh the cost of implementing a middle tier for this use case?  Really??

I’m going to pretend like you said, “No, Dave – by golly, you’re right.”

In looking at CouchApp, the first problem I ran into is that it’s hard.  Like, really hard.  I mean these guys are probably all into mod’ed out Linux distros and neckbeards and shit.  Which is cool, but the problem is that they are NOT into creating canonical, orderly, convention-based documentation/tests/examples that explain how the hell to do anything.  They are too “relaxed” I guess.  Instead you can do whatever you want, man.  For instance, they have all these different ways to do html rendering.  Half of them are outdated, and the other half are poorly demonstrated.  You get the feeling they are too smart and excited to let stuff mature for 2 months before moving on to the next shiny byte. 

(I’ll admit that last paragraph is probably unfair, but give CouchApp a few hours and see if you don’t feel the same way.)

The other problem I ran into is the sneaking suspicion that the whole thing is dead.  If you look at all the docs, posts and hubbub, it seems to center pretty tightly around 2010 and then tail off after that.  I tried to get some people from the community to give me some feedback about my last post, and all I heard were crickets.

The final straw for me on the whole CouchApp thing was that there is no easy way TO ACCESS THE DATABASE CROSS-DOMAIN.  Are you kidding me?  What the hell is the use of having a database that faces http if you can’t access the thing via http?  The solution is to install a proxy on your Apache server.  WHAT!?  I’m done.

Quoth the Raven

Enter the RavenDB.  If you compare to, it’s pretty glaringly obvious who has their shit together and who doesn’t.  I can hear my imaginary friend say, “Hey Dave, that’s not fair.  CouchApp is like, a side project, dude.  You should be comparing it to”  And my friend would be right.  So go look at then.  I guess it is better than ….

And when that same friend then says –

“But Dave, RavenDB Costs Money and Shit”

– if he is truly concerned about RavenDB costing money, he should use CouchDB, MongoDB or Cassandra or some crap like that … (freeloader).  He should have fun with that.  I’m trying to get things done.  But hey … if my buddy really feels software should be free, then he should probably open source his own project.  Then he could use RavenDB for free.

Ok, maybe that’s a little harsh.  Maybe I’m being too hard on my outspoken pal.  But you know … the tone that I used was EXACTLY HOW I MEANT IT BE.

Brass Tacks

… As in it’s time to get down to them.  How the heck do you get going with RavenDB anyway?  Well the first thing to do is drive your browser over to Mr. Ayende’s shop and get yourself a build.

Ok, on another side note, does this guy Ayende or Oren or Auryn or whatever his name is kick ass or what?  I mean, I know he’s been putting out awesomeness for something like a decade now, but who decides one day that, “Hey, I think I’ll build a NoSQL database by myself.  Oh, and while I’m at it, I’ll make it the best one available on the market.  I’ll actually make it work well, have good documentation, be a (C#) developer’s dream to use, be easily distributable and you know what else?  If somebody sends a message to my mailing list, I’ll respond in less than 5 minutes even if I don’t know them AT ALL.”  Too bad he has a problem with run-on sentences.  Oh wait, that’s me.

Go over to and pick your poison.  Usually I prefer all things NuGet, but in this case, I didn’t want to find out whether or not the server is in there.  I just downloaded the zipped build.  Unfortunately, the most current build I found (960) had a bug with posting new documents using $.ajax.  This struck me as so egregious that I nearly didn’t write all those nice things above about Ayende, but stumbled in despair back to Couchappland.  Fortunately, the “unstable” build 2063 works … even if it does have some weird ass shit going on with a system database being the default and going completely paisley if you try to do the advanced database creation stuff …. It is labeled “unstable” after all, but I digress … again.

Once you’ve done the dance of unblocking the zip file and extracting it and all that, you can go to the Server subdirectory and type

raven.server /install

Congratulations.  You now have a running RavenDB service.  Beats the hell out of installing SQL Server doesn’t it?  You might also want to compare this process to that of CouchApp in my last post.

Oh wait, do I need to install a management studio?  No, it’s there already.  Just go to http://localhost:8080 if you don’t believe me.  Oh ok, do I need to install some configuration app?  No, you can just hack the config file.  And while we’re talking about it, why don’t we do some configuration file hacking?

Configuring RavenDB

You will find raven.server.exe.config in that same Server directory.  Open it with your favorite text editor and you will see something like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="Raven/Port" value="8080"/>
    <add key="Raven/DataDir" value="~\Data"/>
    <add key="Raven/AnonymousAccess" value="Get"/>
  </appSettings>
  <runtime>
    <loadFromRemoteSources enabled="true"/>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="Analyzers"/>
    </assemblyBinding>
  </runtime>
</configuration>

Ok, I was actually a little surprised that port 8080 was available on my machine, so I changed that right away.  Also, I don’t want to fiddle with security right now.  Because I’m behind my firewall, I’m going to enable anon access on all interactions, and I’m going to leave Cross Domain Access wide open.  So now I have:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="Raven/Port" value="49589"/>
    <add key="Raven/DataDir" value="~\Data"/>
    <add key="Raven/AnonymousAccess" value="All"/>
    <add key="Raven/AccessControlAllowOrigin" value="*" />
  </appSettings>
  <runtime>
    <loadFromRemoteSources enabled="true"/>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <probing privatePath="Analyzers"/>
    </assemblyBinding>
  </runtime>
</configuration>

Restart your service.  It's in the Windows services.msc app, or you can just type Raven.Server /restart from the command line in the Server directory.

Let’s Write Some JavaScript Already

Break open your favorite IDE/text editor, and because they all support NuGet, get yourself the QUnit-MVC package.  Or maybe they don’t, and you can get QUnit at  It’s hidden away down there at the bottom of the page for some stupid reason.

Now we need a test page.  Create an html file, and put this in it:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
    <title>QUnit Test Page</title>
    <link rel="stylesheet" href="qunit.css" type="text/css" />
    <script src="jquery-1.7.1.min.js" type="text/javascript"></script>
    <script type="text/javascript" src="qunit.js"></script>
    <!-- App code goes here -->
    <script src="app.js" type="text/javascript"></script>
    <!-- Unit test code goes here -->
    <script src="appTests.js" type="text/javascript"></script>
</head>
<body>
    <h1 id="qunit-header">Intertwyne QUnit Test</h1>
    <h2 id="qunit-banner"></h2>
    <h2 id="qunit-userAgent"></h2>
    <ol id="qunit-tests"></ol>
</body>
</html>

In a nutshell, this is what QUnit wants in order to display your test results.  Obviously it might be better to put these scripts into special directories according to whatever conventions you subscribe to.  If you actually did get QUnit from NuGet, then you'll need to square up your script and CSS URLs to match the Visual Studio conventions (duh).

Ok, now open up a new file called app.js.  In it, you’ll need to put something like this:

story = window.story || {};

story.url = "http://localhost:49589/docs";

story.basicInsert = function (insertData, requestorCallback) {
    $.ajax({
        type: 'POST',
        url: story.url,
        dataType: 'json',
        contentType: "application/json",
        data: JSON.stringify(insertData),
        success: function (data) {
            requestorCallback(data);
        }
    });
};

story.basicGet = function (collectionAndKey, requestorCallback) {
    $.ajax({
        url: story.url + '/' + collectionAndKey,
        dataType: 'jsonp',
        jsonp: 'jsonp',
        success: function (data, textStatus, jqxhr) {
            requestorCallback(data, textStatus);
        }
    });
};
These are a couple of JavaScript functions to write some JSON in and out of your RavenDB database.  I am using a POST rather than a PUT because I didn’t feel like finding a time sequential UUID generator for my IDs.  RavenDB will do that for me if I POST, sending back the results as JSON. 

The GET is requesting the results as JSONP so that my browser doesn’t freak out about cross-domain request results.  If you don’t know what that means, then Google it or ignore it because I took care of it for you.  This blog is already getting epic in length.

The other thing I do for both of these is pass in a callback parameter so our consuming functions can get the asynchronous results.  If you don’t know what a callback is, then consider a different vocation/hobby.

Ok, now onto the tests!!

Open yourself an appTests.js file and put something like this in it:

module("TheRedCircuit's tests for to show the good people", {
    setup: function () {
        // you can do some setup type stuff here
    }
});

test("basicInsert testStory insertsIt", 1, function () {
    stop(1000);
    var insertData = { name: "some title", body: "some test body" };
    story.basicInsert(insertData, function (insertedData) {
        var key = insertedData.Key;
        story.basicGet(key, function (results, textStatus) {
            equal(textStatus, "success");
            start();
        });
    });
});
Ok, so I’m cheating pretty badly here on the unit testing front.  I’m testing two functions at once, but seriously, how would you do it?  The chicken has to come before the omelet right?

When testing asynchronous functions, you have to tell QUnit to hold its horses while you go off across http land and do your thing.  That’s what the stop (and timeout after 1000 milliseconds) function is for.

Then I’ve nested all the calls so that we can only pass the equal function assertion if everything behaves nicely and gives us a “success” result.  Then the start function tells QUnit it can have the reins back.


I’ve done some whining about how hard CouchApp is.  I’ve verbally abused my imaginary friend.  Then I told you RavenDB is a lot easier because it is.  Then I showed you how brain-dead easy it is to get a RavenDB server going.  Lastly I did some POST and GET data access using jQuery.  Oh, and I showed you how to do some JavaScript unit testing with QUnit.

Because I know you are just falling all over yourself to know more, I’ll probably post a more complete version of this application next time, exploring RavenDB’s HTTP API some more … kind of like I did with CouchApp.