This is the 20th in a series of posts leading up to Node.js Knockout about how to use Joyent’s service. This post was written by architect and Node.js Knockout judge Isaac Schlueter.

These instructions will tell you how to deploy your code on Joyent’s service.

Create an Account

Go to and click “Sign up”.

Then fill in the stuff. You’ve done this before.

Now you’re logged in. If you’re not logged in now, email support.

Add an SSH Key

You need to add an SSH public key to your account to provision Node SmartMachines.

If you’re on a Windows computer, then use the puttygen.exe program which comes along with PuTTY. The key you want is the one marked Public key for pasting into OpenSSH authorized_keys file.

If you’re on any other kind of computer, then your SSH keys are probably in ~/.ssh/*.pub. If you don’t have one, then you can create it by using the ssh-keygen program.

Paste the key into the big box. You can also add a name for the key, if you like labels.

Save it. Now you’ve got a key.

Order a Machine

Click the button on the right that says “Order a Machine”.

Give it a name.

Click “Provision”.

Follow Instructions

On the machine details page, there are a bunch of instructions.

Follow them.

It won’t work unless you follow the instructions.

If you forget, and need to follow them later, that’s fine. They’ll still be there.

It involves pasting some stuff into your .ssh/config file. You can achieve a similar effect on Windows by using this method, or using git and ssh from Cygwin.

Bask in the Cool Glow of the Logo

On the machine details page is a hyperlink to your new zone. Click it.

Enjoy the logo.

When you’re done enjoying the logo, click the logo to return to the machine details page.

Repeat until bored.

Push Some Code

Use the power of the instructions! Push code to your machine! Be a winner!

Some tips:

  • If you have npm dependencies you can add them to a package.json file in the root of your repository.
  • The default start command is node server.js. If you want to have it start up some other way, then you can put something like this in your package.json file: "scripts": { "start" : "my-custom-command" }
  • If you have a dependency that takes a long time to install, you can make deploys faster by ssh-ing into your zone, and npm install <some-dependency> -g. The deploy script will reuse globally installed dependencies if they’re suitable.
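Putting those tips together, a minimal package.json might look like this (the module names and the start command here are illustrative, not required):

```json
{
  "name": "my-knockout-app",
  "version": "0.0.1",
  "dependencies": {
    "express": "*"
  },
  "scripts": {
    "start": "node server.js"
  }
}
```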

If you run into trouble, email support.

This is the 19th in a series of posts leading up to Node.js Knockout on using MongoDB with node-mongodb-native. This post was written by Node Knockout judge and node-mongodb-native author Christian Kvalheim.

In the first tutorial we targeted general usage of the database. But Mongo DB is much more than this. One of the additional very useful features is its ability to act as a file storage system. This is accomplished in Mongo by having a files collection and a chunks collection, where each document in the chunks collection makes up a block of the file. In this tutorial we will look at how to use the GridFS functionality and what functions are available.

A simple example

Let’s dive straight into a simple example on how to write a file to the grid using the simplified Grid class.

var mongo = require('mongodb'),
  Server = mongo.Server,
  Db = mongo.Db,
  Grid = mongo.Grid;

var server = new Server('localhost', 27017, {auto_reconnect: true});
var db = new Db('exampleDb', server);, db) {
  if(!err) {
    var grid = new Grid(db, 'fs');
    var buffer = new Buffer("Hello world");
    grid.put(buffer, {metadata:{category:'text'}, content_type: 'text'}, function(err, fileInfo) {
      if(!err) {
        console.log("Finished writing file to Mongo");
      }
    });
  }
});

All right let’s dissect the example. The first thing you’ll notice is the statement

var grid = new Grid(db, 'fs');

Since GridFS is actually a special structure stored as collections you’ll notice that we are using the db connection that we used in the previous tutorial to operate on collections and documents. The second parameter 'fs' allows you to change the collections you want to store the data in. In this example the collections would be fs_files and fs_chunks.

Having a live grid instance, we now go ahead and create some test data stored in a Buffer instance (you can pass in a string instead). We then write our data to Mongo.

var buffer = new Buffer("Hello world");
grid.put(buffer, {metadata:{category:'text'}, content_type: 'text'}, function(err, fileInfo) {
  if(!err) {
    console.log("Finished writing file to Mongo");
  }
});

Let’s deconstruct the call we just made. The put call will write the data you passed in as one or more chunks. The second parameter is a hash of options for the Grid class. In this case we wish to annotate the file we are writing to Mongo DB with some metadata and also specify a content type. Each file entry in GridFS has support for metadata documents, which might be very useful if you are, for example, storing images in your Mongo DB and need to store all the data associated with the image.

One important thing to note is that the put method returns a document containing an _id. This is an ObjectID identifier that you’ll need to use if you wish to retrieve the file contents later.

Right, so we have written our first file. Let’s look at the other two simple functions supported by the Grid class.

// the requires and other initializing stuff omitted for brevity, db) {
  if(!err) {
    var grid = new Grid(db, 'fs');
    var buffer = new Buffer("Hello world");
    grid.put(buffer, {metadata:{category:'text'}, content_type: 'text'}, function(err, fileInfo) {
      grid.get(fileInfo._id, function(err, data) {
        console.log("Retrieved data: " + data.toString());
        grid.delete(fileInfo._id, function(err, result) {
        });
      });
    });
  }
});

Let’s have a look at the two operations, get and delete.

grid.get(fileInfo._id, function(err, data) {});

The get method takes an ObjectID as the first argument and, as we can see in the code, we are using the one provided in fileInfo._id. This will read all the chunks for the file and return it as a Buffer object.

The delete method also takes an ObjectID as the first argument but will delete the file entry and the chunks associated with the file in Mongo.

This API is the simplest one you can use to interact with GridFS, but it’s not suitable for all kinds of files. Its main drawback shows up when you are trying to write large files to Mongo: this API requires you to read the entire file into memory when writing and reading from Mongo, which most likely is not feasible if you have to store large files like video or RAW pictures. Luckily this is not the only way to work with GridFS. That’s not to say this API is not useful: if you are storing tons of small files, the memory usage vs the simplicity might be a worthwhile tradeoff. Let’s dive into some of the more advanced ways of using GridFS.

Advanced GridFS or how not to run out of memory

As we just said, controlling memory consumption for your file writing and reading is key if you want to scale up the application. That means not reading in entire files before either writing or reading from Mongo DB. The good news: it’s supported. Let’s throw some code out there straight away and look at how to do chunk sized streaming writes and reads.

// the requires and other initializing stuff omitted for brevity

var fileId = new ObjectID();
var gridStore = new GridStore(db, fileId, "w", {root:'fs'});
gridStore.chunkSize = 1024 * 256;, gridStore) {
  Step(
    function writeData() {
      var group = this.group();

      for(var i = 0; i < 1000000; i += 5000) {
        gridStore.write(new Buffer(5000), group());
      }
    },
    function doneWithWrite() {
      gridStore.close(function(err, result) {
        console.log("File has been written to GridFS");
      });
    }
  );
});

Before we jump into picking apart the code let’s look at

var gridStore = new GridStore(db, fileId, "w", {root:'fs'});

Notice the parameter "w"; this is important. It tells the driver that you are planning to write a new file. The parameters you can use here are:

  • "r" - read only. This is the default mode
  • "w" - write in truncate mode. Existing data will be overwritten
  • "w+" - write in edit mode

Right, so there is a fair bit to digest here. We are simulating writing a file that’s about 1MB big to Mongo DB using GridFS. To do this we are writing it in chunks of 5000 bytes. To avoid a difficult callback setup we are using the Step library with its group functionality, which ensures that we are notified when all of the writes are done. Once they are, Step invokes the next function (or step), doneWithWrite, where we finish up by closing the file, which flushes any remaining data out to Mongo DB and updates the file document.

As we are doing it in chunks of 5000 bytes we will notice that memory consumption is low. This is the trick to write large files to GridFS. In pieces. Also notice this line.

gridStore.chunkSize = 1024 * 256;

This allows you to adjust how big the chunks are, in bytes, that Mongo DB will write. You can tune the chunk size to your needs. If you need to write large files to GridFS, it might be worthwhile to trade off memory for CPU by setting a larger chunk size.

Now let’s see how the actual streaming read works.

var gridStore = new GridStore(db, fileId, "r");, gridStore) {
  var stream =;

  stream.on("data", function(chunk) {
    console.log("Chunk of file data");
  });

  stream.on("end", function() {
    console.log("EOF of file");
  });

  stream.on("close", function() {
    console.log("Finished reading the file");
  });
});

Right, let’s have a quick look at the streaming functionality supplied with the driver (make sure you are using 0.9.6-12 or higher, as it contains a bug fix for custom chunk sizes that you need).

var stream =;

This opens a stream to our file. You can pass in a boolean parameter to tell the driver to close the file automatically when it reaches the end; this will fire the close event automatically. Otherwise you’ll have to handle cleanup when you receive the end event. Let’s have a look at the events supported.

  stream.on("data", function(chunk) {
    console.log("Chunk of file data");
  });

The data event is called once for each chunk read, so the number of calls is determined by the chunk size of the written file. If your file is 1MB big and the file has a chunkSize of 256K, then you’ll get 4 calls to the data event handler. The chunk passed in is a Buffer object.
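The arithmetic behind that is simple enough to sketch (a toy helper for illustration, not part of the driver):

```javascript
// Number of chunk documents (and hence data events) a file produces:
// ceil(fileSize / chunkSize).
function chunkCount(fileSize, chunkSize) {
  return Math.ceil(fileSize / chunkSize);
}

console.log(chunkCount(1024 * 1024, 256 * 1024)); // 1MB at 256K chunks -> 4
console.log(chunkCount(1000000, 5000));           // the write example above -> 200
```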

  stream.on("end", function() {
    console.log("EOF of file");
  });

The end event is called when the driver reaches the end of data for the file.

  stream.on("close", function() {
    console.log("Finished reading the file");
  });

The close event is only called if you set the autoclose parameter on the method as shown above. If it’s false or not set, handle cleanup of the streaming in the end event handler.

Right, that’s it for writing to GridFS in an efficient manner. I’ll outline some other useful functions on the GridStore object.

Other useful methods on the GridStore object

There are some other methods that are useful

gridStore.writeFile(filename/filedescriptor, function(err, fileInfo) {});

writeFile takes either a file name or a file descriptor and writes it to GridFS. It does this in chunks to ensure the event loop is not tied up., function(err, data) {});

read/readBuffer lets you read a #length number of bytes from the current position in the file., seekLocation, function(err, gridStore) {});

seek lets you navigate the file to read from different positions inside the chunks. The seekLocation parameter allows you to specify how to seek. It can be one of three values:

  • GridStore.IO_SEEK_SET Seek mode where the given length is absolute
  • GridStore.IO_SEEK_CUR Seek mode where the given length is an offset to the current read/write head
  • GridStore.IO_SEEK_END Seek mode where the given length is an offset to the end of the file
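To make the three modes concrete, here is a small sketch of the position arithmetic they imply (plain JavaScript with stand-in constants; the real ones live on GridStore):

```javascript
// Stand-in constants for illustration; the driver exposes the real ones.
var IO_SEEK_SET = 0, IO_SEEK_CUR = 1, IO_SEEK_END = 2;

// Where the read/write head ends up for each seek mode.
function newPosition(mode, offset, current, fileLength) {
  if (mode === IO_SEEK_SET) return offset;            // absolute
  if (mode === IO_SEEK_CUR) return current + offset;  // relative to current head
  return fileLength + offset;                         // relative to end of file
}

console.log(newPosition(IO_SEEK_SET, 100, 50, 1000));  // 100
console.log(newPosition(IO_SEEK_CUR, 100, 50, 1000));  // 150
console.log(newPosition(IO_SEEK_END, -100, 50, 1000)); // 900
```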

GridStore.list(dbInstance, collectionName, {id:true}, function(err, files) {})

list lists all the files in the collection in GridFS. If you have a lot of files the current version will not work very well, as it pulls all the files into memory first. You can have it return either the filenames or the ids for the files using the id option.

gridStore.unlink(function(err, result) {});

unlink deletes the file from Mongo DB, that’s to say all the file info and all the chunks.

This should be plenty to get you on your way building your first GridFS based application. As in the previous article the following links might be useful for you. Good luck and have fun.

Links and stuff

This is the 18th in a series of posts leading up to Node.js Knockout, and covers using blitz to load test your node app.

What’s blitz?

Blitz, powered by Mu Dynamics, is a self-service load and performance testing platform. Built for API, cloud, web and mobile application developers, it quickly and inexpensively helps you ensure performance and scalability. And we make this super fun.

Why Load Test?

Node.js is purdy fast, but if you are not careful in the way you invoke backend services like CouchDB or MongoDB, you can easily cause a pipeline stall, making your app unable to scale to a large number of users. Typically you will end up with each concurrent request taking longer and longer, resulting in timeouts and fail whales. Load testing shows you what kind of concurrency you can achieve with your app and how it’s actually scaling out.

Signing up

Go to our login page and use your Facebook or Google account to log in with just 2 clicks. As simple as that. You will immediately be able to run load tests against your app from the blitz bar.

Running a Load Test (rush)

If your app is at, the following blitz line will generate concurrent hits against your app:

--pattern 1-250:60 --region virginia

As simple as that. If your express and connect routes have parameters in them that you use for looking up in your favorite database, you can read up on variables to parameterize query arguments and route paths so you can simulate production workloads on your app.
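As a rough mental model of what a pattern like 1-250:60 means — concurrency ramping linearly from 1 to 250 users over 60 seconds — you can sketch the interpolation (this is my reading of the pattern string; check the blitz docs for the exact semantics):

```javascript
// Approximate concurrent users at second t for a pattern "start-end:duration".
function concurrencyAt(start, end, duration, t) {
  var clamped = Math.min(Math.max(t, 0), duration);
  return Math.round(start + (end - start) * clamped / duration);
}

console.log(concurrencyAt(1, 250, 60, 0));  // 1 user at the start
console.log(concurrencyAt(1, 250, 60, 60)); // 250 users at the end
```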

During the Node.js Knockout

We are super excited about sponsoring Node.js Knockout and have something fun planned.

At the start of the event, we are providing all contestants with enough blitz-power so you can generate lots of hits against your cool node.js app for 48 hours. We are also working on a scoreboard so you get bragging rights on the app with the most number of hits. Watch this page at the start of the event and you’ll know what to do.

Check it out!

Command-Line Testing

For those developers that don’t like UIs and prefer the command line, here’s the simplest way to run iterative load tests right after you git push your changes to the app:

$ gem install blitz
$ blitz api:init
$ blitz curl --pattern 1-250:60 --region virginia

To build cool node.js apps is awesome, to watch it scale out? priceless!

This is the 17th in a series of posts leading up to Node.js Knockout, and covers using natural in your node app. This post was written by natural author and Node.js Knockout judge Chris Umbel.

"natural" is a general-purpose natural language processing library for node.js developed principally by Chris Umbel. Various algorithms in the way of stemming, classification, inflection, and phonetics are currently supported as well as basic WordNet usage.

At the time of writing, “natural” is still young, and support for new algorithms in the aforementioned categories (or even other categories) is still being feverishly developed. If you have anything to contribute, consult the github repository.

This post will walk you through the installation of “natural”, consumption of the various components, and outline the future plans.


Installation

"natural" is available as an npm package and can be installed as such:

$ npm install natural



Stemming

Stemming is the process of taking a word and stripping off affixes down to the base stem of the word. “natural” currently provides two algorithms for stemming: the Porter Stemmer and the Lancaster Stemmer.

Porter Stemmer

The Porter Stemmer was developed in 1979 by Martin Porter and was originally implemented in BCPL.

This example stems the string “words” to its root “word”.

var stemmer = require('natural').PorterStemmer;
console.log(stemmer.stem("words"));

This example illustrates a common pattern used throughout “natural”. The attach() method patches String to have stem() and tokenizeAndStem() helper methods.

The tokenizeAndStem() method splits the string up on whitespace and punctuation, removes noise words, and then stems each remaining token into an array.

var stemmer = require('natural').PorterStemmer;
stemmer.attach();
console.log("i am waking up to the sounds of chainsaws".tokenizeAndStem());
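Conceptually, tokenizeAndStem() is just tokenize, drop stopwords, stem. A from-scratch toy of that pipeline (the stopword list and the “stemmer” here are crude stand-ins, not natural’s internals):

```javascript
// Crude stand-ins for illustration only.
var stopwords = ['i', 'am', 'to', 'the', 'of'];
function naiveStem(token) { return token.replace(/(ing|s)$/, ''); }

function toyTokenizeAndStem(text) {
  return text.toLowerCase()
    .split(/[\s,.!?]+/)                                            // tokenize
    .filter(function(t) { return t && stopwords.indexOf(t) < 0; }) // drop noise words
    .map(naiveStem);                                               // stem
}

console.log(toyTokenizeAndStem('i am waking up to the sounds of chainsaws'));
```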

Lancaster Stemmer

The Lancaster Stemmer (AKA Paice/Husk) algorithm was developed by Chris Paice at Lancaster University with some help from Gareth Husk. The Lancaster algorithm is somewhat aggressive in its removal of suffixes, resulting in stems that aren’t correct spellings of their respective words. If used for comparison in systems such as full-text searches, that’s typically acceptable.

var stemmer = require('natural').LancasterStemmer;
stemmer.attach();
console.log("i am waking up to the sounds of chainsaws".tokenizeAndStem());


Classification

Classification is the process of categorizing texts into predetermined classes automatically. Before the classification can occur, it’s necessary to train the classifier on sample texts.

The only algorithm currently supported for classification in “natural” is Naive Bayes.

Notice that the training text can be either arrays of tokens or strings. Strings will be stemmed and have noise words removed, so if you want your training data to be unmodified, supply token arrays directly. This example will output “computing” on the first line and “literature” on the second.

var natural = require('natural'),
    classifier = new natural.BayesClassifier();

classifier.train([{classification: 'computing', text: ['fix', 'box']},
    {classification: 'computing', text: 'write some code.'},
    {classification: 'literature', text: ['write', 'script']},
    {classification: 'literature', text: 'read my book'}
]);

console.log(classifier.classify('there is a bug in my code.'));
console.log(classifier.classify('write a book.'));


Inflectors

"natural" provides inflectors for transforming words. Currently a noun inflector is provided to pluralize and singularize nouns, a count inflector is provided to transform integers to their string ordinals, i.e. "1st", "2nd", "3rd", and an experimental present tense verb inflector is provided for pluralizing/singularizing relevant verbs.

Noun Inflector

The following example uses the NounInflector to transform the word “beer” to “beers”.

var natural = require('natural'),
    nounInflector = new natural.NounInflector();

console.log(nounInflector.pluralize('beer'));
Much like the stemmers, an attach() method exists to patch String to perform the inflections with pluralizeNoun() and singularizeNoun() methods.

nounInflector.attach();
console.log('beer'.pluralizeNoun());
Count Inflector

In this example the CountInflector converts the integers 1, 3 and 111 to “1st”, “3rd” and “111th” respectively.

var natural = require('natural'),
    countInflector = natural.CountInflector;

console.log(countInflector.nth(1));
console.log(countInflector.nth(3));
console.log(countInflector.nth(111));

Present Tense Verb Inflector

At the time of writing the PresentVerbInflector is still experimental and likely does not correctly handle all cases. It is, however, designed to transform present tense verbs between their singular and plural forms.

var verbInflector = new natural.PresentVerbInflector();

And, of course, the attach() method is provided to patch String.



Phonetics

"natural" employs two phonetic algorithms to determine if words sound alike: SoundEx and Metaphone.


SoundEx

SoundEx is an old algorithm that was originally designed for use in physical filing systems and was patented in 1918. Despite its age, it’s been widely adopted in modern computing to determine if words sound alike.

Here’s an example of using "natural"’s implementation.

var soundEx = require('natural').SoundEx;

if('ruby', 'rubie'))
    console.log('they sound alike');

The raw SoundEx phonetic code can be obtained with the process() method. The following example outputs a cryptic “R100”.

console.log(soundEx.process('ruby'));
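For the curious, the classic American Soundex that produces codes like “R100” fits in a few lines. This is a from-scratch sketch of the algorithm (glossing over some edge cases), not natural’s implementation:

```javascript
// Classic American Soundex: keep the first letter, encode the rest
// as digit classes, drop repeats and vowels, pad to 4 characters.
function soundex(word) {
  var codes = { b:1, f:1, p:1, v:1,
                c:2, g:2, j:2, k:2, q:2, s:2, x:2, z:2,
                d:3, t:3, l:4, m:5, n:5, r:6 };
  var s = word.toLowerCase().replace(/[^a-z]/g, '');
  var out = s.charAt(0).toUpperCase();
  var prev = codes[s.charAt(0)];
  for (var i = 1; i < s.length && out.length < 4; i++) {
    var c = codes[s.charAt(i)];
    if (c && c !== prev) out += c;
    if (s.charAt(i) !== 'h' && s.charAt(i) !== 'w') prev = c;
  }
  while (out.length < 4) out += '0';
  return out;
}

console.log(soundex('ruby'), soundex('rubie')); // R100 R100
```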
Of course an attach() method is provided to patch String with helpers. Note that the tokenizeAndPhoneticize() method splits a string up into words and returns an array of phonetic codes.

soundEx.attach();
console.log('phonetics rock'.tokenizeAndPhoneticize());

if('ruby'.soundsLike('rubie'))
    console.log('they sound alike');


Metaphone

"natural" also implements the Metaphone phonetic algorithm, which is considerably newer (developed in 1990 by Lawrence Philips) and more robust than SoundEx. Its implementation in "natural" mirrors SoundEx.

var metaphone = require('natural').Metaphone;

if('ruby', 'rubie'))
    console.log('they sound alike');


metaphone.attach();
console.log('phonetics rock'.tokenizeAndPhoneticize());

if('ruby'.soundsLike('rubie'))
    console.log('they sound alike');


WordNet

A new and somewhat experimental feature of “natural” is WordNet database integration. WordNet organizes English words into synsets (groups of synonyms), and contains example sentences and definitions.


Consider the following example which looks up all entries for the word “node” in WordNet.

Note the path parameter passed in to the WordNet constructor. That’s the path where the WordNet database files are to be stored. If the files do not exist “natural” will download them for you.

var natural = require('natural'),
    wordnet = new natural.WordNet('.');

wordnet.lookup('node', function(results) {
    results.forEach(function(result) {
        console.log(result);
    });
});


In this example a list of synonyms is retrieved for the first result of a lookup via the getSynonyms() method.

var natural = require('natural'),
    wordnet = new natural.WordNet('.');

wordnet.lookup('entity', function(results) {
    wordnet.getSynonyms(results[0], function(results) {
        results.forEach(function(result) {
            console.log(result);
        });
    });
});

Future Plans

While “natural” has a reasonable amount of functionality at this point, it has quite a way to go to reach the level of projects like Python’s Natural Language Toolkit.

To close that gap, short-term plans are in the works to implement part-of-speech (POS) tagging, the double-metaphone phonetic algorithm, and a maximum entropy classifier.

In the longer term extending “natural” beyond English is a hope, but will require additional expertise.

If you have the interest to help out please do so!

This is the 16th in a series of posts leading up to Node.js Knockout, and covers using TradeKing in your node app.

At TradeKing, we’ve all been infatuated with Node. From its inception we’ve been touting its swift performance, reasonable learning curve, and its particular ability to add a completely new dimension to web applications.

While developing the API we were always thinking about the angles developers might use to create riveting new experiences for traders, and many of those angles have a very common intersection: real-time. Whether it’s streaming market data or interactive real-time charting, the financial industry moves incredibly quickly and requires web technologies to match its pace. Node combines perfectly with web sockets, allowing us to meet those needs in a very agile way. The latest of these was a quick mashup demo for an internal board meeting.

Here is a quick tutorial of how we got Node and Sockets working with our API in a demo watchlist application. The idea: a streaming watchlist tool that integrates with Twitter. What’s a watchlist? Think of it as an interactive list of stocks you might hold or be interested in holding.

TradeKing Screenshot


First things first, grab the project repository from Once you clone that locally, hop in the new repository and run npm install to grab all the project’s dependencies.


Crack open the server.js file and fill in the configuration here:

// Configuration!
global.tradeking = {
  api_url: "",
  consumer_key: "",
  consumer_secret: "",
  access_token: "",
  access_secret: ""
};

global.twitter_user = {
  consumer_key : '',
  consumer_secret : '',
  access_token_key : '',
  access_token_secret : ''
};

You can get all of your TradeKing keys at by creating a developer application. Create a Twitter application ( to get those keys as well.


The TradeKing API uses OAuth authentication, so it was a snap to start talking to the API, and there was no shortage of Twitter modules to snag their stream. Since we’ve supplied all of our keys, we don’t need the full flow, so we’ll just set up the consumer and bring our own access tokens to the table (see the next step).

global.tradeking_consumer = new oauth.OAuth(

global.twitter_consumer = new oauth.OAuth(

Making Requests to TradeKing

Now that the consumer is set up, making requests is a breeze!

  function(error, data, response) {
    quotes = JSON.parse(data);
    if(quotes.response.type != "Error") {
      client.emit('watchlist-quotes', quotes.response.quotes.instrumentquote);

This bit of code makes a GET request to a specified URL using our access token/secret. Once completed, the callback is executed. In this particular instance we are parsing the returned JSON data, checking for errors, and then sending a socket event to the client.

Want to know more?

Since we’ve open sourced the whole application and slapped it up on Github, pull it down, throw your keys in and check out how it all works — maybe even make some upgrades and submit a pull request! Head over to our forums to see what the rest of the devs are up to or to drop us a note about your progress with the API.

Online trading has inherent risk due to system response and access times that may vary due to market conditions, system performance, and other factors. An investor should understand these and additional risks before trading.*

© 2011 TradeKing. All rights reserved. Member FINRA and SIPC

This is the 15th in a series of posts leading up to Node.js Knockout, and covers using PubNub in your node app.

PubNub lets you connect mobile phones, tablets, web browsers and more with a 2 Function Publish/Subscribe API (send/receive).

HTML Interface

If you are building HTML5 Web Apps, start by copying and pasting the code snippet below. If not, skip to Other Languages.

<div pub-key="demo" sub-key="demo" id="pubnub"></div>
<script src=""></script>

<script>
    // Listen For Events
    PUBNUB.subscribe({
        channel  : "hello_world",      // Channel
        error    : function() {        // Lost Connection (auto reconnects)
            alert("Connection Lost. Will auto-reconnect when Online.")
        },
        callback : function(message) { // Received An Event.
            console.log(message)
        },
        connect  : function() {        // Connection Established.
            // Send Message
            PUBNUB.publish({
                channel : "hello_world",
                message : { anything : "Hi from PubNub." }
            })
        }
    })
</script>



Other Languages

Follow the instructions linked below to use PubNub APIs from other programming languages: Node, Ruby, PHP, Python, Perl, Erlang and more programming languages on GitHub.

This is the 14th in a series of posts leading up to Node.js Knockout, and covers deploying your Node.js app to a Linode VPS.

A Linode VPS means freedom. You get everything from the Linux kernel and root access on up. All managed by a simple yet very powerful control panel.

This post will get you going with a Node.js/Socket.IO app on Linode.

Do I need to sign up with Linode?

Short answer: no. Linode will be providing VPSs during the competition and judging period for teams as a deploy option. More details before the competition.

Pick your Linux distro

Linode offers a choice of Linux distributions. This blog post will be using Ubuntu 11.04. The same instructions definitely apply to Ubuntu 10.04 (32-bit or 64-bit) and are easily adaptable to Debian.

32-bit or 64-bit?

If you’re going to be installing something like mongodb, 64-bit is highly recommended. If you’re going to be using redis heavily, maybe you want 32-bit. The choice is up to you and it’s possible to wipe the VPS later and pick a different option, but that could take time you don’t have.

TL;DR StackScripts

Setting up your own server from scratch is not for the faint of heart. If you know what you’re doing, then this guide should be full of good directions to take: read on. If you don’t want to muck around with apt-get, upstart, sudoers, and more, use the StackScript:

Deploy using StackScripts
Search for knockout

After your linode is booted up from that, skip to the deploy script section.

Boot and SSH in

Boot your Linode from the Linode dashboard. When creating your Linode, you picked a root password. SSH in as root to complete the next steps. Your linode’s IP address can be found on the Remote Access tab from the control panel.

All of the commands below prefixed with # should be run as root. Any prefixed with $ are run as the deploy user (set up later).

Install git and other tools

We’ll definitely need git and most likely a C compiler (for compiling node modules with C-bindings):

# apt-get install -y build-essential curl
# apt-get install -y git || apt-get install -y git-core

Install node.js

The easiest way to install node.js is via apt:

# apt-get install -y python-software-properties
# add-apt-repository ppa:chris-lea/node.js
# apt-get update
# apt-get install -y nodejs nodejs-dev

If you’d really rather compile from source:

# apt-get install -y build-essential python libssl-dev
# curl -O
# tar xzf node-v0.4.11.tar.gz
# cd node-v0.4.11
# ./configure
# make install

Install npm

# curl | clean=no sh

By default, this will install npm in /usr/bin when using the apt-get method above. When you install modules with npm later, they’ll get installed to your local working directory. If you use npm to install modules globally, you’ll need to be root or use sudo: sudo npm install -g coffee-script.


Now that we have node and npm installed on our linode, we want to get our app out there and running. The rest of this guide uses a version of the Knocking out Socket.IO example app to deploy with.

Setting up a deploy user

No one wants their own code running as root, right? Create a deploy user to own where your app code lives and switch to it:

# useradd -U -m -s /bin/bash deploy
# su - deploy

Set NODE_ENV to production

Setting NODE_ENV will tell frameworks such as Express to turn on their caching features. It’s also important for telling our knockout check-in module to notify us of a deploy from your server.

$ echo 'export NODE_ENV="production"' >> ~/.profile

Add to known_hosts

$ ssh
The authenticity of host ' (' can't be established.
RSA key fingerprint is 16:27:ac:a5:76:28:2d:36:63:1b:56:4d:eb:df:a6:48.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ',' (RSA) to the list of known hosts.
Permission denied (publickey).

You can safely ignore the “Permission denied (publickey)” part for now.

SSH keys

Drop your SSH public keys into /home/deploy/.ssh/authorized_keys to make deploying and SSHing in much easier later. While you’re at it, you should add the Knockout organizers’ public ssh key for auditing at the end of the competition. SSH access for organizers is a required step in deploys to Linode.

$ curl >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys

Upstart script

We’re going to use upstart to make sure our node app is running on server start along with restarting it if it should die. As root:

# cat <<'EOF' > /etc/init/node.conf
description "node server"

start on filesystem or runlevel [2345]
stop on runlevel [!2345]

respawn
respawn limit 10 5
umask 022

script
  . $HOME/.profile
  exec /usr/bin/node $HOME/app/current/app.js >> $HOME/app/shared/logs/node.log 2>&1
end script

post-start script
  PID=`status node | awk '/post-start/ { print $4 }'`
  echo $PID > $HOME/app/shared/pids/
end script

post-stop script
  rm -f $HOME/app/shared/pids/
end script
EOF

To use upstart as the deploy user, we’ll have to give it sudo permission for stopping and starting the node process:

# cat <<EOF > /etc/sudoers.d/node
deploy     ALL=NOPASSWD: /sbin/restart node
deploy     ALL=NOPASSWD: /sbin/stop node
deploy     ALL=NOPASSWD: /sbin/start node
EOF
# chmod 0440 /etc/sudoers.d/node

Deploy script

Ok! The server’s ready. Now onto our local development machine setup.

We’re going to use (a fork of) TJ's deploy shell script to make deploying our code repeatable and easy for everyone on the team. On your local machine, in your project’s root directory:

$ curl -O
$ chmod +x ./deploy
$ cat <<EOF > deploy.conf
[linode]
user deploy
host 96.126.102.14
ref origin/master
path /home/deploy/app
post-deploy npm install && [ -e ../shared/pids/ ] && sudo restart node || sudo start node
test sleep 1 && curl localhost >/dev/null
EOF

Make sure to change the IP address and GitHub repo to ones for your team.

Now run ssh-add && ./deploy linode setup to get things set up:

$ ssh-add && ./deploy linode setup
  ○ running setup
  ○ cloning
Cloning into /home/deploy/app/source...
  ○ setup complete

And finally ./deploy linode to deploy:

$ ./deploy linode
  ○ deploying
  ○ hook pre-deploy
  ○ fetching updates
Fetching origin
  ○ resetting HEAD to origin/master
HEAD is now at bfadb51 bind to port 80 and downgrade
  ○ executing post-deploy `npm install && [ -e ../shared/pids/ ] && sudo restart node || sudo start node`

node start/running, process 13623
  ○ executing test `sleep 1 && curl localhost >/dev/null`
  ○ successfully deployed origin/master

You should commit both ./deploy and ./deploy.conf to your git repo. That way, anyone on your team can just run ./deploy linode later to push a new deploy out. Make sure to add everyone’s SSH keys to the deploy user too.

An aside on SSH agent forwarding

We’re taking advantage of SSH agent forwarding. In practice, this means you may need to run ssh-add at inopportune times to get around errors like Permission denied (publickey). Read through the Wikipedia article on agent forwarding if you’re heavily concerned about security (or like to geek out about public key cryptography).

Binding to port 80

Take note of the listen call in our app.js:

app.listen(process.env.NODE_ENV === 'production' ? 80 : 8000, function() {

  // if run as root, downgrade to the owner of this file
  if (process.getuid() === 0) {
    require('fs').stat(__filename, function(err, stats) {
      if (err) return console.log(err);
      process.setuid(stats.uid);
    });
  }
});
It specifically binds to port 80 when run in production mode and otherwise to port 8000. Because we’re running node under upstart (and therefore as root initially), node has the chance to bind to the privileged port 80. Once it’s bound though, it downgrades its uid to the owner of the app.js file, namely our deploy user. This is much more secure than running your app as the root user.
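The two decisions here - which port to pick, and whether that port needs root - can be pulled out into tiny helpers for clarity. This is an illustrative sketch; choosePort and isPrivileged are made-up names, not part of the app:

```javascript
// Mirror of the ternary passed to app.listen above.
// choosePort and isPrivileged are illustrative helpers, not app code.
function choosePort(env) {
  return env === 'production' ? 80 : 8000;
}

// Ports below 1024 are privileged: only root may bind them.
function isPrivileged(port) {
  return port < 1024;
}

console.log(choosePort('production'), isPrivileged(choosePort('production')));
```

Run as root under upstart, the app can take the privileged port 80 and then drop privileges; in development it never needs root at all.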

Try it out

You should now be able to hit your Linode directly and see your app running!


If you run into any problems, we’re here to help. Email us or try us on Twitter.

This is the 13th in a series of posts leading up to Node.js Knockout, and covers using Tropo in your node app.

Tropo is a multi-channel communication platform that lets you build Phone, SMS and IM apps - all using the same Node.js codebase.

On the phone side, Tropo integrates with SIP (the industry standard for VoIP telephony) and Skype. On the SMS side, Tropo supports sending inbound and outbound text messages from both U.S. and Canadian numbers. (It’s also possible to send to a host of international destinations from U.S. numbers.)

Tropo is 100% free for development use - no upfront commitments and no strings attached. Signing up for an account is free, and you can deploy phone and SMS numbers for free in development (we have tons in both the US and Canada). We won’t ask you for payment information until you’re ready to deploy your application to production.

For Node.js developers, getting started using Tropo to build powerful communication apps is as simple as installing the Tropo Node.js module.

npm install tropo-webapi

Your Node.js application will interact with the Tropo platform by consuming and generating JSON that is delivered over HTTP. It’s simple to use a Node-based web server for this:

var TropoWebAPI = require('tropo-webapi').TropoWebAPI;
var TropoJSON = require('tropo-webapi').TropoJSON;
var http = require('http');

var server = http.createServer(function (request, response) {

  var tropo = new TropoWebAPI();
  tropo.say("Hello, World!");
  response.writeHead(200, {'Content-Type': 'application/json'});
  // TropoJSON serializes the session object into the JSON Tropo expects.
  response.end(TropoJSON(tropo));
});

server.listen(8000);


This simple web server listening on port 8000 will respond to incoming HTTP requests (Tropo uses the POST method to connect to your app) with the following JSON:

{"tropo":[{ "say":{"value":"Hello, World!" }}]}

When a user makes a phone call to this app, Tropo will speak the phrase “Hello, World!” via Text-to-Speech (TTS) with the standard TTS engine. One of the really nice features is that we support TTS in multiple languages - 24 in all - so if your app has an international audience, Tropo is a logical fit.
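To make the wire format concrete, here is the same payload assembled by hand, without the tropo-webapi module; this is only a sketch of what the module serializes for a single say() call:

```javascript
// Hand-built equivalent of the JSON the module generates for one say().
var payload = { tropo: [ { say: { value: "Hello, World!" } } ] };
var json = JSON.stringify(payload);
console.log(json);
```

The module exists so you never have to build these nested objects yourself, but seeing the raw shape helps when debugging what your app actually sends back to Tropo.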

Now let’s look at a slightly more advanced example:

var TropoWebAPI = require('tropo-webapi').TropoWebAPI;
var TropoJSON = require('tropo-webapi').TropoJSON;
var Say = require('tropo-webapi').Say;
var Choices = require('tropo-webapi').Choices;
var express = require('express');

var port = process.ARGV[2] || 8000;
var app = express.createServer();

// Required to process the body of HTTP requests from Tropo.
app.use(express.bodyParser());

// Base route, plays welcome message.
app.post('/', function(req, res){

  var tropo = new TropoWebAPI();

  tropo.say("Welcome to the Tropo Web API node demo.");
  tropo.on("continue", null, "/start", true);

  res.send(TropoJSON(tropo));
});

// Route to start asking caller for selection.
app.post('/start', function(req, res){

  var tropo = new TropoWebAPI();

  // Set up options for question to ask caller.
  var choices = new Choices("Node JS, PHP, Ruby, Python, Scala");
  var attempts = 3;
  var bargein = false;
  var minConfidence = null; // Use the platform default.
  var name = "test";
  var recognizer = "en-us";
  var required = true;
  var say = new Say("What is your favorite programming language?");
  var timeout = 5;
  var voice = "Allison";
  tropo.ask(choices, attempts, bargein, minConfidence, name, recognizer, required, say, timeout, voice);

  tropo.on("continue", null, "/answer", true);
  tropo.on("error", null, "/error", true);

  res.send(TropoJSON(tropo));
});

// Route to handle valid answers.
app.post('/answer', function(req, res){

  var tropo = new TropoWebAPI();
  var selection = req.body['result']['actions']['value'];
  tropo.say('You chose, ' + selection + '. Thanks for playing.');

  res.send(TropoJSON(tropo));
});

// Route to handle errors or invalid responses.
app.post('/error', function(req, res){

  var tropo = new TropoWebAPI();
  tropo.say("Whoops, something bad happened. Please try again later.");

  res.send(TropoJSON(tropo));
});

app.listen(port);
console.log('Tropo demo running on port: ' + port);

Since interaction with the Tropo platform occurs via HTTP, Tropo apps are a great fit for the Express Framework. When you create your Tropo application, simply set the URL to this app - wherever it happens to be running - as the application start URL.

This sample application has 4 basic steps:

  • A welcome message.
  • An input collection segment, where the caller is asked to name their favorite programming language.
  • An input inspection segment, where the value of the caller’s input is simply read back to them.
  • An error handler to tell the user if an error occurs (always a good idea in phone applications).

At the end of each segment, JSON is rendered in the HTTP response and sent to Tropo. This rendered JSON is used to interact with the user on whatever channel they have chosen to connect to your application with.

You may notice the following code in the input collection segment:

var choices = new Choices("Node JS, PHP, Ruby, Python, Scala");

This is the list of choices that the user may select from - if the user calls your application, they will make their selection using their voice. One of the unique features of Tropo is the ability to support speech recognition. This functionality is available to all applications that need it at no additional cost.

When the user makes their selection, it is sent via HTTP POST to the input inspection segment, and read back to the caller. If the user happens to connect to your application via SMS or IM, the result will be delivered on those channels. That’s it! No additional code needed - Tropo apps are born to work on multiple channels.
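You can exercise that lookup with a hand-built payload. The object below is only an illustrative example of the result shape the /answer route reads, not captured Tropo traffic:

```javascript
// Illustrative result payload, mirroring req.body in the /answer route.
// The shape is a hand-built example for demonstration.
var body = { result: { actions: { value: "Node JS" } } };

// Same lookup as the route: req.body['result']['actions']['value']
var selection = body['result']['actions']['value'];
console.log('You chose, ' + selection + '. Thanks for playing.');
```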

Tropo’s unique features make it simple to build powerful, multi-channel applications that are fully interoperable with the latest telephony and communication standards.

Together, Tropo and Node.js are a knockout.

This is the 12th in a series of posts leading up to Node.js Knockout, and covers using SpacialDB in your node app.

What is SpacialDB?

SpacialDB is a Geospatial database service that allows you to create, operate, and scale dedicated Geospatial databases in the cloud. Your SpacialDB databases can be used transparently in place of any cloud database, such as Amazon RDS, Rackspace Storage, or Heroku PostgreSQL.

Building sophisticated location-aware applications is hard! SpacialDB makes it easy by:

  • Instantly provisioning Geospatial databases
  • Providing prebuilt functions for spatial queries and analysis
  • Offering a wealth of knowledge and tutorials at the SpacialDB Devcenter
  • Making mobile SDK integration easy
  • Building on the open source PostGIS database, with a vibrant community and support
  • Connecting easily from Node.js: check out this article

With SpacialDB you just sign up and instantly load location data and access spatial functions from your existing app.
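Since SpacialDB is PostGIS underneath, “spatial functions” means SQL functions such as ST_MakeEnvelope. Here is a hedged sketch that only builds such a query string; the cities table and its name and geom columns are made up for illustration:

```javascript
// Build a PostGIS bounding-box query for a SpacialDB table.
// Table and column names are illustrative; 4326 is the SRID for
// plain longitude/latitude coordinates (WGS 84).
function bboxQuery(table, minLon, minLat, maxLon, maxLat) {
  return 'SELECT name FROM ' + table +
    ' WHERE geom && ST_MakeEnvelope(' +
    [minLon, minLat, maxLon, maxLat].join(', ') + ', 4326)';
}

// Roughly a bounding box around Germany, to match the demo data below.
console.log(bboxQuery('cities', 5.8, 47.2, 15.1, 55.1));
```

You would run a query like this over an ordinary PostgreSQL connection to your SpacialDB instance, using the connection parameters from your dashboard.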

Learn to quickly import geospatial data and view a map

The rest of this article takes us from sign-up to a working instance of SpacialDB. By the end we will know how to create a SpacialDB instance, import data into it, and see it on a map. Something like:

German Cities

Sign up via the website or the command line. For the command line, use the SpacialDB Ruby Gem. If you are not familiar with Ruby or Gems (Ruby’s package manager), you just have to make sure you have Ruby installed. Most flavours of *nix come with Ruby installed. If you are on Windows you can get a single-click installer from:

To install the command-line client, just run gem install spacialdb in your terminal or Windows console.

For a complete reference of the Command-Line Usage check out this page.

Sign up

Via the website sign up at: or use the special sign-up for NKO participants shown on the services page (login required).

Create a username and password. Usernames can only contain alphanumeric characters.

CLI command:

  $ spacialdb signup

  Sign up to Spacialdb.
  Username: shoaib
  Password confirmation:
  Signed up successfully.


If you just signed up, you are also automatically logged in. Otherwise, to log in you need your email or username and password.

Via the website login at:

CLI command:

  $ spacialdb login

  Enter your Spacialdb credentials.
  Email or Username: shoaib
  Logged in successfully.

Creating a Geospatial database

Awesome! Signed up, logged-in; we are ready for our first geospatial database. If you have used PostGIS before you will have all the PostGIS goodness you are used to, with the added bonus of accessibility from anywhere and anytime. Not to mention some of the real-time APIs for mobile development that will be available soon.

Via the website, after login you will be redirected to your dashboard. You will see the New Database button there. Go ahead and click it… and bam! You have a fresh install of a personal Geospatial database. You will see the connection parameters here.

CLI command:

  $ spacialdb create


Connecting via QGIS

QGIS is big, but get it. Why? It’s one of the most feature-rich open source desktop GIS packages out there. And it will come in handy as you work with more and more geospatial data. We recommend the last stable release:

Install it. After installing QGIS, it’s time to connect and import some initial data.

Download data

We want Shapefiles, the most prevalent geospatial data format currently available on the web… SpacialDB envisions changing that (but more on that later). For now let’s get the following datasets:

Download the PostGIS plugin

QGIS has a plugin manager. From there we get our hands on the SPIT plugin, a great little utility for importing Shapefiles straight into PostGIS.

Upload some data

Let’s fire up the plugin, connect to your new database, and import the Shapefiles we just downloaded. It should slurp them right in.

View the data

Click Add a PostGIS layer button and you can connect to your new database.

Further Reading:

This is the 10th in a series of posts leading up to Node.js Knockout, and covers deploying your Node.js app to the Heroku platform.

Heroku is a platform that lets you deploy your Node.js app instantly, without needing to deal with servers or systems administration. The recently-released Celadon Cedar stack supports Node.js (alongside other languages such as Ruby and Clojure). You can also use backing services such as SQL or NoSQL databases, memcached, and many others available as add-ons. Manage everything from the Heroku command-line tool, and deploy your code using Git.

This post will get you going with a Node.js/Express app on Heroku Cedar.

Sign Up for a Heroku Account

If you don’t already have a Heroku account, visit the signup page to create an account. It’s free and just takes a minute. Once you sign up, you’ll receive an invitation email that will allow you to set your password.

Even if you don’t sign up for a Heroku account, you’ll get an invitation before the competition starts to join an app created by the Knockout organizers. If you’re using Heroku, this is the app you should deploy to for the competition.

Install the Heroku Command-line Client

If you have Rubygems on your system, you can install the Heroku client with:

$ gem install heroku

Otherwise, download this tarball, extract it, and put the resulting directory into your $PATH:

$ wget
Saving to: `heroku-client.tgz'
100%[==================================================================>] 412,661      535K/s   in 0.8s

$ tar xzf heroku-client.tgz && echo "Add $PWD/heroku-client to your \$PATH."
Add /Users/adam/heroku-client to your $PATH.

You will need Ruby in your path. It’s available by default on Mac OS X, can be installed on Ubuntu with apt-get install ruby-dev, or on Windows with RubyInstaller.

Run heroku login and enter your email address and password for your Heroku account. Answer yes when it prompts you whether to upload your ssh public key. Now you’re all set to use Heroku from the command line.

Write Your App

You may be starting from an existing app. If not, here’s a simple “hello, world” source file you can use:


var express = require('express');

var app = express.createServer(express.logger());

app.get('/', function(request, response) {
  response.send('Hello World!');
});

var port = process.env.PORT || 3000;
app.listen(port, function() {
  console.log("Listening on " + port);
});

Declare Dependencies With NPM

Cedar recognizes an app as Node.js by the existence of a package.json. Here’s an example package.json for the Express app we created above:


{
  "name": "node-example",
  "version": "0.0.1",
  "dependencies": {
    "express": "2.2.0"
  }
}

Run npm install to install your dependencies locally.

You’ll also want to prevent NPM-installed packages from going into revision control with a .gitignore file containing:

node_modules

Make sure that all of your app’s dependencies are declared in package.json and that you are not relying on any system-level packages.

Declare Process Types With Foreman/Procfile

To run your web process, you need to declare what command to use. In this case, we simply need to execute web.js with the Node runtime. We’ll use Procfile to declare how our web process type is run.

Here’s a Procfile for the sample app we’ve been working on:

web: node web.js

Optional, but highly recommended, is to test that the Procfile works correctly using the Foreman tool, available as a Ruby gem:

$ gem install foreman
$ foreman start
14:39:04 web.1     | started with pid 24384
14:39:04 web.1     | Listening on 5000

Your app will come up on port 5000. Test that it’s working with curl or a web browser, then Ctrl-C to exit.

Deploy to Heroku/Cedar

Store the app in Git:

$ git init
$ git add .
$ git commit -m "init"

Add the Heroku remote that the Node.js Knockout organizers have created for you:

$ git remote add heroku

Alternatively, if you were going to create a repository from scratch, you would create the app on the Cedar stack (note: you should not do this during Node.js Knockout, and should instead use the repository that has been provisioned for your team):

$ heroku create --stack cedar # DON'T DO THIS FOR NODE KNOCKOUT
Creating sharp-rain-871... done, stack is cedar |
Git remote heroku added

Deploy your code:

$ git push heroku master
Counting objects: 9, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (7/7), done.
Writing objects: 100% (9/9), 923 bytes, done.
Total 9 (delta 2), reused 0 (delta 0)

-----> Heroku receiving push
-----> Updating alpha language packs... done
-----> Node.js app detected
-----> Vendoring node 0.4.7
-----> Installing dependencies with npm 1.0.8
       express@2.1.0 ./node_modules/express
       ├── mime@1.2.2
       ├── qs@0.3.1
       └── connect@1.6.2
       Dependencies installed
-----> Discovering process types
       Procfile declares types -> web
-----> Compiled slug size is 3.2MB
-----> Launching... done, v2 deployed to Heroku

 * [new branch]      master -> master

Before looking at the app on the web, we’ll need to scale the web process:

$ heroku ps:scale web=1
Scaling web processes... done, now running 1

Now, let’s check the state of the app’s processes:

$ heroku ps
Process       State               Command
------------  ------------------  --------------------------------------------
web.1         up for 10s          node web.js

The web process is up. Review the logs for more information:

$ heroku logs
2011-03-10T10:22:30-08:00 heroku[web.1]: State changed from created to starting
2011-03-10T10:22:32-08:00 heroku[web.1]: Running process with command: `node web.js`
2011-03-10T10:22:33-08:00 heroku[web.1]: Listening on 18320
2011-03-10T10:22:34-08:00 heroku[web.1]: State changed from starting to up

Looks good. You can now visit the app with heroku open.

Read more about Heroku’s introspection capabilities.

Add Collaborators

To add your Node Knockout team members to the app, use the sharing:add command:

$ heroku sharing:add
added as a collaborator on nko2-my-team.

Note: the app that has been provisioned for your team for Node.js Knockout will already have your team members added as collaborators. Contact [] if you need to change collaborators.

Read more about collaborators.

Setting NODE_ENV

The Express framework uses the NODE_ENV environment variable to determine some behaviors related to caching. If you’re using Express, set a config var with this value:

$ heroku config:add NODE_ENV=production
Adding config vars:
  NODE_ENV => production
Restarting app... done, v3.

Note: this will already have been done in the app that has been provisioned for your team for Node.js Knockout.
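If you want the same switch in your own code, you can branch on NODE_ENV directly. A small sketch; the maxAge numbers are illustrative, not anything Heroku or Express requires:

```javascript
// Read NODE_ENV the way Express does, defaulting to development.
var env = process.env.NODE_ENV || 'development';

// Illustrative knob: serve static assets with a long cache lifetime in
// production only. The numbers are examples, not Express defaults.
var oneYearMs = 31557600000;
var maxAge = env === 'production' ? oneYearMs : 0;

console.log(env + ': maxAge=' + maxAge);
```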

One-off Admin Processes

Cedar allows you to launch a REPL process attached to your local terminal for experimenting in your app’s environment:

$ heroku run node
Running `node` attached to terminal... up, ps.1

This console has nothing loaded other than the Node.js standard library. From here you can require some of your application files.

Read more about one-off admin processes.

Advanced HTTP Features

The HTTP stack available to Cedar apps on the subdomain supports HTTP 1.1, long polling, and chunked responses. Ryan Dahl’s chat example is deployed on Heroku here as a long-polling example.

The WebSockets protocol is still changing rapidly and is not yet supported on the Cedar stack.

Read more about the HTTP stack.

Running a Worker

The Procfile format lets you run any number of different process types. For example, let’s say you wanted a worker process to complement your web process:


web: node web.js
worker: node worker.js

Push this change to Heroku, then launch a worker:

$ heroku ps:scale worker=1
Scaling worker processes... done, now running 1

All apps get 750 dyno-hours free per month. This means you can run a process formation of up to 750 dyno-hours / 48 hours = about 15 dynos for the duration of Node Knockout without incurring any charges, as long as you scale back down to one or zero dynos at the end. If your app only uses one web process, then you don’t need to worry about this at all.
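The free-tier arithmetic above, spelled out:

```javascript
// 750 free dyno-hours spread across the 48-hour competition window.
var freeDynoHours = 750;
var competitionHours = 48;
var maxDynos = Math.floor(freeDynoHours / competitionHours);
console.log(maxDynos); // 15
```

Remember that this only works out free if you scale back down when the competition ends.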

Read more about dyno-hour accounting.

Using a Postgres Database

To add a PostgreSQL database to your app, run this command:

$ heroku addons:add shared-database

This sets the DATABASE_URL environment variable. Add the postgres NPM module to your dependencies:

"dependencies": {
  "pg": "0.5.4"
}

And use the module to connect to DATABASE_URL from somewhere in your code:

var pg = require('pg');

pg.connect(process.env.DATABASE_URL, function(err, client) {
  var query = client.query('SELECT * FROM your_table');

  query.on('row', function(row) {
    console.log(JSON.stringify(row));
  });
});

Read more about the Heroku PostgreSQL database.

Using Redis

To add a Redis database to your app, run this command:

$ heroku addons:add redistogo

This sets the REDISTOGO_URL environment variable. Add the redis-url NPM module to your dependencies:

"dependencies": {
  "redis-url": "0.0.1"
}

And use the module to connect to REDISTOGO_URL from somewhere in your code:

var redis = require('redis-url').connect(process.env.REDISTOGO_URL);

redis.set('foo', 'bar');

redis.get('foo', function(err, value) {
  console.log('foo is: ' + value);
});

Other Backing Services

Many other services are available in the Heroku add-ons catalog for free, including MongoDB, CouchDB, advanced full text indexing, Memcached, realtime publishing, Neo4j, and SMS publishing.

Note: please contact [] if you need ownership of your app to add addons.

Adding a Custom Domain

Your app automatically gets a generated hostname. Note: please do not run heroku rename on your Node.js Knockout Heroku app.

You can use your own domain name.