This is the 9th in a series of posts leading up to Node.js Knockout on using mongodb with node-mongodb-native. This post was written by Node Knockout judge and node-mongodb-native author Christian Kvalheim.

Mongo DB has rapidly grown to become a popular database for web applications and is a perfect fit for Node.js applications, letting you write Javascript for the client, backend and database layer. Its schemaless nature is a better match for the constantly evolving data structures of web applications, and the integrated support for location queries is a bonus that's hard to ignore. Throw in replicasets for scaling and we are looking at a really nice platform to grow your storage needs now and in the future.

Now to shamelessly plug my driver. It can be downloaded either using npm or fetched from the github repository. To install via npm do the following:

npm install mongodb

or clone it from the github repository.

Once this business is taken care of, let's move through the types available for the driver, then how to connect to your Mongo DB instance, before facing the usage of some CRUD operations.

Mongo DB data types

So there is an important thing to keep in mind when working with Mongo DB: there is a slight mapping difference between the types supported in Mongo DB and the native types in Javascript. Let's have a look at the types supported out of the box and then how types are promoted by the driver to fit the native Javascript types as closely as possible.

  • Float is an 8 byte value and is directly convertible to the Javascript type Number
  • Double class is a special class representing a float value; this is especially useful when using capped collections where you need to ensure your values are always floats.
  • Integers are a bit trickier due to the fact that Javascript represents all Numbers as 64 bit floats, meaning that the maximum exactly representable integer value is 53 bits. Mongo has two types for integers, a 32 bit and a 64 bit. The driver will try to fit the value into 32 bits if it can and promote it to 64 bits if it has to. Similarly it will deserialize attempting to fit the value into 53 bits if it can. If it cannot, it will return an instance of Long to avoid losing precision.
  • Long class is a special class that lets you store and operate on 64 bit integers.
  • Date maps directly to a Javascript Date
  • RegExp maps directly to a Javascript RegExp
  • String maps directly to a Javascript String (encoded in utf8)
  • Binary class is a special class that lets you store binary data in Mongo DB
  • Code class is a special class that lets you store javascript functions in Mongo DB, and can also provide a scope to run the method in
  • ObjectID class is a special class that holds a MongoDB document identifier (the equivalent of a primary key)
  • DbRef class is a special class that lets you include a reference in a document pointing to another document
  • Symbol class is a special class that lets you specify a symbol; not really relevant for javascript, but useful for languages that support the concept of symbols.

As we see, the number type can be a little tricky due to the way integers are implemented in Javascript. The latest driver will do correct conversion up to 53 bits of precision. If you need to handle big integers the recommendation is to use the Long class to operate on the numbers.
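You can see this limit with nothing but plain Javascript (no driver required); past 2^53 the Number type silently loses integer precision:

```javascript
// Past 2^53, Javascript Numbers can no longer represent every integer,
// which is why the driver falls back to the Long class for big values.
var limit = Math.pow(2, 53); // 9007199254740992

// Below the limit integers are exact; past it, adding 1 is lost to rounding.
console.log(limit - 1 === limit); // false -- still exact
console.log(limit === limit + 1); // true -- precision is gone
```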

Getting that connection to the database

Let's get around to setting up a connection with the Mongo DB database. Jumping straight in, let's do a direct connection and then look at the code.

var mongo = require('mongodb'),
  Server = mongo.Server,
  Db = mongo.Db;

var server = new Server('localhost', 27017, {auto_reconnect: true});
var db = new Db('exampleDb', server);

db.open(function(err, db) {
  if(!err) {
    console.log("We are connected");
  }
});

Let's have a quick look at the simple connection. The new Server(…) sets up a configuration for the connection, and auto_reconnect tells the driver to retry sending a command to the server if there is a failure. Another option you can set is poolSize; this allows you to control how many tcp connections are opened in parallel. The default value for this is 1 but you can set it as high as you want. The driver will use a round-robin strategy to dispatch and read from the tcp connections.

We are up and running with a connection to the database. Let’s move on and look at what collections are and how they work.

Mongo DB and Collections

Collections are the equivalent of tables in traditional databases and contain all your documents. A database can have many collections. So how do we go about defining and using collections? Well, there are a couple of methods that we can use. Let's jump straight into the code and then look at it.

// the requires and other initializing stuff omitted for brevity
db.open(function(err, db) {
  if(!err) {
    db.collection('test', function(err, collection) {});

    db.collection('test', {safe:true}, function(err, collection) {});

    db.createCollection('test', function(err, collection) {});

    db.createCollection('test', {safe:true}, function(err, collection) {});
  }
});

Four different ways of creating a collection object, each slightly different in behavior. Let's go through them and see what they do.

db.collection('test', function(err, collection) {});

This function will not actually create a collection on the database until you actually insert the first document.

db.collection('test', {safe:true}, function(err, collection) {});

Notice the {safe:true} option. This option will make the driver check if the collection exists and issue an error if it does not.

db.createCollection('test', function(err, collection) {});

This command will create the collection on the Mongo DB database before returning the collection object. If the collection already exists it will ignore the creation of the collection.

db.createCollection('test', {safe:true}, function(err, collection) {});

The {safe:true} option will make the method return an error if the collection already exists.

With an open db connection and a collection defined we are ready to do some CRUD operations on the data.

And then there was CRUD

So let's get dirty with the basic operations for Mongo DB. The Mongo DB wire protocol is built around 4 main operations: insert/update/remove/query. Most operations on the database are actually queries with special json objects defining the operation on the database. But I'm getting ahead of myself. Let's go back and look at insert first and do it with some code.

// the requires and other initializing stuff omitted for brevity
db.open(function(err, db) {
  if(!err) {
    db.collection('test', function(err, collection) {
      var doc1 = {'hello':'doc1'};
      var doc2 = {'hello':'doc2'};
      var lotsOfDocs = [{'hello':'doc3'}, {'hello':'doc4'}];

      collection.insert(doc1);

      collection.insert(doc2, {safe:true}, function(err, result) {});

      collection.insert(lotsOfDocs, {safe:true}, function(err, result) {});
    });
  }
});

A couple of variations on the theme of inserting a document, as we can see. To understand why, it's important to understand how Mongo DB works during inserts of documents.

Mongo DB has asynchronous insert/update/remove operations. This means that when you issue an insert operation it's a fire and forget operation where the database does not reply with the status of the insert operation. To retrieve the status of the operation you have to issue a query to retrieve the last error status of the connection. To make it simpler for the developer, the driver implements the {safe:true} option so that this is done automatically when inserting the document. {safe:true} becomes especially important when you do update or remove, as otherwise it's not possible to determine the number of documents modified or removed.

Now let’s go through the different types of inserts shown in the code above.


collection.insert(doc1);

Taking advantage of the async behavior and not needing confirmation about the persisting of the data to Mongo DB, we just fire off the insert (say we are doing live analytics; losing a couple of records does not matter).

collection.insert(doc2, {safe:true}, function(err, result) {});

That document needs to stick. Using the {safe:true} option ensures you get an error back if the document fails to insert correctly.

collection.insert(lotsOfDocs, {safe:true}, function(err, result) {});

A batch insert of documents with any errors being reported. This is much more efficient if you need to insert large batches of documents, as you incur a lot less overhead.

Right, that’s the basics of inserts ironed out. We got some documents in there but want to update them as we need to change the content of a field. Let’s have a look at a simple example and then we will dive into how Mongo DB updates work and how to do them efficiently.

// the requires and other initializing stuff omitted for brevity
db.open(function(err, db) {
  if(!err) {
    db.collection('test', function(err, collection) {
      var doc = {mykey:1, fieldtoupdate:1};

      collection.insert(doc, {safe:true}, function(err, result) {
        collection.update({mykey:1}, {$set:{fieldtoupdate:2}}, {safe:true}, function(err, result) {});
      });

      var doc2 = {mykey:2, docs:[{doc1:1}]};

      collection.insert(doc2, {safe:true}, function(err, result) {
        collection.update({mykey:2}, {$push:{docs:{doc2:1}}}, {safe:true}, function(err, result) {});
      });
    });
  }
});

Alright, before we look at the code we want to understand how document updates work and how to do them efficiently. The most basic and least efficient way is to replace the whole document; this is not really the way to go if you want to change just a field in your document. Luckily Mongo DB provides a whole set of operations that let you modify just pieces of the document (see the atomic operations documentation). They are basically outlined below.

  • $inc - increment a particular value by a certain amount
  • $set - set a particular value
  • $unset - delete a particular field (v1.3+)
  • $push - append a value to an array
  • $pushAll - append several values to an array
  • $addToSet - adds a value to the array only if it's not in the array already
  • $pop - removes the last element in an array
  • $pull - remove a value(s) from an existing array
  • $pullAll - remove several value(s) from an existing array
  • $rename - renames the field
  • $bit - bitwise operations
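To get a feel for what these operators mean, here is a plain-Javascript sketch of how $set, $inc and $push transform a document. The server applies these atomically in place; the applyUpdate helper here is made up purely for illustration:

```javascript
// A plain-Javascript sketch of what $set, $inc and $push do to a document.
function applyUpdate(doc, update) {
  var field;
  if (update.$set) {
    for (field in update.$set) doc[field] = update.$set[field];
  }
  if (update.$inc) {
    for (field in update.$inc) doc[field] = (doc[field] || 0) + update.$inc[field];
  }
  if (update.$push) {
    for (field in update.$push) (doc[field] = doc[field] || []).push(update.$push[field]);
  }
  return doc;
}

var doc = {mykey: 1, counter: 5};
applyUpdate(doc, {$set: {fieldtoupdate: 2}});
applyUpdate(doc, {$inc: {counter: 3}});
applyUpdate(doc, {$push: {docs: {doc2: 1}}});
console.log(doc);
// { mykey: 1, counter: 8, fieldtoupdate: 2, docs: [ { doc2: 1 } ] }
```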

Now that the operations are outlined, let's dig into the specific cases shown in the code example.

collection.update({mykey:1}, {$set:{fieldtoupdate:2}}, {safe:true}, function(err, result) {});

Right, so this update will look for the document that has a field mykey equal to 1 and apply an update to the field fieldtoupdate setting the value to 2. Since we are using the {safe:true} option the result parameter in the callback will return the value 1 indicating that 1 document was modified by the update statement.

collection.update({mykey:2}, {$push:{docs:{doc2:1}}}, {safe:true}, function(err, result) {});

This update adds another document to the field docs in the document identified by {mykey:2} using the atomic operation $push. This allows you to maintain structures such as queues in Mongo DB.

Let’s have a look at the remove operation for the driver. As before let’s start with a piece of code.

// the requires and other initializing stuff omitted for brevity
db.open(function(err, db) {
  if(!err) {
    db.collection('test', function(err, collection) {
      var docs = [{mykey:1}, {mykey:2}, {mykey:3}];

      collection.insert(docs, {safe:true}, function(err, result) {

        collection.remove({mykey:1});

        collection.remove({mykey:2}, {safe:true}, function(err, result) {});

        collection.remove();
      });
    });
  }
});


Let’s examine the 3 remove variants and what they do.


collection.remove({mykey:1});

This leverages the fact that Mongo DB is asynchronous and that it does not return a result for insert/update/remove, allowing for fire-and-forget style execution. This particular remove query will remove the document where mykey equals 1.

collection.remove({mykey:2}, {safe:true}, function(err, result) {});

This remove statement removes the document where mykey equals 2, but since we are using {safe:true} it will call back to Mongo DB to get the status of the remove operation and return the number of documents removed in the result variable.


collection.remove();

This last one will remove all documents in the collection.

Time to Query

Queries are of course a fundamental part of interacting with a database, and Mongo DB is no exception. Fortunately for us it has a rich query interface with cursors and close-to-SQL concepts for slicing and dicing your datasets. To build queries we have lots of operators to choose from (see Mongo DB advanced queries). There are literally tons of ways to search and to limit the query. Let's look at some simple code for dealing with queries in different ways.

// the requires and other initializing stuff omitted for brevity
db.open(function(err, db) {
  if(!err) {
    db.collection('test', function(err, collection) {
      var docs = [{mykey:1}, {mykey:2}, {mykey:3}];

      collection.insert(docs, {safe:true}, function(err, result) {

        collection.find().toArray(function(err, items) {});

        var stream = collection.find({mykey:{$ne:2}}).streamRecords();
        stream.on("data", function(item) {});
        stream.on("end", function() {});

        collection.findOne({mykey:1}, function(err, item) {});
      });
    });
  }
});


Before we start picking apart the code there is one thing that needs to be understood: the find method does not execute the actual query. It builds an instance of Cursor that you then use to retrieve the data. This lets you manage how you retrieve the data from Mongo DB and keeps state about your current cursor on the server. Now let's pick apart the queries we have here and look at what they do.

collection.find().toArray(function(err, items) {});

This query will fetch all the documents in the collection and return them as an array of items. Be careful with the function toArray as it might cause a lot of memory usage, since it will instantiate all the documents in memory before returning the final array of items. If you have a big resultset you could run into memory issues.

var stream = collection.find({mykey:{$ne:2}}).streamRecords();
stream.on("data", function(item) {});
stream.on("end", function() {});

This is the preferred way if you have to retrieve a lot of data: as each document is deserialized a data event is emitted. This keeps the resident memory usage low as the documents are streamed to you. Very useful if you are pushing documents out via websockets or some other streaming socket protocol. Once there are no more documents the driver will emit the end event to notify the application that it's done.

collection.findOne({mykey:1}, function(err, item) {});

This is a specially supported function to retrieve just one specific document, bypassing the need for a cursor object.

That's pretty much it for the quick intro on how to use the database. I have also included a list of links on where to go to find more information, as well as a sample CRUD location application I wrote using express JS and mongo DB.

Links and stuff

This is the 8th in a series of posts leading up to Node.js Knockout on creating PDFs with Node using PDFKit. This post was written by Node Knockout judge and PDFKit author Devon Govett.

Want to generate PDF documents in your Node Knockout app? Then you should be using PDFKit to generate them! PDFKit is a PDF document generation library for Node that makes creating complex, multi-page, printable documents easy. PDFKit supports an HTML5 canvas-like API for manipulating vector graphics as well as an SVG path parser that makes including graphics exported from graphics programs like Adobe Illustrator in your PDF documents much easier. It also has an advanced text engine including support for font embedding, image embedding, annotations and more.


The easiest way to install PDFKit is through npm.

npm install pdfkit

Creating a PDF document

The first thing you’ll need to do to create a document in PDFKit is to require the module and create a PDFDocument instance.

var PDFDocument = require('pdfkit'),
    doc = new PDFDocument();

The first page of the document is automatically added for us, but you can add additional pages at any time by calling doc.addPage(). In this tutorial, we are going to render a vector star at the top of a page, and include some text below it.

Vector graphics

To draw our star, much like with the HTML5 Canvas API, we will first move the imaginary pencil to a point and then draw lines from that point to other points. Finally, we fill in the space within the lines with a red paint color. We also use something called a winding rule, which defines how that space is filled. The default is the non-zero winding rule which fills everything determined to be on the inside of the shape. The even-odd fill rule allows for “holes” in a shape. For this star, we’ll use the even-odd fill rule, but you should experiment with both rules and see what is right for your project.

Here is the code to draw that star:

doc.moveTo(300, 75)
   .lineTo(373, 301)
   .lineTo(181, 161)
   .lineTo(419, 161)
   .lineTo(227, 301)
   .fill('red', 'even-odd');

Because PDFKit supports SVG paths, this can be shortened to the following code:

doc.path('M 300,75 L 373,301 181,161 419,161 227,301 z')
   .fill('red', 'even-odd');
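To see how the shorthand maps onto the longhand moveTo/lineTo calls, here is a toy parser covering just the absolute M, L and z commands used above. parsePath is a made-up helper; PDFKit's real SVG parser handles far more of the spec:

```javascript
// Parse a tiny subset of SVG path syntax into drawing commands.
function parsePath(path) {
  var tokens = path.match(/[MLz]|-?[\d.]+/g);
  var calls = [], i = 0, command;
  while (i < tokens.length) {
    if (tokens[i] === 'M' || tokens[i] === 'L') {
      command = tokens[i] === 'M' ? 'moveTo' : 'lineTo';
      i++;
    } else if (tokens[i] === 'z') {
      calls.push(['closePath']);
      i++;
      continue;
    }
    calls.push([command, parseFloat(tokens[i]), parseFloat(tokens[i + 1])]);
    i += 2;
    // extra coordinate pairs after an M become implicit lineTo commands
    if (command === 'moveTo') command = 'lineTo';
  }
  return calls;
}

console.log(parsePath('M 300,75 L 373,301 181,161 z'));
// [ ['moveTo', 300, 75], ['lineTo', 373, 301], ['lineTo', 181, 161], ['closePath'] ]
```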

Adding some text

PDFKit’s text APIs are quite powerful including support for embedding custom fonts. By default, any text you add to the page will automatically wrap within the page margins, and you can change various settings such as paragraph gaps and indentation, and the text alignment. PDFKit also supports automatic wrapping of text into columns and automatically inserts new pages as necessary if you have a long piece of text.

Here is the code to insert some text that automatically wraps into two justified columns, with each paragraph indented 20 points and with a gap of 10 points between each paragraph.

var loremIpsum = 'Lorem ipsum dolor sit amet, consectetur adipiscing elit. Etiam in...';

doc.y = 320;
doc.fillColor('black')
   .text(loremIpsum, {
       paragraphGap: 10,
       indent: 20,
       align: 'justify',
       columns: 2
   });

The first thing we do is set the current Y position of the document to 320 points, which moves the text below our star. Then we set the fill color of the text to black (because it was set to red before when we drew the star), and finally render the text, passing in our options.


Now say we wanted to insert a title in a different font. Luckily, PDFKit makes embedding custom fonts in a PDF quite simple. Just after the call to set the fill color above, a title could be inserted like this:

doc.font('fonts/GoodDog.ttf', 35)
   .text('This is the title!', { align: 'center' })
   .font('Helvetica', 12)
   .moveDown();

The first thing this code does is embed the GoodDog font and set the font size to 35 points. Then we insert the title text, aligning it to the center of the page. Then we set the font back to the default Helvetica and move down a line. Then, as before, we would insert the page text.


Images

Rendering images in PDFKit documents is really easy. By default, images are automatically placed in the text flow of the document, so inserting an image at the bottom of the last column of our document will be quite simple. Passing the width option will automatically scale the image to fit within that size. There are other options as well, so check out the documentation for a complete rundown.

doc.image('images/test.jpeg', { width: 225 });

Outputting the document

There are two ways to output the PDF document: to a file and as a binary string to be passed as a response to an HTTP request, for example.

Writing to a file is simple: just call the write method with your filename and, optionally, a callback.

doc.write('output.pdf');

Getting a string representation of the document is just as simple:

var string = doc.output();


I hope I’ve shown you enough to get you interested in PDFKit! You can find the PDF document generated from the examples in this tutorial here, and check out a more advanced programming guide and documentation at the PDFKit website. If you find any bugs, I will try to fix them as fast as I can, so please report them as soon as they are found. Now, go generate some knockout PDFs!

This is the 7th in a series of posts leading up to Node.js Knockout on debugging node processes using Node Inspector. This post was written by Node Knockout judge and Node Inspector author Danny Coates.

Node Inspector is a debugger interface for node.js using the WebKit Web Inspector. It’s the familiar javascript debugger from Safari and Chrome.


Install

With npm:

npm install -g node-inspector

Enable debug mode

To use node-inspector, enable debugging on the node process you wish to debug. You can either start node with a debug flag like:

$ node --debug your/node/program.js

or, to pause your script on the first line:

$ node --debug-brk your/short/node/script.js

Or you can enable debugging on a node that is already running by sending it a signal:

  1. Get the PID of the node process using your favorite method. pgrep or ps -ef are good

    $ pgrep -l node
    2345 node your/node/server.js

  2. Send it the USR1 signal

    $ kill -s USR1 2345

Great! Now you’re ready to attach node-inspector.


  1. start the inspector. I usually put it in the background

    $ node-inspector &

  2. open in your favorite WebKit based browser

  3. you should now see the javascript source from node. If you don’t, click the scripts tab.

  4. select a script and set some breakpoints (far left line numbers)

  5. then watch the slightly outdated but hilarious screencasts

node-inspector works almost exactly like the web inspector in Safari and Chrome. Here’s a good overview of the UI.


FAQ

  1. I don't see one of my script files in the file list.

    try refreshing the browser (F5 or ⌘-r or control-r)

  2. My script runs too fast to attach the debugger.

    use --debug-brk to pause the script on the first line

  3. Can I debug remotely?

    Yes. node-inspector needs to run on the same machine as the node process, but your browser can be anywhere. Just make sure the firewall is open on 8080

  4. I got the ui in a weird state.

    when in doubt, refresh

This is the 6th in a series of posts leading up to Node.js Knockout on using Mongoose. This post was written by Node Knockout judge and Mongoose co-maintainer Aaron Heckmann.

Getting started with Mongoose and Node

In this post we’ll talk about getting started with Mongoose, an object modeling tool for MongoDB and node.js.


We’re going to assume that you have both MongoDB and npm installed for this post. Once you have those, you can install Mongoose:

$ npm install mongoose

Hurray! Now we can simply require mongoose like any other npm package.

var mongoose = require('mongoose');

Schema definition

Though MongoDB is a schema-less database we often want some level of control over what goes in and out of our database collections. We’re confident that we’re going to be the next Netflix so we’ll need a Movie schema and a Rating schema. Each Movie is allowed to have multiple Ratings.

var Schema = mongoose.Schema;

var RatingSchema = new Schema({
    stars    : { type: Number, required: true }
  , comment  : { type: String, trim: true }
  , createdAt: { type: Date, default: }
});

So far we’ve created a Rating schema with a stars property of type Number, a comment property of type String, and a createdAt property of type Date. Whenever we set the stars property it will automatically be cast as a Number. Note also that we specified required which means validation will fail if an attempt is made to save a rating without setting the number of stars. Likewise, whenever we set the comment property it will first be cast as a String before being set, and since whitespace around comments is very uncool, we use the built-in trim setter.
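The casting and trimming behavior described above can be sketched in plain Javascript. castRating is a made-up helper for illustration; Mongoose's real casting and validation machinery is far more involved:

```javascript
// Cast inputs to their declared types, trim strings, apply defaults,
// and fail validation when a required field is missing.
function castRating(input) {
  var rating = {
    stars: Number(input.stars),                    // cast to Number
    comment: String(input.comment || '').trim(),   // cast to String, trim whitespace
    createdAt: input.createdAt || new Date()       // default to now
  };
  if (isNaN(rating.stars)) throw new Error('stars is required and must be a number');
  return rating;
}

var rating = castRating({ stars: '8', comment: '  loved it  ' });
console.log(rating.stars);   // 8 (a Number, cast from the string '8')
console.log(rating.comment); // 'loved it' (whitespace trimmed)
```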

Now that we’re happy with our Rating model we’ll use it within our Movie model. Each movie should have name, director, year, and ratings properties.

var MovieSchema = new Schema({
    name    : { type: String, trim: true, index: true }
  , ratings : [RatingSchema]
  , director: Schema.ObjectId
  , year    : Number
});

Here we see that ratings is set to an array of Rating schemas. This means that we’ll be storing Ratings as subdocuments on each Movie document. A subdocument is simply a document nested within another.

You might have noticed the index option we added to the name property. This tells MongoDB to create an index on this field.

We’ve also defined director as an ObjectId. ObjectIds are the default primary key type MongoDB creates for you on each document. We’ll use this as a foreign key field, storing the document ObjectId of another imaginary Person document which we’ll leave out for brevity.

TIP: Note that we needed to declare the subdocument Rating schema before using it within our Movie schema definition for everything to work properly.

This is what a movie might look like within the mongo shell:

{ name: 'Inception',
  year: 2010,
  ratings:
   [ { stars: 8.9,
       comment: 'I fell asleep during this movie, and yeah, you\'ve heard this joke before' },
     { stars: 9.3 } ],
  director: ObjectId("4e4b4a8b73e1d576d6a1438e") }

Now that we’ve finished our schemas we’re ready to create our movie model.

var Movie = mongoose.model('Movie', MovieSchema);

And that's it! Everything is all set, with the exception of being able to actually talk to MongoDB. So let's create a connection.

mongoose.connect('mongodb://localhost/mydb'); // the database name here is illustrative

var db = mongoose.connection;
db.on('open', function () {
  // now we can start talking
});

Now we’re ready to create a movie and save it.

var super8 = new Movie({ name: "Super 8", director: anObjectId, year: 2011 });

super8.save(function (err) {
  if (err) return console.error(err); // we should handle this
});

Oh, but what about adding ratings?

Movie.findOne({ name: "Super 8" }).where('year', 2011).run(function (err, super8) {
  if (err) return console.error(err); // handle this

  // add a rating
  super8.ratings.push({ stars: 7.7, comment: "it made me happy" });

  super8.save();
});

To look up our movie we used Model.findOne which accepts a where clause as its first argument. We also took advantage of the Query object returned by this method to add some more sugary filtering. Finally, we called the Query’s run method to execute it.

We didn’t have to do it this way, instead you could just pass all of your where params directly as the first argument like so:

Movie.findOne({ name: "Super 8", year: 2011 }, callback);

Though the first example is more verbose it highlights some of the expressive flexibility provided by the Query object returned.
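That flexibility comes from the chainable-query pattern: each method records some state and returns `this`, which is what lets the calls read fluently. A minimal sketch (this toy Query class is illustrative only, not Mongoose's actual implementation):

```javascript
// Record conditions and options, returning `this` to keep the chain going.
function Query() {
  this.conditions = {};
  this.options = {};
}

Query.prototype.where = function(field, value) {
  this.conditions[field] = value;
  return this; // returning `this` is what makes chaining work
};

Query.prototype.limit = function(n) {
  this.options.limit = n;
  return this;
};

var query = new Query().where('name', 'Super 8').where('year', 2011).limit(1);
console.log(query.conditions); // { name: 'Super 8', year: 2011 }
console.log(query.options);    // { limit: 1 }
```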

Here are a couple more ways we could write this query:

Movie.where('name', /^Super 8/i).where('year', 2011).limit(1).exec(callback);

Movie.find({ name: "Super 8", year: { $gt: 2010, $lt: 2012 } }, null, { limit: 1 }, callback);

This is all well and good but what if we look up movies by director and year a lot and need the query to be fast? First we’ll create a static method on our Movie model:

MovieSchema.statics.byNameAndYear = function (name, year, callback) {
  // this could return multiple results
  return this.find({ name: name, year: year }, callback);
};

We’ll also add a compound index on these two fields to give us a performance boost:

MovieSchema.index({ name: 1, year: 1 });

For good measure we’ll add a movie instance method to conveniently look up the director:

MovieSchema.methods.findDirector = function (callback) {
  // Person is our imaginary Model we skipped for brevity
  return this.db.model('Person').findById(this.director, callback);
};

Putting it all together:

Movie.byNameAndYear("Super 8", 2011, function (err, movies) {
  if (err) return console.error(err); // handle this
  var movie = movies[0];
  movie.findDirector(function (err, director) {
    if (err) ...
    // woot
  });
});

That's it for this post. For more info check out the github README or the Mongoose test directory to see even more examples.

This is the 5th in a series of posts leading up to Node.js Knockout on pulling it all together using Node Express Boilerplate. This post was written by @mape, “solo winner” of Node.js Knockout 2010.


Taking a walk every now and then is good for the body and the mind. But as with many other endeavors, often the hardest part is taking that first step.

The same goes for ideas and projects - all the boring preparatory work can potentially delay or altogether squander the best of intentions that otherwise could lead to a joyful and educational experience.

So why not take a lot of the boring prep work out of your projects by utilizing node-express-boilerplate?

node-express-boilerplate gives the developer a clean slate to start with while bundling enough useful features so as to remove all those redundant tasks that can derail a project before it even really gets started.

So what does node-express-boilerplate do?


First of all, it is very easy to understand, allowing you to start using it right away. There is minimal need to dig around in files just to get a good idea of how things work. And if you don't like how the boilerplate is set up, just fork it and change it according to your own personal preferences.

Features include:

  • Bundling and integrating with the express session store so data can be shared
  • Providing premade hooks to authenticate users via facebook/twitter/github
  • An assetmanager that concatenates/mangles/compresses your CSS/JS assets to be as small and fast to deliver as possible, as well as cache busting using MD5 hashes
  • Auto updates of the browser (inline/refresh) as soon as CSS/JS/template-files are changed in order to remove all those annoying “save, tab, refresh” repetitions
  • Notifications to your computer/mobile phone on certain user actions (This is something I relied heavily on last year when I was involved in NKO; as soon as a new game was started I knew about it and could jump in and interact - nobody enjoys something social if they are stuck there alone.)
  • Sane defaults in regards to production/development environments
  • Logs errors to an external service in order to track any errors users are encountering
  • Auto matching of urls to templates without having to define a specific route (for example, visiting /file-name/ tries to serve file-name.ejs and falls back to index.ejs - this is helpful for quick static info pages)

How do I get started? (on Joyent’s service)

  1. First on your machine

    1. ssh
    2. pkgin update; pkgin install redis
    3. svccfg import /opt/local/share/smf/manifest/redis.xml
    4. svcadm enable redis
  2. Secondly on your development machine

    1. git clone myproject
    2. cd myproject
    3. cp siteConfig.sample.js siteConfig.js
    4. edit siteConfig.js settings
    5. scp siteConfig.js
    6. git remote add joyent
    7. git push joyent master
    8. open

So check it out at github (node-express-boilerplate) and drop by #node.js on irc for feedback and to let me know if you run into any issues.

It’s the first ever knockout drinkup!

The beer and snacks are (for a while) on Joyent, which means come early, but not too early, because you should also attend the node.js ops meetup beforehand at ngmoco:). We're timing it so you can go get smarter there, then walk over and forget it all right after.

If you don’t have a team, come find one! If you’re looking for more members, come find some! If you want to drink with us, come find us! Share ideas for entries. We might have some pretty sweet stickers and buttons to give away.

RSVP please. We don’t want to freak out the establishment with our collective awesomeness.

When: Wed August 17 at 8:30PM.

Where: Pete’s Tavern (2ndish St and King)

Remember: RSVP

Mad Props: Joyent

But wait! I’m not 21! You should be ok. We’ve been told that they only card on home game days, which Aug 17 is not.