Building a JWT Token Cracker with ZeroMQ & Node.js (Part 2.)

By Luciano Mammino

This is the second episode of a two-part tutorial. While the first article (ZeroMQ & Node.js Tutorial – Cracking JWT Tokens) was solely focused on theory, this one is about the actual coding.

You’ll get to know ZeroMQ, how JWT tokens work and how our application can crack some of them! Be aware that the application will be intentionally simple: I only want to demonstrate how we can leverage some specific patterns.

At the end of the article, I’ll invite you to participate in a challenge and to use your newly acquired knowledge for cracking a JWT token. The first 3 developers who crack the code will get a gift!

Let’s get started!

Preparing the environment and the project folder

To follow this tutorial, you will need the ZeroMQ libraries and Node.js version 4.0 or higher installed on your system. You will also need to initialize a new project with the following commands:

npm init # then follow the guided setup  
npm install --save big-integer@^1.6.16 dateformat@^1.0.12 indexed-string-variation@^1.0.2 jsonwebtoken@^7.1.9 winston@^2.2.0 yargs@5.0.0 zmq@^2.15.3  

This will make sure that you have all the dependencies ready in the project folder, so you can focus solely on the code.

You can also check out the code in the project’s official GitHub repository and keep it aside as a working reference.

Writing the client application (Dealer + Subscriber) with ZeroMQ and Node.js

By now, we should have a clear understanding of the whole architecture and of the patterns we are going to use, so we can finally focus on writing code!

Let’s start with the code representing the client, which holds the real JWT-cracking business logic.

As a best practice, we are going to use a modular approach, and we will split our client code into four different parts:

  • The processBatch module, containing the core logic to process a batch.
  • The createDealer module containing the logic to handle the messages using the ZeroMQ dealer pattern.
  • The createSubscriber module containing the logic to handle the exit message using the subscriber pattern.
  • The client executable script that combines all the modules together and offers a nice command-line interface.

The processBatch module

The first module that we are going to build will focus only on analyzing a given batch and checking if the right password is contained in it.

This is probably the most complex part of our whole application, so let’s make some useful preambles:

  • We are going to use the big-integer library to avoid approximation problems with large integers. In JavaScript, all numbers are internally represented as floating point values and are therefore subject to floating point approximation. For example, the expression 10000000000000000 === 10000000000000001 (notice the last digit) will evaluate to true. All the math in our project will be managed by the big-integer library. If you have never used it before, it might look a bit weird at first, but I promise it won’t be hard to understand.
  • We are also going to use the jsonwebtoken library to verify the signature of a given token against a specific password.
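You can see the precision problem for yourself with a couple of plain expressions (no library needed); this is just an illustration of why big-integer is used:

```javascript
// Beyond Number.MAX_SAFE_INTEGER (2^53 - 1), JavaScript numbers can no
// longer represent every integer exactly, so distinct values collapse
// into the same float64 representation:
const a = 10000000000000000;  // 10^16
const b = 10000000000000001;  // 10^16 + 1

console.log(a === b);                 // true: both map to the same double
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
```

This is exactly why iterating over indexes larger than 2^53 with plain numbers would silently skip or repeat candidates.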

Let’s finally see the code of the processBatch module:

// src/client/processBatch.js

'use strict';

const bigInt = require('big-integer');
const jwt = require('jsonwebtoken');

const processBatch = (token, variations, batch, cb) => {
  const chunkSize = bigInt(String(1000));

  const batchStart = bigInt(batch[0]);
  const batchEnd = bigInt(batch[1]);

  const processChunk = (from, to) => {
    let pwd;

    for (let i = from; i.lesser(to); i = i.add(bigInt.one)) {
      pwd = variations(i);
      try {
        jwt.verify(token, pwd, {ignoreExpiration: true, ignoreNotBefore: true});
        // finished, password found
        return cb(pwd, i.toString());
      } catch (e) {}
    }

    // prepare next chunk
    from = to;
    to = bigInt.min(batchEnd, from.add(chunkSize));

    if (from.equals(to)) {
      // finished, password not found
      return cb();
    }

    // process next chunk
    setImmediate(() => processChunk(from, to));
  };

  const firstChunkStart = batchStart;
  const firstChunkEnd = bigInt.min(batchEnd, batchStart.add(chunkSize));
  setImmediate(() => processChunk(firstChunkStart, firstChunkEnd));
};

module.exports = processBatch;

(Note: this is a slightly simplified version of the module; you can check out the original one in the official repository, which also features a nice animated bar to report the batch processing progress on the console.)

This module exports the processBatch function, so first things first, let’s analyze the arguments of this function:

  • token: The current JWT token.
  • variations: An instance of indexed-string-variations already initialized with the current alphabet.
  • batch: An array containing two strings representing the segment of the solution space where we search for the password (e.g. ['22', '150']).
  • cb: A callback function that will be invoked on completion. If the password is found in the current batch, the callback will be invoked with the password and the current index as arguments. Otherwise, it will be called without arguments.

This function is asynchronous, and it is the one that will be executed most of the time in the client.

The main goal is to iterate over all the numbers in the range, and generate the corresponding string on the current alphabet (using the variations function) for every number.

After that, the string is checked against jwt.verify to see if it’s the password we were looking for. If it is, we immediately stop the execution and invoke the callback; otherwise jwt.verify throws an error and we keep iterating until the current batch is fully analyzed. If we reach the end of the batch without success, we invoke the callback with no arguments to notify the failure.

What’s peculiar here is that we don’t really execute a single big loop to cover all the batch elements, but instead we define an internal function called processChunk that has the goal of executing asynchronously the iteration in smaller chunks containing at most 1000 elements.

We do this because we want to avoid blocking the event loop for too long: with this approach, the event loop has a chance to react to other events after every chunk, such as a received exit signal.
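The chunking technique can be sketched in isolation with plain numbers (no big-integer), just to show how setImmediate splits one long loop into event-loop-friendly pieces; the names here are illustrative, not from the article’s code:

```javascript
// Process a numeric range in chunks of at most 1000 iterations,
// scheduling each chunk with setImmediate so the event loop can
// react to other events (e.g. an exit signal) in between chunks.
const CHUNK_SIZE = 1000;

function processRange(from, to, onItem, onDone) {
  const chunkEnd = Math.min(to, from + CHUNK_SIZE);
  for (let i = from; i < chunkEnd; i++) {
    onItem(i); // check one candidate here
  }
  if (chunkEnd >= to) {
    return onDone(); // whole range covered
  }
  setImmediate(() => processRange(chunkEnd, to, onItem, onDone));
}

let count = 0;
processRange(0, 2500, () => count++, () => {
  console.log(count); // 2500
});
```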

(You can read much more on this topic in the last part of Node.js Design Patterns Second Edition).

The createDealer module

The createDealer module holds the logic that is needed to react to the messages received by the server through the batchSocket, which is the one created with the router/dealer pattern.

Let’s jump straight into the code:

// src/client/createDealer.js

'use strict';

const processBatch = require('./processBatch');
const generator = require('indexed-string-variation').generator;

const createDealer = (batchSocket, exit, logger) => {
  let id;
  let variations;
  let token;

  const dealer = rawMessage => {
    const msg = JSON.parse(rawMessage.toString());

    const start = msg => {
      id =;
      variations = generator(msg.alphabet);
      token = msg.token;`client attached, got id "${id}"`);
    };

    const batch = msg => {`received batch: ${msg.batch[0]}-${msg.batch[1]}`);
      processBatch(token, variations, msg.batch, (pwd, index) => {
        if (typeof pwd === 'undefined') {
          // request next batch
`password not found, requesting new batch`);
          batchSocket.send(JSON.stringify({type: 'next'}));
        } else {
          // propagate success
`found password "${pwd}" (index: ${index}), exiting now`);
          batchSocket.send(JSON.stringify({type: 'success', password: pwd, index}));
        }
      });
    };

    switch (msg.type) {
      case 'start':
        start(msg);
        batch(msg);
        break;

      case 'batch':
        batch(msg);
        break;

      default:
        logger.error('invalid message received from server', rawMessage.toString());
    }
  };

  return dealer;
};

module.exports = createDealer;

This module exports a factory function used to initialize our dealer component. The factory accepts three arguments:

  • batchSocket: the ZeroMQ socket used to implement the dealer part of the router/dealer pattern.
  • exit: a function to end the process (it will generally be process.exit).
  • logger: a logger object (the console object or a winston logger instance) that we will see in detail later.

The arguments exit and logger are requested from the outside (and not initialized within the module itself) to make the module easily “composable” and to simplify testing (we are here using the Dependency Injection pattern).

The factory returns our dealer function which in turn accepts a single argument, the rawMessage received through the batchSocket channel.

This function has two different behaviors depending on the type of the received message. We assume the first message is always a start message that is used to propagate the client id, the token and the alphabet. These three parameters are used to initialize the dealer. The first batch is also sent with them, so after the initialization, the dealer can immediately start to process it.

The second message type is the batch, which is used by the server to deliver a new batch to analyze to the clients.

The main logic to process a batch is abstracted in the batch function. In this function, we simply delegate the processing job to our processBatch module. If the processing is successful, the dealer creates a success message for the router – transmitting the discovered password and the corresponding index over the given alphabet. If the batch doesn’t contain the password, the dealer sends a next message to the router to request a new batch.
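Putting the exchanges together, the client-side protocol can be summarized with plain JSON payloads. The shapes follow the description above; the field values here are purely illustrative:

```javascript
// Messages flowing over the router/dealer socket, serialized as JSON.
const clientToServer = [
  {type: 'join'},                                  // sent once, on connect
  {type: 'next'},                                  // batch exhausted, ask for more
  {type: 'success', password: 'pwd', index: '42'}  // password found
];

const serverToClient = [
  // start: client id, token, alphabet and the first batch
  {type: 'start', id: 1, token: '<jwt>', alphabet: 'abc', batch: ['0', '999999']},
  // batch: a new segment of the solution space to analyze
  {type: 'batch', batch: ['1000000', '1999999']}
];

// Everything travels as a JSON string over the wire:
const wire = JSON.stringify(clientToServer[0]);
console.log(JSON.parse(wire).type); // 'join'
```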

The createSubscriber module

In the same way, we need an abstraction that allows us to manage the pub/sub messages on the client. For this purpose we can have the createSubscriber module:

// src/client/createSubscriber.js

'use strict';

const createSubscriber = (subSocket, batchSocket, exit, logger) => {
  const subscriber = (topic, rawMessage) => {
    if (topic.toString() === 'exit') {`received exit signal, ${rawMessage.toString()}`);
      batchSocket.close();
      subSocket.close();
      exit(0);
    }
  };

  return subscriber;
};

module.exports = createSubscriber;

This module is quite simple. It exports a factory function that can be used to create a subscriber (a function able to react to messages on the pub/sub channel). This factory function accepts the following arguments:

  • subSocket: the ZeroMQ socket used for the publish/subscribe messages.
  • batchSocket: the ZeroMQ socket used for the router/dealer message exchange (as we saw in the createDealer module).
  • exit and logger: as in the createDealer module, these two arguments are used to inject the logic to terminate the application and to record logs.

The factory function, once invoked, returns a subscriber function which contains the logic to execute every time a message is received through the pub/sub socket. In the pub/sub model, every message is identified by a specific topic. This allows us to react only to the messages referring to the exit topic and basically shut down the application. To perform a clean exit, the function will take care of closing the two sockets before exiting.

Command line client script

Finally, we have all the pieces we need to assemble our client application. We just need to write the glue between them and expose the resulting application through a nice command line interface.

To simplify the tedious task of parsing the command line arguments, we will use the yargs library:

// src/client.js

#!/usr/bin/env node

'use strict';

const zmq = require('zmq');
const yargs = require('yargs');
const logger = require('./logger');
const createDealer = require('./client/createDealer');
const createSubscriber = require('./client/createSubscriber');

const argv = yargs
  .usage('Usage: $0 [options]')
  .example('$0 --host=localhost --port=9900 --pubPort=9901')
  .default('host', 'localhost')
  .alias('h', 'host')
  .describe('host', 'The hostname of the server')
  .default('port', 9900)
  .alias('p', 'port')
  .describe('port', 'The port used to connect to the batch server')
  .default('pubPort', 9901)
  .alias('P', 'pubPort')
  .describe('pubPort', 'The port used to subscribe to broadcast signals (e.g. exit)')
  .argv;

const host =;
const port = argv.port;
const pubPort = argv.pubPort;

const batchSocket = zmq.socket('dealer');
const subSocket = zmq.socket('sub');
const dealer = createDealer(batchSocket, process.exit, logger);
const subscriber = createSubscriber(subSocket, batchSocket, process.exit, logger);

batchSocket.on('message', dealer);
subSocket.on('message', subscriber);

batchSocket.connect(`tcp://${host}:${port}`);
subSocket.connect(`tcp://${host}:${pubPort}`);
subSocket.subscribe('exit');

batchSocket.send(JSON.stringify({type: 'join'}));

In the first part of the script we use yargs to describe the command line interface, including a description of the command with a sample usage and all the accepted arguments:

  • host: is used to specify the host of the server to connect to.
  • port: the port used by the server for the router/dealer exchange.
  • pubPort: the port used by the server for the pub/sub exchange.

This part is very simple and concise. Yargs takes care of validating the input and populates the optional arguments with default values when they are not provided by the user. If an argument doesn’t meet the expectations, Yargs displays a nice error message. It also automatically creates the output for --help and --version.

In the second part of the script, we use the arguments provided to connect to the server, creating the batchSocket (used for the router/dealer exchange) and the subSocket (used for the pub/sub exchange).

We use the createDealer and createSubscriber factories to generate our dealer and subscriber functions and then we associate them with the message event of the corresponding sockets.

Finally, we subscribe to the exit topic on the subSocket and send a join message to the server using the batchSocket.

Now our client is fully initialized and ready to respond to the messages coming from the two sockets.

The server

Now that our client application is ready, we can focus on building the server. We have already described the logic the server application will adopt to distribute the workload among the clients, so we can jump straight into the code.


For the server, we will build a module that contains most of the business logic – the createRouter module:

// src/server/createRouter.js

'use strict';

const bigInt = require('big-integer');

const createRouter = (batchSocket, signalSocket, token, alphabet, batchSize, start, logger, exit) => {
  let cursor = bigInt(String(start));
  const clients = new Map();

  const assignNextBatch = client => {
    const from = cursor;
    const to = cursor.add(batchSize).minus(;
    const batch = [from.toString(), to.toString()];
    cursor = cursor.add(batchSize);
    client.currentBatch = batch;
    client.currentBatchStartedAt = new Date();

    return batch;
  };

  const addClient = channel => {
    const id = channel.toString('hex');
    const client = {id, channel, joinedAt: new Date()};
    assignNextBatch(client);
    clients.set(id, client);

    return client;
  };

  const router = (channel, rawMessage) => {
    const msg = JSON.parse(rawMessage.toString());

    switch (msg.type) {
      case 'join': {
        const client = addClient(channel);
        const response = {
          type: 'start',
          id:,
          batch: client.currentBatch,
          alphabet,
          token
        };
        batchSocket.send([channel, JSON.stringify(response)]);`${} joined (batch: ${client.currentBatch[0]}-${client.currentBatch[1]})`);
        break;
      }

      case 'next': {
        const batch = assignNextBatch(clients.get(channel.toString('hex')));`client ${channel.toString('hex')} requested new batch, sending ${batch[0]}-${batch[1]}`);
        batchSocket.send([channel, JSON.stringify({type: 'batch', batch})]);
        break;
      }

      case 'success': {
        const pwd = msg.password;`client ${channel.toString('hex')} found password "${pwd}"`);
        // publish exit signal and close the app
        signalSocket.send(['exit', JSON.stringify({password: pwd, client: channel.toString('hex')})], 0, () => {
          batchSocket.close();
          signalSocket.close();
          exit(0);
        });
        break;
      }

      default:
        logger.error('invalid message received from channel', channel.toString('hex'), rawMessage.toString());
    }
  };

  router.getClients = () => clients;

  return router;
};

module.exports = createRouter;

The first thing to notice is that we built a module that exports a factory function again. This function will be used to initialize an instance of the logic used to handle the router part of the router/dealer pattern in our application.

The factory function accepts a bunch of parameters. Let’s describe them one by one:

  • batchSocket: is the ZeroMQ socket used to send the batch requests to the clients.
  • signalSocket: is the ZeroMQ socket to publish the exit signal to all the clients.
  • token: the string containing the current token.
  • alphabet: the alphabet used to build the strings in the solution space.
  • batchSize: the number of strings in every batch.
  • start: the index from which to start the first batch (generally ‘0’).
  • logger: an instance of the logger.
  • exit: a function to be called to shut down the application (usually process.exit).

Inside the factory function, we declare the variables that define the state of the server application: cursor and clients. The first one is the pointer to the next batch, while the second is a map structure used to register all the connected clients and the batches assigned to them. Every entry in the map is an object containing the following attributes:

  • id: the id given by ZeroMQ to the client connection.
  • channel: a reference to the communication channel between client and server in the router/dealer exchange.
  • joinedAt: the date when the client established a connection to the server.
  • currentBatch: the current batch being processed by the client (an array containing the two delimiters of the segment of the solution space to analyze).
  • currentBatchStartedAt: the date when the current batch was assigned to the client.

Then we define two internal utility functions used to change the internal state of the router instance: assignNextBatch and addClient.

The way these functions work is pretty straightforward: the first one assigns the next available batch to an existing client and moves the cursor forward, while the second one takes a new ZeroMQ connection channel as input and creates the corresponding entry in the map of connected clients.
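The batch arithmetic itself is easy to isolate. Here is the same cursor logic sketched with native BigInt instead of the big-integer library (the function name is illustrative, not from the article’s code):

```javascript
// Each batch covers [cursor, cursor + batchSize - 1]; the cursor then
// advances by batchSize so the next request gets the following segment.
function makeBatchAssigner(start, batchSize) {
  let cursor = BigInt(start);
  const size = BigInt(batchSize);
  return () => {
    const batch = [cursor.toString(), (cursor + size - 1n).toString()];
    cursor += size;
    return batch;
  };
}

const nextBatch = makeBatchAssigner('0', '1000000');
console.log(nextBatch()); // [ '0', '999999' ]
console.log(nextBatch()); // [ '1000000', '1999999' ]
```

Note that the batches are exchanged as strings precisely so the delimiters can exceed JavaScript’s safe integer range.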

After these two helper functions, we define the core logic of our router with the router function. This function is the one that is returned by the factory function and defines the logic used to react to an incoming message on the router/dealer exchange.

As with the client, we can receive different types of messages, and we need to react properly to each of them:

  • join: received when a client connects to the server for the first time. In this case, we register the client, send it the settings of the current run and assign it the first batch to process. All this information is provided with a start message, which is sent on the router/dealer channel (using the ZeroMQ batchSocket).
  • next: received when a client finishes processing a batch without success and needs a new one. In this case, we simply assign the next available batch to the client and send the information back to it using a batch message through the batchSocket.
  • success: received when a client finds the password. In this case, the discovered password is logged and propagated to all the other clients with an exit signal through the signalSocket (the pub/sub exchange). When the exit signal broadcast is completed, the application shuts down, taking care to close the ZeroMQ sockets for a clean exit.

That’s mostly it for the implementation of the router logic.

However, it’s important to underline that this implementation assumes that our clients always deliver either a success message or a request for another batch. In a real-world application, we must take into consideration that a client might fail or disconnect at any time, and manage to redistribute its batch to some other client.

The server command line

We have already written most of our server logic in the createRouter module, so now we only need to wrap this logic with a nice command line interface:

// src/server.js

#!/usr/bin/env node

'use strict';

const zmq = require('zmq');
const isv = require('indexed-string-variation');
const yargs = require('yargs');
const jwt = require('jsonwebtoken');
const bigInt = require('big-integer');
const createRouter = require('./server/createRouter');
const logger = require('./logger');

const argv = yargs
  .usage('Usage: $0 <token> [options]')
  .example('$0 eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ')
  .default('port', 9900)
  .alias('p', 'port')
  .describe('port', 'The port used to accept incoming connections')
  .default('pubPort', 9901)
  .alias('P', 'pubPort')
  .describe('pubPort', 'The port used to publish signals to all the workers')
  .default('alphabet', isv.defaultAlphabet)
  .alias('a', 'alphabet')
  .describe('alphabet', 'The alphabet used to generate the passwords')
  .alias('b', 'batchSize')
  .default('batchSize', 1000000)
  .describe('batchSize', 'The number of attempts assigned to every client in a batch')
  .alias('s', 'start')
  .describe('start', 'The index from where to start the search')
  .default('start', 0)
  .check(args => {
    const token = jwt.decode(args._[0], {complete: true});
    if (!token) {
      throw new Error('Invalid JWT token: cannot decode token');
    }

    if (!(token.header.alg === 'HS256' && token.header.typ === 'JWT')) {
      throw new Error('Invalid JWT token: only HS256 JWT tokens supported');
    }

    return true;
  })
  .argv;

const token = argv._[0];
const port = argv.port;
const pubPort = argv.pubPort;
const alphabet = argv.alphabet;
const batchSize = bigInt(String(argv.batchSize));
const start = argv.start;
const batchSocket = zmq.socket('router');
const signalSocket = zmq.socket('pub');
const router = createRouter(
  batchSocket,
  signalSocket,
  token,
  alphabet,
  batchSize,
  start,
  logger,
  process.exit
);

batchSocket.on('message', router);

batchSocket.bindSync(`tcp://*:${port}`);
signalSocket.bindSync(`tcp://*:${pubPort}`);`Server listening on port ${port}, signal publish on port ${pubPort}`);

We make argument parsing very easy by using yargs again. The command must be invoked with a token as its only positional argument and supports several options:

  • port: used to specify in which port the batchSocket will be listening.
  • pubPort: used to specify which port will be used to publish the exit signal.
  • alphabet: a string containing all the characters in the alphabet we want to use to build all the possible strings used for the brute force.
  • batchSize: the size of every batch forwarded to the clients.
  • start: the index in the solution space from which to start the search (generally 0). It can be useful if you have already analyzed part of the solution space.

In this case, we also add a check function to be sure that the JWT token we receive as an argument is well formatted and uses the HS256 algorithm for the signature.
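What jwt.decode inspects here is simply the token header, which is base64url-encoded JSON in the first segment of the token. As a sketch with plain Node (the helper name is illustrative):

```javascript
// A JWT header is just base64url-encoded JSON: decode the first segment.
function decodeHeader(token) {
  const headerSegment = token.split('.')[0];
  return JSON.parse(Buffer.from(headerSegment, 'base64').toString('utf8'));
}

// The header segment used throughout this article decodes to HS256/JWT:
const header = decodeHeader('eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.e30.x');
console.log(header); // { alg: 'HS256', typ: 'JWT' }
```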

In the rest of the code we initialize two ZeroMQ sockets: batchSocket and signalSocket – and we take them along with the token and the options received from the command line to initialize our router through the createRouter function that we wrote before.

Then we register the router listener to react to all the messages received on the batchSocket.

Finally, we bind our sockets to their respective ports to start to listen for incoming connections from the clients.

This completes our server application, and we are almost ready to give our little project a go. Hooray!

Logging utility

The last piece of code that we need is our little logger instance. We saw it being used in many of the modules we wrote before – so now let’s code this missing piece.

As we briefly anticipated earlier, we are going to use winston for the logging functionality of this app.

We want a timestamp next to every log line to get an idea of how much time our application takes to find a solution – so we can write the following module to export a configured instance of winston that we can simply import in every module, ready to use:

// src/logger.js

'use strict';

const dateFormat = require('dateformat');
const winston = require('winston');

module.exports = new (winston.Logger)({
  transports: [
    new (winston.transports.Console)({
      timestamp: () => dateFormat(new Date(), 'yyyy-mm-dd HH:MM:ss'),
      colorize: true
    })
  ]
});
Notice that we are just adding the timestamp with a specific format of our choice and enabling colorized output on the console.

Winston can be configured to support multiple transport layers like log files, network and syslog, so, if you want, you can get really fancy here and make it much more complex.

Running the application

We are finally ready to give our app a spin, let’s brute force some JWT tokens!

Our token of choice is the following:

eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ

This token is the default one from, and its password is secret.

To run the server, we need to launch the following command:

node src/server.js eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ  

This command starts the server and initializes it with the default alphabet (abcdefghijklmnopqrstuwxyzABCDEFGHIJKLMNOPQRSTUWXYZ0123456789). Considering that the password is long enough to keep our clients busy for a while, and that we already know it, we can cheat a little bit and specify a much smaller alphabet to speed up the search. If you feel like taking a shortcut, add the option -a cerst to the server start command!

Now you can run any number of clients in separate terminals with:

node src/client.js  

After the first client is connected, you will start to see the activity going on in both the server and the client terminals. It might take a while to discover the password – depending on the number of clients you run, the power of your local machine and the alphabet you choose to use.

In the following picture you can see an example of running both the server (left column) and four clients (right column) applications on the same machine:

[Screenshot: the server (left column) and four clients (right column) running on the same machine]

In a real world case, you might want to run the server on a dedicated machine and then use as many machines as possible as clients. You could also run many clients per machine, depending on the number of cores in every machine.

Wrapping up

We are at the end of this experiment! I really hope you had fun and that you learned something new about Node.js, ZeroMQ and JWT tokens.

If you want to keep experimenting with this example and improving the application, here are some ideas that you might want to work on:

Also, if you want to learn more about other Node.js design patterns (including more advanced topics like scalability, architectural, messaging and integration patterns) you can check my book Node.js Design Patterns – Second Edition:

A little challenge

Can you crack the following JWT token?


If you can crack it, there’s a prize for you: append the password you discovered to the challenge URL (e.g. if the password is njdsp2e, the resulting URL ends with njdsp2e) to download the instructions to retrieve your prize! You won’t regret this challenge, I promise.

Have fun! Also, if you have questions or additional insights regarding this topic, please share them in the comments.


This article was peer reviewed with great care by Arthur Thevenet, Valerio De Carolis, Mario Casciaro, Padraig O’Brien, Joe Minichino and Andrea Mangano. Thank you guys for the amazing support!


Using Buffers to share data between Node.js and C++

By Scott Frees

One of the best things about developing with Node.js is the ability to move fairly seamlessly between JavaScript and native C++ code – thanks to V8’s add-on API. The ability to move into C++ is sometimes driven by processing speed, but more often by the fact that we already have C++ code and we simply want to use it from JavaScript.

We can categorize the different use cases for add-ons along (at least) two axes – (1) amount of processing time we’ll spend in the C++ code, and (2) the amount of data flowing between C++ and JavaScript.

[Figure: add-on use cases plotted along two axes – processing time (left: short, right: long) and amount of data transfer between C++ and JavaScript (top: low, bottom: high)]

Most articles discussing C++ add-ons for Node.js focus on the differences between the left and right quadrants. If you are in the left quadrants (short processing time), your add-on can possibly be synchronous – meaning the C++ code runs directly in the Node.js event loop when called.

In this case, the add-on function blocks and the caller waits for the return value, meaning no other operations can be done in the meantime. In the right quadrants, you would almost certainly design the add-on using the asynchronous pattern. In an asynchronous add-on function, the calling JavaScript code returns immediately. The calling code passes a callback function to the add-on, and the add-on does its work in a separate worker thread. This avoids locking up the Node.js event loop, as the add-on function does not block.

The difference between the top and bottom quadrants is often overlooked; however, it can be just as important.

V8 vs. C++ memory and data

If you are new to writing native add-ons, one of the first things you must master is the differences between V8-owned data (which you can access from C++ add-ons) and normal C++ memory allocations.

When we say “V8-owned”, we are referring to the storage cells that hold JavaScript data.

These storage cells are accessible through V8’s C++ API, but they aren’t ordinary C++ variables, since they can only be accessed in limited ways. While your add-on could restrict itself to ONLY using V8 data, it will more likely create its own variables too – in plain old C++. These could be stack or heap variables, and of course they are completely independent of V8.

In JavaScript, primitives (numbers, strings, booleans, etc.) are immutable, and a C++ add-on cannot alter the storage cells associated with primitive JavaScript variables. Primitive JavaScript variables can be reassigned to new storage cells created by C++ – but this means that changing data will always result in new memory allocation.

In the upper quadrant (low data transfer), this really isn’t a big deal. If you are designing an add-on that doesn’t have a lot of data exchange, then the overhead of all the new memory allocation probably doesn’t mean much. As your add-ons move closer to the lower quadrant, the cost of allocation / copying will start to hurt you.

For one, it costs you in terms of peak memory usage, and it also costs you in performance!

The time cost of copying all this data between JavaScript (V8 storage cells) and C++ (and back) usually kills the performance benefit you might be getting from running C++ in the first place! For add-ons in the lower-left quadrant (low processing, high data usage), the latency associated with data copying can push your add-on towards the right – forcing you to consider an asynchronous design.

V8 memory and asynchronous add-ons

In asynchronous add-ons we execute the bulk of our C++ processing code in a worker thread. If you are unfamiliar with asynchronous callbacks, you might want to check out a few tutorials (like here and here).

A central tenet of asynchronous add-ons is that you can’t access V8 (JavaScript) memory outside the event loop’s thread. This leads us to our next problem: if we have lots of data, that data must be copied out of V8 memory and into your add-on’s native address space from the event loop’s thread, before the worker thread starts. Likewise, any data produced or modified by the worker thread must be copied back into V8 by code executing in the event loop (in the callback). If you are interested in creating high-throughput Node.js applications, you should avoid spending lots of time in the event loop copying data!

Ideally, we’d prefer a way to let JavaScript and C++ operate on the same block of memory directly, with no copying between V8 storage cells and C++ variables in either direction.

Node.js Buffers to the rescue

So, we have two somewhat related problems.

  1. When working with synchronous add-ons, unless we leave the data untouched, we’ll likely need to spend a lot of time moving it between V8 storage cells and plain old C++ variables – which costs us.
  2. When working with asynchronous add-ons, we ideally should spend as little time in the event loop as possible. This is why we still have a problem – since we must do our data copying in the event loop’s thread due to V8’s multi-threaded restrictions.

This is where an often overlooked feature of Node.js helps us with add-on development – the Buffer. Quoting the Node.js official documentation,

Instances of the Buffer class are similar to arrays of integers but correspond to fixed-sized, raw memory allocations outside the V8 heap.

This is exactly what we are looking for – because the data inside a Buffer is not stored in a V8 storage cell, it is not subjected to the multi-threading rules of V8. This means that we can interact with it in place from a C++ worker thread started by an asynchronous add-on.

How Buffers work

Buffers store raw binary data, and they can be found in the Node.js API for reading files and other I/O devices.

Borrowing from some examples in the Node.js documentation, we can create initialized buffers of a specified size, buffers pre-set with a specified value, buffers from arrays of bytes, and buffers from strings.

const fs = require('fs');

// buffer with size 10 bytes
const buf1 = Buffer.alloc(10);

// buffer filled with 1's (10 bytes)
const buf2 = Buffer.alloc(10, 1);

// buffer containing [0x1, 0x2, 0x3]
const buf3 = Buffer.from([1, 2, 3]);

// buffer containing ASCII bytes [0x74, 0x65, 0x73, 0x74]
const buf4 = Buffer.from('test');

// buffer containing bytes from a file
const buf5 = fs.readFileSync("some file");

Buffers can be turned back into traditional JavaScript data (strings) or written back out to files, databases, or other I/O devices.
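For instance, the conversion back to JavaScript data can be sketched directly in Node.js. Note that `slice` returns a view over the same memory rather than a copy, which is exactly the no-copy behavior we care about:

```javascript
const buf = Buffer.from('hello', 'utf8');
console.log(buf.toString('hex'));   // '68656c6c6f'
console.log(buf.toString('utf8'));  // 'hello'

// slice() returns a view over the same memory – no copying involved,
// so writing through the view is visible in the original buffer.
const view = buf.slice(0, 4);
view[0] = 0x48; // ASCII 'H'
console.log(buf.toString('utf8'));  // 'Hello'
```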

How to access Buffers in C++

When building an add-on for Node.js, the best place to start is by making use of the NAN (Native Abstractions for Node.js) API rather than directly using the V8 API – which can be a moving target. There are many tutorials on the web for getting started with NAN add-ons – including examples in NAN’s code base itself. I’ve written a bit about it here, and it’s also covered in a lot of depth in my ebook.

First, let’s see how an add-on can access a Buffer sent to it from JavaScript. We’ll start with a simple JS program that requires an add-on that we’ll create in a moment:

'use strict';
// Requiring the add-on that we'll build in a moment...
const addon = require('./build/Release/buffer_example');

// Allocates memory holding ASCII "ABC" outside of V8.
const buffer = Buffer.from("ABC");

// synchronous, rotates each character by +13, in place
addon.rotate(buffer, buffer.length, 13);

console.log(buffer.toString('ascii'));

The expected output is “NOP”, the ASCII rotation by 13 of “ABC”. Let’s take a look at the add-on! It consists of three files (in the same directory, for simplicity):

// binding.gyp
{
  "targets": [
    {
      "target_name": "buffer_example",
      "sources": [ "buffer_example.cpp" ],
      "include_dirs" : ["<!(node -e \"require('nan')\")"]
    }
  ]
}

// package.json
{
  "name": "buffer_example",
  "version": "0.0.1",
  "private": true,
  "gypfile": true,
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
      "nan": "*"
  }
}
// buffer_example.cpp
#include <nan.h>
using namespace Nan;
using namespace v8;

NAN_METHOD(rotate) {
    char* buffer = (char*) node::Buffer::Data(info[0]->ToObject());
    unsigned int size = info[1]->Uint32Value();
    unsigned int rot = info[2]->Uint32Value();

    for(unsigned int i = 0; i < size; i++ ) {
        buffer[i] += rot;
    }
}

NAN_MODULE_INIT(Init) {
   Nan::Set(target, New<String>("rotate").ToLocalChecked(),
       GetFunction(New<FunctionTemplate>(rotate)).ToLocalChecked());
}

NODE_MODULE(buffer_example, Init)

The most interesting file is buffer_example.cpp. Notice that we’ve used node::Buffer‘s Data method to convert the first parameter sent to the add-on to a character array. This is now free for us to use in any way we see fit. In this case, we just perform an ASCII rotation of the text. Notice that there is no return value; the memory associated with the Buffer has been modified in place.

We can build the add-on by just typing npm install. The package.json tells npm to download NAN and build the add-on using the binding.gyp file. Running it will give us the “NOP” output we expect.
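As a sanity check, the same +13 rotation is easy to sketch in plain JavaScript, mutating the buffer in place just like the add-on does:

```javascript
// Pure-JS equivalent of the add-on's rotate(), useful for verifying its output.
function rotate(buffer, size, rot) {
  for (let i = 0; i < size; i++) {
    buffer[i] += rot; // Buffer elements wrap modulo 256 on assignment
  }
}

const buf = Buffer.from('ABC');
rotate(buf, buf.length, 13);
console.log(buf.toString('ascii')); // 'NOP'
```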

We can also create new buffers while inside the add-on. Let’s modify the rotate function to increment the input, but return another buffer containing the string resulting from a decrement operation:

NAN_METHOD(rotate) {
    char* buffer = (char*) node::Buffer::Data(info[0]->ToObject());
    unsigned int size = info[1]->Uint32Value();
    unsigned int rot = info[2]->Uint32Value();

    char * retval = new char[size];
    for(unsigned int i = 0; i < size; i++ ) {
        retval[i] = buffer[i] - rot;
        buffer[i] += rot;
    }

    info.GetReturnValue().Set(Nan::NewBuffer(retval, size).ToLocalChecked());
}

In JavaScript, we now capture the returned buffer:

var result = addon.rotate(buffer, buffer.length, 13);


Now the resulting buffer will contain ‘456’. Note the use of NAN’s NewBuffer function, which wraps the dynamically allocated retval array in a Node buffer. Doing so transfers ownership of this memory to Node.js, so the memory associated with retval will be reclaimed (by calling free) when the buffer goes out of scope in JavaScript. More on this issue later – as we don’t always want to have it happen this way!

You can find additional information about how NAN handles buffers here.

Example: PNG and BMP Image Processing

The example above is pretty basic and not particularly exciting. Let’s turn to a more practical example – image processing with C++. If you want to get the full source code for both the example above and the image processing code below, you can head over to my nodecpp-demo repository; the code is in the “buffers” directory.

Image processing is a good candidate for C++ add-ons, as it can often be time-consuming and CPU-intensive, and some processing techniques have parallelism that C++ can exploit well. In the example we’ll look at now, we’ll simply convert png-formatted data into bmp-formatted data.

Converting a png to bmp is not particularly time-consuming and it’s probably overkill for an add-on, but it’s good for demonstration purposes. If you are looking for a pure JavaScript implementation of image processing (including much more than png to bmp conversion), take a look at JIMP.

There are a good number of open-source C++ libraries that can help us with this task. I’m going to use LodePNG, as it is dependency-free and quite simple to use. Many thanks to the developer, Lode Vandevenne, for providing such an easy-to-use library!

Setting up the add-on

For this add-on, we’ll create the following directory structure, which includes two source files from the lodepng distribution, namely lodepng.h and lodepng.cpp.

 |--- binding.gyp
 |--- package.json
 |--- png2bmp.cpp  # the add-on
 |--- index.js     # program to test the add-on
 |--- sample.png   # input (will be converted to bmp)
 |--- lodepng.h    # from lodepng distribution
 |--- lodepng.cpp  # from lodepng distribution

lodepng.cpp contains all the necessary code for doing image processing, and I will not discuss its workings in detail. In addition, the lodepng distribution contains sample code that allows you to specifically convert between png and bmp. I’ve adapted it slightly and will put it in the add-on’s source code file, png2bmp.cpp, which we will take a look at shortly.

Let’s look at what the actual JavaScript program looks like before diving into the add-on code itself:

'use strict';  
const fs = require('fs');  
const path = require('path');  
const png2bmp = require('./build/Release/png2bmp');

const png_file = process.argv[2];  
const bmp_file = path.basename(png_file, '.png') + ".bmp";  
const png_buffer = fs.readFileSync(png_file);

const bmp_buffer = png2bmp.getBMP(png_buffer, png_buffer.length);  
fs.writeFileSync(bmp_file, bmp_buffer);  

The program uses a filename for a png image as a command line option. It calls an add-on function getBMP which accepts a buffer containing the png file and its length. This add-on is synchronous, but we’ll take a look at the asynchronous version later on too.
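A quick way to sanity-check the returned buffer from JavaScript is to look at the BMP magic number – every bmp file starts with the ASCII bytes 'B' and 'M'. The helper below is purely illustrative (it is not part of the add-on):

```javascript
// Hypothetical helper: checks the 2-byte BMP signature (0x42 0x4D = "BM").
function looksLikeBMP(buf) {
  return buf.length >= 2 && buf[0] === 0x42 && buf[1] === 0x4d;
}

console.log(looksLikeBMP(Buffer.from('BM......'))); // true
console.log(looksLikeBMP(Buffer.from('PNG')));      // false
```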

Here’s the package.json, which sets up npm start to invoke the index.js program with a command-line argument of sample.png (any generic png image will do):

{
  "name": "png2bmp",
  "version": "0.0.1",
  "private": true,
  "gypfile": true,
  "scripts": {
    "start": "node index.js sample.png"
  },
  "dependencies": {
      "nan": "*"
  }
}


Here is the binding.gyp file – which is fairly standard, other than a few compiler flags needed to compile lodepng. It also includes the requisite references to NAN.

{
  "targets": [
    {
      "target_name": "png2bmp",
      "sources": [ "png2bmp.cpp", "lodepng.cpp" ],
      "cflags": ["-Wall", "-Wextra", "-pedantic", "-ansi", "-O3"],
      "include_dirs" : ["<!(node -e \"require('nan')\")"]
    }
  ]
}

png2bmp.cpp will mostly contain V8/NAN code. However, it does have one image processing utility function – do_convert, adapted from lodepng’s png to bmp example code.

The function accepts a vector containing input data (png format) and a vector to put its output (bmp format) data into. That function, in turn, calls encodeBMP, which is straight from the lodepng examples.

Here is the full code listing of these two functions. The details are not important to the understanding of the add-on’s Buffer objects but are included here for completeness. Our add-on entry point(s) will call do_convert.

/*
ALL LodePNG code in this file is adapted from lodepng's
examples.
*/
void encodeBMP(std::vector<unsigned char>& bmp,
  const unsigned char* image, int w, int h)
{
  //3 bytes per pixel used for both input and output.
  int inputChannels = 3;
  int outputChannels = 3;

  //bytes 0-13
  bmp.push_back('B'); bmp.push_back('M'); //0: bfType
  bmp.push_back(0); bmp.push_back(0);
  bmp.push_back(0); bmp.push_back(0); //2: bfSize; filled in later
  bmp.push_back(0); bmp.push_back(0); //6: bfReserved1
  bmp.push_back(0); bmp.push_back(0); //8: bfReserved2
  bmp.push_back(54 % 256);
  bmp.push_back(54 / 256);
  bmp.push_back(0); bmp.push_back(0); //10: bfOffBits (54 header bytes)

  //bytes 14-53
  bmp.push_back(40); bmp.push_back(0);
  bmp.push_back(0); bmp.push_back(0);  //14: biSize
  bmp.push_back(w % 256);
  bmp.push_back(w / 256);
  bmp.push_back(0); bmp.push_back(0); //18: biWidth
  bmp.push_back(h % 256);
  bmp.push_back(h / 256);
  bmp.push_back(0); bmp.push_back(0); //22: biHeight
  bmp.push_back(1); bmp.push_back(0); //26: biPlanes
  bmp.push_back(outputChannels * 8);
  bmp.push_back(0); //28: biBitCount
  bmp.push_back(0); bmp.push_back(0);
  bmp.push_back(0); bmp.push_back(0);  //30: biCompression
  bmp.push_back(0); bmp.push_back(0);
  bmp.push_back(0); bmp.push_back(0);  //34: biSizeImage
  bmp.push_back(0); bmp.push_back(0);
  bmp.push_back(0); bmp.push_back(0);  //38: biXPelsPerMeter
  bmp.push_back(0); bmp.push_back(0);
  bmp.push_back(0); bmp.push_back(0);  //42: biYPelsPerMeter
  bmp.push_back(0); bmp.push_back(0);
  bmp.push_back(0); bmp.push_back(0);  //46: biClrUsed
  bmp.push_back(0); bmp.push_back(0);
  bmp.push_back(0); bmp.push_back(0);  //50: biClrImportant

  int imagerowbytes = outputChannels * w;
  //must be multiple of 4
  imagerowbytes = imagerowbytes % 4 == 0 ? imagerowbytes :
            imagerowbytes + (4 - imagerowbytes % 4);

  for(int y = h - 1; y >= 0; y--) //the rows are stored inversed in bmp
  {
    int c = 0;
    for(int x = 0; x < imagerowbytes; x++)
    {
      if(x < w * outputChannels)
      {
        int inc = c;
        //Convert RGB(A) into BGR(A)
        if(c == 0) inc = 2;
        else if(c == 2) inc = 0;
        bmp.push_back(image[inputChannels
            * (w * y + x / outputChannels) + inc]);
      }
      else bmp.push_back(0);
      c++;
      if(c >= outputChannels) c = 0;
    }
  }

  // Fill in the size
  bmp[2] = bmp.size() % 256;
  bmp[3] = (bmp.size() / 256) % 256;
  bmp[4] = (bmp.size() / 65536) % 256;
  bmp[5] = bmp.size() / 16777216;
}

bool do_convert(
  std::vector<unsigned char> & input_data,
  std::vector<unsigned char> & bmp)
{
  std::vector<unsigned char> image; //the raw pixels
  unsigned width, height;
  unsigned error = lodepng::decode(image, width,
    height, input_data, LCT_RGB, 8);
  if(error) {
    std::cout << "error " << error << ": "
              << lodepng_error_text(error)
              << std::endl;
    return false;
  }
  encodeBMP(bmp, &image[0], width, height);
  return true;
}

Sorry… that listing was long, but it’s important to see what’s actually going on! Let’s get to work bridging all this code to JavaScript.
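One detail worth calling out from encodeBMP is the row padding: every BMP row must occupy a multiple of 4 bytes. The same calculation can be sketched as a small JavaScript helper (the function name is mine, not from the listing):

```javascript
// Mirrors the C++ imagerowbytes computation: pad each row up to a multiple of 4 bytes.
function paddedRowBytes(width, channels) {
  const raw = width * channels;
  return raw % 4 === 0 ? raw : raw + (4 - raw % 4);
}

console.log(paddedRowBytes(3, 3)); // 12 – 9 raw bytes padded up to 12
console.log(paddedRowBytes(4, 3)); // 12 – already a multiple of 4
```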

Synchronous Buffer Processing

The png image data is actually read when we are in JavaScript, so it’s passed in as a Node.js Buffer. We’ll use NAN to access the buffer itself. Here’s the complete code for the synchronous version:

NAN_METHOD(GetBMP) {
    unsigned char* buffer = (unsigned char*) node::Buffer::Data(info[0]->ToObject());
    unsigned int size = info[1]->Uint32Value();

    std::vector<unsigned char> png_data(buffer, buffer + size);
    std::vector<unsigned char> bmp;

    if ( do_convert(png_data, bmp)) {
        info.GetReturnValue().Set(
            NewBuffer((char *)bmp.data(), bmp.size()).ToLocalChecked());
    }
}

NAN_MODULE_INIT(Init) {
   Nan::Set(target, New<String>("getBMP").ToLocalChecked(),
       GetFunction(New<FunctionTemplate>(GetBMP)).ToLocalChecked());
}

NODE_MODULE(png2bmp, Init)

In GetBMP, we use the familiar Data method to unwrap the buffer so we can work with it like a normal character array. Next, we build a vector around the input so we can pass it to our do_convert function listed above. Once the bmp vector is filled in by do_convert, we wrap it up in a Buffer and return it to JavaScript.

So here is the problem with this code: the data contained in the buffer we return is likely deleted before our JavaScript gets to use it. Why? Because the bmp vector is going to go out of scope as our GetBMP function returns. C++ vector semantics hold that when the vector goes out of scope, the vector’s destructor deletes all data within the vector – in our case, our bmp data will be deleted as well! This is a huge problem, since the Buffer we send back to JavaScript will have its data deleted out from under it. You might get away with this (race conditions are fun, right?), but it will eventually cause your program to crash.

Luckily, NewBuffer has an optional third and fourth parameter to give us some more control.

The third parameter is a callback which ends up being called when the Buffer gets garbage collected by V8. Remember that Buffers are JavaScript objects, whose data is stored outside of V8, but the object itself is under V8’s control.

From this perspective, it should make sense that a callback would be handy. When V8 destroys the buffer, we need some way of freeing up the data we have created – which is passed into the callback as its first parameter. The signature of the callback is defined by NAN – Nan::FreeCallback(). The fourth parameter is a hint to aid in deallocation, and we can use it however we want.

Since our problem is that the vector containing bitmap data goes out of scope, we can dynamically allocate the vector itself instead, and pass it into the free callback where it can be properly deleted when the Buffer has been garbage collected.

Below is the new delete_callback, along with the new call to NewBuffer. I’m sending the actual pointer to the vector as the hint, so it can be deleted directly.

void buffer_delete_callback(char* data, void* the_vector) {
  delete reinterpret_cast<vector<unsigned char> *> (the_vector);
}

NAN_METHOD(GetBMP) {
  unsigned char* buffer = (unsigned char*) node::Buffer::Data(info[0]->ToObject());
  unsigned int size = info[1]->Uint32Value();

  std::vector<unsigned char> png_data(buffer, buffer + size);
  std::vector<unsigned char> * bmp = new vector<unsigned char>();

  if ( do_convert(png_data, *bmp)) {
      info.GetReturnValue().Set(
          NewBuffer((char *)bmp->data(), bmp->size(),
              buffer_delete_callback, bmp).ToLocalChecked());
  }
}

Run this program by doing an npm install and then an npm start and you’ll see a sample.bmp generated in your directory that looks eerily similar to sample.png – just a whole lot bigger (because bmp compression is far less efficient than png).

Asynchronous Buffer Processing

Let’s develop an asynchronous version of the png to bitmap converter. We’ll perform the actual conversion in a C++ worker thread, using Nan::AsyncWorker. By using Buffer objects, we can avoid copying the png data, so we will only need to hold a pointer to the underlying data such that our worker thread can access it. Likewise, the data produced by the worker thread (the bmp vector) can be used to create a new Buffer without copying data.

class PngToBmpWorker : public AsyncWorker {
  public:
    PngToBmpWorker(Callback * callback,
        v8::Local<v8::Object> &pngBuffer, int size)
        : AsyncWorker(callback) {
        unsigned char* buffer =
          (unsigned char*) node::Buffer::Data(pngBuffer);

        std::vector<unsigned char> tmp(
          buffer, buffer + (unsigned int) size);

        png_data = tmp;
    }
    void Execute() {
       bmp = new vector<unsigned char>();
       do_convert(png_data, *bmp);
    }
    void HandleOKCallback () {
        Local<Object> bmpData =
               NewBuffer((char *)bmp->data(),
               bmp->size(), buffer_delete_callback,
               bmp).ToLocalChecked();
        Local<Value> argv[] = { bmpData };
        callback->Call(1, argv);
    }
  private:
    vector<unsigned char> png_data;
    std::vector<unsigned char> * bmp;
};

NAN_METHOD(GetBMPAsync) {
    int size = To<int>(info[1]).FromJust();
    v8::Local<v8::Object> pngBuffer =
      info[0]->ToObject();

    Callback *callback =
      new Callback(info[2].As<Function>());

    AsyncQueueWorker(
      new PngToBmpWorker(callback, pngBuffer, size));
}

Our new GetBMPAsync add-on function first unwraps the input buffer sent from JavaScript, and then initializes and queues a new PngToBmpWorker using NAN’s AsyncQueueWorker API. The worker object’s Execute method is called by libuv inside a worker thread, where the conversion is done. When the Execute function returns, libuv calls HandleOKCallback in the Node.js event loop thread, which creates the buffer and invokes the callback sent from JavaScript.

Now we can utilize this add-on function in JavaScript like this:

png2bmp.getBMPAsync(png_buffer, png_buffer.length,
  function(bmp_buffer) {
    fs.writeFileSync(bmp_file, bmp_buffer);
});
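Since getBMPAsync follows the Node-style callback convention, it is also straightforward to wrap it in a Promise on the JavaScript side – a hypothetical sketch, assuming the add-on interface shown above:

```javascript
// Hypothetical wrapper around an add-on exposing getBMPAsync(buffer, length, cb).
function getBMPPromise(addon, pngBuffer) {
  return new Promise(function (resolve) {
    addon.getBMPAsync(pngBuffer, pngBuffer.length, resolve);
  });
}

// Usage with the real add-on would look like:
// getBMPPromise(png2bmp, png_buffer).then(bmp => fs.writeFileSync(bmp_file, bmp));
```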


There were two core takeaways in this post:

  1. You can’t ignore the costs of copying data between V8 storage cells and C++ variables. If you aren’t careful, you can easily kill the performance boost you might have thought you were getting by dropping into C++ to perform your work!

  2. Buffers offer a way to work with the same data in both JavaScript and C++, thus avoiding the need to create copies.

Using buffers in your add-ons can be pretty painless. I hope I’ve been able to show you this through a simple demo application that rotates ASCII text, along with more practical synchronous and asynchronous image conversion examples. Hopefully, this post helps you boost the performance of your own add-ons!

A reminder: all the code from this post can be found in my nodecpp-demo repository, in the “buffers” directory.

If you are looking for more tips on how to design Node.js C++ add-ons, please check out my ebook on C++ and Node.js Integration.


Node Interactive Giveaway - Win 1 of the 3 tickets (until November 13)

By Ferenc Hamori

Node Interactive Giveaway -  Win 1 of the 3 tickets (until November 13)

Here at Trace by RisingStack, we have always been enthusiastic about supporting the Node.js developer community. This is the reason we publish new tutorials on our Engineering blog every week, and why we launched RisingStack Community and Node.js Daily just recently.

This takes us to the next step…

We’ll be attending Node.js Interactive North America as sponsors, and we’ll help you to be there too!

We have 3 spare tickets, and we’d love to give them to Node.js developers who are interested in attending the conference – for completely free.

To do so, you only have to take our super-short quiz and have a little bit of luck! We will randomly select the winners from the ones who complete it flawlessly.

Please take the quiz only if…

  • you are able to attend the full event (November 29 – December 2),
  • you will come if you win a ticket, and
  • you can cover the accommodation and travel for yourself.

Don’t spoil the opportunity for others! If you can’t make it, share this post with others instead.

About Node.js Interactive North America:

Node.js Interactive North America is the marquee event for JavaScript developers, the companies that rely on Node.js, and the vendors that support both of these constituents with tools, training, and other services. For more info, visit the event’s official website.


Building microservices: Stonemason & Generating microservice stubs

By Alec Lownes

Building microservices: Stonemason & Generating microservice stubs

This article explains how you can use Stonemason to create microservice stubs and how to generate boilerplate code automatically. The package allows you to get up and running with building a microservice-based web service as quickly as possible.

As a developer who takes on a lot of small, independent personal projects, I often found myself tasked with writing the same boilerplate code over and over again – before even getting to start working on the meat of the code.

Additionally, as I came to depend on more and more tools, the boilerplate that I would have to write seemed to expand exponentially. I would always find that the first time I tentatively started up a service, I had forgotten the webpack config or the babel presets, or something else equally obvious.

Even worse than writing the boilerplate myself was attempting to copy my previous projects, as I would always end up including some functionality in the dependencies or the code that I never needed for the new service.

I needed something like create-react-app, but for an entire microservice.

Microservice diversity

The problem with creating a stub-generator for microservices is the broad range of functionalities that microservices are used for.

What if I just wanted to provide API routes? Do we even need DB integration? I found that my previous projects had a broad range of required functionality, so I wanted to make a tool that could generate a microservice stub for any of them.

I identified three main components that an app could independently include or exclude without affecting the others:

  1. Database support
  2. API routes
  3. Front-end

If you were to chart out the possibilities, they would look like this:

DB | API | FRONT | Example

o | o | o | A server that listens to a port with no action.
x | o | o | A service that manipulates records in a database for upgrades, etc.
o | x | o | A service that performs some calculation on a request
o | o | x | An electron front-end or a static html site
x | x | o | A DB record getter/setter service
x | o | x | Not much
o | x | x | A website that performs some non-persisting service
x | x | x | A metrics page

As you can see, a wide range of services can be generated with just these three features being enabled or disabled. Additionally, many other features can be grouped into these three as sub-features, such as which type of database will be used and whether Redux will be used for front-end state management.
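For illustration, the three independent flags produce 2³ = 8 possible service shapes – one per row of the table above. A quick sketch (the option names here are mine, not Stonemason’s actual API):

```javascript
// Enumerate every combination of the three independent features.
const combos = [];
for (const db of [false, true]) {
  for (const api of [false, true]) {
    for (const front of [false, true]) {
      combos.push({ db, api, front });
    }
  }
}
console.log(combos.length); // 8
```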

Technology decisions

Stonemason generates a stub service with a fairly rigid set of core technologies. These can be included and removed, but for the most part, they can’t be substituted with different technologies that serve the same function. Maximally, Stonemason will generate a microservice using the following technology stack:

I wanted Stonemason to be easily extensible with new sub-options from the beginning, whenever I found a new favorite technology that I thought might be useful in the future.

While the stub generator initially started with a fairly complicated nest of template literals, I ended up going with Handlebars due to its logic-less nature. This would force me and any future contributors to keep all the logic within the generator, where it belongs.

I initially found Electron very difficult to work with, but decided to go with it anyway because it would enable a truly cross-platform GUI experience, and I could code the whole thing in JavaScript!

I decided at a late stage to include a command line interface when I found myself annoyed by how I had to switch from the Stonemason GUI to the command line to start installing the dependencies of my created app every time I tested it.

In comparison to the Electron GUI, building the Inquirer prompt object was a piece of cake. Additionally, Inquirer gave me a ton of nice visual selectors while also outputting the exact object I would end up passing to the generator.

Using Stonemason for building microservices stubs

Before anything else, Stonemason must be installed globally with the following command:

npm install -g stonemason

Generating a microservice with Stonemason can be accomplished in two different ways: Using the GUI or the CLI.

To use the GUI, run stonemason; to use the CLI, run stonemason-cli.

When using the CLI, you get a helpful default for the starting-directory question if you run it in the directory where you want the stub created.

Regardless of which way you start it up, Stonemason will end up asking you the same questions. As mentioned above, the main question you want to ask yourself is which combination of Database, API, and Front-end your microservice will need.

Note: If you end up using environment variables for your database path or port number, you will need to make sure these are set both in your hosting environment and on your local computer for testing (if you plan to host on Heroku, you can use the heroku local command to load them from a .env file).

At the end of the generation process, you will have a series of directories and files that form the framework of your microservice. In this directory, you will want to run the following command to install your microservice’s dependencies.

npm install

It is important at this step to make sure your app runs by running the following command. Any errors that Stonemason might have introduced can be caught at this stage.

npm start

Additionally, if you decided to use a front-end, the build and watch commands are provided for you to build your React app and watch it for changes in development.

npm run build

npm run watch

There you have it! You can start working on the API by going to /api/v1.js, or on your React app by going to /app/index.jsx, for example.


Node.js Weekly Update - 11 Nov, 2016

By Gergely Németh

Node.js Weekly Update - 11 Nov, 2016

From now on, we’ll collect and republish the most important Node.js news each week on the RisingStack community blog. We use various sources to collect them, like the awesome Node Weekly, Node.js Daily, EchoJS, Reddit/Node, Node Foundation and so on.

If you miss something from this weekly Node.js update, please let us know in the comments!

6 Must-read Articles, Updates of the Week:

○ Stability first – Mathias Buus

Innovation shouldn’t come from Node core. It should come from modules. It should come from modules because they are easy to publish and they are extremely easy to update.

○ Node Interactive Ticket Giveaway – Open Until 13 November

RisingStack is doing a Ticket Giveaway for Node Interactive North America. Complete a short quiz and win one of the 3 tickets!

○ Node Knockout this Saturday

Node Knockout is a 48-hour hackathon featuring node.js. It’s an online, virtual competition with contestants worldwide.

○ Node.js Monitoring Done right – Peter Marton

Node.js Monitoring is essential for companies building competitive products with great user experience — the goal of this article is to discuss the reasons for it.

○ Using Redis for Fun and Profit – Pedro Teixeira

Redis is the Swiss knife of in-memory databases: you can use it to implement many different use-cases, ranging from a data cache or a work queue to a statistics log.

○ 19 Things I Learnt Reading the Node.js Docs – David Gilbertson

I’d like to think I know Node pretty well. I haven’t written a web site that doesn’t use it for about 3 years now. But I’ve never actually sat down and read the docs.

Important Updates to the Node.js Core

Node v7.1.0 released

Notable changes

  • buffer: add buffer.transcode to transcode a buffer’s content from one encoding to another primarily using ICU (James M Snell) #9038
  • child_process: add public API for IPC channel (cjihrig) #9322
  • lib: make String(global) === ‘[object global]’ #9279
  • libuv: Upgraded to 1.10.0 #9267
  • readline: use icu based string width calculation (James M Snell) #9040

Read the full article: Node v7.1.0 released.

Node.js Weekly Update

This was a great week, with many useful Node.js tutorials. Let us know in the comments section if you’d add anything!


Understanding the JavaScript Prototype Chain & Inheritance

By Alec Lownes

Understanding the JavaScript Prototype Chain & Inheritance

In this two-part article, I will explain the JavaScript prototype chain and the scope chain so that you can understand how to debug specific issues and how to use them to your advantage.

JavaScript: a Despised Programming Language

It can be said that Javascript is one of the most despised programming languages. However, it is also one of the most popular languages, and we regularly encounter it every day in various forms.

A lot of this animosity comes from confusion about two key components of the language: the prototype chain and scoping. While Javascript’s inheritance and scoping is different from most languages, I think that with proper understanding, these quirks can be embraced and used to their full potential.

The JavaScript Prototype Chain

Javascript has an interesting inheritance model, which happens to be completely different from most OOP languages. While it is object-oriented, an object doesn’t have a type or a class that it gets its methods from, it has a prototype. It is important to understand the differences between these two, as they are not equivalent, and lead to much confusion down the line.

JavaScript Constructors

To create an object in Javascript, you first must define its constructor function.

var LivingEntity = function(location){
    this.x = location.x;
    this.y = location.y;
    this.z = location.z;
};

//New instance
var dog = new LivingEntity({
    x: 5,
    y: 0,
    z: 1
});

The constructor function is nothing more than a normal function. You may notice that we are referencing this in the constructor function above. this is not specific to constructor functions, and can be referenced in any function. Normally it points to the function’s scope of execution, which we will get to in the next section.

To create a new instance of this object, call the constructor with the new keyword in front of it.


Let’s say we want to add a method to LivingEntity called moveWest that will decrease the entity’s x component by 1. Since an object is just a map in Javascript, you might be tempted to add it to the instance of the object during or after construction.

//During construction
var LivingEntity = function(location){
    this.x = location.x;
    this.y = location.y;
    this.z = location.z;
    this.moveWest = function(){
        this.x -= 1;
    };
};

//OR after construction
dog.moveWest = function(){
    this.x -= 1;
};

Doing so is not the way to construct objects using prototypes, and both of these methods add unnecessary anonymous functions to memory.

Instead, we can add a single anonymous function to the prototype chain!

LivingEntity.prototype.moveWest = function(){
    this.x -= 1;
};

If we do this, there is only one anonymous function, whose reference is passed around to all LivingEntity objects.
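This difference is easy to observe: per-instance assignment creates a new function object for every instance, while a prototype method is a single shared reference. A small sketch with illustrative constructors:

```javascript
var PerInstance = function(){ this.f = function(){}; }; // one closure per instance
var Shared = function(){};
Shared.prototype.f = function(){};                      // one function, shared by all

var a1 = new PerInstance(), a2 = new PerInstance();
var b1 = new Shared(), b2 = new Shared();

console.log(a1.f === a2.f); // false – two separate function objects
console.log(b1.f === b2.f); // true  – the same reference via the prototype
```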

But what is .prototype? prototype is an attribute of all functions, and points to a map where attributes can be assigned that should be able to be accessed from all objects created with that function as the constructor.

Every object has a prototype that can be modified through the constructor’s prototype, even Object.

Object.prototype.a = 5;

var v = {};  
console.log(v.a); //5  

The prototype of an object is a way to store common attributes across all instances of a class, but in a way that is overwritable. If an object doesn’t have a reference to an attribute, that object’s prototype will be checked for the attribute.

LivingEntity.prototype.makeSound = function(){  
    console.log('meow');
};

//dog uses its prototype because it doesn't have makeSound as an attribute
dog.makeSound(); //meow

dog.makeSound = function(){  
    console.log('woof');
};

//now dog has makeSound as an attribute, it will use that instead of its prototype
dog.makeSound(); //woof  

The Prototype Chain

Every object has a prototype, including prototype objects themselves. This “chain” goes all the way back until it reaches an object that has no prototype, usually Object’s prototype. The prototype version of “inheritance” involves adding another link to the end of this chain, as shown below.

var Dragon = function(location){  
    /*
     * <Function>.call is a method that executes the defined function,
     * but with the "this" variable pointing to the first argument,
     * and the rest of the arguments being arguments of the function
     * that is being "called". This essentially performs all of
     * LivingEntity's constructor logic on Dragon's "this".
     */
    LivingEntity.call(this, location);
    //canFly is an attribute of the constructed object and not Dragon's prototype
    this.canFly = true;
};

/*
 * Object.create(<prototype object>) creates an object whose prototype
 * is the provided object, but without executing any constructor logic.
 * This example will return an object with a prototype that has the
 * "moveWest" and "makeSound" functions, but not x, y, or z attributes.
 */
Dragon.prototype = Object.create(LivingEntity.prototype);

/*
 * Now we can assign prototype attributes to Dragon without affecting
 * the prototype of LivingEntity.
 */
Dragon.prototype.fly = function(y){  
    this.y += y;
};

var sparky = new Dragon({  
    x: 0,
    y: 0,
    z: 0
});

When an attribute is called on an object, the object is first checked for that attribute, and if it doesn’t exist, then each link in its prototype chain is traversed until the attribute is found or the end is reached. In this way, sparky can use moveWest even though moveWest was not defined in its immediate prototype.

What do sparky and its prototype chain look like with only each object’s specific attributes listed?

  • sparky
    • x
    • y
    • z
    • canFly
  • sparky.__proto__ (Dragon.prototype)
    • fly
  • sparky.__proto__.__proto__ (LivingEntity.prototype)
    • makeSound
    • moveWest
  • sparky.__proto__.__proto__.__proto__ (Object.prototype)
    • hasOwnProperty
    • toString
    • etc…
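This traversal can be verified in code with Object.getPrototypeOf and hasOwnProperty. Here is a self-contained sketch that rebuilds a minimal version of the chain above:

```javascript
// Minimal rebuild of the LivingEntity -> Dragon chain from this article.
var LivingEntity = function(location){
    this.x = location.x;
};
LivingEntity.prototype.moveWest = function(){
    this.x -= 1;
};

var Dragon = function(location){
    LivingEntity.call(this, location); // run parent constructor logic on this
    this.canFly = true;                // own attribute, not on the prototype
};
Dragon.prototype = Object.create(LivingEntity.prototype);
Dragon.prototype.fly = function(y){
    this.y += y;
};

var sparky = new Dragon({x: 0});

// Own attributes live on the instance itself...
console.log(sparky.hasOwnProperty('canFly'));   // true
// ...while methods are found by walking the chain:
console.log(Object.getPrototypeOf(sparky) === Dragon.prototype);                 // true
console.log(Object.getPrototypeOf(Dragon.prototype) === LivingEntity.prototype); // true
console.log(sparky.hasOwnProperty('moveWest')); // false, but...
sparky.moveWest();                              // ...still callable via the chain
console.log(sparky.x); // -1
```

Object.getPrototypeOf is the standard way to inspect a link in the chain; the __proto__ notation used in the list above is the informal equivalent.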

Next Up

The second part of this article will discuss JavaScript scope chains in-depth and help you to increase your confidence when using these features of the language. If you have questions about the prototype chain, I’ll be glad to answer them in the comments!

Stay Tuned!


Node.js Weekly Update - 18 Nov, 2016

By Ferenc Hamori

Node.js Weekly Update - 18 Nov, 2016

If you miss something from this Node.js weekly update, please let us know in the comments!

6 Must-read Articles, Updates of the Week:

○ Pino, Fast Node.js logging for Production – Stefan Thies

During the Node Interactive event in Amsterdam we had the pleasure of speaking with Matteo Collina from nearForm about the blazingly fast logger “Pino”. In this post we’ll share our insights into Pino and the additional steps required to have a reliable and fast log management solution for Node.js applications.

○ 7 More npm Tricks to Knock Your Wombat Socks Off – Tierney Coren

There are definitely some tricks when it comes to using npm’s CLI. There’s a ton of small features that you’d never know about unless someone told you or you inspect the docs insanely thoroughly.

○ What Are The Bots Up To On npm? – Adam Baldwin

Last year I had a thought, “who else is downloading and running / testing random modules on npm.” Postulating that there might be bots, build systems or other researchers mass downloading and running modules from npm.

○ Node.js Garbage Collection Explained – Gergely Nemeth

Every application needs memory to work properly. Memory management provides ways to dynamically allocate memory chunks for programs when they request it, and free them when they are no longer needed – so that they can be reused.

○ Exploring Node.js core dumps using the llnode plugin for lldb – Howard Hellyer

This article is aimed as being a primer for developers who are interested in using core dumps to help them debug Node.js problems. If you are a C/C++ developer who is comfortable with core dumps, ulimits, gcore and debuggers then you are probably comfortable with using core dumps.

○ Abusing npm libraries for data exfiltration – Asankhaya Sharma

Package and dependency managers like npm allow command execution as part of the build process. Command execution provides an easy and convenient mechanism for developers to script tasks during the build. For instance, npm allows developers to use pre-install and post-install hooks to execute tasks.

Previously in the Node.js Weekly Update

Last week we read fantastic articles about Stability in the Node Core, Node.js monitoring, Redis, and we learned 19 things from the Node.js Docs. Node v7.1.0 was also released.


Build a Blog with Ruby on Rails – Part 2

By kinsomicrote


In the first part of this tutorial you set up a blog that accepts posts through a nice editor called CKEditor, so posts can be formatted to suit our taste.
At the moment, anyone who visits our blog can create a new post. You do not want to push this kind of blog to the internet; it is unwise and unprofessional. In this part, we are going to learn the following:

  • How to enable authentication using Devise.
  • How to enable image uploading.

To understand this properly, let’s write some code. We’ll start with the easy part.
The source for this tutorial is available on GitHub.

Installing the Devise Gem

Devise is a flexible authentication solution for Rails.

Add the Devise gem to your Gemfile:


gem 'devise'

Now run the command to install the gem:

bundle install

Run the command to generate the necessary Devise files.

rails g devise:install

Running the command creates two new files in your application: config/initializers/devise.rb and config/locales/devise.en.yml. The first is an initializer that is required for Devise to work in your application.
According to the output displayed on your terminal, you need to perform some tasks. First, navigate to config/environments/development.rb and paste in the line below, just above the end line, like I have below:

  config.action_mailer.default_url_options = { host: 'localhost', port: 3000 }

This helps handle your mailer in the development environment. The second thing to do is add a block of code to your application layout for flash messages. So go ahead and open app/views/layouts/application.html.erb and paste the lines below above the yield block:

  <p class="notice"><%= notice %></p>
  <p class="alert"><%= alert %></p>

Now to generate your Admin model, run the command below:

rails generate devise Admin

And that should do it! When that is done, run the command to migrate your database:

rake db:migrate

You will want to generate Devise views. Devise provides you with a command to do that:

rails g devise:views admin

And that will generate a lot of files for you. For the purpose of this tutorial, we want to edit a few of the pages Devise generated for us so that they look beautiful and presentable for our users. Paste the code below into their respective files.
The first file, views/admins/registrations/edit.html.erb, handles the editing of our Admin profile. We want to make it look beautiful. You will agree with me that user experience is very important.


<div class="col-sm-offset-4 col-sm-4 col-xs-12">
  <h2 class="text-center">Edit <%= resource_name.to_s.humanize %></h2>

  <%= form_for(resource, as: resource_name, url: registration_path(resource_name), html: { method: :put, class: "form" }) do |f| %>
    <%= devise_error_messages! %>

    <div class="form-group">
      <%= f.label :email %><br />
      <%= f.email_field :email, autofocus: true, class: "form-control" %>
    </div>

    <% if devise_mapping.confirmable? && resource.pending_reconfirmation? %>
      <div>Currently waiting confirmation for: <%= resource.unconfirmed_email %></div>
    <% end %>

    <div class="form-group">
      <%= f.label :password %> <i>(leave blank if you don't want to change it)</i><br />
      <%= f.password_field :password, autocomplete: "off", class: "form-control" %>
    </div>

    <div class="form-group">
      <%= f.label :password_confirmation %><br />
      <%= f.password_field :password_confirmation, autocomplete: "off", class: "form-control" %>
    </div>

    <div class="form-group">
      <%= f.label :current_password %> <i>(we need your current password to confirm your changes)</i><br />
      <%= f.password_field :current_password, autocomplete: "off", class: "form-control" %>
    </div>

    <%= f.submit "Update", class: "btn btn-primary btn-lg" %>
  <% end %>

  <h3 class="text-center">Cancel my account</h3>

  <p>Unhappy? <%= button_to "Cancel my account", registration_path(resource_name), data: { confirm: "Are you sure?" }, method: :delete, class: "btn btn-danger btn-lg" %></p>

  <%= link_to "Back", :back, class: "btn btn-primary btn-lg" %>
</div>


Next we want to work on the sign up page for our Admin. It is important you know that we are just editing the default pages by adding some snippets of code from bootstrap. Copy and paste the code below into views/admins/registrations/new.html.erb and see the beautiful look that turns up.


<div class="col-sm-offset-4 col-sm-4 col-xs-12">
  <h2 class="text-center">Sign up</h2>

  <%= form_for(resource, as: resource_name, url: registration_path(resource_name), html: {class: "form"}) do |f| %>
    <%= devise_error_messages! %>

    <div class="form-group">
      <%= f.label :email %><br />
      <%= f.email_field :email, autofocus: true, class: "form-control" %>
    </div>

    <div class="form-group">
      <%= f.label :password %>
      <% if @minimum_password_length %>
      <em>(<%= @minimum_password_length %> characters minimum)</em>
      <% end %><br />
      <%= f.password_field :password, autocomplete: "off", class: "form-control" %>
    </div>

    <div class="form-group">
      <%= f.label :password_confirmation %><br />
      <%= f.password_field :password_confirmation, autocomplete: "off", class: "form-control" %>
    </div>

    <%= f.submit "Sign up", class: "btn btn-primary btn-lg" %>
  <% end %>
</div>

<%= render "admins/shared/links" %>

We are done with the registrations of our Admin. You should have noticed by now that Devise groups all of these pages into separate folders. This is because there are different controllers handling each of these pages. The next page we want to edit is the sign in page. Paste the following code into views/admins/sessions/new.html.erb:


<div class="col-sm-offset-4 col-sm-4 col-xs-12">
  <h2 class="text-center">Log in</h2>

  <%= form_for(resource, as: resource_name, url: session_path(resource_name), html: {class: "form"}) do |f| %>
    <div class="form-group">
      <%= f.label :email %><br />
      <%= f.email_field :email, autofocus: true, class: "form-control" %>
    </div>

    <div class="form-group">
      <%= f.label :password %><br />
      <%= f.password_field :password, autocomplete: "off", class: "form-control" %>
    </div>

    <% if devise_mapping.rememberable? %>
      <div class="form-group">
        <%= f.check_box :remember_me %>
        <%= f.label :remember_me %>
      </div>
    <% end %>

    <%= f.submit "Log in", class: "btn btn-primary btn-lg" %>
  <% end %>
</div>


<%= render "admins/shared/links" %>

I told you about partials in the first part of this tutorial. Devise has a partial that it uses to render common links across all the pages we have seen above. This partial is in a different folder called shared. Let us edit that as well. Paste the code below into views/admins/shared/_links.html.erb:


<div class="col-sm-offset-4 col-sm-4 col-xs-12">
  <% if controller_name != 'sessions' %>
    <%= link_to "Log in", new_session_path(resource_name) %><br />
  <% end %>

  <% if devise_mapping.registerable? && controller_name != 'registrations' %>
    <%= link_to "Sign up", new_registration_path(resource_name) %><br />
  <% end %>

  <% if devise_mapping.recoverable? && controller_name != 'passwords' && controller_name != 'registrations' %>
    <%= link_to "Forgot your password?", new_password_path(resource_name) %><br />
  <% end %>

  <% if devise_mapping.confirmable? && controller_name != 'confirmations' %>
    <%= link_to "Didn't receive confirmation instructions?", new_confirmation_path(resource_name) %><br />
  <% end %>

  <% if devise_mapping.lockable? && resource_class.unlock_strategy_enabled?(:email) && controller_name != 'unlocks' %>
    <%= link_to "Didn't receive unlock instructions?", new_unlock_path(resource_name) %><br />
  <% end %>

  <% if devise_mapping.omniauthable? %>
    <% resource_class.omniauth_providers.each do |provider| %>
      <%= link_to "Sign in with #{OmniAuth::Utils.camelize(provider)}", omniauth_authorize_path(resource_name, provider) %><br />
    <% end %>
  <% end %>
</div>

Note that we did all of the edits above to provide a great experience for our admins. When you reload the page you will notice that nothing changes. This is because of the way Devise was built to serve its views. To make our changes work, we have to edit a line in the Devise initializer.
Go to line 223 of your Devise initializer, config/initializers/devise.rb; uncomment the line and change false to true, so it looks like this:

  config.scoped_views = true

Now point your browser to http://localhost:3000/admins/sign_in and see the new format of your sign in page.

You need a navigation bar so that users can easily move from one page to another. We do not need anything too serious. Create a navigation partial:

touch app/views/layouts/_navigation.html.erb

Paste the following code into the file you just created.


<nav class="navbar navbar-default">
  <div class="container-fluid">
    <!-- Brand and toggle get grouped for better mobile display -->
    <div class="navbar-header">
      <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#navbar-collapse" aria-expanded="false">
        <span class="sr-only">Toggle navigation</span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
        <span class="icon-bar"></span>
      </button>
      <a class="navbar-brand" href="#">Scotch Blog</a>
    </div>
    <div class="collapse navbar-collapse" id="navbar-collapse">
      <ul class="nav navbar-nav navbar-right">
        <li><%= link_to 'Home', root_path %></li>
        <% if admin_signed_in? %>
        <!--This block is only visible to signed in admins -->
          <li><%= link_to 'New Post', new_post_path %></li>
          <li><%= link_to 'My Account', edit_admin_registration_path  %></li>
          <li><%= link_to 'Logout', destroy_admin_session_path, :method => :delete %></li>
        <!-- The block ends here -->
        <% else %>
          <li><%= link_to 'Login', new_admin_session_path  %></li>
        <% end %>
      </ul>
    </div>
  </div>
</nav>

Before the navigation bar can be visible on your website you need to render it. In this case, the rendering will be done from the application layout.
See what I have in mine:


<!DOCTYPE html>
<html>
<head>
  <%= stylesheet_link_tag    'application', media: 'all', 'data-turbolinks-track' => true %>
  <%= javascript_include_tag 'application', 'data-turbolinks-track' => true %>
  <%= csrf_meta_tags %>
</head>
<body>
  <!-- Render navigation bar -->
  <%= render "layouts/navigation" %>
  <p class="notice"><%= notice %></p>
  <p class="alert"><%= alert %></p>
  <div class="container-fluid">
    <%= yield %>
  </div>
</body>
</html>


Now reload your browser and you should see the navigation bar displayed.

Authenticate Action

At this point, you have your admin all set up. Now you can set up the authentication as stated earlier; we need just one line to do so.
Navigate to app/controllers/posts_controller.rb and add this line of code.

  #This authenticates admin whenever a post is to be created, updated or destroyed.
  before_action :authenticate_admin!, except: [:index, :show]

We are using the before_action callback provided by Rails to make sure that whenever ANY action EXCEPT index and show is called, authentication will be requested.
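To make the except: semantics concrete, here is a toy, plain-Ruby illustration of how an except: list gates a filter like authenticate_admin!. This is not Rails internals; FILTER and filter_runs_for? are made-up names for the sketch.

```ruby
# Toy model of a before_action filter with an `except:` list.
# Rails implements callbacks very differently; this only mirrors the decision.
FILTER = { name: :authenticate_admin!, except: [:index, :show] }.freeze

def filter_runs_for?(action)
  # The filter runs for every action NOT listed under except:
  !FILTER[:except].include?(action)
end

puts filter_runs_for?(:index)   # index is excepted, so no authentication
puts filter_runs_for?(:create)  # create is not excepted, so authentication runs
```

The inverse option, only:, whitelists the actions the filter applies to instead of blacklisting exceptions.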


One more thing, you need to validate the presence of title and body each time a post is going to be created. You do not want your admins to create posts that have no title or body; that will piss your users off.
Validation is always done in the Model.
Open up your Post model and make it look like this:


class Post < ActiveRecord::Base
  #This validates presence of title, and makes sure that the length is not more than 140 characters
  validates :title, presence: true, length: {maximum: 140}
  #This validates presence of body
  validates :body, presence: true
end

Admin Actions on Index Page

To make work easy for the admin, and for the sake of user experience, it is wise to include links for easy navigation on the index page. One important issue, though: these links have to be visible to the admin only.
Now open your index file and paste in the code I have below:


<div class="container">
  <div class="col-sm-10 col-sm-offset-1 col-xs-12">
    <% @posts.each do |post| %>
    <div class="col-xs-12 text-center">
      <div class="text-center">
        <h2><%= post.title %></h2>
        <h6><%= post.created_at.strftime('%b %d, %Y') %></h6>
        <%= raw(post.body).truncate(358) %>
      </div>
      <div class="text-center">
        <%= link_to "READ MORE", post_path(post) %>
      </div>
      <% if admin_signed_in? %>
        <%= link_to "Show", post_path(post), class: "btn btn-primary" %>
        <%= link_to "Edit", edit_post_path(post), class: "btn btn-default" %>
        <%= link_to "Delete", post_path(post), class: "btn btn-danger", data: {:confirm => "Are you sure?"}, method: :delete %>
      <% end %>
      <hr />
    </div>
    <% end %>
  </div>
</div>

Now log out if you are logged in and you should not be able to create a new post when you point your browser to http://localhost:3000/posts/new.
At this point, your blog is secured from intruders who may want to gain unprivileged access.

Image Uploading for Posts

First, we want to enable image uploads for our posts, so that images can be added when new posts are created.
We will make use of a gem called carrierwave.
Open your Gemfile and add it in.

gem 'carrierwave'
gem 'mini_magick'

CarrierWave enables seamless uploading of images in your Rails application. MiniMagick handles the processing of your images.
Run the command to install:

bundle install

Next, we run the generator command to generate some important files to ensure CarrierWave works with CKEditor.

rails generate ckeditor:install --orm=active_record --backend=carrierwave

That will generate some outputs for us. Next, migrate your database by running the command: rake db:migrate.
Run rails server to start up your server. Point your browser to http://localhost:3000/posts/new.
Follow the steps shown in the screenshots below.

When done, paste in some text and write a title. Click on the Create Post button to submit your post. Your new posts should have an image uploaded.
That was easy!

Image Uploading for Admins

Next, let us add an avatar feature for our admins. We will also make use of carrierwave.
Create an initializer for carrierwave at config/initializers/carrier_wave.rb and add in the code below:


require 'carrierwave/orm/activerecord'

We will start with generating an uploader;

rails generate uploader Avatar

This will create a new file at app/uploaders/avatar_uploader.rb.
Open the file in your text editor and edit yours to look like mine:


# encoding: utf-8

class AvatarUploader < CarrierWave::Uploader::Base

  #include mini_magick for image processing
  include CarrierWave::MiniMagick

  #storage option
  storage :file

  #directory for storing image
  def store_dir
    "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
  end

  # Create different versions of your uploaded files:
  version :thumb do
    process :resize_to_fit => [50, 50]
  end

  version :medium do
    process :resize_to_fit => [300, 300]
  end

  version :small do
    process :resize_to_fit => [140, 140]
  end

  # Add a white list of extensions which are allowed to be uploaded.
  def extension_white_list
    %w(jpg jpeg gif png)
  end

end

Let us add a string column to our admin table for the avatars.

rails g migration add_avatar_to_admins avatar:string
rake db:migrate

Open your admin model and mount your AvatarUploader. Here is how to do it;


class Admin < ActiveRecord::Base
  # Include default devise modules. Others available are:
  # :confirmable, :lockable, :timeoutable and :omniauthable
  devise :database_authenticatable, :registerable,
         :recoverable, :rememberable, :trackable, :validatable

  #mount avatar uploader
  mount_uploader :avatar, AvatarUploader
end

For admins to be able to upload images, we need to whitelist avatar, which makes it a permitted parameter. We will do it inside our application_controller.rb. Like this:


class ApplicationController < ActionController::Base
  # Prevent CSRF attacks by raising an exception.
  # For APIs, you may want to use :null_session instead.
  protect_from_forgery with: :exception

  before_action :configure_permitted_parameters, if: :devise_controller?

  protected

  #configure permitted parameters for devise
  def configure_permitted_parameters
    added_attrs = [:email, :password, :password_confirmation, :remember_me, :avatar, :avatar_cache]
    devise_parameter_sanitizer.permit :sign_up, keys: added_attrs
  end
end

Finally, let us add the field through which images will be uploaded to our admin sign up page.


<div class="col-sm-offset-4 col-sm-4 col-xs-12">
  <h2 class="text-center">Sign up</h2>

  <%= form_for(resource, as: resource_name, url: registration_path(resource_name), html: {multipart: true, class: "form"}) do |f| %>
    <%= devise_error_messages! %>

    <div class="form-group">
      <%= f.label :email %><br />
      <%= f.email_field :email, autofocus: true, class: "form-control" %>
    </div>

    <div class="form-group">
      <%= f.label :password %>
      <% if @minimum_password_length %>
      <em>(<%= @minimum_password_length %> characters minimum)</em>
      <% end %><br />
      <%= f.password_field :password, autocomplete: "off", class: "form-control" %>
    </div>

    <div class="form-group">
      <%= f.label :password_confirmation %><br />
      <%= f.password_field :password_confirmation, autocomplete: "off", class: "form-control" %>
    </div>

    <!-- Field for image upload -->
    <div class="form-group">
      <%= f.label :avatar do %><br />
        <%= f.file_field :avatar, class: "form-control" %>
        <%= f.hidden_field :avatar_cache %>
      <% end %>
    </div>

    <%= f.submit "Sign up", class: "btn btn-primary btn-lg" %>
  <% end %>
</div>

<%= render "admins/shared/links" %>

The page will look like this:


In this part you have learned how to authenticate in your Rails application using Devise. You also learned about a gem called carrierwave. With this gem, you were able to enable image uploading in your blog.
I hope you enjoyed it.


Getting Started With Flask, A Python Microframework

By mbithenzomo


Flask is a simple, easy-to-use microframework for Python that can help build scalable and secure web applications. Here are a few reasons why Flask is great for beginners:

  1. It’s easy to set up
  2. It’s supported by an active community
  3. It’s well documented
  4. It’s very simple and minimalistic, and doesn’t include anything you won’t use
  5. At the same time, it’s flexible enough that you can add extensions if you need more functionality

In this tutorial, we’ll cover the following:

  1. Installation of Flask and its prerequisites
  2. Recommended file and directory structure for a Flask project
  3. Configuration and initialization of a Flask app
  4. Creating views and templates

By the end of this tutorial, we will have built a simple static website using Flask. The code used in this tutorial is available for your reference on GitHub.

Ready? Let’s dive in!


We’ll need the following installed for this tutorial:

  1. Python (this tutorial uses Python 2)
  2. virtualenv and virtualenvwrapper
  3. Flask

You may already have Python installed on your system. You can check by running the python command in your terminal. If it’s installed, you should see the following output:

$ python
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.

If you don’t have it installed, you can download it here.

We’ll start by installing virtualenv, a tool to create isolated Python environments. We need to use virtual environments to keep the dependencies used by different Python projects separate, and to keep our global site-packages directory clean. We’ll go one step further and install virtualenvwrapper, a set of extensions that make using virtualenv a whole lot easier by providing simpler commands.

$ pip install virtualenv
$ pip install virtualenvwrapper
$ export WORKON_HOME=~/Envs
$ source /usr/local/bin/virtualenvwrapper.sh

To create and activate a virtualenv, run the following commands:

$ mkvirtualenv my-venv
$ workon my-venv

Told you the commands were simple! We now have a virtualenv called my-venv, which we have activated and are currently working on. Now, any dependencies we install will be installed here and not globally. Remember to activate the virtualenv whenever you want to use or work on this project!

Next, let’s create a directory for our app. This is where all our files will go:

$ mkdir my-project
$ cd my-project

Finally, let’s install Flask:

$ pip install Flask

Installing Flask also installs a few other dependencies, which you will see when you run the following command:

$ pip freeze

What do all these packages do? Flask uses Click (Command Line Interface Creation Kit) for its command-line interface, which allows you to add custom shell commands for your app. ItsDangerous provides security when sending data using cryptographic signing. Jinja2 is a powerful template engine for Python, while MarkupSafe is an HTML string handling library. Werkzeug is a utility library for WSGI, a protocol that ensures web apps and web servers can communicate effectively.

You can save the output above in a file. This is good practice because anyone who wants to work on or run your project will need to know the dependencies to install. The following command will save the dependencies in a requirements.txt file:

pip freeze > requirements.txt

Say “Hello World!” with Flask

I think any beginner programming tutorial would be remiss if it didn’t start with the classic “Hello World!” So here’s how to do this in Flask:

Create the following file, hello-world.py, in your favourite text editor (I’m an Atom girl, myself):


from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello World!'

We begin by importing the Flask class, and creating an instance of it. We use the __name__ argument to indicate the app’s module or package, so that Flask knows where to find other files such as templates. Then we have a simple function that will display the string Hello World!. The preceding decorator simply tells Flask which path to display the result of the function on. In this case, we have specified the route /, which is the home URL.
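Conceptually, a route decorator just records which function should serve which path. The toy registry below is a hypothetical sketch of that idea; it is not Flask’s actual implementation, and route and routes are made-up names:

```python
# Toy sketch of what a route decorator does conceptually (NOT Flask code).
routes = {}

def route(path):
    def decorator(view_func):
        routes[path] = view_func  # remember which function serves this path
        return view_func          # leave the decorated function unchanged
    return decorator

@route('/')
def hello_world():
    return 'Hello World!'

# Dispatching a request for '/' just looks up and calls the registered view.
print(routes['/']())  # Hello World!
```

Flask does much more (URL converters, HTTP methods, a full request context), but the registration idea is the same.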

Let’s see this in action, shall we? In your terminal, run the following:

$ export FLASK_APP=hello-world.py
$ flask run
 * Serving Flask app "hello-world"
 * Running on (Press CTRL+C to quit)

The first command tells the system which app to run. The next one starts the server. Enter the specified URL ( in your browser. Voila! It works!

Directory Structure

So far, we only have one functional file in our project: hello-world.py. A real-world web project usually has more files than that. It’s important to maintain a good directory structure, so as to organize the different components of the application separately. These are a few of the common directories in a Flask project:

  1. /app: This is a directory within my-project. We’ll put all our code in here, and leave other files, such as the requirements.txt file, outside.
  2. /app/templates: This is where our HTML files will go.
  3. /app/static: This is where static files such as CSS and JavaScript files as well as images usually go. However, we won’t be needing this folder for this tutorial since we won’t be using any static files.

Create the directories with:

$ mkdir app app/templates

Your project directory should now look like this:

├── my-project
       ├── app
       │   └── templates
       ├── hello-world.py
       └── requirements.txt

The hello-world.py file seems a little out of place now, doesn’t it? Don’t worry, we’ll fix that in the next section.

File Structure

For the “Hello World!” example, we only had one file, and we started the Flask server manually from the terminal. To build our website, we’ll need more files that serve various functions. Most Flask apps have the following basic file structure:

  1. run.py: This is the application’s entry point. We’ll run this file to start the Flask server and launch our application.
  2. config.py: This file contains the configuration variables for your app, such as database details.
  3. app/__init__.py: This file initializes a Python module. Without it, Python will not recognize the app directory as a module.
  4. app/views.py: This file contains all the routes for our application. This will tell Flask what to display on which path.
  5. app/models.py: This is where the models are defined. A model is a representation of a database table in code. However, because we will not be using a database in this tutorial, we won’t be needing this file.

Some projects have more modules (for example, an app/views directory with many views files in it), but this’ll do for now. Go ahead and create these files, and delete hello-world.py since we won’t be needing it anymore:

$ touch run.py config.py
$ cd app
$ touch __init__.py views.py models.py
$ rm ../hello-world.py

Here’s our latest directory structure:

├── my-project
       ├── app
       │   ├── __init__.py
       │   ├── models.py
       │   ├── templates
       │   └── views.py
       ├── config.py
       ├── requirements.txt
       └── run.py

Now let’s fill these empty files with some code!


The config.py file should contain one variable per line, like so:


# Enable Flask's debugging features. Should be False in production
DEBUG = True

You can also choose to have different files for testing, development, and production, and put them in a config directory. Note that you may have some variables that should not be publicly shared, such as passwords and secret keys. These can be put in an instance/config.py file, which should not be pushed to version control.
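A useful detail about the config loading used in the next section: app.config.from_object copies only the UPPERCASE attributes of the given object into the config mapping; lowercase names are ignored, which is why config variables are written in capitals. The snippet below is a simplified stand-in for that behaviour — the from_object function and the fake config module here are illustrative, not Flask code:

```python
import types

# Stand-in for a config.py module (hypothetical, for illustration only).
config = types.ModuleType('config')
config.DEBUG = True                # uppercase: picked up as a setting
config.secret_helper = 'ignored'   # lowercase: skipped

def from_object(obj):
    # Simplified mirror of how Flask filters settings: uppercase names only.
    return {key: getattr(obj, key) for key in dir(obj) if key.isupper()}

settings = from_object(config)
print(settings)  # {'DEBUG': True}
```

This is also why helper functions or temporary variables in config.py don’t leak into app.config.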


Next, we have to initialize our app with all our configurations. This is done in the app/__init__.py file. Note that if we set instance_relative_config to True, we can use app.config.from_object('config') to load the config.py file.

# app/__init__.py

from flask import Flask

# Initialize the app
app = Flask(__name__, instance_relative_config=True)

# Load the views
from app import views 

# Load the config file
app.config.from_object('config')
Run, Flask, Run!

All we have to do now is configure our run.py file so we can start the Flask server.


from app import app

if __name__ == '__main__':
    app.run()
Start a terminal window and run the file. Remember to activate your virtual environment first!

$ python run.py

We’ll get a 404 page because we haven’t written any views for our app. We’ll be fixing that shortly.


From the “Hello World!” example, you already have an understanding of how views work. We use the @app.route decorator to specify the path we’d like the view to be displayed on. We’ve already seen how to write a view that returns a string. Let’s see what else we can do with views.


from flask import render_template
from app import app

@app.route('/')
def index():
    return render_template("index.html")

@app.route('/about')
def about():
    return render_template("about.html") 

Flask provides a method, render_template, which we can use to specify which HTML file should be loaded in a particular view. Of course, the index.html and about.html files don’t exist yet, so Flask will give us a Template Not Found error when we navigate to these paths. Go ahead; run the app and see:


Templates

Flask allows us to use a variety of template languages, but Jinja2 is by far the most popular one. Remember it from our installed dependencies? Jinja provides syntax that allows us to add some functionality to our HTML files, like if-else blocks and for loops, and also use variables inside our templates. Jinja also lets us implement template inheritance, which means we can have a base template that other templates inherit from. Cool, right?

Let’s begin by creating the following three HTML files:

$ cd app/templates
$ touch base.html index.html about.html

We’ll start with the base.html file, using a slightly modified version of this example Bootstrap template:

<!-- base.html -->

<!DOCTYPE html>
<html lang="en">
  <head>
    <title>{% block title %}{% endblock %}</title>
    <!-- Bootstrap core CSS -->
    <link href="" rel="stylesheet">
    <!-- Custom styles for this template -->
    <link href="" rel="stylesheet">
  </head>
  <body>
    <div class="container">
      <div class="header clearfix">
        <ul class="nav nav-pills pull-right">
          <li role="presentation"><a href="/">Home</a></li>
          <li role="presentation"><a href="/about">About</a></li>
          <li role="presentation"><a href="" target="_blank">More About Flask</a></li>
        </ul>
      </div>
      {% block body %}
      {% endblock %}
      <footer class="footer">
        <p>© 2016 Your Name Here</p>
      </footer>
    </div> <!-- /container -->
  </body>
</html>

Did you notice the {% block %} and {% endblock %} tags? We’ll also use them in the templates that inherit from the base template:

<!-- index.html-->

{% extends "base.html" %}
{% block title %}Home{% endblock %}
{% block body %}
<div class="jumbotron">
  <h1>Flask Is Awesome</h1>
  <p class="lead">And I'm glad to be learning so much about it!</p>
</div>
{% endblock %}

<!-- about.html-->

{% extends "base.html" %}
{% block title %}About{% endblock %}
{% block body %}
<div class="jumbotron">
  <h1>The About Page</h1>
  <p class="lead">You can learn more about my website here.</p>
</div>
{% endblock %}

We use the {% extends %} tag to inherit from the base template. We insert the dynamic content inside the {% block %} tags. Everything else is loaded right from the base template, so we don’t have to re-write things that are common to all pages, such as the navigation bar and the footer.
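You can see the inheritance mechanism in isolation by exercising Jinja2 directly, outside Flask. The template strings below are simplified stand-ins for base.html and index.html, not the tutorial’s actual files:

```python
from jinja2 import Environment, DictLoader

# Simplified stand-ins for base.html and index.html
templates = {
    "base.html": "<title>{% block title %}{% endblock %}</title>"
                 "<main>{% block body %}{% endblock %}</main>",
    "index.html": '{% extends "base.html" %}'
                  "{% block title %}Home{% endblock %}"
                  "{% block body %}<h1>Flask Is Awesome</h1>{% endblock %}",
}

# DictLoader serves templates from a dict, just as Flask serves them
# from the templates/ directory
env = Environment(loader=DictLoader(templates))
rendered = env.get_template("index.html").render()
print(rendered)  # <title>Home</title><main><h1>Flask Is Awesome</h1></main>
```

The child template contributes only the block contents; everything else comes from the base, exactly as in our base.html/index.html pair.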

Let’s refresh our browser and see what we have now:


Congratulations on making it this far and getting your first Flask website up and running! I hope this introduction to Flask has whetted your appetite for exploring more. You now have a great foundation to start building more complex apps. Do have a look at the official documentation for more information.

Have you used other Python frameworks before? Where do you think Flask stands? Let’s have a conversation in the comments below.