Monthly Archives: August 2017

What's New In AdonisJs 4.0

By Chimezie Enyinnaya

AdonisJS 4.0 has been in development for quite some time now. Recently, Harminder Virk, the creator of AdonisJs, announced a dev release of the much-awaited v4.0 of the framework. In his words:

The framework is full of new and fresh ideas to make you even more productive

In this article, we’ll be looking at the new features, enhancements and changes introduced in AdonisJs 4.0. We’ll start by highlighting them, then take a broader look at each in succession.

  • New website design and logo
  • Adonis CLI
  • NPM organization and scope packages
  • Boilerplates
  • New directory structure
  • ES6 features
  • Speed Improvements
  • Edge Templating Engine

New Website Design and Logo

Visiting the dev release, you can’t help but notice the new and beautiful website design and logo.

AdonisJS now has a new theme (colors) which to me is really cool. I like the new logo too.

Adonis CLI

The Adonis CLI has been improved to include some additional functionality. The CLI now contains an install command which will save you tremendous time when installing npm packages. It not only installs the packages, but at the same time creates the config files, required models, migrations, etc.

NPM Organization and Scope Packages

All AdonisJS packages are now grouped into an organization and are scoped with the @adonisjs prefix. This makes it easy to differentiate official AdonisJS packages from community-contributed packages. Below is how you would install the AdonisJS Auth package:

adonis install @adonisjs/auth


Boilerplates

It is sometimes perceived that AdonisJS is meant only for building full stack applications. Well, that’s not entirely true. AdonisJS can be used for any kind of application, ranging from full stack apps to APIs and microservices. Hence, with v4.0 you have 3 boilerplates (fullstack, slim and api) to choose from when creating a new AdonisJS application. By default, the fullstack boilerplate will be used, but you can specify the boilerplate you want by passing the --slim or --api-only flag.

// creates a fullstack app
adonis new app-name

// creates a slim app
adonis new app-name --slim

// creates an API only app
adonis new app-name --api-only

Now you can start building your application lean and grow it as the application’s needs demand.

New Directory Structure

There have been some significant changes in the way AdonisJS 4.0 apps are structured. Below is a screenshot showing the directory structure of each boilerplate.

Note that resources and public directories are not included in the api boilerplate, since the API you will be building with it can be consumed by any frontend or mobile app. You can easily create new directories/files as your application requires using the adonis CLI commands.

ES6 Features

AdonisJS 4.0 makes extensive use of async/await and ES6 Object destructuring. So if you are excited and love these ES6 features then you would definitely love AdonisJS 4.0. The code sample below shows how AdonisJS 4.0 makes use of these features:

// sample code

class UserController {
    async login ({ request, auth }) {
        const { email, password } = request.all()
        await auth.attempt(email, password)

        return 'Logged in successfully'
    }
}

Speed Improvements

AdonisJS 4.0 uses async/await, unlike the earlier versions, which made use of generators. This has resulted in a significant performance improvement in the framework, as v4.0 is almost 4 times faster than earlier versions of AdonisJS. You can take a look at the benchmarks.
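To see what that shift means in practice, here is a sketch contrasting the two styles. The run helper is a toy stand-in for the co-style driver that generator-based code relied on, and the sample data is made up; none of this is AdonisJS code:

```javascript
// Minimal co-style runner: drives a generator that yields promises.
// This is the kind of machinery async/await makes unnecessary.
function run(gen) {
  return new Promise((resolve) => {
    const it = gen();
    function step(value) {
      const { value: result, done } = it.next(value);
      if (done) return resolve(result);
      Promise.resolve(result).then(step);
    }
    step();
  });
}

// Fake async data source for illustration.
const fetchUser = () => Promise.resolve({ name: 'virk' });

// Old style: a generator yielding promises, driven by run().
function* loginOld() {
  const user = yield fetchUser();
  return user.name;
}

// New style: native async/await, no external driver needed.
async function loginNew() {
  const user = await fetchUser();
  return user.name;
}

run(loginOld).then(console.log);  // virk
loginNew().then(console.log);     // virk
```

Both produce the same result, but the async function is plain language syntax the engine can optimize, with no runner in between.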

Edge Templating Engine

Edge.js is a Node.js templating engine; it is blazing fast and comes with an elegant API for creating dynamic views. It was created by the creator of AdonisJS. AdonisJS 4.0 uses Edge as its templating engine, whereas earlier versions used Nunjucks.


So we have taken a look at the exciting things that are new in AdonisJS 4.0. Though it hasn’t been officially released yet, the creator has clearly pointed out that the API is 100% complete. So you can start building your applications with AdonisJS 4.0 or start upgrading your existing applications. According to Harminder Virk, it’s just some documentation that’s left before it is officially released.


Asynchronous Chat With Rails and ActionCable

By Ilya Bodrov

Rails 5 has introduced many cool new features, and ActionCable is probably the most anticipated one. To put it simply, ActionCable is a framework for real-time communication over WebSockets. It provides both client-side (JavaScript) and server-side (Ruby) code, so you can craft sockets-related functionality like any other part of your Rails application. I really like this new addition and recommend giving it a shot.

There are a handful of introductory tutorials on the Internet explaining how to get started with ActionCable; however, students often ask me how to introduce file uploading functionality over web sockets. This topic is not really covered anywhere, so I decided to research it myself.

In this two-part tutorial we will create a basic chat application powered by Rails 5.1 and ActionCable with the ability to upload files. We will utilize Clearance for authentication as well as Shrine and the FileReader API for file uploading.

The source code for this article is available at GitHub. The final application will look like this:

In the first part of the tutorial we are going to create a new application, introduce basic authentication, integrate ActionCable and utilize ActiveJob for the broadcasting. Shall we proceed?

Laying Foundations

Start off by creating a new Rails application:

rails new ActionCableUploader

At the time of writing the newest version of Rails was 5.1.1, so I am going to use it for this demo. Please note that ActionCable is not included in Rails 4 and older.

We will require a basic authentication system. To speed things up, we are not going to write it from scratch but rather use some third-party solution. The most obvious choice that comes to mind is probably Devise but let’s make things a bit more interesting and use another solution called Clearance. This gem is similar to Devise, but is intended to be smaller and simpler. After all, we really do need something simple, as this article is not about authentication solutions. Clearance was created by Thoughtbot, the guys who brought us Paperclip, FactoryGirl and other great solutions.

So, drop a new gem into the Gemfile:

gem 'clearance', '~> 1.16'

and then run:

bundle install
rails generate clearance:install

The latter command is going to equip your application with the Clearance’s code. It is going to perform the following operations:

  • Create a User model and the corresponding migration. If you already have a model with such name, it will be tweaked properly
  • Create an initializer file for Clearance. You are welcome to check it out and modify as needed
  • Insert a Clearance::Controller module into the ApplicationController
  • Make you a coffee (well, actually it won’t)

When you are ready, apply the migration:

rails db:migrate

That’s it, the preparations are done and we can move to the next section!

Adding Chat Page

What I want to do now is create the chat page and restrict access to it. The corresponding controller will be called ChatsController. Add a root route now:

# config/routes.rb
root 'chats#index'

Don’t forget to create the controller itself:

# controllers/chats_controller.rb
class ChatsController < ApplicationController
  before_action :require_login

  def index
  end
end
The before_action :require_login line, as you’ve probably guessed, restricts access to all actions of the current controller. This method does pretty much the same as the authenticate_user! method in Devise.

Now create a views/chats/index.html.erb view that will have only a header for now:

<h1>Demo Chat</h1>

Lastly, populate your application layout with the following contents to display a sign out link and flash messages (if any):

<!-- views/layouts/application.html.erb -->
<% if signed_in? %>
  Signed in as: <%= current_user.email %>
  <%= button_to 'Sign out', sign_out_path, method: :delete %>
<% else %>
  <%= link_to 'Sign in', sign_in_path %>
<% end %>

<div id="flash">
  <% flash.each do |key, value| %>
    <%= tag.div value, class: "flash #{key}" %>
  <% end %>
</div>
Now start the server and navigate to http://localhost:3000. You should see a page similar to this one.

Note that you are automatically redirected to the Sign In page as you are not yet logged in.

Register with some sample credentials—after that you should be able to see the chat page which means everything is working just fine.


Now let’s create a new model and the corresponding table. I’ll call the model Message, which is quite an unsurprising name. It will have a body and a foreign key to establish an association to the users table:

rails g model Message user:belongs_to body:text
rails db:migrate

Make sure that your models have the proper associations set up, as we want each message to have an author (that is, a user):

# models/user.rb
has_many :messages, dependent: :destroy
# models/message.rb
belongs_to :user

Brilliant. Next, of course, we’ll need a form to actually send a message. To render it, I am going to use a new helper method called form_with introduced in Rails 5 which is meant to replace form_for and form_tag (though the latter methods are still supported). The form will be processed by JavaScript, so I’ll use # for the URL. Place the following code into your views/chats/index.html.erb view:

<div id="messages">
  <%= render @messages %>
</div>

<%= form_with url: '#', html: {id: 'new-message'} do |f| %>
  <%= f.label :body %>
  <%= f.text_area :body, id: 'message-body' %>
  <%= f.submit %>
<% end %>

Note that the form_with does not generate any ids for the tags so I am adding them manually to further select these elements using JS.

I’ve also provided the #messages block to render all the messages. This requires the views/messages/_message.html.erb partial to be present, so add it now:

<div class="message">
  <strong><%= message.user.email %></strong> says:
  <%= message.body %>
  <small>at <%= l message.created_at, format: :short %></small>
</div>
Lastly, load the messages inside the index action:

# chats_controller.rb
def index
  @messages = Message.order(created_at: :asc)
end

This will sort them by creation date, ascending. Your chat page will look something like this:

Okay, now it is time for ActionCable to step into the limelight.

ActionCable: Time for Action!

Our next task is to enable real-time conversation between the client and the server, powered by ActionCable’s magic. Let’s start by adding some configuration:

# config/environments/development.rb
config.action_cable.url = 'ws://localhost:3000/cable'
config.action_cable.allowed_request_origins = [ 'http://localhost:3000' ]

Next, a route:

# routes.rb
# ...
mount ActionCable.server => '/cable'

Lastly, meta tags:

<!-- views/layouts/application.html.erb -->
<!-- ... -->
<%= action_cable_meta_tag %>
<%= stylesheet_link_tag    'application', media: 'all', 'data-turbolinks-track': 'reload' %>
<%= javascript_include_tag 'application', 'data-turbolinks-track': 'reload' %>
<!-- ... -->

Now let’s take care of the client and write some CoffeeScript code. Create a new file under the app/assets/javascripts/channels/ directory with the following contents:

jQuery(document).on 'turbolinks:load', ->
  $messages = $('#messages')
  $new_message_form = $('#new-message')
  $new_message_body = $new_message_form.find('#message-body')

  if $messages.length > 0
    App.chat = App.cable.subscriptions.create { channel: "ChatChannel" },
      connected: ->

      disconnected: ->

      received: (data) ->

      send_message: (message) ->

Here we are checking if the #messages block is present on the page and, if so, setting up a new subscription to the ChatChannel. This channel will be used to communicate with the server in real time. Note that there are a bunch of callbacks you can use: connected, disconnected and received. send_message will be used to actually forward the messages to the server. This new file will be loaded automatically, as the channels folder is required by default.
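If CoffeeScript is unfamiliar, here is roughly what that subscription looks like in plain JavaScript. App.cable is stubbed so the sketch is self-contained and runnable; in a real app the consumer object is created for you by ActionCable in the browser:

```javascript
// Stub standing in for ActionCable's consumer (an assumption for this
// sketch; the real object is created by ActionCable's client library).
const App = {
  cable: {
    subscriptions: {
      create(params, mixin) {
        // Merge the callbacks with a perform() that would send data upstream.
        return Object.assign({
          perform(action, data) { return { action, data }; }
        }, mixin);
      }
    }
  }
};

const chat = App.cable.subscriptions.create({ channel: 'ChatChannel' }, {
  connected() {},
  disconnected() {},
  received(data) {},
  send_message(message) {
    // Forwards the message body to the server-side ChatChannel#send_message.
    return this.perform('send_message', { message });
  }
});

console.log(chat.send_message('hi').action); // send_message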

One thing to note, however, is that Rails 5.1 apps do not include jQuery as a dependency anymore, so you’ll need to add it yourself:

# Gemfile
gem 'jquery-rails'


bundle install

and include jQuery in the javascripts/application.js file:

//= require jquery3

I am including the latest version of jQuery to support only the modern browsers, but you can also choose versions 1 or 2.

Now we need to listen for the form submit event, prevent the default action and call the send_message method defined for the channel instead:

jQuery(document).on 'turbolinks:load', ->
    # ...
    if $messages.length > 0
        # ...
        $new_message_form.submit (e) ->
          $this = $(this)
          message_body = $new_message_body.val()
          if $.trim(message_body).length > 0
            # App.chat is the subscription created earlier
            App.chat.send_message(message_body)

          return false

Here I am checking if the body has at least one character and calling the send_message method if it does. Nothing complex going on here.

Next, flesh out the send_message method. What it needs to do is receive the body of the message and forward it to the server, where it will be stored in the database. Note that this method will not output anything to the page; that should happen inside the received callback.

# ...
send_message: (message) ->
  @perform 'send_message', message: message

@ means this in CoffeeScript. The 'send_message' argument is the name of the method to call on the server side, which we will create in a minute.

Lastly, code the received callback to clear the textarea and render a new message:

# ...
received: (data) ->
  if data['message']
    $new_message_body.val('')
    $messages.append data['message']

That’s it, we have finished coding the client-side! The server-side awaits, so proceed to the next section.

ActionCable: Server-Side

If you have played The Witcher series, you know that the sword of destiny has two edges. So does ActionCable. Therefore, let’s take care of the server side now.

Create a new app/channels/chat_channel.rb file that will process the messages sent from the client-side:

class ChatChannel < ApplicationCable::Channel
  def subscribed
    stream_from "chat_channel"
  end

  def unsubscribed
  end

  def send_message(data)
  end
end

There are two callbacks here that are run automatically: subscribed (which runs as soon as a new client subscribes to the channel using the App.cable.subscriptions.create code we’ve written a moment ago) and unsubscribed. send_message is the method that is called by the following line of code in our CoffeeScript file:

@perform 'send_message', message: message

Note, by the way, that the files inside the app/channels directory are not auto-reloaded (even in development environment), so you must restart the server after modifying them.

The data local variable contains a hash, so we can access the message’s body quite easily and save it to the database:

# ...
def send_message(data)
  Message.create(body: data['message'])
end

There is a problem, however: we don’t have access to the Clearance’s current_user method from inside the channel’s code, therefore it is not possible to enforce authentication and associate the created message to a user.

To fix this problem, the current_user should be defined manually. We are going to employ the methods similar to the ones provided in the Clearance’s session.rb file:

# app/channels/application_cable/connection.rb

module ApplicationCable
  class Connection < ActionCable::Connection::Base
    identified_by :current_user

    def connect
      self.current_user = find_current_user
      reject_unauthorized_connection unless self.current_user
    end

    private

    def find_current_user
      if remember_token.present?
        @current_user ||= user_from_remember_token(remember_token)
      end
    end

    def cookies
      @cookies ||= ActionDispatch::Request.new(env).cookie_jar
    end

    def remember_token
      cookies[Clearance.configuration.cookie_name]
    end

    def user_from_remember_token(token)
      Clearance.configuration.user_model.find_by(remember_token: token)
    end
  end
end

The code above simply tries to find the currently logged-in user by the remember token stored in the cookie (the cookie’s name is taken from the Clearance configuration). The user is then assigned to self.current_user. If, however, the user cannot be found, we reject the connection, effectively disallowing communication over the channel. The connect method is called automatically each time someone tries to subscribe to a channel, so there’s nothing else we need to do here.

Now return to the ChatChannel and tweak the send_message method a bit:

# ...
def send_message(data)
  current_user.messages.create(body: data['message'])
end

At this point our ActionCable setup is finished. Later you can add other channels using the same principle. There is, however, one last thing to do (yeah, there is always “one last thing”, isn’t there?). After the message is stored in the database, it should be broadcast to all users who are subscribed to the channel. The client code will then run the received callback and render the new message. So, let’s do it now!

Callback and (Very) ActiveJob

We are going to employ a model callback to broadcast a newly created message. However, I’d like to perform this task in the background, so using ActiveJob seems like a good idea.

Here is the code for the Message model:

# models/message.rb
class Message < ApplicationRecord
  belongs_to :user

  validates :body, presence: true

  after_create_commit :broadcast_message

  private

  def broadcast_message
    MessageBroadcastJob.perform_later(self)
  end
end

First of all, I’ve added a very basic validation rule to ensure the body is present. Next, there is a new after_create_commit callback that runs only after the commit was performed. Inside the corresponding method we are queueing the broadcasting job while passing self as an argument. self in this case points to the created message. Using the background job here is convenient because later you may extend it and, for example, send notification emails to the users saying that there is a new message waiting for them.

The background job itself is quite simple:

# app/jobs/message_broadcast_job.rb
class MessageBroadcastJob < ApplicationJob
  queue_as :default

  def perform(message)
    ActionCable.server.broadcast 'chat_channel', message: render_message(message)
  end

  private

  def render_message(message)
    MessagesController.render partial: 'messages/message', locals: {message: message}
  end
end

We queue the job with a default priority. Inside the perform method we broadcast the message rendered by the MessagesController. Note that the same partial created earlier is utilized here.

The MessagesController does not exist, so create it now:

# app/controllers/messages_controller.rb
class MessagesController < ApplicationController
end

And that’s it! Our messages are now saved and broadcast properly, so you can boot the server, navigate to the main page of the site and try chatting with yourself. Note that after you load the page, a request to /cable will be performed (you may observe it using Firebug or a similar tool):

To make the process a bit more interesting, you may open two separate browser windows and note that messages appear in both of them nearly instantly:

Inside the terminal you should see an output like this:


This ends the first part of the tutorial. We have crafted a real-time chat application that can now be extended quite easily. Throughout the article you have learned how to:

  • Integrate Clearance gem
  • Code the client-side for ActionCable
  • Code the server-side for ActionCable
  • Enforce server-side authentication
  • Use ActiveJob to broadcast messages

In the second part we will finalize this application and allow the users to upload files via ActionCable with the help of the Shrine gem and FileReader Web API.

So, stay tuned and see you soon!


Implementing GraphQL Using Apollo On an Express Server

By John Kariuki

GraphQL Home Page


GraphQL is a data query language for APIs and a runtime, a specification that defines a communication protocol between a client and server. At its core, GraphQL enables flexible, client-driven application development in a strongly typed environment. It provides a complete and understandable description of the data in your API and gives clients the power to ask for exactly what they need.

GraphQL enables declarative data fetching where a client can specify exactly what data it needs from an API. Instead of multiple endpoints that return fixed data structures, a GraphQL server only exposes a single endpoint and responds with precisely the data a client asked for.

It is a new API standard that provides a more efficient, powerful and flexible alternative to REST. It is not tied to any backend or database, and is instead backed by your existing code and data.

Client applications using GraphQL are fast and stable because the clients, not the server, control the data they get, providing predictable results.

To successfully complete this tutorial, you will need Node.js set up on your computer.



Schema

The server defines the schema, which specifies the objects that can be queried, along with their corresponding data types. A simple sample project schema may be defined as follows:

type Project {
  id: ID!                # "!" denotes a required field
  name: String
  tagline: String
  contributors: [User]
}

type User {
  id: ID!
  name: String
  age: Int
}

The schema above defines the shape of a project with a required id, a name, a tagline and contributors, which is an array of type User.

Queries and Mutations

GraphQL clients communicate with GraphQL servers via queries and mutations. A query also defines the shape of the resulting data, allowing the client to explicitly control the shape of data being consumed.
Queries correspond to GET requests in typical REST applications, while mutations correspond to POST, PUT and other HTTP verbs used to make changes to data stored on the server. For instance, to retrieve a list of projects, including name and tagline only, the following query may be used:

query {
  projects {
    name
    tagline
  }
}

An equivalent of query strings in REST may be used as follows:

query {
  project(id: "1") {    # fetch the project whose id is 1
    name
    tagline
  }
}

The query above yields the following response:

{
  "data": {
    "project": {
      "name": "React JS",
      "tagline": "js"
    }
  }
}

As you can see, the client has the liberty to specify exactly which fields it wants. This prevents the fetching of unnecessary data and speeds up the fetching process. A resolver will be responsible for implementing the rest of the logic after getting the arguments.


Resolvers

Resolvers are the link between the schema and your data. They provide the functionality that may be used to interact with databases through different operations.


Apollo Server

Apollo Server is a set of GraphQL server tools from Apollo that work with various Node.js HTTP frameworks (Express, Connect, Hapi, Koa, etc.). The package that includes the schema-building tools is graphql-tools, which is created and maintained by the Apollo community. graphql-tools enables production-ready GraphQL.js schema building using the GraphQL schema language, rather than using the GraphQL.js type constructors directly.

Setting up

We will begin by setting up the following folder structure:

├── src/
    └── resolvers.js
    └── schema.js
├── package.json
├── node_modules/

Creating the Schema

We can finally create a working implementation. Let’s start by creating a GraphQL schema.

// app/src/schema.js
import {  makeExecutableSchema } from 'graphql-tools';

import { resolvers } from './resolvers'; // Will be implemented at a later stage.

const typeDefs = `
    type Channel {
      id: ID!                # "!" denotes a required field
      name: String
      messages: [Message]!
    }

    type Message {
      id: ID!
      text: String
    }

    # This type specifies the entry points into our API.
    type Query {
      channels: [Channel]    # "[]" means this is a list of channels
      channel(id: ID!): Channel
    }

    # The mutation root type, used to define all mutations.
    type Mutation {
      # A mutation to add a new channel to the list of channels
      addChannel(name: String!): Channel
    }
`;

const schema = makeExecutableSchema({ typeDefs, resolvers });
export { schema };

Notice the line const schema = makeExecutableSchema({ typeDefs, resolvers });. You must call makeExecutableSchema, passing in the type definitions and an object containing all resolvers, to actually “glue” the schema to its resolvers. We then export the schema to use it later in our server.


The queries are similar to the ones explained above. Let’s focus on the mutation:

type Mutation {
  # A mutation to add a new channel to the list of channels
  addChannel(name: String!): Channel
}

This defines a mutation called addChannel that takes an argument name and returns a Channel as a result. A sample mutation looks as follows:

mutation {
  addChannel(name: "lacrose") {
    name
  }
}

As we can see, even though the schema defines a Channel as the result, the client has the ability to request the name only.

Creating Resolvers

Resolvers are the glue between the schema and your data. Apollo uses them to figure out how to respond to a query and resolve the incoming or return types. Every query and mutation requires a resolver function, and each field in every type can have a resolver.
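As a toy illustration of that idea (this tiny executor is a stand-in for GraphQL.js, not Apollo's actual implementation): a field with an explicit resolver uses it, and any other field falls back to reading the property off the parent object, which is what the default resolver does.

```javascript
// Minimal stand-in executor: per requested field, use the type's resolver
// if one exists, else fall back to the parent object's property (the
// behavior of GraphQL.js's default resolver).
function resolveFields(typeResolvers, parent, fields) {
  const out = {};
  for (const field of fields) {
    const resolver = typeResolvers[field];
    out[field] = resolver ? resolver(parent) : parent[field];
  }
  return out;
}

// Hypothetical resolvers for the Channel type: messages is explicit,
// id and name rely on the default behavior.
const Channel = {
  messages: (channel) => [{ id: 1, text: `welcome to ${channel.name}` }]
};

const result = resolveFields(Channel, { id: 1, name: 'soccer' }, ['name', 'messages']);
console.log(result.messages[0].text); // welcome to soccer
```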

Apollo will continue to invoke the chain of resolvers until it reaches a scalar type (i.e. String, Int, Float, Boolean). So, we should define a few root resolvers for the queries:

// app/src/resolvers.js
const channels = [{
  id: 1,
  name: 'soccer',
}, {
  id: 2,
  name: 'baseball',
}];

let nextId = 3;

export const resolvers = {
  Query: {
    channels: () => {
      return channels;
    },
    channel: (root, { id }) => {
      // GraphQL ID values arrive as strings, so coerce before comparing.
      return channels.find(channel => channel.id === Number(id));
    },
  },
  Mutation: {
    addChannel: (root, args) => {
      const newChannel = { id: nextId++, name: args.name };
      channels.push(newChannel);
      return newChannel;
    },
  },
};

For now, we declare an in-memory array to store our data. This array can be replaced by a real database in a real-world application.

The resolvers object provides two sections: the Query and Mutation parts. Resolver functions take three optional arguments: root, args and context. The args argument is the important part that carries variables/query strings from the client. Notice that the functions used are the same as the ones declared in the schema. Let’s focus on a single query resolver that returns a single channel depending on the id provided.

  channel: (root, { id }) => {
      return channels.find(channel => channel.id === Number(id));
  },

The { id } part is the args object taking advantage of ES6 object destructuring; otherwise the id could be accessed using args.id if args were used as the second argument.
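A quick, self-contained illustration of the two access styles (the values here are made up):

```javascript
// ES6 destructuring pulls id straight out of the args object in the
// parameter list; the second function reads it off args explicitly.
const channelResolver = (root, { id }) => `channel ${id}`;
const channelResolverNoDestructure = (root, args) => `channel ${args.id}`;

console.log(channelResolver(null, { id: '1' }));              // channel 1
console.log(channelResolverNoDestructure(null, { id: '1' })); // channel 1
```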

Connecting to Express

After designing a schema and resolver functions, we need to hook it up to a server. Apollo provides implementations for different servers such as Koa, hapi, Express and restify. We are going to use Express.

import express from 'express';
import cors from 'cors';
import {
  graphqlExpress,
  graphiqlExpress
} from 'graphql-server-express';
import bodyParser from 'body-parser';

import { schema } from './src/schema';

const PORT = 7700;
const server = express();
server.use('*', cors({ origin: 'http://localhost:8000' })); // Restrict the client origin for security reasons.

server.use('/graphql', bodyParser.json(), graphqlExpress({
  schema
}));

server.use('/graphiql', graphiqlExpress({
  endpointURL: '/graphql'
}));

server.listen(PORT, () =>
  console.log(`GraphQL Server is now running on http://localhost:${PORT}`)
);

Let’s go through the above code and analyze what each piece does. We are utilizing Apollo’s GraphQL Server instead of the express-graphql server because of the following features:

  • GraphQL Server has a simpler interface and allows fewer ways of sending queries, which makes it a bit easier to reason about what’s going on.
  • GraphQL Server serves GraphiQL on a separate route, giving you more flexibility to decide when and how to serve it.
  • GraphQL Server supports query batching which can help reduce load on your server.
  • GraphQL Server has built-in support for persisted queries, which can make your app faster and your server more secure.

In the above code, the imports make available the required libraries and the schema. We then define our endpoints and pass in the schema.

server.use('/graphql', bodyParser.json(), graphqlExpress({
  schema
}));
For every request to /graphql, Apollo will run through the entire query/mutation processing chain and return the result depending on the query/mutation from the client.
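Under the hood, each of those requests is just an HTTP POST whose JSON body carries the query string and optional variables; bodyParser.json() parses exactly this shape before graphqlExpress sees it. A sketch of building such a request (the query and variables are made up):

```javascript
// Build the request payload a GraphQL client would POST to /graphql.
function buildGraphqlRequest(query, variables) {
  return {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query, variables })
  };
}

const req = buildGraphqlRequest(
  'query ($id: ID!) { channel(id: $id) { name } }',
  { id: '1' }
);
console.log(JSON.parse(req.body).variables.id); // 1
```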

For development and testing purposes, we are provided with GraphiQL, a graphical, interactive, in-browser GraphQL IDE. It is a React component responsible for rendering the UI, which should be provided with a function for fetching from GraphQL, and it can be accessed through the following endpoint:

server.use('/graphiql', graphiqlExpress({
  endpointURL: '/graphql'
}));
Visiting /graphiql in our browser gives us the following interface where we can experiment with queries and mutations. The figure demonstrates how to fetch all channels from our server.

We could similarly carry out a mutation and request for the name only as the result:


GraphQL provides a different approach to designing APIs which is fast and more flexible compared to REST. Apollo provides better ways of implementing it on both the server and client side. Clients determine what they want and avoid unnecessary data, reducing the size of requests and the amount of work required to normalize data from the server.

Next up, we will explore the use of Apollo on the client in a React application, in comparison to using Redux, and try to answer the question posed by Dan Abramov, the creator of Redux: GraphQL/Relay, the end of Redux? Perhaps, but we will be using Apollo in place of Relay.


Easing the pain of building in JavaScript

By Dr. Axel Rauschmayer

In principle, JavaScript is a very dynamic and interactive programming language. However, that has changed significantly in recent years. Now, modern web development requires extensive build and compilation steps. That is unfortunate for two reasons. First, it makes development less pleasant. Second, it makes web development harder to learn. This blog post covers ideas and tools for easing some of the pain of building.

Source:: 2ality

Taming snakes with reactive streams

live demo

The web moves quickly and we all know it. Today, Reactive Programming is one of the hottest topics in web development, and with frameworks like Angular or React it has become much more popular, especially in the modern JavaScript world. There’s been a massive move in the community from imperative programming paradigms to functional reactive paradigms. Yet, many developers struggle with it and are often overwhelmed by its complexity (a large API), the fundamental shift in mindset (from imperative to declarative) and the multitude of concepts.

While this is not always the easiest thing to do, once we get the hang of it we’ll ask ourselves how we ever lived without it.

This article is not meant to be an introduction to reactive programming and if you are completely new to it, we recommend the following resources:

The goal of this post is to learn how to think reactively by building a classic video game that we all know and love – Snake. That’s right, a video game! They are fun but complex systems that keep a lot of external state, e.g. scores, timers, or player coordinates. For our version, we’ll make heavy use of Observables and use several different operators to completely avoid external state. At some point it might be tempting to store state outside of the Observable pipeline but remember, we want to embrace reactive programming and don’t rely on a single external variable that keeps state.
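As a preview of that mindset, state can be folded out of a stream of events rather than kept in a mutable variable. Array.reduce below plays the role RxJS's scan operator plays over time; the key names and starting heading are made up for illustration:

```javascript
// A recorded "stream" of keyboard events, stood in for by a plain array.
const keyEvents = ['ArrowRight', 'ArrowDown', 'ArrowDown', 'ArrowLeft'];

// Map each key to a direction vector on the grid.
const directions = {
  ArrowUp:    { x: 0,  y: -1 },
  ArrowDown:  { x: 0,  y: 1 },
  ArrowLeft:  { x: -1, y: 0 },
  ArrowRight: { x: 1,  y: 0 }
};

// Each event produces the next state purely from the previous one;
// no variable outside the fold is ever mutated. With RxJS this would be
// keydown$.pipe(scan(...)) reacting to events as they arrive.
const heading = keyEvents.reduce(
  (dir, key) => directions[key] || dir,
  { x: 1, y: 0 } // assumed initial heading: moving right
);

console.log(heading); // { x: -1, y: 0 }
```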

Note: We’ll solely use HTML5 and JavaScript together with RxJS to transform a programmatic-event-loop into a reactive-event-driven app.

The code is available on Github and a live demo can be found here. I encourage you to clone the project, fiddle with it and implement cool new game features. If you do, ping me on Twitter.

The game

As mentioned earlier, we are going to re-create Snake, a classic video game from the late 1970s. But instead of simply copying the game, we add a little bit of variation to it. Here’s how the game works.

As a player you control a line that resembles a hungry snake. The goal is to eat as many apples as you can to grow as long as possible. The apples can be found at random positions on the screen. Each time the snake eats an apple, its tail grows longer. Walls will not stop you! But listen up, you must try to avoid hitting your own trail at all costs. If you don’t, the game is over. How long can you survive?

Here’s a preview of what we are going to build:

For this specific implementation the snake is represented as a line of blue squares where its head is painted in black. Can you tell how the fruits look? Exactly, red squares. Everything is a square, and that’s not because squares look so beautiful but because they are very simple geometric shapes that are easy to draw. The graphics are not very shiny but hey, it’s about making the shift from imperative programming to reactive programming and not about game art.

Setting up the stage

Before we can start with the game’s functionality, we need to set up a <canvas> element that gives us powerful drawing APIs from within JavaScript. We’ll use the canvas to draw our graphics including the playing area, the snake, the apples and basically everything we need for our game. In other words, the game will be rendered entirely on the <canvas> element.

If this is completely new to you, check out this course on egghead by Keith Peters.

The index.html is quite simple because most of the magic happens with JavaScript.

  <!DOCTYPE html>
  <html>
    <head>
      <meta charset="utf-8">
      <title>Reactive Snake</title>
    </head>
    <body>
      <script src="/main.bundle.js"></script>
    </body>
  </html>

The script that we add to the body is essentially the output of the build process and contains all of our code. However, you may be wondering why there is no <canvas> element inside the <body>. It’s because we’ll create that element using JavaScript. In addition, we add a few constants that define how many rows and columns we have as well as the width and height of the canvas.

export const COLS = 30;
export const ROWS = 30;
export const GAP_SIZE = 1;
export const CELL_SIZE = 10;

export function createCanvasElement() {
  const canvas = document.createElement('canvas');
  canvas.width = CANVAS_WIDTH;
  canvas.height = CANVAS_HEIGHT;
  return canvas;
}
With that in place we can call this function, create a <canvas> element on the fly and append it to the <body> of our page:

let canvas = createCanvasElement();
let ctx = canvas.getContext('2d');
document.body.appendChild(canvas);

Note that we are also getting a reference to the CanvasRenderingContext2D by calling getContext('2d') on the <canvas> element. This 2D rendering context for the canvas allows us to draw, for example, rectangles, text, lines, paths and much more.
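Before drawing anything it helps to map grid coordinates to pixel coordinates. The post doesn’t show its rendering helpers, so this is a sketch with a hypothetical getCellPixels name, built only from the COLS/CELL_SIZE/GAP_SIZE constants above:

```javascript
const GAP_SIZE = 1;
const CELL_SIZE = 10;

// Map a grid coordinate (column/row index) to the top-left pixel
// of the corresponding square on the canvas.
function getCellPixels(cell) {
  return {
    x: cell.x * (CELL_SIZE + GAP_SIZE),
    y: cell.y * (CELL_SIZE + GAP_SIZE)
  };
}

// The cell at column 3, row 2 starts at pixel (33, 22).
console.log(getCellPixels({ x: 3, y: 2 }));
```

With a helper like this, drawing a body segment is a single ctx.fillRect call at the computed position.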

We’re good to go! Let’s start working on the core mechanics of the game.

Identifying the source streams

From the preview and the game description we know that we need the following features:

  • Navigate the snake using the arrow keys
  • Keep track of the player’s score
  • Keep track of the snake (includes eating and moving)
  • Keep track of the apples on the field (includes generating new apples)

In reactive programming it’s all about programming with data streams, streams of input data. Conceptually, when a reactive program is executed, it sets up an observable pipeline that acts upon changes, e.g. a user has interacted with the application by pressing a key on the keyboard or simply a tick of an interval. So it’s all about figuring out what can change. Those changes often define the source streams. The key is to come up with the source streams and then compose them together to calculate whatever you need, e.g. the game state.

Let’s try to find our source streams by looking at the features above.

First of all, user input is definitely something that will change over time. The player navigates the hungry snake using the arrow keys. That means our first source stream is keydown$ which will emit values whenever a key is pressed down.

Next we need to keep track of the player’s score. The score basically depends on how many apples the snake has eaten. We could say that the score depends on the length of the snake because whenever the snake grows we want to increase the score by 1. Therefore, our next source stream is snakeLength$.

Again, it is important to figure out the main sources from which we can compute whatever we need, e.g. the score. In most cases, the source streams are combined and refined into more concrete streams of data. We’ll see this in action in a minute. For now, let’s continue with identifying our main source streams.

So far, we’ve got the user input and the score in place. What’s left are the more game-facing or interactive streams, such as the snake or the apples.

Let’s start with the snake. The core mechanic of the snake is simple; it moves over time and the more apples it eats the bigger it grows. But what exactly is the source of the snake? For now, we can forget about the fact that it eats and grows because what matters is that it primarily depends on a time factor as it moves over time, e.g. 5 pixels every 200ms. So our source stream is an interval that produces a value after each period and we call this ticks$. This stream also determines the speed of our snake.

Last but not least, the apples. With everything else in place, the apples are fairly easy. This stream basically depends on the snake. Every time the snake moves we check whether the head collides with an apple or not. If it does, we remove that apple and generate a new one at a random position on the field. That said, we don’t need to introduce a new source stream for the apples.

Great, that’s it for the source streams. Here’s a brief overview of all the source streams we need for our game:

  • keydown$: keydown events (KeyboardEvent)
  • snakeLength$: represents the length of the snake (Number)
  • ticks$: interval that represents the pace of the snake (Number)

These source streams build the basis for our game from which we can calculate all other values we need including the score, snake and apples.

In the next sections we’ll look closely at how to implement each of these source streams and compose them to generate the data we need.

Steering the snake

Let’s dive right into code and implement the steering mechanism for our snake. As mentioned in the previous section the steering depends on keyboard inputs. Turns out it’s deceptively simple and the first step is to create an observable sequence from keyboard events. For this we can leverage the fromEvent() operator:

let keydown$ = Observable.fromEvent(document, 'keydown');

This is our very first source stream and it will emit a KeyboardEvent every time the user presses down a key. Note that literally every keydown event is emitted. Therefore, we also get events for keys that we are not really interested in, which is basically everything but the arrow keys. But before we tackle this specific issue, we define a constant map of directions:

export interface Point2D {
  x: number;
  y: number;
}

export interface Directions {
  [key: number]: Point2D;
}

export const DIRECTIONS: Directions = {
  37: { x: -1, y: 0 }, // Left Arrow
  39: { x: 1, y: 0 },  // Right Arrow
  38: { x: 0, y: -1 }, // Up Arrow
  40: { x: 0, y: 1 }   // Down Arrow
};

By looking at the KeyboardEvent object it appears that every key has a unique keyCode. In order to get the codes for the arrow keys we can use this table.

Each direction is of type Point2D which is simply an object with an x and y property. The value of each property can either be 1, -1 or 0, indicating where the snake should be heading. Later, we’ll use the direction to derive the new grid position for the snake’s head and tail.
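Remember that walls don’t stop the snake, so deriving the next head position from a direction vector is simple modular arithmetic. Here’s a sketch under that assumption (the post’s actual move() helper isn’t shown here; wrap and nextHead are hypothetical names):

```javascript
const COLS = 30;
const ROWS = 30;

// Wrap a coordinate so the snake re-enters on the opposite side of the field.
function wrap(value, max) {
  return ((value % max) + max) % max;
}

// Derive the next head position from the current head and a direction vector.
function nextHead(head, direction) {
  return {
    x: wrap(head.x + direction.x, COLS),
    y: wrap(head.y + direction.y, ROWS)
  };
}

// Moving left from column 0 wraps around to column 29.
console.log(nextHead({ x: 0, y: 5 }, { x: -1, y: 0 }));
```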

The direction$ stream

So, we already have a stream for keydown events and every time the player presses down a key we need to map the value, which will be a KeyboardEvent, to one of the direction vectors above. For that we can use the map() operator to project each keyboard event to a direction vector.

let direction$ = keydown$
  .map((event: KeyboardEvent) => DIRECTIONS[event.keyCode])

As mentioned earlier, we’ll receive every keydown event because we are not filtering out the ones that we are not interested in, such as the character keys. However, one could argue that we are already filtering out events by looking them up in the directions map, because for every keyCode that is not defined in that map the lookup returns undefined. Nevertheless, that’s not really filtering out values on the stream, which is why we use the filter() operator to only pipe through desired values.

let direction$ = keydown$
  .map((event: KeyboardEvent) => DIRECTIONS[event.keyCode])
  .filter(direction => !!direction)

Ok, that was easy. The code above is perfectly fine and works as expected. However, there’s still some room for improvement. Can you think of something?

Well, one thing is that we want to prevent the snake from going into the opposite direction, e.g. from right to left or up and down. It doesn’t really make sense to allow such behavior because the number one rule is to avoid hitting your own trail, remember?

The solution is fairly easy. We cache the previous direction and when a new event is emitted we check if the new direction is not equal to the opposite of the last one. Here’s a function that calculates the next direction:

export function nextDirection(previous, next) {
  let isOpposite = (previous: Point2D, next: Point2D) => {
    return next.x === previous.x * -1 || next.y === previous.y * -1;
  };

  if (isOpposite(previous, next)) {
    return previous;
  }

  return next;
}
This is the first time we are tempted to store state outside of the Observable pipeline because we somehow need to keep track of the previous direction right? An easy solution is to simply keep the previous direction in an external state variable. But wait! We wanted to avoid this, right?

To avoid external state, we need a way to sort of aggregate infinite Observables. RxJS has a very convenient operator we can use to solve our problem – scan().

The scan() operator is very similar to Array.reduce() but instead of only returning the last value, it emits each intermediate result. With scan() we can basically accumulate values and reduce a stream of incoming events to a single value infinitely. This way we can keep track of the previous direction without relying on external state.
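If scan() is new to you, here’s a rough plain-array analogy (a sketch, not RxJS itself): reduce() yields only the final value, while a scan-like fold yields every intermediate accumulation:

```javascript
const steps = [1, 1, 1];

// reduce(): only the final result survives.
const total = steps.reduce((acc, value) => acc + value, 5);

// A scan-like fold: emit every intermediate accumulation.
function scanLike(values, accumulator, seed) {
  const out = [];
  let acc = seed;
  for (const value of values) {
    acc = accumulator(acc, value);
    out.push(acc);
  }
  return out;
}

console.log(total);                                   // 8
console.log(scanLike(steps, (acc, v) => acc + v, 5)); // [ 6, 7, 8 ]
```

On an infinite stream there is no "final" value, which is exactly why scan() emits each intermediate result instead.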

Let’s apply this and take a look at our final direction$ stream:

let direction$ = keydown$
  .map((event: KeyboardEvent) => DIRECTIONS[event.keyCode])
  .filter(direction => !!direction)
  .scan(nextDirection)
  .startWith(INITIAL_DIRECTION)
  .distinctUntilChanged();

Notice that we are using startWith() to emit an initial value before beginning to emit values from the source Observable (keydown$). Without this operator our Observable would start emitting only when the player presses a key.

The second improvement is to only emit values when the emitted direction is different from the previous one. In other words, we only want distinct values. You might have noticed distinctUntilChanged() in the snippet above. This operator does the dirty work for us and suppresses duplicate items. Note that distinctUntilChanged() only filters out consecutive duplicates; the same value can show up again once a different one has been emitted in between.
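As a rough plain-array sketch of that behavior (not RxJS itself): only consecutive duplicates are dropped, so a value can reappear after a different one.

```javascript
// Keep a value only if it differs from the immediately preceding one.
function distinctUntilChangedLike(values) {
  return values.filter((value, i) => i === 0 || value !== values[i - 1]);
}

console.log(distinctUntilChangedLike(['up', 'up', 'left', 'left', 'up']));
// [ 'up', 'left', 'up' ]
```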

The following figure visualizes our direction$ stream and how it works. Values painted in blue represent initial values, yellow means the value was changed on the Observable pipeline and values emitted on the result stream are colored orange.

direction stream

Keeping track of the length

Before we implement the snake itself, let’s come up with an idea to keep track of its length. Why do we need the length in the first place? Well, we use that information to model the score. In an imperative world we’d simply check for a collision whenever the snake moves and increase the score if there was one, so there would actually be no need to keep track of the length. However, that would introduce yet another external state variable, which we want to avoid at all costs.

In a reactive world it’s a bit different. One naive approach could be to use the snake$ stream and every time it emits a value we know that the snake has grown in length. While it really depends on the implementation of snake$, this is not how we’ll implement it. From the beginning we know that it depends on ticks$ as it moves a certain distance over time. As such, snake$ will accumulate an array of body segments and because it’s based on ticks$ it will generate a value every x milliseconds. That said, even if the snake does not collide with anything, snake$ will still produce distinct values. That’s because the snake is constantly moving on the field and therefore the array will always be different.

This can be a bit tricky to grasp because there are some peer dependencies between the different streams. For example apples$ will depend on snake$. The reason for this is that every time the snake moves, we need the array of body segments to check if any of these pieces collides with an apple. While the apples$ stream itself will accumulate an array of apples, we need a mechanism to model collisions that, at the same time, avoids circular dependencies.

BehaviorSubject to the rescue

The solution to this is that we’ll implement a broadcasting mechanism using a BehaviorSubject. RxJS offers different types of Subjects with different functionalities. As such, the Subject class provides the base for creating more specialized Subjects. In a nutshell, a Subject is a type that implements both Observer and Observable types. Observables define the data flow and produce the data while Observers can subscribe to Observables and receive the data.

A BehaviorSubject is a more specialized Subject that represents a value that changes over time. Now, when an Observer subscribes to a BehaviorSubject, it will receive the last emitted value and then all subsequent values. Its uniqueness lies in the fact that it requires a starting value, so that all Observers will at least receive one value on subscription.
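Conceptually, a BehaviorSubject can be sketched in a few lines of plain JavaScript (a toy model, not the RxJS implementation): it stores the latest value and replays it to every new subscriber.

```javascript
class ToyBehaviorSubject {
  constructor(initialValue) {
    this.value = initialValue;
    this.observers = [];
  }
  subscribe(observer) {
    this.observers.push(observer);
    observer(this.value); // new subscribers immediately receive the latest value
  }
  next(value) {
    this.value = value;
    this.observers.forEach(observer => observer(value));
  }
}

const length$ = new ToyBehaviorSubject(5);
const seen = [];
length$.subscribe(v => seen.push(v)); // receives 5 right away
length$.next(1);
console.log(seen); // [ 5, 1 ]
```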

Let’s go ahead and create a new BehaviorSubject with an initial value of SNAKE_LENGTH:

// SNAKE_LENGTH specifies the inital length of our snake
let length$ = new BehaviorSubject<number>(SNAKE_LENGTH);

From here it’s only a small step to implement snakeLength$:

let snakeLength$ = length$
  .scan((snakeLength, step) => snakeLength + step)
  .share();

In the code above we can see that snakeLength$ is based on length$ which is our BehaviorSubject. This means that whenever we feed a new value to the Subject using next(), it will be emitted on snakeLength$. In addition, we use scan() to accumulate the length over time. Cool, but you may be wondering what that share() is all about, right?

As already mentioned, snakeLength$ will later be used as an input for snake$ but at the same time acts as a source stream for the player’s score. As a result, we would end up recreating that source stream with every additional subscription to the same Observable. This happens because length$ is a cold Observable.

If you are completely new to hot and cold Observables, we have written an article on Cold vs Hot Observables.

The point is that we use share() to allow multiple subscriptions to an Observable that would otherwise recreate its source with every subscription. This operator automatically creates a Subject between the original source and all future subscribers. As soon as the number of subscribers goes from zero to one, it connects the Subject to the underlying source Observable and broadcasts all its notifications. All future subscribers will be connected to that in-between Subject, so that effectively there’s just one subscription to the underlying cold Observable. This is called multicasting and will make you stand out at dinner parties.
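The effect of share() can be sketched without RxJS (a toy model; subscribeCold and shared are made-up names): a cold source re-runs its producer for every subscriber, while a shared source runs it once and broadcasts.

```javascript
let sourceRuns = 0;

// A "cold" source: every subscription re-executes the producer.
function subscribeCold(observer) {
  sourceRuns++;
  observer(42);
}

subscribeCold(v => {});
subscribeCold(v => {});
console.log(sourceRuns); // 2 — the source ran once per subscriber

// A share()-like wrapper: one underlying subscription, many observers.
function shared(source) {
  const observers = [];
  let connected = false;
  return observer => {
    observers.push(observer);
    if (!connected) {
      connected = true;
      source(value => observers.forEach(o => o(value)));
    }
  };
}

sourceRuns = 0;
const subscribeShared = shared(subscribeCold);
subscribeShared(v => {});
subscribeShared(v => {});
console.log(sourceRuns); // 1 — a single subscription to the cold source
```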

Awesome! Now that we have a mechanism that we can use to broadcast values to multiple subscribers, we can go ahead and implement score$.

Implementing score$

The player’s score is as simple as it can get. Equipped with snakeLength$ we can now create the score$ stream that simply accumulates the player’s score using scan():

let score$ = snakeLength$
  .startWith(0)
  .scan((score, _) => score + POINTS_PER_APPLE);

We basically use snakeLength$ or rather length$ to notify subscribers that there’s been a collision and if there was, we just increase the score by POINTS_PER_APPLE, a constant amount of points per apple. Note that startWith(0) must be added before scan() to avoid specifying a seed (initial accumulator value).

Let’s look at a more visual representation of what we just implemented:

snake length and score

By looking at the figure above you may be wondering why the initial value of the BehaviorSubject only shows up on snakeLength$ and is missing on score$. That’s because the first subscriber causes share() to subscribe to the underlying data source, and because that source immediately emits a value, the value has already passed by the time the subsequent subscriptions happen.

Sweet. With that in place, let’s implement the stream for our snake. Isn’t this exciting?

Taming the snake$

So far, we have learned a bunch of operators and we can use them to implement our snake$ stream. As discussed in the beginning of this post, we need some sort of ticker that keeps our hungry snake moving. Turns out there’s a handy operator for that called interval(x) which emits a value every x milliseconds. We’ll call each value tick.

let ticks$ = Observable.interval(SPEED);

From here it’s only a small stretch to the final snake$ stream. For every tick, depending on whether the snake has eaten an apple, we want to either move it forward or add a new segment. Therefore, we can use the all-too-familiar scan() operator to accumulate an array of body segments. But, as you may have guessed, we’re facing a problem. Where’s the direction$ or snakeLength$ stream coming into play?

An absolutely legitimate question to ask. Neither the direction nor the snake’s length is easily accessible from within our snake$ stream, unless we kept that information in a variable outside of the Observable pipeline. But again, we would be breaking our rule of not modifying external state.

Luckily, RxJS offers yet another very convenient operator called withLatestFrom(). It’s an operator used to combine streams and that’s exactly what we’re looking for. This operator is applied to a primary source that controls when data is emitted on the result stream. In other words, you can think of withLatestFrom() as a way to throttle the output of a secondary stream.
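A toy model of withLatestFrom() in plain JavaScript (not RxJS; all names here are made up): the secondary inputs only update a cached "latest" value, and output is produced solely when the primary emits.

```javascript
function withLatestFromLike(combine) {
  let latestDirection;
  let latestLength;
  return {
    // Secondary inputs only update the cache — they never produce output.
    direction(value) { latestDirection = value; },
    length(value) { latestLength = value; },
    // Only the primary (a tick) produces output, using the cached values.
    tick() { return combine(latestDirection, latestLength); }
  };
}

const inputs = withLatestFromLike((dir, len) => [dir, len]);
inputs.direction({ x: 1, y: 0 });
inputs.direction({ x: 0, y: 1 }); // key mashing: only the latest survives
inputs.length(5);
console.log(inputs.tick()); // [ { x: 0, y: 1 }, 5 ]
```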

With the above, we have the tools we need to finally implement the hungry snake$:

let snake$ = ticks$
  .withLatestFrom(direction$, snakeLength$, (_, direction, snakeLength) => [direction, snakeLength])
  .scan(move, generateSnake())
  .share();

Our primary source is ticks$ and whenever a new value comes down the pipe, we take the latest values from both direction$ and snakeLength$. Note that even if the secondary streams emit values frequently, for example if the player is smashing keys on the keyboard, we’d only be processing the data for each tick.

In addition, we are passing a selector function to withLatestFrom which is invoked when the primary stream produces a value. This function is optional and if omitted, a list with all elements is yielded.

We’ll leave out the explanation for the move() function as the primary goal of this post is to facilitate the fundamental shift in mindset. Nonetheless, you can find the source code for this on GitHub.

Here’s a figure that visually demonstrates the code above:

snake stream

See how we throttle direction$? The point is that withLatestFrom() is very practical when you want to combine multiple streams and you are not interested in producing values on the Observable pipeline when either of the streams emit data.

Generating apples

You may notice that implementing our core building blocks for the game becomes easier as we learn more and more operators. If you’ve made it this far, the rest is going to be easy peasy.

So far we have implemented a couple of streams such as direction$, snakeLength$, score$ and snake$. If we combine them together we could already navigate that beast of a snake around. But what is this game if there’s nothing to devour? Quite boring.

Let’s generate some apples to satisfy our snake’s appetite. First, let’s clarify what state we need to preserve. It could either be a single object or an array of objects. For our implementation we’ll go with an array of apples. Hear the bells ringing?

Right, we can use scan() again to accumulate an array of apples. We start with an initial value and every time the snake moves, we check if there was a collision. If that’s the case, we generate a new apple and return a new array. This way we can leverage distinctUntilChanged() to filter out identical values.

let apples$ = snake$
  .scan(eat, generateApples())
  .distinctUntilChanged()
  .share();

Cool! This means that whenever apples$ produces a new value we can assume that our snake has devoured one of those tasty fruits. What’s left is to increase the score and also notify other streams about this event, such as snake$ which takes the latest value from snakeLength$ to figure out whether to add a new body segment.

Broadcasting events

Earlier we have implemented this broadcasting mechanism, remember? Let’s use that to trigger the desired actions. Here’s our code for eat():

export function eat(apples: Array<Point2D>, snake) {
  let head = snake[0];

  for (let i = 0; i < apples.length; i++) {
    if (checkCollision(apples[i], head)) {
      apples.splice(i, 1);
      // length$.next(POINTS_PER_APPLE);
      return [...apples, getRandomPosition(snake)];
    }
  }

  return apples;
}
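checkCollision() isn’t shown in the post, but on a grid it boils down to comparing coordinates. A hedged sketch (the actual helper may differ):

```javascript
// Two grid cells collide when they occupy the same column and row.
function checkCollision(a, b) {
  return a.x === b.x && a.y === b.y;
}

console.log(checkCollision({ x: 3, y: 7 }, { x: 3, y: 7 })); // true
console.log(checkCollision({ x: 3, y: 7 }, { x: 4, y: 7 })); // false
```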

A simple solution would be to call length$.next(POINTS_PER_APPLE) right inside that if block. But then we’d face a problem: we couldn’t extract this utility function into its own ES2015 module. In ES2015, modules are stored in files and there’s exactly one module per file, and the goal is to organize our code in a way that is easy to maintain and reason about.

A more sophisticated solution is to introduce yet another stream called applesEaten$. This stream is based on apples$ and every time a new value is emitted on the stream, we’d like to perform some kind of action, calling length$.next(). To do so we can use the do() operator which will execute some piece of code for each event.

Sounds feasible. But, we somehow need to skip the first (initial) value emitted by apples$. Otherwise we end up increasing the score right away which doesn’t make much sense as the game has just started. Turns out RxJS has an operator for that, namely skip().

Because applesEaten$ only acts as a publisher to notify other streams, no Observer is going to subscribe to this stream. Therefore, we have to subscribe manually.

let appleEaten$ = apples$
  .skip(1)
  .do(() => length$.next(POINTS_PER_APPLE))
  .subscribe();

Putting everything together

At this point, we have implemented all the core building blocks of our game and we’re finally ready to combine everything into one result stream – scene$. For that we’ll use combineLatest. It’s quite similar to withLatestFrom but different in detail. First, let’s look at the code:

let scene$ = Observable.combineLatest(
  snake$, apples$, score$,
  (snake, apples, score) => ({ snake, apples, score }));

Instead of throttling secondary streams, we are interested in an event whenever any of the input Observables produces a new value. The last argument is again a selector function and we are simply taking all the values and returning an object that represents our game state. The game state contains everything that needs to be rendered onto the canvas.
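The difference to withLatestFrom() can be sketched with another toy model (plain JavaScript, invented names): combineLatest emits on every input change, once each input has produced at least one value.

```javascript
function combineLatestLike(combine, onOutput) {
  const latest = [undefined, undefined, undefined];
  const seen = [false, false, false];
  return index => value => {
    latest[index] = value;
    seen[index] = true;
    // Emit whenever ANY input changes, once every input has emitted.
    if (seen.every(Boolean)) onOutput(combine(...latest));
  };
}

const outputs = [];
const emit = combineLatestLike(
  (snake, apples, score) => ({ snake, apples, score }),
  state => outputs.push(state)
);

emit(0)('snake-v1');
emit(1)('apples-v1'); // still waiting for the score
emit(2)(0);           // first complete game state
emit(2)(10);          // score changed → a new state immediately
console.log(outputs.length); // 2
```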


Maintaining performance

Not only in games but also for web applications, we aim for performance. Performance can mean a lot, but in terms of our game we’d like to redraw the whole scene 60 times per second.

We can do that by introducing another stream similar to ticks$ but for rendering. Basically it’s another interval:

// Interval expects the period to be in milliseconds which is why we divide 1000 by FPS
Observable.interval(1000 / FPS)

The problem is that JavaScript is single-threaded. The worst case is that we prevent the browser from doing anything so that it locks up. In other words, the browser may not be able to process all these updates quickly enough. The reason for this is that the browser is trying to render a frame and then it’s immediately asked to render the next one. As a result, it drops the current one to keep up the speed. That’s when animations start to look choppy.

Luckily, we can use requestAnimationFrame allowing the browser to line up work and perform it at the most appropriate time. But how do we use it for our Observable pipeline? The good news is that many operators including interval() take a Scheduler as its last argument. In a nutshell, a Scheduler is a mechanism to schedule some task to be performed in the future.

While RxJS offers a variety of Schedulers, the one that we’re interested in is called animationFrame. This Scheduler performs a task when window.requestAnimationFrame would fire.

Perfect! Let’s apply this to our interval and we’ll call the resulting Observable game$:

// Note the last parameter
const game$ = Observable.interval(1000 / FPS, animationFrame)

This interval will now produce values roughly every 16ms maintaining 60 FPS.

Rendering the scene

What’s left is to combine our game$ with the scene$. Can you guess what operator we use for that? Remember, both streams emit at different intervals and the goal now is to render our scene onto the canvas, 60 times per second. We’ll use game$ as our primary stream and every time it emits a value we combine it with the latest value from scene$. Sounds familiar? Yes, we can use withLatestFrom again.

// Note the last parameter
const game$ = Observable.interval(1000 / FPS, animationFrame)
  .withLatestFrom(scene$, (_, scene) => scene)
  .takeWhile(scene => !isGameOver(scene))
  .subscribe({
    next: (scene) => renderScene(ctx, scene),
    complete: () => renderGameOver(ctx)
  });

You may have spotted takeWhile() in the code above. It’s another very useful operator that we can call on an existing Observable. It will return values from game$ until isGameOver() returns true.

That’s it! We have just implemented snake, fully reactively and without relying on any external state using nothing but Observables and operators provided by RxJS.

Here’s a live demo for you to play with:

live demo

Future work

The game is very basic and in a follow-up post we’ll extend it with various features, one of which is restarting the whole game. In addition we’ll look at how we could implement pause and resume as well as different levels of difficulty.

Stay tuned!

Special Thanks

Special thanks to the people that reviewed this post and helped me to make it better.

Source:: Thoughtram