Monthly Archives: April 2017

Build a Serverless MERN Story App With — Zero to Deploy: 1

By codebeast

Being a fullstack developer is fun to me, and a lot of us enjoy the role because of its challenges. As fullstack developers, we have the opportunity to play with every possible tool on the web and make something reasonable out of it.

Unfortunately, some engineers are not fortunate enough to be in an environment that allows them to showcase their fullstack powers. Either they are churning out Laravel Blade templates and rolling out APIs as backend developers, or struggling with CSS Flexbox and React as frontend developers.

The pressure is greater on frontend engineers, because backend engineers can afford a decent UI without being frontend masters, while frontend engineers need to know a lot about a server language, and even server configuration, to survive on the backend.

Serverless technology is starting to put smiles back on our faces as frontend engineers. Now we can focus on the browser and roll out servers within 3 minutes.

Our Task

The image above shows what we intend to build. It's a fully functional app written in React and backed by Node, Express and MongoDB. The backend is "serverless" with Webtask and composed of just functions and simple Express routes.

Serverless Concept

“Server-less” is a coined term that refers to building web apps without bothering about how the server is set up. The term confuses developers who are new to the concept. It doesn’t mean that your app won’t live on a server; rather, it means that server setup and management are left to the provisioning platform.

One other thing you might hear when “server-less” is discussed is Function as a Service (FaaS). The serverless technique is built around functions: as a developer, you end up writing and deploying composable functions. This concept will become clearer when we start getting our hands dirty.


Google Cloud Functions, stdlib, Azure Cloud Functions, AWS Lambda, etc. are all serverless platforms you can lay your hands on. The smart folks at Auth0 introduced Webtask, yet another serverless platform, which is seamlessly easy to get started with.

Webtask is used internally by the Auth0 team to manage and run their users’ custom code. They were generous enough to make it available to the public so we can go ahead and build apps with it. My choice of Webtask is greatly influenced by how easy it is to get started, with docs that are not overwhelming.

Setting Up Project

Our project is split into two — the Webtask backend API and a React-driven frontend. Let’s take care of the Webtask setup first, then set up the frontend when it’s time to build it.

Webtask, as you might have guessed, provides a CLI tool to make deploying your functions easy. The first thing to do is install this CLI tool:

npm install -g wt-cli

To deploy functions, Webtask needs a way to identify you and your functions, so an account is needed. Head straight to the Webtask website and sign up. You will use your email to log in to the CLI.

Run the following command to login via your CLI:

wt init <YOUR-EMAIL>

Create your project directory anywhere on your machine, then add an index.js in the directory with the following content:

module.exports = function (cb) {
  cb(null, 'Hello World');
};

Run the following command in the directory, then hit the URL logged to the console to see your deployed app:

wt create index

I was amazed too!

Now that we have seen a basic example, let’s build something serious and interesting — an API for our Stories app.

API Project Structure


The package.json contains the dependencies for this project as well as important scripts to run and bundle our project to be Webtask-ready:

{
  "name": "wt-mern-api",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "start": "npm run create -- --watch",
    "create": "wt create index --bundle"
  },
  "author": "",
  "license": "MIT",
  "dependencies": {
    "body-parser": "^1.17.1",
    "express": "^4.15.2",
    "mongoose": "^4.9.5",
    "webtask-tools": "^3.2.0"
  }
}

Webtask Programming Model & Express

Webtask integrates best with JavaScript. As you have seen, we just exported a function and an app was created. You might be wondering how you could bring a different programming model into the scene.

Exporting functions is just one of the programming models supported by Webtask. It happens to be the most basic but, that notwithstanding, you can employ whatever works for you. In our case we want to use Express.

Webtask has a utility tool, webtask-tools, to help you bind your express app to a Webtask context. Therefore, rather than exporting a simple function, you can export an express app bound to Webtask using webtask-tools:

// ./index.js
var Express = require('express');
var Webtask = require('webtask-tools');
var bodyParser = require('body-parser')
var app = Express();

app.use(bodyParser.urlencoded({ extended: false }))

// Require and wire up the database middleware and routes
// (created in the sections that follow)
var dbMiddlewares = require('./middlewares/db');
var routes = require('./routes/stories');

app.use(dbMiddlewares.connectDisconnect);
routes(app);

module.exports = Webtask.fromExpress(app);

If you are familiar with Express, you will see a lot of familiar code. The big change is that rather than listening, we are exporting a function created from the Express app using webtask-tools.

The database middleware and the routes are yet to be created; they handle database connection/disconnection and the API endpoints respectively, and we will flesh them out next.


Database, Connection, Schema and Model

We are trying as much as possible to be practical so you can have a 360° view of how useful Webtask can be. One such practical choice is using an actual database rather than a temporary JSON file.

Mongolab Database

Mongolab by Object Labs is a good choice for a cloud database because it eliminates the challenge of setting up a cloud DB yourself: you simply choose where you want your database hosted and are given a URL. To get started:

  • Quickly sign up.
  • Create a database. A database URL will be supplied on the dashboard; copy it and keep it safe.
  • Create a user to authenticate the database. You can do this by clicking on the newly created database, selecting the Users tab and clicking Add database user.

Mongoose Schema

Mongoose is a library that makes interacting with MongoDB easier. It provides a friendlier API for connecting to, creating, updating and querying your database. To do this, it uses schemas to map MongoDB collections and their properties to JavaScript objects.

// ./models/Story.js
const mongoose = require('mongoose');

module.exports = new mongoose.Schema({
    author: String,
    content: String,
    created_at: Date,
    id: mongoose.Schema.ObjectId
});

Connection and Model

Next is to connect to the database at the beginning of each request and disconnect at the end. To set this up, we use a middleware which will execute before each of our Express routes:

// ./middlewares/db.js
var mongoose = require('mongoose');
// import Story schema
const StorySchema = require('../models/Story');

module.exports = {
    // Connect/Disconnect middleware
    connectDisconnect: (req, res, next) => {
        // Create connection using Mongo Lab URL
        // available in Webtask context
        const connection = mongoose.createConnection(req.webtaskContext.secrets.MONGO_URL);

        // Create a mongoose model using the Schema
        req.storyModel = connection.model('Story', StorySchema);

        req.on('end', () => {
            // Disconnect when request
            // processing is completed
            connection.close();
        });

        // Call next to move to
        // the next Express middleware
        next();
    }
};
This middleware handles both connecting to and disconnecting from a Mongolab database. The connection is established by passing a connection URL to the createConnection method. The URL is received via the Webtask context; see the next section.

When we establish a connection, we create a Mongoose model and attach the model to our request object.

We then attach an event listener to close the connection at the end of the request. next is called to pass down and continue with whatever middleware is next in the stack. This will most likely be an Express route.
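The connect-then-continue flow rests entirely on the next() contract, which can be simulated without Express at all. Here is a toy runner of our own (not Express internals) showing how each middleware does its work and hands the request to the next one in the stack:

```javascript
// A minimal middleware runner illustrating how next() passes control
// down the stack until a terminal handler is reached.
function runStack(middlewares, req, res) {
  const dispatch = (i) => {
    if (i < middlewares.length) middlewares[i](req, res, () => dispatch(i + 1));
  };
  dispatch(0);
}

const trace = [];
runStack([
  // "db" middleware: attach a model, then hand off
  (req, res, next) => { req.storyModel = 'StoryModel'; trace.push('connect'); next(); },
  // route handler: use what the middleware attached
  (req, res, next) => { trace.push('route uses ' + req.storyModel); }
], {}, {});

console.log(trace); // ['connect', 'route uses StoryModel']
```

This is why the order of app.use calls matters: a route only sees req.storyModel because the database middleware ran, and called next(), before it.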

Webtask Context

You can access contextual information via the Webtask context object. Such information can be used to store sensitive credentials like secrets, dynamic details like query strings, etc.

You can access the context via the function parameter:

module.exports = function(context, cb) {
    cb(null, { hello: context.data.name || 'Anonymous' });
};

When using Express, you can access it from req.webtaskContext just like we saw in the database connection example. The MONGO_URL secret is passed in via the terminal while running the app:

wt create index --secret MONGO_URL=<MONGOLAB-CONNECTION-URL> --bundle
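As a sketch of how both kinds of contextual data surface in a handler (the route logic and field names below are our own invention, not part of the Stories app), a plain function reading req.webtaskContext can even be exercised with stub objects, no server required:

```javascript
// Hypothetical handler showing the two kinds of contextual data
// available on req.webtaskContext when using webtask-tools with Express.
function statusHandler(req, res) {
  const ctx = req.webtaskContext;
  res.json({
    caller: ctx.query.name || 'Anonymous', // dynamic detail: query string
    hasDb: Boolean(ctx.secrets.MONGO_URL)  // sensitive credential: secret
  });
}

// Stub the request and response to try it out locally:
const req = {
  webtaskContext: {
    query: { name: 'Ada' },
    secrets: { MONGO_URL: 'mongodb://example' }
  }
};
const res = { json(body) { this.body = body; } };
statusHandler(req, res);
console.log(res.body); // { caller: 'Ada', hasDb: true }
```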

Express CRUD Routes

We have written all the helper code our routes need to function. Tackling these routes is the next task:

// ./routes/stories.js
module.exports = (app) => {
  app.get('/stories', (req, res) => {
      req.storyModel.find({}).sort({'created_at': -1}).exec((err, stories) => res.json(stories));
  });

  app.post('/stories', (req, res) => {
      const newStory = new req.storyModel(Object.assign({}, req.body, {created_at: Date.now()}));
      newStory.save((err, savedStory) => res.json(savedStory));
  });

  app.put('/stories', (req, res) => {
    const idParam = req.query.id;
    req.storyModel.findOne({_id: idParam}, (err, storyToUpdate) => {
        const updatedStory = Object.assign(storyToUpdate, req.body);
        updatedStory.save((err, story) => res.json(story));
    });
  });

  app.delete('/stories', (req, res) => {
    const idParam = req.query.id;
    req.storyModel.remove({_id: idParam}, (err, removedStory) => res.json(removedStory));
  });
};

  • GET /stories uses Mongoose to fetch all the stories stored in the database, sorted by date created in descending order
  • POST /stories creates a new story from the payload on req.body
  • PUT /stories expects an id query string, which it uses to find a story, and updates that story
  • DELETE /stories, just like PUT, expects an id and removes the matching entry from the database collection

Final Words

You can hit the deployed URL with a GET request in Postman to see a response with an array of stories.

Remember to run the deploy command one more time, if you have not been doing so, to deploy your functions:

wt create index --secret MONGO_URL=<MONGOLAB-CONNECTION-URL> --bundle

The next part of this article will describe how to consume these endpoints in a React app so as to have a full application. We will also deploy the React app so as to have everything running on a remote server.


10 Machine Learning Examples in JavaScript

By Danny Markov


When we think of tools for developing Machine Learning software, languages like Python, C++, and MATLAB come to mind. Although it is the most popular programming language in the world, JavaScript is usually not on that list.

This is mainly due to the lack of a strong AI ecosystem in JavaScript. While the aforementioned languages offer many proven and well-tested libraries, JavaScript and Node.js joined the machine learning party just recently, and right now there simply isn’t enough incentive for people to get on board with a big JavaScript ML project, especially when solutions already exist in Python.

So… Why JavaScript?

JavaScript has one great advantage – it’s super accessible. All you need to run a JS machine learning project is a browser and Node.js installed. This makes JavaScript perfect for web developers who are just getting started with artificial intelligence and want to try out new things without too much hassle.

Most Node.js machine learning libraries are fairly new and still in development, but they do exist and are ready for you to try them. In this article we will look at some of these libraries, as well as a number of cool AI web experiments.

1. Brain

Brain is a library that lets you easily create neural networks and then train them based on input/output data. Since training takes up a lot of resources, it is preferred to run the library in a Node.js environment, although a CDN browser version can also be loaded directly onto a web page. There is a tiny demo on their website that can be trained to recognize color contrast.
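The train-on-examples idea behind Brain can be shown with a hand-rolled single neuron (this is our own toy code, not Brain's API): given a color's RGB channels, it learns whether a background is light, much like the contrast demo:

```javascript
// A single perceptron trained with the classic perceptron rule.
// Inputs are normalized RGB channels; output 1 means "light background".
const data = [
  { input: [1, 1, 1], output: 1 }, // white  -> light
  { input: [0, 0, 0], output: 0 }, // black  -> dark
  { input: [1, 1, 0], output: 1 }, // yellow -> light
  { input: [0, 0, 1], output: 0 }  // blue   -> dark
];

let weights = [0, 0, 0];
let bias = 0;
const predict = (x) =>
  x.reduce((sum, xi, i) => sum + xi * weights[i], bias) >= 0 ? 1 : 0;

// Train: nudge weights toward each misclassified example until
// the neuron separates light from dark backgrounds.
for (let epoch = 0; epoch < 20; epoch++) {
  for (const { input, output } of data) {
    const error = output - predict(input);
    weights = weights.map((w, i) => w + 0.1 * error * input[i]);
    bias += 0.1 * error;
  }
}

console.log(predict([1, 1, 1])); // 1 (light background)
console.log(predict([0, 0, 1])); // 0 (dark background)
```

Brain hides all of this behind `net.train(data)` and `net.run(input)`, and uses full networks rather than one neuron, but the input/output training data shape is the same idea.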

2. Deep playground

Educational web app that lets you play around with neural networks and explore their different components. It has a nice UI that allows you to control the input data, number of neurons, which algorithm to use, and various other metrics that will be reflected on the end result. There is also a lot to learn from the app behind the scenes – the code is open-source and uses a custom machine learning library that is written in TypeScript and well documented.



3. FlappyLearning

FlappyLearning is a JavaScript project that, in roughly 800 lines of unminified code, manages to create a machine learning library and implement it in a fun demo that learns to play Flappy Bird like a virtuoso. The AI technique used in this library is called Neuroevolution and applies algorithms inspired by nervous systems found in nature, dynamically learning from each iteration’s success or failure. The demo is super easy to run – just open index.html in the browser.
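The core loop of Neuroevolution is easy to sketch in a few lines (a deliberately tiny toy of ours, not FlappyLearning's code): mutate a population of candidate weights, score each one, and carry the best forward unchanged, so fitness never regresses between iterations:

```javascript
// Evolve a single "network weight" toward a target using mutate-and-select.
const target = 0.75;
const fitness = (w) => -Math.abs(w - target); // higher is better

function evolve(generations, populationSize) {
  let best = 0; // starting candidate
  for (let g = 0; g < generations; g++) {
    // Elitism: the current best survives, alongside random mutants of it.
    const population = [best];
    for (let i = 1; i < populationSize; i++) {
      population.push(best + (Math.random() - 0.5) * 0.2);
    }
    // Each iteration's success or failure decides what survives.
    population.sort((a, b) => fitness(b) - fitness(a));
    best = population[0];
  }
  return best;
}

const evolved = evolve(200, 10);
```

In FlappyLearning the "weight" is a whole neural network and the fitness score is how far the bird flies, but the mutate, score, select loop is the same.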



4. Synaptic

Probably the most actively maintained project on this list, Synaptic is a Node.js and browser library that is architecture-agnostic, allowing developers to build any type of neural network they want. It has a few built-in architectures, making it possible to quickly test and compare different machine learning algorithms. It also features a well-written introduction to neural networks, a number of practical demos, and many other great tutorials demystifying how machine learning works.

5. Land Lines

Land Lines is an interesting Chrome Web experiment that finds satellite images of Earth, similar to doodles made by the user. The app makes no server calls: it works entirely in the browser and thanks to clever usage of machine learning and WebGL has great performance even on mobile devices. You can check out the source code on GitHub or read the full case study here.



6. ConvNetJS

Although it is no longer actively maintained, ConvNetJS is one of the most advanced deep learning libraries for JavaScript. Originally developed at Stanford University, ConvNetJS became quite popular on GitHub, resulting in many community-driven features and tutorials. It works directly in the browser, supports multiple learning techniques, and is rather low-level, making it suitable for people with more experience in neural networks.

7. Thing Translator

Thing Translator is a web experiment that allows your phone to recognize real-life objects and name them in different languages. The app is built entirely on web technologies and utilizes two machine learning APIs by Google – Cloud Vision for image recognition and Translate API for natural language translations.



8. Neurojs

Neurojs is a framework for building AI systems based on reinforcement learning. Sadly, the open-source project doesn’t have proper documentation, but one of the demos, a self-driving car experiment, has a great description of the different parts that make up a neural network. The library is written in pure JavaScript and made using modern tools like webpack and babel.



Another library that allows us to set up and train neural networks using only JavaScript. It is super easy to install both in Node.js and on the client side, and it has a very clean API that will be comfortable for developers of all skill levels. The library provides a lot of examples that implement popular algorithms, helping you understand core machine learning principles.



10. DeepForge

DeepForge is a user-friendly development environment for working with deep learning. It allows you to design neural networks using a simple graphical interface, supports training models on remote machines, and has built-in version control. The project runs in the browser and is based on Node.js and MongoDB, making the installation process very familiar to most web devs.

Bonus: Machine Learning in Javascript

An excellent series of blog posts by Burak Kanber that goes over some of the machine learning fundamentals. The tutorials are well written, clear, and targeted specifically towards JavaScript developers. A great resource if you want to understand machine learning more in depth.


Although the JavaScript machine learning ecosystem is not fully developed yet, we recommend using the resources on this list to take your first steps in ML and get a feel for the core techniques. As the experiments in the article show, there is a ton of fun stuff you can make using only the browser and some familiar JavaScript code.


Using npm as a Build Tool

By madhankumar028

Every developer will agree with the saying: “It is hard to build software without using a build tool.” We use build tools to get rid of repetitive tasks. If you think Gulp has killed Grunt, you may want to think about another tool, because npm has surpassed both.

Now Node provides a great way to implement a build process with only npm.

Imagine a situation where using build tools makes you feel horrible; I felt that way when I used Grunt and Gulp. Now that I have been using npm as a build tool, I feel much more comfortable. Here I will share with you how to do the same and make yourself comfortable with your build tool.

At the end of this article, we will be making our own boilerplate.

Drawbacks of using other build tools:

When using Grunt or Gulp, the packages specific to that build tool are usually just wrappers around a main package. For instance, gulp-sass is really a Gulp-specific wrapper around node-sass. We can go straight to the source and just use node-sass with npm!

There are drawbacks to using Grunt/Gulp-specific packages:

  1. We need to watch the versions of each wrapper module we use. If one gets outdated or removed, we have to look for another to achieve the same thing. With npm, no such problem arises.

  2. Adding new tasks to the build tool increases your dependencies. With npm we can use ordinary shell operators like ‘&&’ to combine multiple tasks.

  3. Grunt and Gulp need a custom script file (like Gruntfile.js) for tasks. With npm, the package.json file alone is enough.

  4. They don’t directly support the commands we use in the command prompt. With npm you can use any of those commands in your package.json file.
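As an illustration of point 2 (the script names here are just examples, not a prescribed layout), composing tasks needs nothing beyond the shell’s ‘&&’:

```json
"scripts": {
  "lint": "jshint src/**.js",
  "build-css": "node-sass --include-path scss scss/main.scss assets/main.css",
  "build": "npm run lint && npm run build-css"
}
```

Running npm run build runs the linter first and only compiles the Sass if linting succeeds.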

Types of npm scripts

  1. Default scripts
  2. Custom scripts

Getting started

Let’s start our build commands!

  1. Create an empty directory and initialize it with npm using npm init. It will ask you questions to construct your package.json file. If, like me, you feel too lazy to hit enter many times, then go with the shorthand npm init --yes.

  2. Now check your directory, a package.json file gets created like this:

{
  "name": "your_directory_name",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC"
}
  3. By default, a test script gets created inside the scripts object. Inside the scripts object, we are going to configure our tasks.

  4. Run the default task using npm test, shorthand for npm run test.


It states that node_modules is missing; we have to add our dependencies.

  5. Let’s install the dev-dependencies first:
$ npm i -D jshint json-server lite-server mocha concurrently node-sass nodemon uglify-js
  6. Let’s start creating our scripts:
"scripts": {
  "dev": "lite-server"
}

npm run dev – I have used lite-server as the development server. It takes care of live-reloading and Browser-sync (Browser-sync is a sub-module of lite-server, so there is no need to install it separately). We don’t need to configure a watch property for all our files (HTML, CSS, JS).

Browser-sync will help you with cross-browser checking. To know more about lite-server, refer to the docs.

  "scripts": {
    "db": "json-server --watch db.json --port 3005"
  }

npm run db – If you want to know more about JSON-Server, refer to my article.

  "scripts": {
    "start": "concurrently -k \"npm run dev\" \"npm run db\""
  }

npm start, shorthand for npm run start. Using concurrently, we can perform two tasks simultaneously. You can also combine tasks with the ‘&&’ operator. To know more about it, refer to the docs.

  "scripts": {
    "uglify": "mkdir -p dist/js && uglifyjs src/js/*.js -m -o dist/js/app.js"
  }

npm run uglify – It will minify your JavaScript files and move them into your desired directory. The -p flag makes mkdir create the folder only if it does not already exist.

  "scripts": {
    "lint": "jshint src/**.js"
  }

npm run lint – It will look for JavaScript files inside the source folder and help detect errors and potential problems in your code.

"scripts": {
  "sass": "node-sass --include-path scss scss/main.scss assets/main.css"
}

npm run sass – It compiles your .scss files to CSS automatically and at a good speed.

"scripts": {
  "test": "mocha test"
}

npm test, shorthand for npm run test. Mocha is a JavaScript test framework which helps you write test cases.

"scripts": {
  "bash": "Location of the Bash/Shell script file"
}

npm run bash – If you think you have too many commands inside the scripts object, you can move them into a Bash/Shell script and reference it in your package.json file as above.


So far we have seen the basic npm build commands and an explanation of each. Now let’s prepare our own boilerplate. Using this boilerplate will save you time on setting up a build tool, so you can invest more time in building your app.

"scripts": {
  "start": "concurrently -k \"npm run dev\" \"npm run watch-css\"",

  "dev": "lite-server",
  "db": "json-server --watch db.json --port 3005",

  "build-js": "mkdir -p dist/js && uglifyjs src/js/*.js -m -o dist/js/app.js",

  "lint": "jshint src/**/**.js",

  "build-css": "node-sass --include-path scss scss/main.scss assets/main.css",
  "watch-css": "nodemon -e scss -x \"npm run build-css\"",

  "test": "mocha test",
  "pretest": "npm run lint",
  "posttest": "echo the test has been run!",

  "bash": "Location of the bash/shell script file"
}

This boilerplate will take care of all the necessary things we need during the development phase, like:

  1. npm run dev – Bootstraps our app, opens it in the browser, and reloads the browser whenever we make changes in the source.

  2. build-js – Minifies all our JavaScript files, which will be needed for production.

  3. watch-css – Nodemon is a utility that monitors for any changes in your source and automatically restarts your server. Here I have used it to monitor for changes in the .scss files; if there are any, it restarts the server and rebuilds our CSS.
"scripts": {
  "test": "echo I am test",
  "pretest": "echo I run before test",
  "posttest": "echo I run after test"
}
  4. npm test – It wraps the above three commands (pretest, test, posttest) and executes them in that order. When you run npm test, it first looks for a pretest command; if present, it executes first, followed by test and then posttest. If no pretest command is found, the test command is executed directly.

The remaining commands I have explained in the previous section. You can also customize this boilerplate based on your needs.


I hope this article has saved you time while preparing a build tool. We have now prepared our own boilerplate for npm as a build tool, and I hope you will now accept that npm has killed both Grunt and Gulp. Feel free to use my boilerplate; contributions are welcome. For further reading, refer to the official npm scripts docs.

If you have any queries, please let me know in comments.


Node.js Weekly Update - 21 April, 2017

By Gergely Németh

Node.js Weekly Update - 21 April, 2017

Below you can find RisingStack‘s collection of the most important Node.js news, projects, updates & security leaks from this week:

1. Hard-won lessons: Five years with Node.js

Scott Nonnenberg shared 5 years of Node.js knowledge on topics like classes, NaN, the event loop, testing, dependencies, and failing to use New Relic to monitor Node.js apps.

After five years working with Node.js, I’ve learned a lot. I’ve already shared a few stories, but this time I wanted to focus on the ones I learned the hard way. Bugs, challenges, surprises, and the lessons you can apply to your own projects!

2. The Definitive Guide to Object Streams in Node.js

Node.js Streams come with great power: you have an asynchronous way of dealing with input and output, and you can transform data in independent steps.

In this tutorial, I’ll walk you through the theory, and teach you how to use object stream transformables, just like Gulp does.

3. Improving Startup Time at Atom

Over the last months, the Atom team has been working hard on improving one of the aspects of the editor our users care about the most: startup time.

We will first provide the reader with some background about why reducing startup time is a non-trivial task, then illustrate the optimizations we have shipped in Atom 1.17 (currently in beta) and, finally, describe what other improvements to expect in the future.

4. Announcing Free Node.js Monitoring & Debugging with Trace

Today, we’re excited to announce that Trace, our Node.js monitoring & debugging tool is now free for open-source projects.

Node.js Weekly Update - 21 April, 2017

We know from experience that developing an open-source project is hard work, which requires a lot of knowledge and persistence. Trace will save a lot of time for those who use Node for their open-source projects.

5. PSA: Node.js 8 will be delayed until May.

As we’ve mentioned in the previous Node.js Weekly Update, V8 5.9 will be the first version with TurboFan + Ignition (TF+I) turned on by default.

As parts of the Node.js codebase have been tuned to Crankshaft, there will be a non-trivial amount of churn to adapt to the new pipeline. This also creates a security risk, as Crankshaft and FullCodeGen are no longer maintained by the V8 team or tested by the Chrome security team. If TF+I lands in Node.js 9.x, backporting any changes to Node.js 8.x is going to prove extremely difficult and time-consuming.

The Node.js Core Team therefore decided to target V8 5.8 in the 8.x release, delay the release by 3-4 weeks to allow a forward-compatible ABI with V8 6.0, and upgrade to TF+I later as a semver-minor change.

6. Meet Awaiting – the async/await utility for browsers and Node.js

Code written with async functions benefits from superior readability, improved terseness and expressiveness, and unified error handling. No more nested callbacks, opaque Promise chains, and if (err) checks littering your code.

However, this pattern isn’t a panacea. It’s easy to do some things: iterate through single items, wait on a single result, run an array of promises in parallel. Other workflows require abstraction or state. I kept finding myself writing the same utility functions in each project: delays, throttled maps, skipping try/catch on optional operations, adapting to events or callbacks. Await, combined with these simple abstractions, yields readable yet powerful async workflows.
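Two of those hand-rolled helpers might look like this (our own sketch of the pattern, not the awaiting library's actual API): a promisified delay, and a wrapper that makes an optional operation fall back instead of throwing:

```javascript
// delay: await a pause without nesting callbacks
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// attempt: run an optional async operation; on failure return a fallback
// instead of littering the call site with try/catch
async function attempt(operation, fallback) {
  try {
    return await operation();
  } catch (err) {
    return fallback;
  }
}

(async () => {
  await delay(10); // a readable pause
  const value = await attempt(
    async () => { throw new Error('optional step failed'); },
    'default'
  );
  console.log(value); // 'default'
})();
```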

7. Call for Papers (NodeTalk Proposals)

Node Summit 2017 will host the fifth annual NodeTalks. The conference will host leading technology and business experts from across the Node.js ecosystem who will present real-world case studies and talks that highlight the rapidly growing number of high profile companies and critical applications that rely on the Node.js ecosystem.

Submit your talk here!

Security Vulnerabilities Discovered:

High severity

  • ReDoS: decamelize package, versions >=1.1.0 <1.1.2
  • ReDoS: useragent package, versions <2.1.12
  • ReDoS: uri-js package, versions <3.0.0
  • DoS: nes package, versions <6.4.1


Previously in the Node.js Weekly Update

In the previous Node.js Weekly Update we read interviews with Matt Loring & Mark Hinkle, and read about tracking the growth of open source & mastering the Node CLI.

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!