Build a GraphQL Server With Apollo Server and AdonisJS

By Chimezie Enyinnaya

Recently, I have been exploring GraphQL. Apollo (client and server) has really made working with GraphQL awesome. Apollo server supports some Node.js frameworks out of the box, and when it comes to Node.js frameworks, AdonisJS is my preferred choice. Unfortunately, Apollo server does not support AdonisJS out of the box. For this reason, I created an AdonisJS package called adonis-apollo-server, which integrates Apollo server with the AdonisJS framework.

In this tutorial, I will show you how to build a GraphQL server with Apollo server and AdonisJS using the package above.

This tutorial assumes you have some basic understanding of GraphQL. With that said, let’s jump in and start building our GraphQL server.

What We’ll be Building

For the purpose of this tutorial, we’ll build a GraphQL server with the concept of a basic blog. The blog will have User and Post entities. We’ll be able to create users and authenticate them using JSON Web Tokens (JWT). Authenticated users will be able to create posts. Hence, User and Post will have a one-to-many relationship. That is, a user can have many posts, while a post can only belong to a user.

Requirements

This tutorial assumes you have the following installed on your computer:

  • Node.js 8.0 or greater
  • Npm 3.0 or greater

Create a new AdonisJS app

We’ll start by creating a new AdonisJS app. We’ll use the adonis CLI for this, so run the command below if you don’t have it installed already:

npm i -g @adonisjs/cli

With that installed, let’s create a new AdonisJS app. We’ll call it adonis-graphql-server:

adonis new adonis-graphql-server --api-only

We specify that we want an API-only app by passing the --api-only flag. This will create an app well suited for building APIs, as things like views won’t be included.

We can start the app to make sure everything is working as expected:

cd adonis-graphql-server
adonis serve --dev

Then visit http://127.0.0.1:3333; you should get a JSON response like the one below:

    {
      "greeting": "Hello world in JSON"
    }

Installing Dependencies

Next, we need to install the dependencies needed for our GraphQL server.

npm install graphql adonis-apollo-server graphql-tools slugify --save

Let’s quickly go over each of the dependencies.

  • graphql: a reference implementation of GraphQL for JavaScript.
  • adonis-apollo-server: apollo server for AdonisJS.
  • graphql-tools: for building a GraphQL schema and resolvers in JavaScript.
  • slugify: slugify a string.

Before we can start using the adonis-apollo-server package, we need to first register the provider inside start/app.js:

// start/app.js

const providers = [
    ...
    'adonis-apollo-server/providers/ApolloServerProvider'
]

Database Setup

We’ll be using MySQL in this tutorial. So, we need to install the Node.js driver for MySQL:

npm install mysql --save

Once that’s installed, we need to let AdonisJS know we are using MySQL. Open .env and add the snippet below to it:

// .env

DB_CONNECTION=mysql
DB_HOST=localhost
DB_DATABASE=adonis_graphql_server
DB_USER=root
DB_PASSWORD=

Remember to update the database name, username and password accordingly with your own database settings.
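
These variables are read in config/database.js, which ships with a new AdonisJS app. For reference, the mysql connection block in that file looks roughly like this (a sketch; your generated file may differ slightly):

// config/database.js

const Env = use('Env')

module.exports = {
  connection: Env.get('DB_CONNECTION', 'sqlite'),

  mysql: {
    client: 'mysql',
    connection: {
      host: Env.get('DB_HOST', 'localhost'),
      port: Env.get('DB_PORT', ''),
      user: Env.get('DB_USER', 'root'),
      password: Env.get('DB_PASSWORD', ''),
      database: Env.get('DB_DATABASE', 'adonis')
    }
  }
}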

Defining GraphQL Schema

A GraphQL schema describes how data is shaped and what data on the server can be queried or modified. The root types of a schema are Query and Mutation. As described in the “What We’ll be Building” section, we’ll define schema for the User and Post types. To do this, create a new directory named data within the app directory. Within the data directory, create a new schema.js file and paste the code below in it:

// app/data/schema.js

'use strict'

const { makeExecutableSchema } = require('graphql-tools')

// Define our schema using the GraphQL schema language
const typeDefs = `
  type User {
    id: Int!
    username: String!
    email: String!
    posts: [Post]
  }
  type Post {
    id: Int!
    title: String!
    slug: String!
    content: String!
    user: User!
  }
`

The fields are pretty straightforward. The User type also has a posts field, which will be an array of all posts created by a user. Similarly, the Post type has a user field, which will be the user that created a post. You’ll notice we attached ! when defining the user field. This simply means the user field cannot be NULL or empty. That is to say, a post must be created by a user.

Having defined our types, let’s now define the queries we want to be carried out on them. Still within app/data/schema.js, paste the code below just after the Post type:

// app/data/schema.js

type Query {
  allUsers: [User]
  fetchUser(id: Int!): User
  allPosts: [Post]
  fetchPost(id: Int!): Post
}

We are saying we want to be able to fetch all users and all posts and return them as arrays, and also to fetch an individual user or post by its ID.
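
For instance, once the server is wired up (we’ll do that in the “Creating the GraphQL Server” section), a client will be able to fetch data with a query like the one below:

query {
  allUsers {
    id
    username
    email
  }
  fetchPost(id: 1) {
    title
    slug
  }
}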

Next, we define some mutations. Mutations allow data to be modified on the server. Still within app/data/schema.js, paste the code below just after Query:

// app/data/schema.js

type Mutation {
  login (email: String!, password: String!): String
  createUser (username: String!, email: String!, password: String!): User
  addPost (title: String!, content: String!): Post
}

The login mutation will allow users to log in to the server. It returns a string, which will be a JWT. createUser, as its name suggests, simply creates a new user. addPost allows an authenticated user to create a post.

Finally, we need to build the schema. Before we do that, let’s add the resolvers (which we’ll create shortly) just after importing graphql-tools:

// app/data/schema.js

const resolvers = require('./resolvers')

Then we can build the schema:

// app/data/schema.js

// Type definition
...

module.exports = makeExecutableSchema({ typeDefs, resolvers })

Creating Models and Migration

Before we move on to writing resolvers, let’s take a moment to define our models. We want our models to be as similar as possible to our schema. So, we’ll define User and Post models.

Luckily for us, AdonisJS ships with a User model and migration file. We just need to create a Post model and migration file.

adonis make:model Post -m

This will create an app/Models/Post.js model and generate a migration file. Open the migration file database/migrations/xxxxxxxxxxxxx_post_schema.js and update the up method as below:

// database/migrations/xxxxxxxxxxxxx_post_schema.js

up () {
  this.create('posts', (table) => {
    table.increments()
    table.integer('user_id').unsigned().notNullable()
    table.string('title').notNullable()
    table.string('slug').unique().notNullable()
    table.text('content').notNullable()
    table.timestamps()
  })
}

We can now run the migrations:

adonis migration:run

Defining Models Relationship

Recall that we said User and Post will have a one-to-many relationship. We’ll now define that relationship. Open app/Models/User.js and add the code below within the User class, just after the tokens method:

// app/Models/User.js

/**
 * A user can have many posts.
 *
 * @method posts
 *
 * @return {Object}
 */
posts () {
  return this.hasMany('App/Models/Post')
}

Also, let’s define the inverse relationship. Open app/Models/Post.js and add the code below within the Post class. This will be the only method the Post model will have:

// app/Models/Post.js

/**
 * A post belongs to a user.
 *
 * @method user
 *
 * @return {Object}
 */
user () {
  return this.belongsTo('App/Models/User')
}

Writing Resolvers

With our models and their relationship defined, we can now move on to writing the resolvers. A resolver is a function that defines how a field in a schema is executed. Within the app/data directory, create a new resolvers.js file and paste the code below in it:

// app/data/resolvers.js

'use strict'

const User = use('App/Models/User')
const Post = use('App/Models/Post')
const slugify = require('slugify')

// Define resolvers
const resolvers = {
  Query: {
    // Fetch all users
    async allUsers() {
      const users = await User.all()
      return users.toJSON()
    },
    // Get a user by its ID
    async fetchUser(_, { id }) {
      const user = await User.find(id)
      return user.toJSON()
    },
    // Fetch all posts
    async allPosts() {
      const posts = await Post.all()
      return posts.toJSON()
    },
    // Get a post by its ID
    async fetchPost(_, { id }) {
      const post = await Post.find(id)
      return post.toJSON()
    }
  },
}

Firstly, we imported our models and the slugify package. Then we wrote functions to resolve our queries. Lucid (the AdonisJS ORM) makes use of serializers, so whenever we query the database using Lucid models, the return value is always a serializer instance. Hence the need to call toJSON(), so as to return a formatted output.

Next, we define functions for our mutations. Still within app/data/resolvers.js, paste the code below just after the Query object:

// app/data/resolvers.js

Mutation: {
  // Handles user login
  async login(_, { email, password }, { auth }) {
    const { token } = await auth.attempt(email, password)
    return token
  },

  // Create new user
  async createUser(_, { username, email, password }) {
    return await User.create({ username, email, password })
  },

  // Add a new post
  async addPost(_, { title, content }, { auth }) {
    try {
      // Check if user is logged in
      await auth.check()

      // Get the authenticated user
      const user = await auth.getUser()

      // Add new post
      return await Post.create({
        user_id: user.id,
        title,
        slug: slugify(title, { lower: true }),
        content
      })
    } catch (error) {
      // Throw error if user is not authenticated
      throw new Error('Missing or invalid jwt token')
    }
  }
},

Because we created an api-only app, it is already configured to use JWT for authentication.

> Note: We need to set the Authorization = Bearer <token> header to authenticate the request.

The login function makes use of the auth object, which will be passed to the context object of the GraphQL options when starting the server (more on this later). It then attempts to log the user in. If that is successful, a token (JWT) is returned. The createUser function simply creates a new user in the database. Lastly, the addPost function, just like login(), accepts the auth object as its third argument. It checks if the user is logged in, gets the details of the authenticated user and finally adds a new post to the database. If the user is not logged in (that is, the token was not found or is invalid), we throw an error.
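
For illustration, here is roughly what a raw addPost request looks like over HTTP, assuming a token obtained from the login mutation (the token value is a placeholder):

POST /graphql HTTP/1.1
Host: 127.0.0.1:3333
Content-Type: application/json
Authorization: Bearer <token>

{"query": "mutation { addPost(title: \"Hello\", content: \"World\") { id slug } }"}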

Next, we define functions to resolve the fields on our User and Post types respectively. Still within app/data/resolvers.js, paste the code below just after the Mutation object:

// app/data/resolvers.js

User: {
  // Fetch all posts created by a user
  async posts(userInJson) {
    // Convert JSON to model instance
    const user = new User()
    user.newUp(userInJson)

    const posts = await user.posts().fetch()
    return posts.toJSON()
  }
},
Post: {
  // Fetch the author of a particular post
  async user(postInJson) {
    // Convert JSON to model instance
    const post = new Post()
    post.newUp(postInJson)

    const user = await post.user().fetch()
    return user.toJSON()
  }
}

Because we called toJSON() on our queries above, to be able to call a relationship or any other method on the models, we need to first convert the JSON back to an instance of the model. Then we can call the relationship methods (posts() and user()) defined earlier.

Finally, we export the resolvers:

// app/data/resolvers.js

module.exports = resolvers

Creating the GraphQL Server

We have successfully built out each piece of our GraphQL server; let’s now put everything together. Open start/routes.js and update it as below:

// start/routes.js

'use strict'

const Route = use('Route')
const GraphqlAdonis = use('ApolloServer')
const schema = require('../app/data/schema');

Route.route('/graphql', ({ request, auth, response }) => {
    return GraphqlAdonis.graphql({
      schema,
      context: { auth }
    }, request, response)
}, ['GET', 'POST'])

Route.get('/graphiql', ({ request, response }) => {
    return GraphqlAdonis.graphiql({ endpointURL: '/graphql' }, request, response)
})

Firstly, we imported Route, which we’ll use to define our routes, then GraphqlAdonis (the adonis-apollo-server package) and lastly our schema.

Since GraphQL can be served over both HTTP GET and POST requests, we defined a /graphql route that accepts both. Within this route, we call GraphqlAdonis’s graphql(), which accepts three arguments: the GraphQL options object, followed by the request and response objects. As the GraphQL options, we pass an object containing our schema and the auth object (as the context).

The /graphiql route is solely for testing out the GraphQL server. Though I won’t be using it for testing the server in this tutorial, I added it to show you how to use GraphiQL with your server.

Testing Out the Server

I will be using Insomnia to test out the server. Insomnia is a REST client with support for GraphQL queries. You can download it if you don’t have it already.

Make sure the server is already started:

adonis serve --dev

Which should be running on http://127.0.0.1:3333.

Start the Insomnia app:

Click on Create New Request. Give the request a name if you want, select POST as the request method, then select GraphQL Query. Finally, click Create.

Next, enter http://127.0.0.1:3333/graphql in the address bar:

Try creating a new user with createUser mutation:

mutation {
    createUser(username: "mezie", email: "mezie@example.com", password: "password") {
        id
        username
        email
    }
}

You should get a response with the newly created user:

Then login:

mutation {
    login(email: "mezie@example.com", password: "password")
}

A JWT is returned on successful login.

To test out adding a new post, remember we said only authenticated users can add posts. Though we are successfully logged in, we need a way to add the JWT generated above to the request headers. Luckily for us, this is pretty simple with Insomnia (the major reason I chose to test out the GraphQL server with it).

From the Auth dropdown, select Bearer Token and paste the token (JWT) above in the field provided.

With that added, you can now add a new post:

mutation {
    addPost(title: "Hello Adonis", content: "Adonis is awesome!") {
        id
        title
        slug
        content
        user {
            username
            email
        }
    }
}
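
With a post created, we can also verify the one-to-many relationship from the other direction, using one of the queries defined earlier:

query {
  allUsers {
    username
    posts {
      title
      slug
    }
  }
}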

Conclusion

There we have it! In this tutorial, we saw how to integrate Apollo server into an AdonisJS app using the adonis-apollo-server package. We then went on to build a GraphQL server with it. We also saw how to add authentication to the GraphQL server using JSON Web Tokens (JWT). Finally, we saw how to test our GraphQL server using the Insomnia REST client.

So, go forth and build awesome GraphQL apps with AdonisJS.

Source:: scotch.io

Node.js Weekly Update - December 8

By Tamas Kadlecsik

Below you can find RisingStack‘s collection of the most important Node.js updates, projects & tutorials from this week:

Node v8.9.2 (LTS) released

Notable Changes

  • console:
    • avoid adding infinite error listeners (Matteo Collina)
  • http2:
    • improve errors thrown in header validation (Joyee Cheung)

Node v6.12.1 (LTS) released

Notable changes

  • build:
    • fix npm install with --shared (Ben Noordhuis)
  • build:
    • building with python 3 is now supported (Emily Marigold Klassen)
  • src:
    • v8 options can be specified with either '_' or '-' in NODE_OPTIONS (Sam Roberts)

Node.js meets OpenCV’s Deep Neural Networks — Fun with Tensorflow and Caffe

Unleash the awesomeness of neural nets to recognize and classify objects in images!

Take a look at OpenCV’s Deep Neural Networks module and learn how to load pretrained models from Tensorflow and Caffe with OpenCV’s DNN module.

Health Checks and Graceful Shutdown for Node.js Applications

What happens to your users who used your product at the time of the deployment? Chances are, the requests they have in progress are going to fail.

What to do in these situations? Implement terminus to perform health checks and graceful shutdown to your applications. Learn how to do this step-by-step in this article.

Patterns for designing flexible architecture in node.js (CQRS/ES/Onion)

This blog post reveals patterns for designing flexible architecture in Node.js.

The project uses CQRS and Event Sourcing patterns, and it’s organized using onion architecture and written with TypeScript.

Why End-to-End Testing is Important for Your Team

End-to-end testing is where you test your whole application performance from start to finish.

It is one of the ways to keep a developer team moving forward without breaking everything. Read this post to see how they implemented it at Hubba.

Wrangling GeoJSON with Turf.js

Turf makes Node.js a natural choice for working with GeoJSON. Check out Turf if you find yourself dealing with geospatial data, it will enable you to do some incredible things.

Tutorial to Native Node.js Modules with C++. Part 3 — Asynchronous Bindings and Multithreading

JavaScript in the browser runs in a single-threaded environment and in case you want to load off some work to a separate thread, you have to mess around with web workers.

You can easily make use of multiple threads to perform computationally heavy tasks in parallel, and you can learn how to do so in this tutorial.

The secret to being a top developer is building things! Here’s a list of fun apps to build!

Here are 8 fantastic projects to train your coding muscles! The goal is to build each app with whatever technology stack you prefer. Keep it conflict-free, use whatever you want!

Previously in the Node.js Weekly Update

In the previous Node.js Weekly Update we collected the latest news on Node.js, such as the latest v8.9.1 (LTS) release, a Node.js blockchain implementation, how to build a simple static page generator with Node.js, and how to write fast and safe native Node.js modules with Rust. Click if you missed it!

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Source:: risingstack.com

Setting up Webpack for Any Project

By O’Brian Kimo

Most developers have interacted with webpack while creating React projects and most see it as a tool for use in developing React projects rather than a general development tool.

webpack is a powerful module bundler that can be very efficient if used correctly.

In this tutorial, we will explore how to set up a project using webpack, right from the folder structure to exploring different loaders, plugins and other interesting features that come with webpack. This will give you a different perspective on webpack and will help you set up future JavaScript projects using webpack.

Why webpack?

An alternative to using webpack is a combination of a task runner like grunt or gulp with a bundler like browserify. But what makes a developer opt for webpack rather than the task runners?

Webpack attacks the build problem in a fundamentally more integrated and opinionated manner. With browserify, you use gulp/grunt and a long list of transforms and plugins to get the job done. webpack offers enough power out of the box that you typically don’t need grunt or gulp at all.

webpack is also configuration-based, unlike gulp/grunt where you have to write code to do your tasks. What makes it even better is the fact that it makes correct assumptions about what you want to do: work with different JS modules, compile code, manage assets and so forth.

The live-reload ability is also blazing fast. The ability to substitute output filenames with hash filenames enables browsers to easily detect changed files by including build-specific hash in the filename.

Splitting of file chunks and extracting webpack’s boilerplate and manifest also contributes to fast rebuilds. These are just a few highlights out of many that make webpack a better choice.
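
To make the caching point concrete, a content hash can be baked into the output filename straight from the configuration; [name] and [chunkhash] are webpack’s built-in substitution tokens (a minimal sketch, separate from the setup we build below):

//webpack.config.js (sketch)
output: {
  path: __dirname + '/dist',
  filename: '[name].[chunkhash].js'  // browsers re-download only when the content changes
}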

webpack Features

The main webpack features which we will discuss further include:

  • Loaders
  • Plugins
  • Use of different configurations for different environments
  • Lazy loading of split chunks
  • Dead code elimination by tree shaking
  • Hot module replacement that allows code to be updated at runtime without the need for a full refresh
  • Caching by substituting filenames with hash filenames

Project Setup

Prerequisites

To continue with this tutorial, we will require Node.js on our machines, which comes bundled with the node package manager (npm). We will then install yarn, an alternative to npm that gives us additional functionality and more speed when installing packages.

$ npm install -g yarn

Directory Structure

We will begin by creating the following directory structure:

Inside our main directory, webpack-setup, we initialize our project with yarn init, which will create a package.json file for us. We will briefly explore what some of the directories and files will be used for.

  • src: Main project container.
  • src/app: Will host our JavaScript files.
  • src/public: Holds project assets and static files.
  • src/style: Holds the project’s global styles.
  • src/app/index.js: Main entry point into our project.
  • src/public/index.html: Main project template.

Initial Configuration

We will start by creating a simple webpack configuration that we will gradually develop by adding more functionality. This simple configuration will only contain one very important plugin, HtmlWebpackPlugin.

The HtmlWebpackPlugin simplifies creation of HTML files to serve your webpack bundles and can automatically inject our JavaScript bundle into our main HTML template. But before that, we will need to install some required modules: webpack, which is the main bundler, and webpack-dev-server, which provides a simple, lightweight server for development purposes.

$ yarn add webpack webpack-dev-server html-webpack-plugin -D

//webpack.config.js
const HtmlWebpackPlugin = require('html-webpack-plugin'); // Require html-webpack-plugin plugin

module.exports = {
  entry: __dirname + "/src/app/index.js", // webpack entry point. Module to start building dependency graph
  output: {
    path: __dirname + '/dist', // Folder to store generated bundle
    filename: 'bundle.js',  // Name of generated bundle after build
    publicPath: '/'
  },
  module: {  // where we defined file patterns and their loaders
      rules: [ 
      ]
  },
  plugins: [  // Array of plugins to apply to build chunk
      new HtmlWebpackPlugin({
          template: __dirname + "/src/public/index.html",
          inject: 'body'
      })
  ],
  devServer: {  // configuration for webpack-dev-server
      contentBase: './src/public',  //source of static assets
      port: 7700, // port to run dev-server
  } 
};

HtmlWebpackPlugin basically informs webpack to include our JavaScript bundle in the body element of the provided template file. We will then add a simple statement in src/app/index.js and populate our src/public/index.html file with simple HTML for demonstration, as shown below.
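
The two files can be as minimal as the sketch below (any statement and markup will do):

//src/app/index.js
console.log('Hello from webpack!');

<!-- src/public/index.html -->
<!DOCTYPE html>
<html>
  <head>
    <title>webpack-setup</title>
  </head>
  <body>
    <h1>Hello webpack</h1>
  </body>
</html>

We then update the scripts section of package.json with a start script: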

"scripts": {
    "start": "webpack-dev-server --history-api-fallback --inline --progress"
  }

The above script will enable our server to serve index.html in case of a 404 error; the --inline option allows for injection of a Hot Module Replacement script into our bundle, while the --progress option simply shows console output of the running tasks. We can then start our application with:

$ yarn start

Looking at our console, we find the following logs which basically explain the devServer section.

We can then navigate to http://localhost:7700/ to see our application.

Loaders

Loaders are special modules webpack uses to ‘load’ other modules (written in another language) into JavaScript. They allow us to pre-process files as we import or “load” them.

Thus, loaders are kind of like “tasks” in other build tools, and provide a powerful way to handle front-end build steps.

Loaders can transform files from a different language (like TypeScript) to JavaScript, or sass to css. They can even allow us to do things like import CSS and HTML files directly into our JavaScript modules. Specifying loaders in our configuration’s module.rules section is the recommended way of using them.

babel-loader

This loader uses Babel to load ES2015 files. We install babel-core, which is the actual Babel compiler used by babel-loader. We also include babel-preset-env, a preset that compiles ES2015+ down to ES5 by automatically determining the Babel plugins and polyfills you need based on your targeted browser or runtime environments.

$ yarn add babel-core babel-loader babel-preset-env -D

We then create a .babelrc file where we include the presets.

//.babelrc
{ "presets": [ "env" ] }

We can now finally include our loader in our configuration to transform JavaScript files. This will allow us to use ES2015+ syntax in our code.

Configuration

//webpack.config.js
...
module: {
      rules: [
          {
            test: /\.js$/,
            use: 'babel-loader',
            exclude: [
              /node_modules/
            ]
          }
      ]
  }
...

Test Case

//src/app/index.js
class TestClass {
    constructor() {
        let msg = "Using ES2015+ syntax";
        console.log(msg);
    }
}

let test = new TestClass();

The above snippet results in the following in our browser console:

This is a very common loader. We will further demonstrate a few more loaders with popular frameworks, including Angular (1.5+) and React.

raw-loader

It is a loader that lets us import files as strings. We will demonstrate this by importing an HTML template to use for an Angular component.

Configuration

//webpack.config.js
...
module: {
      rules: [
         ...,
          {
              test: /\.html$/,
              loader: 'raw-loader'
          }
      ]
  },
  ...

Use

//src/app/index.js
import angular from 'angular';
import template from './index.tpl.html';

let component = {
    template // Use ES6 enhanced object literals.
}

let app = angular.module('app', [])
    .component('app', component)

We could alternatively use template: require('./index.tpl.html') instead of the import statement and have a simple HTML file.

//src/app/index.tpl.html
<h3>Test raw-loader for angular component</h3>

sass-loader

The sass-loader helps us use scss styling in our application. It requires node-sass, which allows us to natively compile .scss files to CSS at incredible speed and automatically via a connect middleware. It is recommended to use it together with css-loader, which turns the CSS into a JS module, and style-loader, which adds the CSS to the DOM by injecting a <style> tag.

$ yarn add sass-loader node-sass css-loader style-loader -D

Configuration

//webpack.config.js
...
module: {
      rules: [
         ...,
          {
            test: /\.(sass|scss)$/,
            use: [{
                loader: "style-loader" // creates style nodes from JS strings
            }, {
                loader: "css-loader" // translates CSS into CommonJS
            }, {
                loader: "sass-loader" // compiles Sass to CSS
            }]
          }
      ]
  },
  ...

//src/style/app.scss
$primary-color: #2e878a;
body {
    color: $primary-color;
}

Use

We simply import it in our JavaScript entry file as follows and the styling will kick in.

//src/app/app.js
...
import '../style/app.scss';
...

So far those are enough loaders to guide us in the right direction on how to add other loaders.

Plugins

Plugins are the backbone of webpack and serve the purpose of doing anything else that a loader cannot do.

Loaders do the pre-processing transformation of any file format when you use them; they work at the individual file level during or before the bundle is generated. On the other hand, plugins are quite simple since they expose only one single function to webpack and are not able to influence the actual build process.

Plugins work at bundle or chunk level and usually work at the end of the bundle generation process.

Plugins can also modify how the bundles themselves are created and have more powerful control than loaders. The figure below illustrates where loaders and plugins operate.

We have already used html-webpack-plugin and we will demonstrate how to use some more common plugins in our project.

extract-text-webpack-plugin

Extracts text from a bundle, or bundles, into a separate file. This is very important in ensuring that when we build our application, the CSS is extracted from the JavaScript files into a separate file. It moves all the required CSS modules in entry chunks into a separate CSS file. Our styles will no longer be inlined into the JS bundle, but will live in a separate CSS file (styles.css). If our total stylesheet volume is big, this will be faster because the CSS bundle is loaded in parallel to the JS bundle.

$ yarn add extract-text-webpack-plugin -D

Configuration

//webpack.config.js
var ExtractTextPlugin = require('extract-text-webpack-plugin');
...
{
  test: /\.css$/,
  use: ExtractTextPlugin.extract({  
    fallback: 'style-loader',
    use: [
      { loader: 'css-loader'},
      { loader: 'sass-loader'}
    ],
  })
},
plugins: [
    new ExtractTextPlugin("styles.css"), // extract css to a separate file called styles.css
  ]
...

DefinePlugin

The DefinePlugin allows you to create global constants which can be configured at compile time. This can easily be used to manage important configurations like API keys and other constants that change between environments. The best way to use this plugin is to create a .env file with different constants and access them in our configuration using the dotenv package; then we can directly refer to these constants in our code.

$ yarn add dotenv -D

We can then create a simple environmental variable in our .env file.

//.env
API_KEY=1234567890

Configuration

//webpack.config.js
const webpack = require('webpack'); // DefinePlugin is bundled with webpack itself
require('dotenv').config()
...
plugins: [
    new webpack.DefinePlugin({  // plugin to define global constants
          API_KEY: JSON.stringify(process.env.API_KEY)
      })
]
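
Anywhere in our source we can then reference the constant directly. DefinePlugin performs a textual replacement at compile time, so API_KEY is swapped for its literal value in the bundle; a minimal sketch:

//src/app/index.js
// API_KEY is replaced with "1234567890" at build time by DefinePlugin
console.log('Using API key:', API_KEY);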

webpack-dashboard

This is a CLI dashboard for your webpack-dev-server. The plugin introduces “beauty and order” into our development environment: instead of the normal console logs, we get to see an attractive, easy-to-interpret dashboard.

Installation

$ yarn add webpack-dashboard -D

Configuration

//webpack.config.js
...
const DashboardPlugin = require('webpack-dashboard/plugin');
...
plugins: [
      new DashboardPlugin()
  ],
...

We then edit our start script to use the plugin.

//package.json
...
"scripts": {
    "start": "webpack-dashboard -- webpack-dev-server --history-api-fallback --inline --progress"
  }
...

After running our application, we see a very nice interface.

Development Environments

In this last section, we focus on how we can use webpack to manage different environment configurations, including the use of some plugins depending on the environment, which can be testing, development, staging or production based on the provided environment variables. We will rely on the dotenv package to get our environment. Some of the things that can vary between these environments include devtool and plugins like extract-text-webpack-plugin, UglifyJsPlugin and copy-webpack-plugin, among others.

  • devtool – Controls if and how source maps are generated.
  • copy-webpack-plugin – Copies individual files or entire directories to the build directory. This is recommended for production to copy all assets to the output folder.
  • uglifyjs-webpack-plugin – Used to minify our Javascript bundle. Recommended to be used in production to reduce the size of our final build.

Installation

$ yarn add copy-webpack-plugin uglifyjs-webpack-plugin -D

Configuration

We will alter our configuration a bit to accommodate this functionality. We also remove DashboardPlugin, which is known to cause some issues when minifying.

//webpack.config.js
const CopyWebpackPlugin = require('copy-webpack-plugin');
const UglifyJSPlugin = require('uglifyjs-webpack-plugin');
require('dotenv').config()

const ENV = process.env.APP_ENV;
const isTest = ENV === 'test'
const isProd = ENV === 'prod';

function setDevTool() {  // function to set dev-tool depending on environment
    if (isTest) {
      return 'inline-source-map';
    } else if (isProd) {
      return 'source-map';
    } else {
      return 'eval-source-map';
    }
}
...
const config = {
...
devtool: setDevTool(),  //Set the devtool
...
}

// Minify and copy assets in production
if(isProd) {  // plugins to use in a production environment
    config.plugins.push(
        new UglifyJSPlugin(),  // minify the chunk
        new CopyWebpackPlugin([{  // copy assets to public folder
          from: __dirname + '/src/public'
        }])
    );
};

module.exports = config;

The difference between the bundle sizes before and after minification is clearly visible. We have managed to trim our code from 1.57MB to 327kB.

Conclusion

webpack is definitely a powerful tool for development and is easy to configure once you grasp the few concepts it applies. Managing multiple configurations for multiple environments can be very cumbersome, but webpack-merge provides us with the ability to merge different configurations and avoid the use of if statements for configuration. This article demonstrates just a few of the many different loaders and plugins that make using webpack fun. Feel free to play around with different plugins and frameworks to better understand the power of webpack.
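
As a pointer, a webpack-merge setup could look roughly like the sketch below, assuming the shared options live in a hypothetical webpack.common.js:

//webpack.prod.js (sketch)
const merge = require('webpack-merge');
const common = require('./webpack.common.js');

module.exports = merge(common, {
  devtool: 'source-map',
  plugins: [
    // production-only plugins, e.g. UglifyJSPlugin
  ]
});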

Source:: scotch.io

Testing Angular 2 and Continuous Integration with Jest

By Matt Fehskens

This article is brought with ❤ to you by Semaphore.

Introduction

Angular is one of the most popular front-end frameworks around today. Developed
by Google, it provides a lot of functionality out of the box. Like its
predecessor, AngularJS, it was built with testing in mind. By default, Angular
is made to test using Karma and Jasmine. There are, however, some drawbacks to
using the default setup, such as performance.

Jest is a testing platform made by Facebook, which can help us work
around some of these issues. In this article, we’ll look at how to set up an
Angular project to use Jest and how to migrate from Karma/Jasmine testing to
Jest.

Prerequisites

Before starting this article, it is assumed that you have:

  • An understanding of Angular and how to write unit tests for Angular. If
    you need a primer on this, please read the Semaphore series on Unit Testing
    with Angular 2,
  • Knowledge of TypeScript and how it relates to JavaScript,
  • An understanding of ES6/ES2015 concepts such as arrow functions, modules,
    classes, and block-scoped variables,
  • Comprehension of using a command line or terminal such as Git Bash, iTerm, or
    your operating system’s built-in terminal,
  • Node >= v6 and NPM >= v5 installed,
  • Knowledge of how to run NPM scripts,
  • Knowledge of how to setup an NPM project, and
  • Knowledge of how to install dependencies through NPM

Testing Angular

Angular, as the introduction explained, comes with built-in support for using
Karma and Jasmine to perform unit testing. There are other options for testing,
such as using the Mocha testing framework with the
Chai assertions library and
Sinon mocking library, but using Jasmine out of the
box gives us all three of these capabilities.

Karma and Jasmine

Jasmine

Jasmine is a testing framework as well as
assertion library and mocking library. It provides all the functionality we
need to write tests. The testing framework functionality gives us functions
like describe, it, beforeEach, and afterEach.

Its assertion library provides the ability to verify that the
tests have run, by providing the function expect and its chaining assertions,
such as toEqual and toBe.

Finally, its mocking capability lets us stub
external functions, through spies, by calling jasmine.createSpy or spyOn or
jasmine.createSpyObj. Mocking functionality is important because it allows
us to keep our tests isolated to just the code we want to test, which is the
point of a unit test.
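
As a quick illustration of those three roles together (framework, assertions,
and mocking), a hypothetical Jasmine spec might read:

describe('calculator', () => {
  it('adds two numbers', () => {
    expect(1 + 2).toEqual(3); // assertion library
  });

  it('stubs a collaborator', () => {
    const logger = jasmine.createSpyObj('logger', ['log']); // mocking library
    logger.log('hi');
    expect(logger.log).toHaveBeenCalledWith('hi');
  });
});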

Karma

Karma is a test runner. It was developed by
the AngularJS team to make testing a core tenet of the framework. A test
runner, as its name implies, runs tests. Karma will load up an environment
which will then find test files and use the specified testing framework to
execute those tests.

Karma requires a configuration file, usually named karma.conf.js, which
specifies where the test files are, which frameworks and plugins to use, and
how to configure those plugins.

Jest

Jest is advertised as a zero-configuration
testing platform. It’s important to note that it’s neither a framework nor a library,
but a platform. Out of the box, Jest not only gives us everything Jasmine
provides, but also provides the functionality of Karma.

Jest is developed by Facebook and used to test all their JavaScript code,
including the React codebase. Just that
alone is a ringing endorsement for the quality of the Jest platform.

Why Jest?

The Jest website explains well why we should use it to
write our tests. Comparing it to the default Karma and Jasmine setup,
we can see that it comes with some added benefits:

  • Parallel testing: Jest, by default, runs tests in parallel, to minimize the
    time it takes to execute tests,
  • Sandboxing: Jest sandboxes all tests to prevent global variables or state
    from a previous test to affect the results of the next one, and
  • Code coverage reports: with Karma and Jasmine, you have to set up a plugin for
    code coverage. Adding TypeScript into the mix makes things even more
    difficult to process. By contrast, Jest will provide code coverage
    reports out of the box.

Not only does Jest solve our performance woes, but it also keeps
our tests isolated from each other and gives
us code coverage, all out of the box.

Jest and JSDom

It should be noted that one potential disadvantage of Jest is that it
uses JSDom
to simulate the browser’s DOM. Karma has an advantage here, as it can
run tests in a variety of browsers.

The Sample Project

In this post, we’ll create a toy project to highlight how to use Jest. The application will just
output some text to the screen using an Angular component and then we’ll incorporate a service
to augment that text. By doing so, we’ll see how to test two of the most important features of
Angular–components and services–through Jest.

Cloning the Repository

If you’d like to view the code for this article, it’s available at
this GitHub repository.

It can be downloaded by going to your terminal/shell and entering:

git clone https://github.com/gonzofish/angular-jest-tutorial

Throughout the article, there will be stopping points, where a Git tag will be
made available to see the code to that point. A tag can be checked out using:

git checkout <tag name>

Setting Up the Project to Use Jest

For our project we’ll be using a very basic Angular setup. There are a few steps
to this:

  1. Generate the NPM package,
  2. Add Angular dependencies,
  3. Add Jest dependencies,
  4. Add source directory,
  5. Set up TypeScript,
  6. Set up Jest, and
  7. Set up .gitignore (optional).

The starting layout of our code directory will be:

src/
  setup-jest.ts
  tsconfig.spec.json
.gitignore
package-lock.json
package.json
tsconfig.json

Generating the NPM Package

The first step is to create your project using NPM:

npm init -f

Which will create the package.json file without asking any questions.
Next, go into the generated package.json and set the test script to
jest.

Adding Angular Dependencies

Install the Angular dependencies using npm:

npm install --save @angular/common \
    @angular/compiler \
    @angular/core \
    @angular/platform-browser \
    @angular/platform-browser-dynamic \
    core-js \
    rxjs \
    zone.js
npm install --save-dev typescript \
    @types/node

These dependencies will let us use Angular to write our application and test it
properly. Here’s what each package does:

  • @angular/common: provides access to the CommonModule which adds the basic
    directives, such as NgIf and NgForOf.
  • @angular/compiler: provides the JIT compiler which compiles components
    and modules in the browser.
  • @angular/core: gives access to commonly used classes and decorators.
  • @angular/platform-browser: provides functionality, in conjunction with
    platform-browser-dynamic, to bootstrap our application.
  • @angular/platform-browser-dynamic: provides functionality, in conjunction with
    platform-browser, to bootstrap our application.
  • @types/node: the TypeScript type definitions for Node, which are needed for
    compiling code.
  • core-js: provides polyfills for the browser.
  • rxjs: the Observable library which Angular requires.
  • typescript: the TypeScript compiler, which converts TypeScript to
    JavaScript.
  • zone.js: a library, developed by the Angular team,
    to create execution contexts, which aid in rendering the Angular application.

Adding Testing Dependencies

Then, install the development dependencies:

npm install --save-dev @types/jest \
    jest \
    jest-preset-angular

Let’s list out what each of these does:

  • @types/jest: the TypeScript type definitions for Jest,
  • jest: the Jest testing framework, and
  • jest-preset-angular: a preset Jest configuration for Angular projects.

Setting Up TypeScript

To set up TypeScript, we need a tsconfig.json file. The TypeScript config
we’ll use is:

{
    "compileOnSave": false,
    "compilerOptions": {
        "baseUrl": "src",
        "emitDecoratorMetadata": true,
        "experimentalDecorators": true,
        "lib": [
            "dom",
            "es2015"
        ],
        "module": "commonjs",
        "moduleResolution": "node",
        "noImplicitAny": true,
        "sourceMap": true,
        "suppressImplicitAnyIndexErrors": true,
        "target": "es5",
        "typeRoots": [
            "node_modules/@types"
        ]
    },
    "include": [
        "src"
    ]
}

This configuration provides the essential attributes needed for writing
Angular applications with TypeScript, as
per the Angular documentation.
We’ll also add a couple of extra options:

  • compileOnSave: this is for IDE support, setting it to false tells IDEs
    with support not to compile the code into a JS
    file on save,
  • baseUrl: the base directory to resolve non-relative URLs as if they are
    relative. With this set, instead of doing:

    import { XService } from '../../services/x.service';
    

    we can just do:

    import { XService } from 'services/x.service';
    
  • include: tells TypeScript only to convert files in the list, we’re telling
    it to only compile TypeScript files under the src directory.

Adding the Source Directory

This step is pretty simple, just create a directory named src in the root
directory of your project.

Setting Up Jest

Although Jest can work out of the box, to get it to work with Angular there is
a minimal amount of setup.

Jest Configuration

First, we need open up package.json and ensure that we’ve set the test
script to jest:

"scripts": {
    "test": "jest"
}

Secondly, add a jest section to package.json:

"jest": {
    "preset": "jest-preset-angular",
    "roots": [ "<rootDir>/src/" ],
    "setupTestFrameworkScriptFile": "<rootDir>/src/setup-jest.ts"
}

Here’s what each attribute of our Jest setup does:

  • preset: specifies that we’ll be using the jest-preset-angular preset for
    our setup. To see what this configuration looks like, visit
    the jest-preset-angular documentation.
  • roots: specifies the root directory to look for test files; in our case,
    that’s the src directory. <rootDir> is a Jest token that resolves to the
    project’s root directory.
  • setupTestFrameworkScriptFile: the file to run just after the test framework
    begins running, we point it to a file named setup-jest.ts under the src
    directory.

Creating the TypeScript Configuration for Testing

Add a tsconfig.spec.json file, which is required by
jest-preset-angular, in the src directory:

{
    "extends": "../tsconfig.json"
}

Creating the Jest Setup File

Next, add a file setup-jest.ts to the src directory:

import 'jest-preset-angular';

This pulls in the globals required to run tests. We’re now set up to use Jest.

Creating .gitignore (optional)

This step is only required if you’re creating a Git repository. You don’t want
to commit all the node_modules, so create a .gitignore file and add
node_modules to it.

The First Stop

At this point we have our project set up, and to see the code as it
exists at this point, checkout tag stop-01:

git checkout stop-01

Writing Tests

Let’s add a component, and call it hello-ewe. This component will
take in a name and output Baaaa, {{ name }}! (which means “Hello” for sheep).

Inside of the src folder, add a folder named hello-ewe. Inside the
hello-ewe folder add two files: hello-ewe.component.ts and
hello-ewe.component.spec.ts.

The configuration of jest-preset-angular looks for all files that end in
.spec.ts and runs them as test files. So we’ll start by setting up our
test file, hello-ewe.component.spec.ts:

import {
    async,
    ComponentFixture,
    TestBed
} from '@angular/core/testing';

import { HelloEweComponent } from './hello-ewe.component';

describe('HelloEweComponent', () => {
    let component: HelloEweComponent;
    let fixture: ComponentFixture<HelloEweComponent>;

    beforeEach(async(() => {
        TestBed.configureTestingModule({
            declarations: [ HelloEweComponent ]
        });
        fixture = TestBed.createComponent(HelloEweComponent);
        component = fixture.componentInstance;
    }));

    test('should exist', () => {
        expect(component).toBeDefined();
    });
});

If you’ve written unit tests with Jasmine, this code will look very familiar.
Jest uses a syntax extremely similar to Jasmine. We import all the same parts
of the core testing module that we would for any Angular component test. We
also import the HelloEweComponent from the component TypeScript file.

Next, we scope our tests, just like Jasmine, using the describe function.
Again, like with Jasmine, we use the beforeEach function to set up things for
testing. We pull in the HelloEweComponent with
TestBed.configureTestingModule, get the component fixture via
TestBed.createComponent, and get the component by pulling the
componentInstance from the fixture.

We’ll create a sanity test to verify that our component has been compiled. This
is the first departure from Jasmine. In Jasmine, the it function identifies an
actual test to run; with Jest the analogous function is test. It should be
noted that in order to make the transition from Jasmine easier, Jest has test
also aliased as it.

And, finally, like with Jasmine, we use expect to start an assertion. Off of
calling expect, there are assertions that we can call. We can also use the
assertion .toBeDefined() to assure that our component has been created.

If we run npm test, first we see the test start to run:

When it finishes, we see it failed as expected because we haven’t
set up our actual component:

First test failing

The Jest outputs are very informative and easy to understand. This output
says that our test failed because there isn’t a HelloEweComponent to create.
As a result, our toBeDefined assertion fails because the value we’re
asserting against, component, isn’t defined.

Let’s make our test pass by creating our HelloEweComponent in
hello-ewe.component.ts:

import {
    Component
} from '@angular/core';

@Component({
    selector: 'hello-ewe',
    styles: [],
    template: '<p></p>'
})
export class HelloEweComponent {

}

As you can see, it’s a rather empty component that just provides a simple
template of a paragraph. If we save that and re-run our tests, we get a
passed test!

First test succeeding

Watch Mode

When doing test-driven development, or writing multiple unit tests, it’s nice
to be able to turn on a “watch” mode that re-runs our tests every time we
update a file.

With Jest this is very simple. We just call jest --watch. To make this easy
to run in our project, open up package.json and add to the scripts
section:

"test:watch": "jest --watch"

The name of the script, test:watch, can be whatever we want it to be, but
test:watch clearly states our intention.

We can now run our tests using npm run test:watch. With Jest, the --watch
flag will clear the screen before every run, making it easier to see what
passed or failed on the last run. With other test setups, this would
require some additional configuration.

Also, with Jest, there is another nuance to the --watch flag. In addition to
--watch there is a --watchAll flag. The --watchAll will re-run all tests
when any file changes, with --watch only the tests related to the changed
files are re-run.

This means if we have a service that we’ve already fully tested and we’re
working on a component, whenever our component or its test file change, the
tests for the service aren’t re-run. This significantly decreases the time to
run tests using watch mode.

More Tests

Now that we’re using watch mode, we can add tests and see them all re-run when
our code changes. To see this in action, let’s add a second test. We’ve said
that our HelloEweComponent was going to output Baaaa, {{ name }}!, but we
have nothing that makes that happen, so let’s add it to hello-ewe.component.spec.ts.

First let’s test that we have a default name:

// imports

describe('HelloEweComponent', () => {
    // setup and other test

    test('should have a default name', () => {
        expect(component.name).toBe('Matt');
    });
});

Since we’re using watch mode, our tests automatically re-run and fail because
there is no name attribute on our component, let alone one set to Matt:

Test Two Failing

The output shows only the errors of the failed tests, making it simple to
understand what failed. Something else to notice here is the part of our Jest
output that reads:

Expected value to be (using ===):
    "Matt"
Received:
    undefined

Difference:

    Comparing two different types of values. Expected string but received undefined.

    at src/hello-ewe/hello-ewe.component.spec.ts:29:32

This output is very straightforward. It tells us exactly why our test failed,
as opposed to other set ups which can provide very cryptic messages when a test
fails. We’ve said that we want component.name to be the string "Matt" but
component.name is undefined. Which is exactly the message we’re given in
the Difference section of the output!

We’ll fix this by adding a name attribute to hello-ewe.component.ts:

// imports & component declaration
export class HelloEweComponent {
    name = 'Matt';
}

And our tests succeed:

Test Two Succeeding

And, finally, we can test that our output meets the expectation we set: that
it will output Baaaa, {{ name }}!:

// imports

describe('HelloEweComponent', () => {
    // previous variable declarations
    let dom: any;

    beforeEach(async(() => {
        // module setup & variable instantiation
        dom = fixture.nativeElement;
    }));

    // previous tests

    test('should output a <p> with "Baaaa, {{ name }}!"', () => {
        fixture.detectChanges(); // renders the dom
        expect(dom.innerHTML).toBe('<p>Baaaa, Matt!</p>');
    });
});

Here we grab the DOM element using the nativeElement of the fixture we
previously created. We then call detectChanges on our fixture to render
the DOM. Finally, we test that the innerHTML of our DOM is set to a
paragraph with Baaaa, Matt! in it.

This fails, letting us know our template only has <p></p> and no
Baaaa, Matt!:

Test Three Failing

So we fix the template in hello-ewe.component.ts:

// imports

@Component({
    // selector & styles
    template: '<p>Baaaa, {{ name }}!</p>'
})
export class HelloEweComponent {
    name = 'Matt';
}

When our tests are re-run, they all pass!

Test Three Succeeding

Stop running Jest by hitting q. The code, to this point, can be seen by
checking out stop-02:

git checkout stop-02

Adding a Service

Next, we’ll add a service so we can see how the watch mode really works.
Create a folder under src called services. Add two files to that folder:
name.service.ts and name.service.spec.ts.

First, let’s set up our test file:

import { TestBed } from '@angular/core/testing';

import { NameService } from './name.service';

describe('NameService', () => {
    let service: NameService;

    beforeEach(() => {
        TestBed.configureTestingModule({
            providers: [ NameService ]
        });
        service = TestBed.get(NameService);
    });

    test('should exist', () => {
        expect(service).toBeDefined();
    });
});

Start watch mode again. Of course, it fails, because we have no such
service. Note the time it took for the tests to run.

When we add the service to name.service.ts:

import { Injectable } from '@angular/core';

@Injectable()
export class NameService {

}

Test Four Success

It passes! You’ll notice that the time it took to run the tests is
significantly lower in our case. It went from 4 seconds to ~1 second. This is
because Jest is only running tests relevant to the files we change.

Let’s add a getter to our service that outputs a private name variable in the
service. First, we add our tests:

// imports

describe('NameService', () => {
    // setup & previous test

    test('should have a name getter', () => {
        expect(service.name).toBe('Matt');
    });
});

and the accompanying code for name.service.ts:

// import & Injectable
export class NameService {
    private _name = 'Matt';

    get name() { return this._name; }
}

Adding a Setter

We’ll also add a method to set that private name variable. We can use the
name getter to verify it works.

// imports

describe('NameService', () => {
    // setup & previous tests

    test('#setName should set the private name', () => {
        service.setName('Goku');
        expect(service.name).toBe('Goku');
    });
});

The test fails with an informative message that setName is not a function. We’ll
add it to NameService:

// import & Injectable
export class NameService {
    // _name & name

    setName(name: string) {
        this._name = name;
    }
}

And, our output shows that our tests have passed:

Test Five Success

This shows that the only tests which were executed were
those related to the NameService. None of the HelloEweComponent tests
ran because those files didn’t change.

You can check out the code we’ve created as stop-03.

git checkout stop-03

No Tests for Committed Files

Again, stop Jest, using q. Now, start it back up using the watch mode. The
output may look similar to:

No Changes Found

Jest has a neat feature where, if there weren’t any changes since the last
Git commit, it won’t run any tests.

Watch Options

You’ll notice the message at the bottom of the watch output:

Watch Usage: Press w to show more

Pressing w gives us a list of options to run. By default, the watch mode is
only going to run tests related to changed files. However, these options give
us the ability to run all tests, run tests using a regex pattern for file
names or test names, to quit, or to re-run the tests.

So, not only do we have a watch mode, but one that affords greater
flexibility in how to run tests. If we only want to run a single test, we
can do that very easily.

Code Coverage

It was mentioned above that Jest provides code coverage out of the box. In
other setups we’d have to do a significant amount of configuration to get code
coverage outputs that made sense. Jest leverages the Istanbul library to execute
code coverage.

As you probably guessed, getting code coverage is as simple as running jest
with a flag. Open up package.json and add a script:

"test:cov": "jest --coverage"

Then go ahead and run npm run test:cov. Our tests will run and, at the end,
we’re presented with a nice table output of our code coverage:

Code Coverage Output

You may notice that in that table is our setup-jest.ts file. Although it isn’t
“real” code, it needs to be included for the coverage reports to properly compile.

In addition to the in-terminal table output, Jest will create
HTML code coverage reports via Istanbul, located in the generated coverage folder.

HTML Coverage Report

If you’ve created a Git project, make sure to add coverage to your
.gitignore.
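
A one-line entry is enough:

# .gitignore
coverage/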

You can check out the code with test coverage we’ve created as stop-04.

git checkout stop-04

More Jest Options

Jest provides even more than we’ve shown here. For instance, it can do
snapshot testing to verify that your UI renders as expected.
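
A minimal sketch of what a snapshot test looks like (the greeting object here
is a stand-in for whatever your component renders):

test('greeting renders consistently', () => {
    const greeting = { message: 'Hello, Ewe!' };
    // on the first run Jest stores a snapshot; later runs fail if the value changes
    expect(greeting).toMatchSnapshot();
});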

Jest is a powerful tool that gets the setup of a testing harness out of the way
and lets us write tests efficiently.

Continuous Integration with Semaphore

We’re not really done testing until we’ve incorporated it into our
continuous integration service. We’ll use Semaphore
for continuous integration.

If you haven’t done so already, push your code to a repository on either
GitHub or Bitbucket.

Once our code is committed to a repository, we can add a CI step to our Angular
2 development without much effort.

  • Go to Semaphore,
  • Sign up for an
    account,
  • Confirm your email and
    sign in,
  • Next, if you do not have any projects set up, select “Add new project”,

The "Add New Project" button

  • Next, select either GitHub or Bitbucket, depending on
    where your repository is:

Select repository host

  • Then, from the provided list, select the project repository:

Select project repository

  • Next, select the branch (most likely master),

Select a branch

  • Identify who the owner of the project should be,

Identify the owner

  • Semaphore will analyze your project to identify a configuration,

Identifying configuration

  • Set the version of Node to >= 6,

Node version six or greater

  • On the same page, ensure your build is set up to do npm install and
    npm test,

Build steps

  • Click “Build With These Settings” and the build begins.

Build Button

From now on, any time you push code to your repository, Semaphore will start
building it, making testing and deploying your code continuously
fast and simple.

Conclusion

In this article, using a toy application, we’ve seen the power of Jest at work
with Angular. We saw how Jest improves testing by cutting down on test run
time while also providing a lot out of the box with little to no configuration.

This article is brought with ❤ to you by Semaphore.

Source:: semaphoreci.com

A different way of understanding this in JavaScript

By Dr. Axel Rauschmayer

In this blog post, I take a different approach to explaining this in JavaScript: I pretend that arrow functions are the real functions and ordinary functions are a special construct for methods. I think that makes this easier to understand. Give it a try.

Source:: 2ality.com

Creating Transcripts For Videos

By Prosper Otemuyiwa

Legend has it that several years ago the inhabitants of the earth all spoke one language. They all communicated easily and there was peace in the land. Suddenly, they came together to build a tower to speak face-to-face with their maker. Their maker laughed hysterically and simply did one thing––disrupted their language. These inhabitants started speaking in many different languages, and they couldn’t understand each other. Confusion spread like fire over the face of the earth, and the plan and tower had to be abandoned. What a shame!

Wait a minute. What if there was a way for these earthly beings to record their conversations and have them translated for other people to understand? What if they had the ability to create transcripts for their videos? Perhaps the mighty edifice would have been completed successfully. In this article, I’ll show how to create transcripts for your videos using Cloudinary. Check out the repo to get the code.

What’s Cloudinary?

Cloudinary is a cloud-based, end-to-end media management solution. As a critical part of the developer stack, Cloudinary automates and streamlines your entire media asset workflow. It handles a variety of media, from images, video, and audio to emerging media types. It also automates every stage of the media management lifecycle, including media selection, upload, analysis and administration, manipulation, optimization, and delivery.

Uploading Videos

Uploading videos with Cloudinary is as simple as using the upload widget:

Cloudinary Upload Widget
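
A minimal sketch of opening the widget in the browser; this assumes the classic widget script and an unsigned upload preset, and your_cloud and your_preset below are placeholders for your own values:

<script src="https://widget.cloudinary.com/global/all.js"></script>
<script>
  // open the Cloudinary upload widget configured for video uploads
  cloudinary.openUploadWidget(
    { cloud_name: 'your_cloud', upload_preset: 'your_preset', resource_type: 'video' },
    function(error, result) {
      if (!error) { console.log('Uploaded: ', result); }
    }
  );
</script>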

Performing server-side uploads is also as simple as:

cloudinary.v2.uploader.upload("dog.mp4",
        { resource_type: "video" },
        function(error, result) { console.log(result); });

Creating a Transcript For Videos

Cloudinary has a new add-on offering support for video transcription using Google Speech. Quickly create a new account with Cloudinary if you don’t have one.

Step 1

Subscribe to the Google Speech add-on plan. Currently, transcription only supports English audio, and 1 add-on unit is equivalent to 15 seconds of video.

Step 2

Set up a Node server. Initialize a package.json file:

 npm init

Install the following modules:

 npm install express multer cloudinary cors body-parser --save

  • express: we need this module for our API routes
  • multer: needed for parsing HTTP requests with content-type multipart/form-data
  • cloudinary: Node SDK for Cloudinary
  • body-parser: needed for attaching the request body on Express’s req object
  • cors: needed for enabling CORS

Step 3

Create a server.js file in your root directory. Require the dependencies we installed:

const express = require('express');
const app = express();
const cloudinary = require('cloudinary');
const cors = require('cors');
const bodyParser = require('body-parser');
const multer = require('multer');
// store uploaded files in a local video/ directory before sending them to Cloudinary
const multerMiddleware = multer({ dest: 'video/' });

// increase upload limit to 50mb
app.use(bodyParser.json({limit: "50mb"}));
app.use(bodyParser.urlencoded({limit: "50mb", extended: true, parameterLimit:50000}));
app.use(cors());

// replace the placeholders with your own Cloudinary credentials
cloudinary.config({
    cloud_name: 'xxxxxxxxxxx',
    api_key: 'xxxxxxxxxxx',
    api_secret: 'xxxxxxxxxxxxx'
});

app.post('/upload', multerMiddleware.single('video'), function(req, res) {
  console.log("Request", req.file.path);
  // Upload to Cloudinary
  cloudinary.v2.uploader.upload(req.file.path,
    {
      // ask Cloudinary to pass the video to Google Speech for transcription
      raw_convert: "google_speech",
      resource_type: "video",
      // webhook that Cloudinary notifies about upload and transcription status
      notification_url: 'https://requestb.in/wh7fktwh'
    },
    function(error, result) {
      if(error) {
        console.log("Error ", error);
        res.json({ error: error });
      } else {
        console.log("Result ", result);
        res.json({ result: result });
      }
    });
});

app.listen(3333);
console.log('Listening on localhost:3333');

Make sure your server is running (this assumes you have nodemon installed; plain node server.js works as well):

nodemon server.js

Once a user makes a POST request to the /upload route, the route grabs the video file from the HTTP request and uploads it to Cloudinary, which in turn passes it to Google Speech to extract the text from the audio in the video recording.
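
You can exercise the endpoint by POSTing a video file under the video field name that multer expects (dog.mp4 here is a placeholder):

curl -F "video=@dog.mp4" http://localhost:3333/upload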

The notification_url is an HTTP URL to notify your application (a webhook) when the file has completed uploading. In this code demo, I set up a webhook quickly with the ever-efficient requestbin.
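
In a real application, you would point notification_url at an endpoint you control instead. A minimal sketch of such a receiver, added to the same server.js (the /cloudinary-webhook path is hypothetical):

// a hypothetical webhook receiver for Cloudinary notifications
app.post('/cloudinary-webhook', function(req, res) {
  // Cloudinary POSTs JSON describing the upload/transcription status
  console.log('Notification: ', req.body);
  res.sendStatus(200);
});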

As soon as the video is done uploading, a notification is sent to the webhook.

Inspecting the response sent from Cloudinary to the Webhook

In the image above, you can see that the response states the transcript creation status is pending. Check out the extract below:

"info": {
  "raw_convert": {
      "google_speech": {
          "status": "pending"
      }
  }
},

Extract from the full response

Another notification is sent to the webhook once the transcript has been fully extracted from the video recording.

Inspecting the response sent from Cloudinary to the Webhook

Check out the full response below:

{
  "info_kind": "google_speech",
  "info_status": "complete",
  "public_id": "tb5lrftmeurqfmhqvf6h",
  "uploaded_at": "2017-11-23T15:06:55Z",
  "version": 1511449614,
  "url": "http://res.cloudinary.com/unicodeveloper/video/upload/v1511449614/tb5lrftmeurqfmhqvf6h.mov",
  "secure_url": "https://res.cloudinary.com/unicodeveloper/video/upload/v1511449614/tb5lrftmeurqfmhqvf6h.mov",
  "etag": "47d8aad801c4d7464ddf601f71ebddc7",
  "notification_type": "info"
}

Now the transcript has been created. Next, we’ll attach it to the video URL with the l_subtitles transformation parameter.

Step 4

The transcript was created in step 3. It’s important to know that the transcript value is {public_id}.transcript. This is what I mean: the public_id of the video I uploaded is tb5lrftmeurqfmhqvf6h, so the transcript will be tb5lrftmeurqfmhqvf6h.transcript.

All you need to do now is add it as an overlay to the video URL with the l_subtitles parameter like so:

l_subtitles:tb5lrftmeurqfmhqvf6h.transcript

Finally, compare the video URLs with and without the transcript enabled:

Video without transcript

http://res.cloudinary.com/unicodeveloper/video/upload/v1511449614/tb5lrftmeurqfmhqvf6h.mp4

Video with transcript enabled

https://res.cloudinary.com/unicodeveloper/video/upload/l_subtitles:tb5lrftmeurqfmhqvf6h.transcript/v1511449614/tb5lrftmeurqfmhqvf6h.mp4

Note: Cloudinary now supports converting video audio from stereo to mono using the fl_mono transformation. Check this out: http://res.cloudinary.com/neta/video/upload/fl_mono/stereo-sound-test.mp3. Here, we used the Cloudinary Node.js library; the raw_convert parameter also works with the other SDKs (PHP, Ruby, Java, etc.).
Styling Subtitles

Subtitles are displayed using the default white color and centered alignment. You can use transformation parameters to adjust the color, font, font size, and position. For example, let’s change the color of our example subtitle to green. I’ll also change its alignment.

Transformation: change the color to green (co_green) and the position to the bottom left (g_south_west).

https://res.cloudinary.com/unicodeveloper/video/upload/l_subtitles:tb5lrftmeurqfmhqvf6h.transcript,co_green,g_south_west/v1511449614/tb5lrftmeurqfmhqvf6h.mp4
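
If you would rather generate these URLs in code than by hand, the Node SDK’s url helper can build them. A sketch, assuming the subtitles overlay object syntax below matches your SDK version:

// a sketch: building the subtitled, styled video URL with the Node SDK
const url = cloudinary.url('tb5lrftmeurqfmhqvf6h', {
  resource_type: 'video',
  format: 'mp4',
  transformation: [{
    overlay: { resource_type: 'subtitles', public_id: 'tb5lrftmeurqfmhqvf6h.transcript' },
    color: 'green',
    gravity: 'south_west'
  }]
});
console.log(url);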

Conclusion

The inhabitants of the earth now have a solution to their initial problem. You are the godsent programmer, problem solver and solution architect. Go, create transcripts for your videos. Programmatically creating video transcripts and applying them to the videos has never been this easy. Check out Cloudinary’s video solution for more insights on automating your video management workflow.

Worth noting: 1 add-on unit is equivalent to 15 seconds of video. If you need more, contact the Cloudinary support team.

Source:: scotch.io