Monthly Archives: November 2016

Build A Media Library With React, Redux, and Redux-saga – Part 2

By rowland


In the first part of this tutorial, we got a running app: we covered the basic React setup and project workflow, defined basic components, and configured our application's routes.

In part 2 of this tutorial, arguably the most interesting part of building a React/Redux application, we will set up application state management with Redux, connect our React components to the store, and then deploy to Heroku. We will walk through this part in eight steps:

  1. Define Endpoints of interest.
  2. Create a container component.
  3. Define action creators.
  4. Setup state management system.
  5. Define async task handlers.
  6. Connect our React component to the Redux store.
  7. Create presentational components.
  8. Deploy to Heroku.

Step 1 of 8: Define Endpoints of interest

We are interested in the media search endpoints of the Flickr and Shutterstock APIs.

Api/api.js

const FLICKR_API_KEY = 'a46a979f39c49975dbdd23b378e6d3d5';
const SHUTTER_CLIENT_ID = '3434a56d8702085b9226';
const SHUTTER_CLIENT_SECRET = '7698001661a2b347c2017dfd50aebb2519eda578';

// Basic Authentication for accessing Shutterstock API
const basicAuth = () => 'Basic '.concat(window.btoa(`${SHUTTER_CLIENT_ID}:${SHUTTER_CLIENT_SECRET}`));
const authParameters = {
  headers: {
    Authorization: basicAuth()
  }
};

/**
* Description [Access Shutterstock search endpoint for short videos]
* @params { String } searchQuery
* @return { Array } 
*/
export const shutterStockVideos = (searchQuery) => {
  const SHUTTERSTOCK_API_ENDPOINT = `https://api.shutterstock.com/v2/videos/search?query=${searchQuery}&page=1&per_page=10`;

  return fetch(SHUTTERSTOCK_API_ENDPOINT, authParameters)
  .then(response => {
    return response.json();
  })
  .then(json => {
      return json.data.map(({ id, assets, description }) => ({
        id,
        mediaUrl: assets.preview_mp4.url,
        description
      }));
  });
};

/**
* Description [Access Flickr search endpoint for photos]
* @params { String } searchQuery
* @return { Array } 
*/
export const flickrImages = (searchQuery) => {
  const FLICKR_API_ENDPOINT = `https://api.flickr.com/services/rest/?method=flickr.photos.search&text=${searchQuery}&api_key=${FLICKR_API_KEY}&format=json&nojsoncallback=1&per_page=10`;

  return fetch(FLICKR_API_ENDPOINT)
    .then(response => {
      return response.json()
    })
    .then(json => {
      return json.photos.photo.map(({ farm, server, id, secret, title }) => ({
        id,
        title,
        mediaUrl: `https://farm${farm}.staticflickr.com/${server}/${id}_${secret}.jpg`
      }));
    });
};

First, head to Flickr and Shutterstock to get your credentials or use mine.

We’re using fetch method from fetch API for our AJAX request. It returns a promise that resolves to the response of such request. We simply format the response of our call using ES6 destructuring assignment before returning to the store.

We could use jQuery for this task, but it's a large library with many features, so pulling it in just for AJAX doesn't make sense.

Step 2 of 8: Create a container component

In order to test our application as we walk through the steps, let's define a MediaGalleryPage component, which we will update later for a real-time sync with our store.

container/MediaGalleryPage.js

import React, { Component } from 'react';
import { flickrImages, shutterStockVideos } from '../Api/api';

// MediaGalleryPage Component
class MediaGalleryPage extends Component {

 // We want to get images and videos from the API right after our component renders.
 componentDidMount() {
    flickrImages('rain').then(images => console.log(images, 'Images'));
    shutterStockVideos('rain').then(videos => console.log(videos,'Videos'));
  }

  render() {
  // TODO: Render videos and images here
  return (<div></div>)
  }
}

export default MediaGalleryPage;

We can now add a library route and map it to the MediaGalleryPage container.

Let’s update out routes.js for this feature.

import React from 'react';
import { Route, IndexRoute } from 'react-router';
import App from './containers/App';
import HomePage from './components/HomePage';
import MediaGalleryPage from './containers/MediaGalleryPage';

// Map components to different routes.
// The parent component wraps other components and thus serves as 
// the entrance to other React components.
// IndexRoute maps HomePage component to the default route
export default (
  <Route path="/" component={App}>
    <IndexRoute component={HomePage} />
    <Route path="library" component={MediaGalleryPage} />
  </Route>
);

Let’s check it out on the browser console.

Images and Videos from the API

We are now certain that we can access our endpoints of interest to fetch images and short videos. We can render the results to the view but we want to separate our React components from our state management system. Some major advantages of this approach are maintainability, readability, predictability, and testability.

We will be wrapping our heads around some vital concepts in a couple of steps.

Step 3 of 8: Define action creators

Action creators are functions that return a plain JavaScript object containing an action type and an optional payload. So action creators create actions that are dispatched to the store. They are just pure functions.

Let’s first define our action types in a file and export them for ease of use in other files. They’re constants and it’s a good practice to define them in a separate file(s).

constants/actionTypes.js

// It's preferable to keep your action types together.
export const SELECTED_IMAGE = 'SELECTED_IMAGE';
export const FLICKR_IMAGES_SUCCESS = 'FLICKR_IMAGES_SUCCESS';
export const SELECTED_VIDEO = 'SELECTED_VIDEO';
export const SHUTTER_VIDEOS_SUCCESS = 'SHUTTER_VIDEOS_SUCCESS';
export const SEARCH_MEDIA_REQUEST = 'SEARCH_MEDIA_REQUEST';
export const SEARCH_MEDIA_SUCCESS = 'SEARCH_MEDIA_SUCCESS';
export const SEARCH_MEDIA_ERROR = 'SEARCH_MEDIA_ERROR';

Now, we can use the action types to define our action creators for different actions we need.

actions/mediaActions.js

import * as types from '../constants/actionTypes';

// Returns an action type, SELECTED_IMAGE and the image selected
export const selectImageAction = (image) => ({
  type: types.SELECTED_IMAGE,
  image
});

// Returns an action type, SELECTED_VIDEO and the video selected
export const selectVideoAction = (video) => ({
  type: types.SELECTED_VIDEO,
  video
});

// Returns an action type, SEARCH_MEDIA_REQUEST and the search criteria
export const searchMediaAction = (payload) => ({
  type: types.SEARCH_MEDIA_REQUEST,
  payload
});

The optional arguments in the action creators (payload, image, and video) are passed at the dispatch site. Say a user selects a video clip in our app: selectVideoAction is dispatched, which returns the SELECTED_VIDEO action type and the selected video as payload. Similarly, when searchMediaAction is dispatched, the SEARCH_MEDIA_REQUEST action type and payload are returned.
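For instance, once we have a store (we create one in the next step), dispatching a selection looks like this. The video object here is a made-up example shaped like the formatted API results from Step 1:

import { selectVideoAction } from './actions/mediaActions';

// A hypothetical video, shaped like the formatted Shutterstock results
const video = { id: '12345', mediaUrl: 'https://example.com/clip.mp4', description: 'Rain' };

// Dispatching sends the plain action object to the store
store.dispatch(selectVideoAction(video));
// -> { type: 'SELECTED_VIDEO', video: { id: '12345', ... } }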

Step 4 of 8: Setup state management system

We have defined the action creators we need and it’s time to connect them together. We will setup our reducers and configure our store in this step.

There are some wonderful concepts here as shown in the diagram in Part 1.

Let’s delve into some definitions and implementations.

The store holds the whole state tree of our application, but more importantly, it does nothing to it. When an action is dispatched from a React component, the store delegates to the reducer, passing it the current state tree along with the action object. It only updates its state after the reducer returns a new state.

Reducers, in short, are pure functions that accept the state tree and an action object from the store and return a new state. No state mutation. No API calls. No side effects. A reducer simply calculates the new state and returns it to the store.

Let’s wire up our reducers by first setting our initial state. We want to initialize images and videos as an empty array in our own case.

reducers/initialState.js

export default {
  images: [],
  videos: []
};

Our reducers take the current state tree and an action object and then evaluate and return the outcome.

Let’s check it out.

reducers/imageReducer.js

import initialState from './initialState';
import * as types from '../constants/actionTypes';

// Handles image related actions
export default function (state = initialState.images, action) {
  switch (action.type) {
    case types.FLICKR_IMAGES_SUCCESS:
      return [...state, action.images];
    case types.SELECTED_IMAGE:
      return { ...state, selectedImage: action.image };
    default:
      return state;
  }
}

reducers/videoReducer.js

import initialState from './initialState';
import * as types from '../constants/actionTypes';

// Handles video related actions
// The idea is to return an updated copy of the state depending on the action type.
export default function (state = initialState.videos, action) {
  switch (action.type) {
    case types.SHUTTER_VIDEOS_SUCCESS:
      return [...state, action.videos];
    case types.SELECTED_VIDEO:
      return { ...state, selectedVideo: action.video };
    default:
      return state;
  }
}

The two reducers look alike and that’s how simple reducers can be. We use a switch statement to evaluate an action type and then return a new state.

create-react-app comes preinstalled with babel-plugin-transform-object-rest-spread that lets you use the spread (…) operator to copy enumerable properties from one object to another in a succinct way.

For context, { ...state, selectedVideo: action.video } evaluates to Object.assign({}, state, { selectedVideo: action.video }).

Since reducers don’t mutate state, you would always find yourself using spread operator, to make and update the new copy of the current state tree.

So, when the reducer receives the SELECTED_VIDEO action type, it returns a new copy of the state tree by spreading it (...state) and updating the selectedVideo property.
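As a quick, standalone sketch of that behaviour (plain JavaScript, not tied to our app):

const state = { videos: [], selectedVideo: null };
const action = { type: 'SELECTED_VIDEO', video: { id: '1' } };

// Spread copies the enumerable properties, then overrides selectedVideo
const next = { ...state, selectedVideo: action.video };

console.log(next === state); // false -- a brand new object, no mutation
console.log(next.videos === state.videos); // true -- the copy is shallow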

The next step is to register our reducers to a root reducer before passing to the store.

reducers/index.js

import { combineReducers } from 'redux';
import images from './imageReducer';
import videos from './videoReducer';

// Combines all reducers to a single reducer function
const rootReducer = combineReducers({
  images, 
  videos
});

export default rootReducer;

We import combineReducers from Redux. combineReducers is a helper function that combines our images and videos reducers into a single reducer function that we can now pass to the createStore function.

You might be wondering why we're not passing key/value pairs to the combineReducers function. ES6 shorthand property names let us pass just the identifier when the key and the value are the same.
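In other words, the shorthand above is equivalent to this more verbose version:

// Each key is the slice name in the store; each value is a reducer function
const rootReducer = combineReducers({
  images: images,
  videos: videos
});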

Now, we can complete our state management system by creating the store for our app.

store/configureStore.js

import { createStore, applyMiddleware } from 'redux';
import createSagaMiddleware from 'redux-saga';
import rootReducer from '../reducers';
import rootSaga from '../sagas'; // TODO: Next step

// Returns the store instance.
// It can also take an initialState argument when provided.
const configureStore = () => {
  const sagaMiddleware = createSagaMiddleware(); 
  return {
    ...createStore(rootReducer,
      applyMiddleware(sagaMiddleware)),
    runSaga: sagaMiddleware.run(rootSaga)
  };
};

export default configureStore;
  • Initialize the saga middleware. We'll discuss sagas in the next step.
  • Pass rootReducer and sagaMiddleware to the createStore function to create our redux store.
  • Finally, we run our sagas. You can either spread them or wire them up to a rootSaga.

What are sagas and why use a middleware?

Step 5 of 8: Define async task handlers

Handling AJAX is a very important aspect of building web applications, and React/Redux applications are no exception. We will look at the libraries to leverage for such tasks and how they neatly fit into the whole idea of having a state management system.

You'll remember that reducers are pure functions and don't handle side effects or async tasks; this is where redux-saga comes in handy.

redux-saga is a library that aims to make side effects (i.e. asynchronous things like data fetching and impure things like accessing the browser cache) in React/Redux applications easier and better — documentation.

So, to use redux-saga, we also need to define our own sagas that will handle the necessary async tasks.

What the heck are sagas?

Sagas are simply generator functions that abstract the complexities of an asynchronous workflow. They're a terse way of handling async processes, and they're easy to write, test, and reason about. If you're still confused, you might want to revisit the first part of this tutorial.
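If generators themselves are the confusing part, here is a minimal, framework-free sketch of how they pause and resume (plain JavaScript, unrelated to our app's code):

function* counter() {
  const a = yield 1; // pauses here until next() is called again
  yield a + 1;
}

const it = counter();
console.log(it.next());   // { value: 1, done: false }
console.log(it.next(10)); // { value: 11, done: false } -- 10 is assigned to `a`
console.log(it.next());   // { value: undefined, done: true }

redux-saga drives our sagas in a similar way, feeding the result of each yielded effect back into the generator.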

Let’s first define our watcher saga and work down smoothly.

sagas/watcher.js

import { takeLatest } from 'redux-saga';
import { searchMediaSaga } from './mediaSaga';
import * as types from '../constants/actionTypes';

// Watches for SEARCH_MEDIA_REQUEST action type asynchronously
export default function* watchSearchMedia() {
  yield* takeLatest(types.SEARCH_MEDIA_REQUEST, searchMediaSaga);
}

We want a mechanism that ensures any action dispatched to the store that requires an API call is intercepted by the middleware, and the result of the request is yielded to the reducer.

To achieve this, Redux-saga API exposes some methods. We need only four of those for our app: call, put, fork and takeLatest.

  • takeLatest is a high-level method that merges the take and fork effect creators. It takes an action type and runs the function passed to it in a non-blocking manner with the result of the action creator. As the name suggests, takeLatest returns the result of the last call (see the sketch after this list).
  • watchSearchMedia watches for the SEARCH_MEDIA_REQUEST action type and calls the searchMediaSaga function (saga) with the action's payload from the action creator.
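To get a feel for what takeLatest abstracts away, here is a rough approximation of our watcher using the lower-level take and fork effects. This sketch ignores the cancellation of stale tasks that takeLatest also performs for us:

import { take, fork } from 'redux-saga/effects';
import { searchMediaSaga } from './mediaSaga';
import * as types from '../constants/actionTypes';

function* watchSearchMediaVerbose() {
  while (true) {
    // Pause until the next SEARCH_MEDIA_REQUEST action arrives
    const action = yield take(types.SEARCH_MEDIA_REQUEST);
    // Run the handler saga without blocking the watcher loop
    yield fork(searchMediaSaga, action);
  }
}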

Now we can define searchMediaSaga; it serves as a middleman for calling our API. Getting interesting, right?

sagas/mediaSaga.js

import { put, call } from 'redux-saga/effects';
import { flickrImages, shutterStockVideos } from '../Api/api';
import * as types from '../constants/actionTypes';

// Responsible for searching the media library, making calls to the API
// and instructing the redux-saga middleware on the next line of action,
// for a success or failure operation.
export function* searchMediaSaga({ payload }) {
  try {
    const videos = yield call(shutterStockVideos, payload);
    const images = yield call(flickrImages, payload);
    yield [
      put({ type: types.SHUTTER_VIDEOS_SUCCESS, videos }),
      put({ type: types.SELECTED_VIDEO, video: videos[0] }),
      put({ type: types.FLICKR_IMAGES_SUCCESS, images }),
      put({ type: types.SELECTED_IMAGE, image: images[0] })
    ];
  } catch (error) {
    yield put({ type: types.SEARCH_MEDIA_ERROR, error });
  }
}

searchMediaSaga is not entirely different from a normal function except for the way it handles async tasks.

call is a redux-saga effect that instructs the middleware to run a specified function with an optional payload.

Let’s do a quick review of some happenings up there.

  • searchMediaSaga is called by the watcher saga we defined earlier, each time SEARCH_MEDIA_REQUEST is dispatched to the store.
  • It serves as an intermediary between the API and the reducers.

  • So, when the saga (searchMediaSaga) is called, it makes a call to the API with the payload. Then the result of the promise (resolved or rejected) and an action object are yielded to the reducer using the put effect creator. put instructs the redux-saga middleware on what action to dispatch.
  • Notice that we're yielding an array of effects. This is because we want them to run concurrently; the default behaviour would be to pause after each yield statement, which is not what we intend here.
  • Finally, if any of the operations fail, we yield a failure action object to the reducer.

Let’s wrap up this section by registering our saga to the rootSaga.

sagas/index.js

import { fork } from 'redux-saga/effects';
import watchSearchMedia from './watcher';

// Here, we register our watcher saga(s) and export as a single generator 
// function (startForeman) as our root Saga.
export default function* startForeman() {
  yield fork(watchSearchMedia);
}

fork is an effect creator that instructs the middleware to run a non-blocking call on the watchSearchMedia saga.

Here, we can bundle our watcher sagas as an array and yield them at once if we have more than one.
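For example, if we later added a hypothetical second watcher, say watchUploadMedia, the root saga could fork both at once:

// watchUploadMedia is a made-up watcher used only for illustration
export default function* startForeman() {
  yield [
    fork(watchSearchMedia),
    fork(watchUploadMedia)
  ];
}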

Hopefully by now you are getting comfortable with the workflow. So far, we've exported startForeman as our rootSaga.

How does our React component know what is happening in the state management system?

Step 6 of 8: Connect our React component to the Redux store

I’m super excited that you’re still engaging and we’re about testing our app.

First, let’s update our index.js  app’s entry file.

import ReactDOM from 'react-dom';
import React from 'react';
import { Router, browserHistory } from 'react-router';
import { Provider } from 'react-redux';  
import configureStore from './store/configureStore';
import routes from './routes';

// Initialize store
const store = configureStore();

ReactDOM.render(
  <Provider store={store}>
    <Router history={browserHistory} routes={routes} />
  </Provider>, document.getElementById('root')
);

Let’s review what’s going on.

  • Initialize our store.
  • The Provider component from react-redux makes the store available to the component hierarchy, so we have to pass the store to it as a prop. That way, components lower in the hierarchy can access the store's state via a connect call.

Now, let’s update our MediaGalleryPage component to access the store.

container/MediaGalleryPage.js

import React, { Component } from 'react';
import { connect } from 'react-redux';
import { searchMediaAction } from '../actions/mediaActions';

// MediaGalleryPage Component
class MediaGalleryPage extends Component {

  // Dispatches *searchMediaAction* immediately after initial rendering.
  // Note that we are using the dispatch method from the store to execute
  // this task, courtesy of react-redux.
  componentDidMount() {
    this.props.dispatch(searchMediaAction('rain'));
  }

  render() {
    console.log(this.props.images, 'Images');
    console.log(this.props.videos, 'Videos');
    console.log(this.props.selectedImage, 'SelectedImage');
    console.log(this.props.selectedVideo, 'SelectedVideo');
    return (<div> </div>)
  }
}

// Define PropTypes
MediaGalleryPage.propTypes = {
// Define your PropTypes here
};

 // Subscribe component to redux store and merge the state into 
 // component's props
const mapStateToProps = ({ images, videos }) => ({
  images: images[0],
  selectedImage: images.selectedImage,
  videos: videos[0],
  selectedVideo: videos.selectedVideo
});

// connect method from react-redux connects the component with the redux store
export default connect(mapStateToProps)(MediaGalleryPage);

MediaGalleryPage component serves two major purposes:

a) Sync React Components with the Redux store.

b) Pass props to our presentational components, PhotoPage and VideoPage. We will create these later in the tutorial to render our content to the page.

Let’s summarize what’s going on.

react-redux exposes two important pieces (a function and a component) that we use to bind our Redux store to our components: connect and Provider.

connect takes a few optional arguments; for any that is not supplied, a default implementation is used. It returns a function that takes our React component (MediaGalleryPage) as an argument.

mapStateToProps lets us stay in sync with the store's updates and format state values before passing them as props to the React component. We use ES6 destructuring assignment to extract images and videos from the store's state.

Now everything is good and we can test our application.

$ npm start

You can grab a cup of coffee and be proud of yourself.

Wouldn’t it be nice if we can render our result on the webpage as supposed to viewing them on the browser console?

Step 7 of 8: Create presentational components

What are presentational components?

They are components concerned with presentation, i.e. how things look. Earlier in this tutorial, we created a container component, MediaGalleryPage, which is concerned with passing data to these presentational components. This design decision has helped large applications scale efficiently; however, it's at your discretion to choose what works for your application.

Our task is now easier. Let’s create the two React components to handle images and the videos.

components/PhotoPage.js

import React, { PropTypes } from 'react';

// First, we extract images, onHandleSelectImage, and selectedImage from 
// props using ES6 destructuring assignment and then render.
const PhotosPage = ({ images, onHandleSelectImage, selectedImage }) => (
  <div className="col-md-6">
    <h2> Images </h2>
    <div className="selected-image">
      <div id={selectedImage.id}>
        <h6>{selectedImage.title}</h6>
        <img src={selectedImage.mediaUrl} alt={selectedImage.title} />
      </div>
    </div>
    <div className="image-thumbnail">
      {images.map((image, i) => (
        <div key={i} onClick={onHandleSelectImage.bind(this, image)}>
          <img src={image.mediaUrl} alt={image.title} />
        </div>
      ))}
    </div>
  </div>
);

// Define PropTypes
PhotosPage.propTypes = {
  images: PropTypes.array.isRequired,
  selectedImage: PropTypes.object,
  onHandleSelectImage: PropTypes.func.isRequired
};

export default PhotosPage;

components/VideoPage.js

import React, { PropTypes } from 'react';

// First, we extract videos, onHandleSelectVideo, and selectedVideo 
// from props using destructuring assignment and then render.
const VideosPage = ({ videos, onHandleSelectVideo, selectedVideo }) => (
  <div className="col-md-6">
    <h2> Videos </h2>
    <div className="select-video">
      <div id={selectedVideo.id}>
        <h6 className="title">{selectedVideo.description}</h6>
        <video controls src={selectedVideo.mediaUrl} alt={selectedVideo.title} />
      </div>
    </div>
    <div className="video-thumbnail">
      {videos.map((video, i) => (
        <div key={i} onClick={onHandleSelectVideo.bind(this, video)}>
          <video controls src={video.mediaUrl} alt={video.description} />
        </div>
      ))}
    </div>
  </div>
);

// Define PropTypes
VideosPage.propTypes = {
  videos: PropTypes.array.isRequired,
  selectedVideo: PropTypes.object.isRequired,
  onHandleSelectVideo: PropTypes.func.isRequired
};

export default VideosPage;

The two React components are basically the same except that one handles images and the other is responsible for rendering our videos.

Let’s now update our MediaGalleryPage to pass props to this components.

container/MediaGalleryPage.js

import React, { PropTypes, Component } from 'react';
import { connect } from 'react-redux';
import {
  selectImageAction, searchMediaAction,
  selectVideoAction } from '../actions/mediaActions';
import PhotoPage from '../components/PhotoPage';
import VideoPage from '../components/VideoPage';
import '../styles/style.css';

// MediaGalleryPage Component
class MediaGalleryPage extends Component {
  constructor() {
    super();
    this.handleSearch = this.handleSearch.bind(this);
    this.handleSelectImage = this.handleSelectImage.bind(this);
    this.handleSelectVideo = this.handleSelectVideo.bind(this);
  }

  // Dispatches *searchMediaAction* immediately after initial rendering
  componentDidMount() {
    this.props.dispatch(searchMediaAction('rain'));
  }

  // Dispatches *selectImageAction* when any image is clicked
  handleSelectImage(selectedImage) {
    this.props.dispatch(selectImageAction(selectedImage));
  }

  // Dispatches *selectVideoAction* when any video is clicked
  handleSelectVideo(selectedVideo) {
    this.props.dispatch(selectVideoAction(selectedVideo));
  }

  // Dispatches *searchMediaAction* with query param.
  // We ensure action is dispatched to the store only if query param is provided.
  handleSearch(event) {
    event.preventDefault();
    if (this.query !== null) {
      this.props.dispatch(searchMediaAction(this.query.value));
      this.query.value = '';
    }
  }

  render() {
    const { images, selectedImage, videos, selectedVideo } = this.props;
    return (
      <div className="container-fluid">
        {images ? <div>
          <input
            type="text"
            ref={ref => (this.query = ref)}
          />
          <input
            type="submit"
            className="btn btn-primary"
            value="Search Library"
            onClick={this.handleSearch}
          />
          <div className="row">
            <PhotoPage
              images={images}
              selectedImage={selectedImage}
              onHandleSelectImage={this.handleSelectImage}
            />
            <VideoPage
              videos={videos}
              selectedVideo={selectedVideo}
              onHandleSelectVideo={this.handleSelectVideo}
            />
          </div>
        </div> : 'loading ....'}
      </div>
    );
  }
}

// Define PropTypes
MediaGalleryPage.propTypes = {
  images: PropTypes.array,
  selectedImage: PropTypes.object,
  videos: PropTypes.array,
  selectedVideo: PropTypes.object,
  dispatch: PropTypes.func.isRequired
};

 // Subscribe component to redux store and merge the state into component's props
const mapStateToProps = ({ images, videos }) => ({
  images: images[0],
  selectedImage: images.selectedImage,
  videos: videos[0],
  selectedVideo: videos.selectedVideo
});

// connect method from react-redux connects the component with the redux store
export default connect(mapStateToProps)(MediaGalleryPage);

This is pretty much what our final MediaGalleryPage component looks like. We can now render our images and videos to the page.

Let’s recap on the latest update on MediaGalleryPage component.

The render method of our component is very interesting. Here we're passing down props from the store, along with the component's custom functions (handleSearch, handleSelectVideo, handleSelectImage), to the presentational components, PhotoPage and VideoPage.

This way, our presentational components are not aware of the store. They simply take their behaviour from MediaGalleryPage component and render accordingly.

Each of the custom functions dispatches an action to the store when called.
We use a ref to save a reference to the search input, which we read each time a user searches the library.

The componentDidMount lifecycle method is the place for dynamic behaviour, side effects, AJAX calls, and so on. We want to render the search results for 'rain' as soon as a user navigates to the library route.

One more thing: we bound all the custom functions to the component in its constructor with the .bind() method. It's plain JavaScript: bind() creates a new function from an existing one. Its first argument is the context (this, in our case) to which you want to bind the function; any further arguments are passed along to the bound function.
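Here is a tiny, standalone illustration of bind() (plain JavaScript, unrelated to our components):

function greet(greeting) {
  return `${greeting}, ${this.name}`;
}

const user = { name: 'Ada' };

// Lock `this` to `user` and preset the first argument
const greetUser = greet.bind(user, 'Hello');
console.log(greetUser()); // "Hello, Ada"

Without the .bind() calls in the constructor, `this` inside handleSearch and friends would not point at the component when React invokes them as event handlers.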

We’re done building our app. Surprised?

Let’s test it out…

$ npm start

Step 8 of 8: Deploy to Heroku

Now that our app works locally, let’s deploy to a remote server for our friends to see and give us feedback. Heroku’s free plan will suffice.

We want to use create-react-app’s build script to bundle our app for production.

$ npm run build

A build/ folder now sits in our project directory; it holds the minified static files we will deploy.

The next step is to add another script command to our package.json to serve the build files with a static file server. We will use pushstate-server (a static file server) for that.

"deploy": "npm install -g pushstate-server && pushstate-server build"

Let’s create Procfile in our project’s root directory and add: web: npm run deploy

Procfile instructs Heroku on how to run your application.

Now, let’s create our app on Heroku. You need to register first on their platform. Then, download and install Heroku toolbelt if it’s not installed on your system. This will allow us to deploy our app from our terminal.

$ heroku login # Enter your credentials as it prompts you
$ heroku create <app name> # If you don't specify a name, Heroku creates one for you.
$ git add --all && git commit -m "Add comments"  # Add and commit your changes.
$ git push heroku master # Deploy to Heroku
$ heroku ps:scale web=1 # Initialize one instance of your app
$ heroku open # Open your app in your default browser

We did it. Congrats. You’ve built and deployed a React/Redux application elegantly. It can be that simple.

Now some recommended tasks.

  1. Add tests for your application.
  2. Add error handling functionality.
  3. Add a loading spinner for API calls for good user experience.
  4. Add sharing functionality to allow users to share on their social media walls.
  5. Allow users to like an image.

Conclusion

It’s apparent that some things were left out, some intentionally. However, you can improve the app for your learning.
The key takeaway is to separate your application state from your React components. Use redux-saga to handle any AJAX requests and don’t put AJAX in your React components. Good luck!!!

Source:: scotch.io

Introduction To Chatbots

By Danny Markov


Chatbots have been a trending topic for quite some time now and have got a lot of people excited about them. Some believe that bots are the next big thing and will soon replace apps, while others think they are just a fad, bound to fail.

In this article we will restrain ourselves from giving opinion on the future of chatbots. Instead, we will try to shine some light on how they work, what they can be used for, and how to get one up and running.

What Is A Chatbot?

A chatbot is a service or tool that you can communicate with via text messages. The chatbot understands what you are trying to say and replies with a coherent, relevant message or directly completes the desired task for you.

You chat with a bot the same way you chat with people.

If you remember CleverBot, you know that the concept of chatbots isn’t that new. What makes them so relevant right now is:

  1. The huge amount of time people spend texting in their messaging apps (Facebook Messenger, Slack, etc.), making messengers a fast-growing market where businesses can engage potential customers.
  2. The advancements in artificial intelligence, machine learning and natural language processing, allowing bots to converse more and more like real people.

Modern chatbots do not rely solely on text, and will often show useful cards, images, links, and forms, providing an app-like experience.

Facebook bots have nice non-text UI (Skyscanner)

This allows them to be used for many different purposes such as shopping, customer service, news, games, and anything else you can think of. A good chatbot doesn't have to perform a huge variety of tasks; a bot that shows you the latest news doesn't need to be able to order Chinese food. It does one thing and does it well.

How Do Chatbots Work?

Most people won’t build their chatbots from scratch as there are plenty frameworks and services that can help. However, in order to grasp how bots work, we have to go over some developer talk.

Backend: Chatbots can be built in basically any programming language that allows you to make a web API. For most people this will be either Node.js or PHP, but there are many bot libraries written for Java and Python as well. The backend receives messages, thinks of a response, and returns it to the user.

Frontend: This can be one of the popular messaging apps (Facebook Messenger, Slack, Telegram), or a simple chat interface like the Realtime Chat With Node.js we built some time ago. You are not limited to a single platform; the same bot can be implemented in more than one place.

Connecting the two: Your web server will then have to set up webhooks, URL-based connections between your bot and the chat platform. Webhooks allow you to securely send and receive messages via simple HTTP requests. All of the mainstream messengers provide detailed guides on how to connect your bot to them.
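As a rough sketch of the receiving end (Node.js with Express; the /webhook route and reply logic are made up for illustration, and every platform defines its own message format and verification steps):

const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

// The messaging platform POSTs incoming messages to this URL
app.post('/webhook', (req, res) => {
  const incoming = req.body; // platform-specific payload
  console.log('received:', incoming);
  // ...decide on a reply and send it back through the platform's API...
  res.sendStatus(200); // acknowledge receipt so the platform doesn't retry
});

app.listen(3000);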

Oversimplified model of a chatbot

The bot is now connected and can listen and respond to users. The only thing left to do is give it a brain.

Dumb Bots and Smart Bots

Depending on the way bots are programmed, we can separate them into two major categories: command-based (dumb bots) and learning (smart bots).

Command-based bots are structured around specific keywords that the bot recognizes. Each command has to be manually programmed by the developer using regular expressions or other forms of string analysis. If the user says something beyond the programmed scenarios, the bot cannot answer in any helpful way.

Although the capabilities of these “dumb” bots are limited, they are sufficient for the majority of bot designs, especially when combined with other types of UI like multiple choice questions.
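A command-based bot can be as simple as a list of regular expressions checked in order; here's a contrived Node.js sketch:

const commands = [
  { pattern: /^(hi|hello)\b/i, reply: () => 'Hello there!' },
  { pattern: /weather in (\w+)/i, reply: (m) => `Looking up the weather for ${m[1]}...` }
];

function respond(text) {
  for (const { pattern, reply } of commands) {
    const match = text.match(pattern);
    if (match) return reply(match);
  }
  // Anything beyond the programmed scenarios falls through here
  return "Sorry, I didn't understand that.";
}

console.log(respond('weather in London')); // "Looking up the weather for London..."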

Simple, linear conversation with the Weps bot

Learning bots rely on some sort of artificial intelligence to converse with users. Instead of looking for a predefined answer, smart bots come up with adequate replies right on the spot. They also keep track of the context of the conversation and remember what has been previously said, just like a human would!

Working with natural language processing and machine learning is no easy task, especially for beginner developers. Thankfully, you don't have to do all of the work on your own. There are a number of excellent libraries (ConvNetJS, nlp_compromise, TextBlob) and services (wit.ai, api.ai) that can help you teach your bot some conversational skills.

Free-flowing conversations with Abe, a financial adviser bot

Getting Started With Chatbots

If we’ve managed to pump you up and you want to start building your first chatbot, here are some tips that will get you going. Depending on how much of the work you want to do on your own, you can go full DYI and build your bot from scratch, or you can just use a framework that will save you lots of time.

  • BotKit – The most popular toolkit for building bots. It's open-source and very well documented.
  • Claudia – Chatbot builder designed for deploying directly to AWS Lambda.
  • Bottr – Super simple Node.js framework with a built-in app for testing. Awesome if you just want to play around for 10 minutes.

Once you’ve completed your bot, most of these frameworks allow you to easily connect it to the popular messaging platforms, which is a huge timesaver as all the major platforms have different setup processes.

If you want to skip the development process as a whole, you can trust one of the many available bot-building services:

  • wit.ai – Service that receives raw text or voice data and uses NLP to help you manage responses.
  • Chatfuel – A tool for Facebook Messenger or Telegram bots. No programming involved.
  • motion.ai – Full bot building service with support for many texting platforms.
  • api.ai – Natural language processing that allows you to make bots and define conversation scenarios.

Some Examples

Although chatbots seem very futuristic, they are already here and freely available to the public. Many interesting bots don't even have to be installed – just find them in Facebook Messenger and say Hi!

If you check out Botlist or the Telegram Bot Store you will see that developers have been working on bots for a while now and there is already an inventory of thousands of bots.

Conclusion

The idea of chatbots has got developers, businesses and messenger apps all working together, building a totally new ecosystem from scratch. We will have to wait and see how well bots will be adopted by the general public, but whether they fail or become the next big thing, we will have some fun playing with AI in the meantime.

We hope this article has been helpful and we’ve introduced you properly to the world of chatbots. If you have any questions or opinions, feel free to leave a comment below 🙂

Source:: Tutorialzine.com

Redis + Node.js: Introduction to Caching

By Akos Kemives


I think understanding and using caching is a very important aspect of writing code, so in this article, I’ll explain what caching is, and I’ll help you to get started with Redis + Node.js.

What is caching?

Data goes in, data comes out. Caching is a simple concept that has been around for quite a while, but according to this survey, many developers don't take advantage of it.

  • Do developers think that caching makes their applications a lot more complex?
  • Is this something that is either done from the beginning or not at all?

Through this introduction we will see that:

  1. Caching can be easily integrated into your application.
  2. It doesn’t have to be added everywhere, you can start experimenting with just a single resource.
  3. Even the simplest implementation can positively impact performance.

Integrating with third-party APIs

To show the benefits of caching, I created an Express application that integrates with GitHub's public API and retrieves the public repositories for an organization (more precisely, only the first 30; see the default pagination options).

const express = require('express');  
const request = require('superagent');  
const PORT = process.env.PORT;

const app = express();

function respond(org, numberOfRepos) {  
    return `Organization "${org}" has ${numberOfRepos} public repositories.`;
}

function getNumberOfRepos(req, res, next) {
    const org = req.query.org;
    request.get(`https://api.github.com/orgs/${org}/repos`, function (err, response) {
        if (err) throw err;

        // response.body contains an array of public repositories
        var repoNumber = response.body.length;
        res.send(respond(org, repoNumber));
    });
}

app.get('/repos', getNumberOfRepos);

app.listen(PORT, function () {  
    console.log('app listening on port', PORT);
});

Start the app and make a few requests to http://localhost:3000/repos?org=risingstack from your browser.


Receiving a response from GitHub and returning it through our application took a little longer than half a second.

When it comes to communicating with third-party APIs, we inherently become dependent on their reliability. Errors will happen over the network as well as in their infrastructure: application overloads, DoS attacks, network failures, not to mention request throttling and limits in the case of a proprietary API.

How can caching help us mitigate these problems?

We could temporarily save the first response and serve it later, without actually requesting anything from GitHub. This would result in less frequent requests and therefore less chance for any of the above errors to occur.

You may be thinking that we would serve stale data that is not necessarily accurate, but think about the data itself.

Is the list of repositories going to change frequently? Probably not, but even if it does, after some time we can just ask GitHub again for the latest data and update our cache.

Redis + Node.js: Using Redis as cache in our application

Redis can be used in many ways, but for this tutorial think of it as a key-value (hash map or dictionary) database server, which is where the name comes from: REmote DIctionary Server.

We are going to use the redis Node.js client to communicate with our Redis server.

To install the Redis server itself, see the official Quick Start guide.

From now on, we assume that you have it installed and it is running.

Let’s start by adding the redis client to our dependencies:

npm install redis --save  

then creating a connection to a local Redis server:

const express = require('express');  
const request = require('superagent');  
const PORT = process.env.PORT;

const redis = require('redis');  
const REDIS_PORT = process.env.REDIS_PORT;

const app = express();  
const client = redis.createClient(REDIS_PORT);  

Caching the data

As I already pointed out, Redis can be used like a simple hash map. To add data to it, use:

client.set('some key', 'some value');  

if you want the value for ‘some key’ to expire after some time use setex:

client.setex('some key', 3600, 'some value');  

This works similarly to set, except that the key is removed after the duration (in seconds) specified in the second parameter. In the above example, 'some key' will be removed from Redis after one hour.

We are going to use setex because the number of public repositories for an organization might change in the future.

var repoNumber = response.body.length;  
// for this tutorial we set expiry to 5s but it could be much higher
client.setex(org, 5, repoNumber);  
res.send(respond(org, repoNumber));  

For this demo we are using organization names as keys, but depending on your use case, you might need a more sophisticated scheme for generating them, as sketched below.
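For instance, a hypothetical helper could namespace the resource name and its parameters to avoid key collisions:

// Made-up helper, not part of the app above:
// cacheKey('repos', { org: 'risingstack' }) -> 'repos:org=risingstack'
function cacheKey(resource, params) {
  const query = Object.keys(params).sort()
      .map(key => `${key}=${params[key]}`)
      .join('&');
  return `${resource}:${query}`;
}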

Retrieving the cached data

Instead of implementing the caching logic inside the app.get callback, we are going to take advantage of express middleware functions, so the resulting implementation can be easily reused in other resources.

Start by adding a middleware function to the existing handler:

app.get('/repos', cache, getNumberOfRepos);  

cache has access to the same request object (req), response object (res), and next middleware function in the application's request-response cycle as getNumberOfRepos does.

We are going to use this function to intercept the request, extract the organization’s name and see if we can serve anything from Redis:

function cache(req, res, next) {  
    const org = req.query.org;
    client.get(org, function (err, data) {
        if (err) throw err;

        if (data != null) {
            res.send(respond(org, data));
        } else {
            next();
        }
    });
}

We are using get to retrieve data from Redis:

client.get(key, function (err, data) {  
});

If there is no data in the cache for the given key we are simply calling next(), entering the next middleware function: getNumberOfRepos.

Results


The initial implementation of this application spent 2318ms to serve 4 requests.

Using a caching technique reduced this number to 672ms, serving the same amount of responses 71% faster.

We made one request to the GitHub API instead of four, reducing the load on GitHub and reducing the chance of other communication errors.

During the fifth request, the cached value was already expired. We hit GitHub again (618ms) and cached the new response. As you can see the sixth request (3ms) already came from the cache.

Summary

Although there is a whole science behind caching, even a simple approach like this shows promising results. Similar improvements can be made by caching responses from a database server, file system or any other sources of communication that otherwise would be noticeably slower.

If you have any questions about this topic, let me know in the comments!

Source:: risingstack.com

Improve User Experience with Intelligent Delivery (via Multiple CDNs)

By chris92


We always want the best experience for our users so they will keep running back to our services. These days I spend more of my time as a developer optimizing and delivering content efficiently, than I do writing the actual code.

Writing efficient and performant code is just one step, but unfortunately that is not all. The content we deliver, and how we deliver it, can ruin our day no matter how clean our code base is. This is always the case when we are dealing with media files. Unfortunately, images make up about 60% of downloaded content on most websites.

Most likely, you are already doing a great job managing these files by outsourcing the responsibility to a third-party CDN or handling it internally with your own servers. The bad news is, your web application server, or a single CDN, will still not deliver this content in a timely fashion if your users are spread all over the world.

Let’s have a look at delivering content more efficiently with Cloudinary‘s multiple CDNs feature.

CDN or Not?

One of the most frustrating experiences for any user is having to wait for the browser's loading spinner to stop dancing and deliver content. This wait period, from when a request is made to when the response is produced, is known as latency.

Latency is an evil we must live with as long as we use the web, because the request-response cycle is the web's natural behavior. But we can curtail latency, reducing the amount of time users wait. One strategy is using CDNs.

A CDN (Content Delivery Network) is a solution that is independent of your application and server, and can be used to deliver content to your website rather than delivering it directly from your servers. The good thing about this is that CDNs offer more sophisticated and powerful tools to handle and optimize content and content delivery, relieving your server of the heavy lifting.

Content is cached/stored on the CDN and delivered to users for subsequent requests. The server is no longer responsible for serving the data. This is a healthy practice for your server and an awesome user experience, because CDNs are faster.

The Story of Physical Distance

CDNs, as we have seen, are a better option for delivering content, especially media files. But there is still a huge challenge: as we discussed earlier, a single CDN still does not guarantee a fast response, because of physical distance.

Consider a situation where your CDN server is in Bangkok and users around the world make requests to it from different countries on different continents. Users closer to Bangkok, and in Asia generally, will see lower latency than those in London or the US. This is caused by the distance between the user and the server, known as physical distance. The farther the distance, the more network activity and routing occur, which means those users must wait longer for the process to complete.

The strategy used by CDN providers is to deliver content using multiple CDNs located in different parts of the world. The implication is that when a user in Africa makes a request, a CDN server in Africa delivers the content, and the same goes for Asia, the UK, and so on.

Leading media websites like Twitter, Facebook, and Netflix have adopted the multi-CDN approach to more efficiently serve media throughout the world – Cloudinary can now do that for you too!

Cloudinary’s Multiple CDN Feature

Today, Cloudinary does not just offer media storage, manipulation, and administration; it now offers multiple CDNs for your media files. This means you get the following services at an extremely fast delivery rate:

  • On-the-fly image manipulation via CDN URLs
  • Adaptive format adaptation for any browser
  • Automatic image optimization with Save Data support
  • SEO friendly URLs with dynamic suffix
  • Automatic width and DPR using Client Hints
  • Transparent and automatic invalidations
  • Custom domain (CNAME), SSL support, HTTP/2 support
  • Video streaming with on-the-fly transcoding

You no longer need to select the best CDN, handle the technical integration, fit the CDN to your needs – Cloudinary does it all for you! Each of Cloudinary’s supported CDNs has different advantages and special features; Cloudinary maps your requirements to the best matching CDN.

This multi CDN approach enables developers to improve the end-user experience by:

  • Leveraging strengths of various CDNs to offer the best-of-breed performance and coverage for all users.
  • Improving the quality of service – performance, reliability and availability – specifically for users that otherwise would not be as close to a server.
  • Eliminating the operational overhead of managing and maintaining CDN solutions, such as integration, maintenance, and contracts.

Cloudinary uses real-time monitoring and performance data to dynamically switch between different CDN providers and networks to match user characteristics, request by request.

Get Started Today

My experience, and that of my users, in building and using web apps has been great because of the Cloudinary techniques employed to manage media files. Content delivery just got better with the introduction of Cloudinary's multi-CDN feature. Get started, give your users the experience they deserve, and keep them coming back.

Source:: scotch.io

What is Angular 2? High level overview, feature and fundamentals.

By S Sharif

In this article, we will see a high-level overview of Angular 2. We will also see some unique features of Angular 2 that make it more powerful and fast. For better understanding, we will go with some…

The post What is Angular 2? High level overview, feature and fundamentals. appeared first on Technical Diary.

Visit www.technicaldiary.com to read the full article.

Source:: TECHNICALDIARY

Getting Started with SLIM 3, A PHP Microframework

By kayandrae


Just like the world of JavaScript, the world of PHP has a fairly large number of frameworks and libraries. We could go on all day listing PHP libraries and still not finish in time for Christmas.

But there are some libraries that separate themselves from the chaff, one such example is the Laravel PHP framework.

Another framework that sets itself apart is SLIM.

What is SLIM?

Taking it straight from the horse’s mouth,

Slim is a PHP micro framework that helps you quickly write simple yet powerful web applications and APIs.

Emphasis on “micro” and “APIs”. SLIM is not robust with lots of features; what makes it so good is that it leaves room for extensibility, loosely obeying the open/closed principle.

At its core, Slim is a dispatcher that receives an HTTP request, invokes an appropriate callback routine, and returns an HTTP response. That’s it.

The fact that SLIM doesn’t come with many dependencies makes it one of the best frameworks out there when it comes to API development.

Features of SLIM

When it comes to using SLIM, there are four main features:

  • HTTP Router: this allows developers to map functions/callbacks to HTTP methods and URLs.
  • Middlewares: serve as layers that allow developers to modify HTTP requests and responses. If you need further explanation of middlewares, there is an article here on Scotch that covers Laravel middlewares. Sure, the implementations might be a tad different, but they offer the same functionality.
  • Dependency Injection: makes it possible for developers to have complete control over managing dependencies within applications.
  • PSR-7 support: this is not really a feature, more like a standard. It defines the way HTTP requests and responses should be handled in web applications. You can read more about it on PHP-FIG.

Installing SLIM

To get SLIM installed on your computer, the fastest way is to use Composer to install a skeleton version of SLIM. But for this tutorial, we will only install the SLIM core and then walk through configuring SLIM for a simple application request-response cycle.

To install SLIM we use Composer. First, create a working directory (call it whatever you want), move into it, and run the following Composer command, which should install SLIM on your machine.

composer require slim/slim "^3.0"

NOTE: at the time of writing this article, the current version of SLIM is version 3.5.0.

Basic routing

Ah, the obligatory “Hello World” application. This example will illustrate how easy it is to use SLIM.

In our project directory, we should only see our Composer files and a vendor directory. We can then create an index.php file.

In that file, we add this little code snippet.

<?php

require __DIR__ . '/vendor/autoload.php';

$app = new \Slim\App;

$app->get('/', function ($request, $response) {
    return 'hello world';
});

$app->run();

First, we load composer’s autoloader, then we create a new instance of the SlimApp, then call a get method to the home path, and pass a closure that takes in the request and response, and then return “hello world” (which is converted to a response object by SLIM. Outside the closure, we run the app.

To see our message in the browser, in the root of our working directory, we boot a PHP server.

php -S localhost:8000

If you open localhost:8000 in the browser, you should see “hello world”. If you use a different server, you should check out the section on server configuration.

Other HTTP methods

The above example only simulated a GET request; as we all know, there are more request types, like POST, PUT, PATCH, DELETE, and OPTIONS. To use any of these methods in SLIM, we literally do the same thing we did with the get method above.

<?php

$app->post('/', function ($request, $response) {
    return 'hello world';
});

The above route now will only work for POST requests. We can call the rest of the HTTP methods the same way.

The route discussion in this article is a bit brief; in an upcoming article, we will talk about routing in SLIM in detail.

Render views with Twig

Say we are building an application with SLIM that is not an API: we need a way to organize our templates. Not that SLIM requires any template engine, but templates make it easier to manage our view files.

There are a lot of templates engines for PHP, but for this tutorial, we will stick with Twig.

Twig allows us to use a custom syntax to write template files. It compiles to pure PHP code.

To display a variable in twig, we do this.

{{ var }}

This then compiles itself to:

<?php echo $var ?>

We could create global template layouts that we can later extend and keep our page layout and styling consistent.

{% extends "layout.html" %}

{% block content %}
    The content of the page...
{% endblock %}

Twig offers many more features. It is also one of the most stable and best PHP template languages out there. Another alternative to twig is Blade (for Laravel), or Smarty.

Installing Twig

To get started, we need to install Twig and to do that, we go into the root of our application and run the following command.

composer require slim/twig-view

Now that we have Twig installed, let's create a templates directory in the root of our project. We can now register Twig as our view engine/manager in SLIM. In our index.php file, we add this snippet before our routes.

<?php
// ...

$container = $app->getContainer();

$container['view'] = function ($container) {
    $templates = __DIR__ . '/templates/';
    $cache = __DIR__ . '/tmp/views/';

    $view = new \Slim\Views\Twig($templates, compact('cache'));

    return $view;
};

First, we grab our app's $container and add a new property called view. We can name it whatever we want (template, page, etc.); just note that this value is important, as we will need to reference it soon. We pass a closure which takes in the $container instance.

Then we create an instance of \Slim\Views\Twig and pass in the directory of our templates; the second parameter is an array (compact converts variables to array key-value pairs) where we pass the location of our template cache directory.

We can disable caching by setting the $cache variable to false. After that, we can return the $view.

To actually use our view files in our routes, we can go back into our router and then return a view file like this.

$app->get('/', function ($request, $response) {
    return $this->view->render($response, 'home.twig');
});

Note: make sure you have a file named home.twig in your templates directory. Whatever you place in that file gets rendered in the browser.

Server Configuration

There is a 100% chance that you won't use PHP's default server to serve your app in production. Odds are you'll choose between Apache and the lovely Nginx. For these servers, our routes won't work without a little URL rewriting. We'll see how to configure Apache and Nginx; if you use another server, check out the web servers section on SLIM's website.

Apache

Wherever you decide to make the root of your application, create a new .htaccess file and place the following snippet in that file.

RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ index.php [QSA,L]

What this does is serve files and directories that exist on the server, and if they don't exist, let the routing be handled by our index.php file.

Nginx

server {
    listen 80;
    server_name example.com;
    index index.php;
    error_log /path/to/example.error.log;
    access_log /path/to/example.access.log;
    root /path/to/public;

    location / {
        try_files $uri /index.php$is_args$args;
    }

    location ~ \.php {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME $fastcgi_script_name;
        fastcgi_index index.php;
        fastcgi_pass 127.0.0.1:9000;
    }
}

This configuration achieves the same rewriting as the Apache configuration; don’t forget to replace example.com and the /path/to/ entries with your own values.

Conclusion

Frameworks like Laravel, Zend, and Symfony are really good, but they can be heavier than necessary when it comes to building APIs and simple websites. SLIM is an excellent framework: easy to use, lightweight, and extensible. What’s not to love?

Source:: scotch.io

Testing Services with Http in Angular 2

Testing is important. That’s why Angular comes with a testing story out-of-the-box. Due to its dependency injection system, it’s fairly easy to mock dependencies, swap out Angular modules, or even create so-called “shallow” tests, which enable us to test Angular components without actually depending on their views (DOM). In this article we’re going to take a look at how to unit test a service that performs http calls, since a little more knowledge is required to make this work.

The Service we want to test

Let’s start off by taking a look at the service we want to test. Sure, sometimes we actually want to do test-driven development, where we first create the test and then implement the actual service. From a learning point of view, however, it’s probably easier to grasp testing concepts when we first explore the APIs we want to test.

At thoughtram, we’re currently recording screencasts and video tutorials, to provide additional content to our blog readers. In order to make all these videos more explorable, we’re also building a little application that lets users browse and watch them. The video data is hosted on Vimeo, so we’ve created a service that fetches the data from their API.

Here’s what our VideoService roughly looks like:

import { Injectable, Inject } from '@angular/core';
import { Http } from '@angular/http';
import { VIMEO_API_URL } from '../config';

import 'rxjs/add/operator/map';

@Injectable()
export class VideoService {

  constructor(private http: Http, @Inject(VIMEO_API_URL) private apiUrl) {}

  getVideos() {
    return this.http.get(`${this.apiUrl}/videos`)
                    .map(res => res.json().data);
  }
}

getVideos() returns an Observable<Array<Video>>. This is just an excerpt of the actual service we use in production. In reality, we cache the responses so we don’t perform an http request every single time we call this method.
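
For illustration, here’s a minimal sketch of how such caching might look, assuming we simply memoize the Observable with RxJS’s publishReplay operator (the actual production implementation may differ):

import { Injectable, Inject } from '@angular/core';
import { Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import { VIMEO_API_URL } from '../config';

import 'rxjs/add/operator/map';
import 'rxjs/add/operator/publishReplay';

@Injectable()
export class VideoService {

  // Cached Observable that replays the latest response to every subscriber
  private videos$: Observable<any>;

  constructor(private http: Http, @Inject(VIMEO_API_URL) private apiUrl) {}

  getVideos() {
    if (!this.videos$) {
      this.videos$ = this.http.get(`${this.apiUrl}/videos`)
                         .map(res => res.json().data)
                         .publishReplay(1) // buffer the latest emission
                         .refCount();      // share a single subscription
    }
    return this.videos$;
  }
}

With this sketch, only the first subscription triggers a network request; later calls to getVideos() receive the replayed response.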

Special Tip: If the @Inject() decorator is new to you, make sure to check out this article

To use this service in our application, we first need to create a provider for it in our application module; later, we will do the same in our tests:

@NgModule({
  imports: [HttpModule],
  providers: [VideoService],
  ...
})
export class AppModule {}

Since the API returns an Observable, we need to subscribe to it to actually perform the http request. That’s why the call site of the method looks something like this:

@Component()
export class VideoDashboard {
  
  private videos = [];

  constructor(private videoService: VideoService) {}

  ngOnInit() {
    this.videoService.getVideos()
        .subscribe(videos => this.videos = videos);
  }
}

Notice how we’re passing a callback function to access the video data that is emitted by the Observable. We need to keep that in mind when testing these methods, because we can’t call them synchronously. To get an introduction to Observables in conjunction with Angular, make sure to read this article.

Alright, now that we know what the service we want to test looks like, let’s take a look at writing the tests.

Configuring a testing module

Before we can start writing test specs for our service APIs, we need to configure a testing module. This is needed because in our tests, we want to make sure that we aren’t performing actual http requests and use a MockBackend instead. Our goal is to isolate the test scenario as much as we can without touching any other real dependencies. Since NgModules configure injectors, a testing module allows us to do exactly that.

When testing services or components that don’t have any dependencies, we can just go ahead and instantiate them manually, using their constructors like this:

it('should do something', () => {
  let service = new VimeoService();

  expect(service.foo).toEqual('bar');
});

Special Tip: When testing components and services that don’t have any dependencies, we don’t necessarily need to create a testing module.

To configure a testing module, we use Angular’s TestBed. TestBed is Angular’s primary API to configure and initialize environments for unit testing and provides methods for creating components and services in unit tests. We can create a module that overrides the actual dependencies with testing dependencies, using TestBed.configureTestingModule().

import { TestBed } from '@angular/core/testing';

describe('VideoService', () => {

  beforeEach(() => {

    TestBed.configureTestingModule({
      ...
    });

  });
});

This will create an NgModule for every test spec, as we’re running this code as part of a beforeEach() block. This is a Jasmine API. If you aren’t familiar with Jasmine, we highly recommend reading their documentation.

Okay, but what does a configuration for such a testing module look like? Well, it’s an NgModule, so it has pretty much the same API. Let’s start with adding an import for HttpModule and a provider for VideoService like this:

import { HttpModule } from '@angular/http';
import { VideoService } from './video.service';
import { VIMEO_API_URL } from '../config';
...
TestBed.configureTestingModule({
  imports: [HttpModule],
  providers: [
    { provide: VIMEO_API_URL, useValue: 'http://example.com' },
    VideoService
  ]
});
...

This configures an injector for our tests that knows how to create our VideoService, as well as the Http service. However, what we actually want is an Http service that doesn’t really perform http requests. How do we do that? It turns out that the Http service uses a ConnectionBackend to perform requests. If we find a way to swap that one out with a different backend, we get what we want.

To give a better picture, here’s what the constructor of Angular’s Http service looks like:

@Injectable()
export class Http {

  constructor(
    protected _backend: ConnectionBackend,
    protected _defaultOptions: RequestOptions
  ) {}

  ...
}

By adding HttpModule to our testing module, we already get providers configured for Http, ConnectionBackend and RequestOptions. However, using an NgModule’s providers property, we can override providers that have been introduced by other imported NgModules! This is where Angular’s dependency injection really shines!

Overriding the Http Backend

In practice, this means we need to create a new provider for Http, which instantiates the class with a different ConnectionBackend. Angular’s http module comes with a testing class MockBackend. That one not only ensures that no real http requests are performed, it also provides APIs to subscribe to opened connections and send mock responses.

With the useFactory strategy of a provider configuration, we can then create Http instances that use a different ConnectionBackend. Here’s what that looks like:

import { HttpModule, Http, BaseRequestOptions } from '@angular/http';
import { MockBackend } from '@angular/http/testing';
...
TestBed.configureTestingModule({
  ...
  providers: [
    ...
    {
      provide: Http,
      useFactory: (mockBackend, options) => {
        return new Http(mockBackend, options);
      },
      deps: [MockBackend, BaseRequestOptions]
    },
    MockBackend,
    BaseRequestOptions
  ]
});
...

Wow, that’s a lot of code! Let’s go through this step by step:

  • We create a new provider for Http that uses the useFactory strategy, so we are in charge of creating the actual service instance.
  • Http asks for a ConnectionBackend and RequestOptions. That’s why we pass mockBackend and options to the constructor.
  • To make sure Angular knows what we mean by mockBackend and options, we add deps: [MockBackend, BaseRequestOptions]. This is needed because metadata (type annotations) of plain functions isn’t preserved at runtime.
  • We add providers for MockBackend and BaseRequestOptions.

Awesome! We’ve created a testing module that uses an Http service with a MockBackend. Now let’s take a look at how to actually test our service.

Testing the service

When writing unit tests with Jasmine, every test spec is written as an it() block, where an assertion is made and then checked if that assertion is true or not. We won’t go into too much detail here, since there’s a lot of documentation for Jasmine out there. We want to test if our VideoService returns an Observable<Array<Video>>, so let’s start with the following it() statement:

describe('VideoService', () => {
  ...
  describe('getVideos()', () => {

    it('should return an Observable<Array<Video>>', () => {
      // test goes here
    });
  });
});

We’ve also added another nested describe() block so we can group all tests that are related to that particular method we test. Okay, next we need to get an instance of our VideoService. Since we’ve created a testing module that comes with all providers for our services, we can use dependency injection to inject instances accordingly.

Injecting Services

Angular’s testing module comes with a helper function inject(), which injects service dependencies. This turns out to be super handy as we don’t have to take care of getting access to the injector ourselves. inject() takes a list of provider tokens and a function with the test code, and it returns a function in which the test code is executed. That’s why we can pass it straight to our spec and remove the anonymous function we’ve introduced in the first place:

import { TestBed, inject } from '@angular/core/testing';
...
it('should return an Observable<Array<Video>>',
  inject([/* provider tokens */], (/* dependencies */) => {
  // test goes here
}));

Cool, now we have all the tools we need to inject our services and write a test. Let’s go ahead and do exactly that. Once we have our service injected, we can call getVideos() and subscribe to the Observable it returns, to then test if the emitted value is the one we expect.

it('should return an Observable<Array<Video>>',
  inject([VideoService], (videoService) => {

    videoService.getVideos().subscribe((videos) => {
      expect(videos.length).toBe(4);
      expect(videos[0].name).toEqual('Video 0');
      expect(videos[1].name).toEqual('Video 1');
      expect(videos[2].name).toEqual('Video 2');
      expect(videos[3].name).toEqual('Video 3');
    });
}));

This test is not finished yet. Right now we have a test that expects certain data to be emitted by getVideos(). However, remember that we’ve swapped out the Http backend, so no actual http request is performed? Right, and if no request is performed, this Observable won’t emit anything. We need a way to fake a response that is emitted when we subscribe to our Observable.

Mocking http responses

As mentioned earlier, MockBackend not only provides APIs to subscribe to http connections, it also enables us to send mock responses. What we want is this: when the underlying Http service creates a connection (performs a request), send a fake http response with the data we’re asserting in our getVideos() subscription.

We can subscribe to all opened http connections via MockBackend.connections, and get access to a MockConnection like this:

it('should return an Observable<Array<Video>>',
  inject([VideoService, MockBackend], (videoService, mockBackend) => {
    ...
    mockBackend.connections.subscribe((connection) => {
      // This is called every time someone subscribes to
      // an http call.
      //
      // Here we want to fake the http response.
    });
}));

The next thing we need to do is make the connection send a response. We use MockConnection.mockRespond() for that, which takes an instance of Angular’s Response class. In order to define what the response looks like, we need to create ResponseOptions and define the response body we want to send (which is a string):

...
const mockResponse = {
  data: [
    { id: 0, name: 'Video 0' },
    { id: 1, name: 'Video 1' },
    { id: 2, name: 'Video 2' },
    { id: 3, name: 'Video 3' },
  ]
};

mockBackend.connections.subscribe((connection) => {
  connection.mockRespond(new Response(new ResponseOptions({
    body: JSON.stringify(mockResponse)
  })));
});
...

Cool! With this code we now get a fake response inside that particular test spec. Even though it looks like we’re done, there’s one more thing we need to do.

Making the test asynchronous

Because the code we want to test is asynchronous, we need to inform Jasmine when the asynchronous operation is done, so it can run our assertions. This is not a problem when testing synchronous code, because the assertions are always executed in the same tick as the code we test. When testing asynchronous code, however, assertions might be executed later, in another tick. That’s why we can explicitly tell Jasmine that we’re writing an asynchronous test; we just need to also tell Jasmine when the actual code is “done”.

Jasmine provides access to a done() function that we can ask for inside our test spec and call when we think our code is done. That looks something like this:

it('should do something async', (done) => {
  setTimeout(() => {
    expect(true).toBe(true);
    done();
  }, 2000);
});

Angular makes this a little bit more convenient. It comes with another helper function async() which takes a test spec and then runs done() behind the scenes for us. This is pretty cool, because we can write our test code as if it was synchronous!

How does that work? Well… Angular takes advantage of a feature called Zones. It creates a “testZone”, which automatically figures out when it needs to call done(). If you haven’t heard about Zones before, we wrote about them here and here.

Special Tip: Angular’s async() function executes test specs in a test zone!

Let’s update our test to run inside Angular’s test zone:

import { TestBed, inject, async } from '@angular/core/testing';
...
it('should return an Observable<Array<Video>>',
  async(inject([VideoService, MockBackend], (videoService, mockBackend) => {
    ...
})));
...

Notice that all we did was wrap the inject() call in an async() call.

The complete code

Putting it all together, here’s what the test spec for the getVideos() method looks like:

import { TestBed, async, inject } from '@angular/core/testing';
import {
  BaseRequestOptions,
  HttpModule,
  Http,
  Response,
  ResponseOptions
} from '@angular/http';
import { MockBackend } from '@angular/http/testing';
import { VideoService } from './video.service';
import { VIMEO_API_URL } from '../config';

describe('VideoService', () => {

  beforeEach(() => {

    TestBed.configureTestingModule({
      imports: [HttpModule],
      providers: [
        { provide: VIMEO_API_URL, useValue: 'http://example.com' },
        VideoService,
        {
          provide: Http,
          useFactory: (mockBackend, options) => {
            return new Http(mockBackend, options);
          },
          deps: [MockBackend, BaseRequestOptions]
        },
        MockBackend,
        BaseRequestOptions
      ]
    });
  });

  describe('getVideos()', () => {

    it('should return an Observable<Array<Video>>',
        async(inject([VideoService, MockBackend], (videoService, mockBackend) => {

        videoService.getVideos().subscribe((videos) => {
          expect(videos.length).toBe(4);
          expect(videos[0].name).toEqual('Video 0');
          expect(videos[1].name).toEqual('Video 1');
          expect(videos[2].name).toEqual('Video 2');
          expect(videos[3].name).toEqual('Video 3');
        });

        const mockResponse = {
          data: [
            { id: 0, name: 'Video 0' },
            { id: 1, name: 'Video 1' },
            { id: 2, name: 'Video 2' },
            { id: 3, name: 'Video 3' },
          ]
        };

        mockBackend.connections.subscribe((connection) => {
          connection.mockRespond(new Response(new ResponseOptions({
            body: JSON.stringify(mockResponse)
          })));
        });
    })));
  });
});
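
The same pattern extends to failure scenarios. As a hedged sketch (it assumes the same testing module configuration as above; the spec name and error message are purely illustrative), MockConnection also provides mockError(), which makes the faked request error instead of responding:

import { async, inject } from '@angular/core/testing';
import { MockBackend, MockConnection } from '@angular/http/testing';
import { VideoService } from './video.service';

it('should propagate http errors to the subscriber',
    async(inject([VideoService, MockBackend], (videoService, mockBackend) => {

    // Instead of responding, let every opened connection fail.
    mockBackend.connections.subscribe((connection: MockConnection) => {
      connection.mockError(new Error('500 Server Error'));
    });

    videoService.getVideos().subscribe(
      () => fail('expected an error, not videos'),
      (error) => expect(error.message).toEqual('500 Server Error')
    );
})));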

This pattern works for pretty much every test that involves http operations. Hopefully this gave you a better understanding of what TestBed, MockBackend and async are all about.

Source:: Thoughtram

Controlling access to global variables via an ES6 proxy

By Axel Rauschmayer

The following function evalCode() traces the global variables that are accessed while evaluating a piece of JavaScript code.

    // Simple solution
    const _glob = typeof global !== 'undefined' ? global : self;
    
    function evalCode(code) {
        const func = new Function ('proxy',
            `with (proxy) {${code}}`); // (A)
        const proxy = new Proxy(_glob, {
            get(target, propKey, receiver) {
                console.log(`GET ${String(propKey)}`); // (B)
                return Reflect.get(target, propKey, receiver);
            },
            set(target, propKey, value, receiver) { // (C)
                console.log(`SET ${String(propKey)}=${value}`);
                return Reflect.set(target, propKey, value, receiver);
            },
        });
        return func(proxy);
    }

The way this works is as follows:

  • The with statement wrapped around the code (line A) means that every variable access that “leaves” the scope of the code becomes a property access on proxy.
  • The proxy observes which properties are accessed via its handler, which traps the operations “get” (line B) and “set” (line C).

Using evalCode():

    > evalCode('String.prototype')
    GET Symbol(Symbol.unscopables)
    GET String
    undefined
    > evalCode('String = 123')
    GET Symbol(Symbol.unscopables)
    SET String=123
    undefined

Explanations:

  • We don’t return the result of evaluating code, which is why the overall result is undefined.
  • Symbol.unscopables shows up because with checks its operand for a property with this key to determine which properties it should expose as variables to its body (a short demonstration follows below). This mechanism is explained in “Exploring ES6”.
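
As a quick illustration of that mechanism, ES6 itself uses Symbol.unscopables to hide the newer Array.prototype methods from with bodies, so that legacy code such as with (arr) { values } keeps referring to an outer values variable:

    // The built-in unscopables blocklist of Array.prototype (ES6):
    console.log(Array.prototype[Symbol.unscopables]);
    // → { copyWithin: true, entries: true, fill: true,
    //     find: true, findIndex: true, keys: true, values: true }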

This is very hacky! with is a deprecated sloppy mode feature (it is forbidden in strict mode) that is used here in conjunction with a brand new ES6 feature.

Source of this hack: Vue.js, explained by qgustavor on reddit.

Source:: 2ality

Node.js Weekly Update - 25 Nov, 2016

By Ferenc Hamori

If you miss something from this Node.js weekly update, please let us know in the comments!

5 Must-read Articles, Updates of the Week:

○ Can Node.js Scale? Ask the Team at Alibaba – Joyee Cheung

In advance of Node.js Interactive, Joyee Cheung told the story of how Alibaba instrumented Node.js, why they chose to use Node.js, and what challenges they faced with trying to scale Node.js on the server side.

○ Last Week in Node.js Working Groups – November 14, 2016 – Jeremiah Senkpiel

While it seemed to be a quiet week on the surface, there was a fair amount of activity in many important projects last week.

○ Owning Node.js – Gergely Nemeth

3 Node.js tutorial videos to help you become a better developer. Debugging your applications, using async, finding memory leaks & CPU profiling.

○ WalmartLabs Migrated to React and Node.js in Less Than a Year – Alex Grigoryan

The story of how Walmart successfully migrated to React and Node.js with efficiency and speed. At the end of this talk, you should be able to make your own successful strategy to transform your technology stack quickly and successfully.

○ Node.js Databases: Using CouchDB – Pedro Teixeira

By default, CouchDB does not impose any specific schema on the documents it stores. Instead, it allows any data that JSON allows, as long as we have an object as the root. Because of that, any two documents in the same database can have completely different structures, and it’s up to the application to make sense of them.

Important Updates to the Node.js Core

Node v7.2.0 released

Notable changes

  • crypto: The Decipher methods setAuthTag() and setAAD() now return this (Kirill Fomichev) #9398
  • dns: Implemented {ttl: true} for resolve4() and resolve6() (Ben Noordhuis) #9296
  • process: Added a new external property to the data returned by memoryUsage() (Fedor Indutny) #9587
  • tls: Fixed a memory leak when writes were queued on a TLS connection that was destroyed during handshake. (Fedor Indutny) #9626
  • v8: The data returned by getHeapStatistics() now includes three new fields: malloced_memory, peak_malloced_memory, and does_zap_garbage. (Gareth Ellis) #8610

Read the full article: Node v7.2.0 released.

Previously in the Node.js Weekly Update

Last week we read fantastic articles about fast logging in production, npm tricks and best practices as well as how the garbage collector works.

Source:: risingstack.com