Getting Started with Yoga and Prisma for Building GraphQL Servers

By Chris Nwamba

By now, you have probably heard a lot of buzz about GraphQL and how it’s going to replace REST, but you may not know where to begin. You hear buzzwords like mutations and queries, while what you’re used to is making GET and POST requests to your different endpoints to fetch data and make changes to your database. If you’re thinking about making the switch to GraphQL and don’t know where to start, this article is for you.

In this article, we are going to take a look at how to convert a database to a GraphQL API using Prisma.


To follow along with this article, you’ll need the following:

  • Basic knowledge of JavaScript
  • Node installed on your machine
  • Node Package Manager installed on your machine

To confirm your installations, run the command:

node --version
npm --version

If you get their version numbers as results then you’re good to go!

Using Prisma to connect GraphQL to a database

Prisma is a service that allows you to build a GraphQL server with any database with ease. To get started with using Prisma, you need to install the command line interface first:

npm install -g prisma

This globally installs prisma for future use on your machine. The next thing you need to do is to create a new service. To do this, run the command:

prisma init scotch-prisma-demo

When you run this command, a prompt will be shown for you to decide what server you want to connect to.

For demo purposes, select the Demo Server option and then a browser prompt will be shown to register with Prisma Cloud. After successful login, click the CREATE A NEW SERVICE option and then copy the key required to log in to Prisma from the page.

Head over to your terminal, close the current process if it’s still running, and then log in to Prisma by pasting the command copied from the browser into the terminal:

Login to Prisma

Once you’ve successfully signed in, return to the command line and select the region for your demo server and follow the remaining prompts as shown below:

Creating new Prisma Service

You can also head over here if you want to connect Prisma with your own database.

Now that the service has been created, you need to deploy it. To do this, head over to the scotch-prisma-demo directory and run the deploy command:

cd scotch-prisma-demo
prisma deploy

Deploying Prisma Service

Once deployment is complete, on the Prisma Cloud you can see a list of your deployed services:

List of Services on Prisma Cloud

To open the GraphQL API and start sending queries and running mutations, run the command:

prisma playground

Running Sample Queries using Prisma Service

Updating Schema

When the scotch-prisma-demo folder was created, two files were created in it:

  • prisma.yml – contains the endpoint and the datamodel reference for the server.
  • datamodel.graphql – contains the schema for the server.

Edit the datamodel.graphql file to look like this:

type User {
    id: ID! @unique
    name: String!
}

This means the schema will contain the ID and name for the user. To update the schema, run the command:

prisma deploy
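Once the deploy completes, you can exercise the new User type from the playground. As a quick sketch – Prisma auto-generates CRUD operations for each type in the datamodel, so for the User type operations like the following should work (the field values here are illustrative):

```graphql
# Create a user, then read all users back.
mutation {
  createUser(data: { name: "Jane" }) {
    id
    name
  }
}

query {
  users {
    id
    name
  }
}
```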

Using Yoga and Prisma Binding to interact with Prisma Service

In the last section, we saw how to use Prisma to turn a database into a GraphQL API and then run queries and mutations from the playground. Here’s the thing though: in a real-life application, leaving all this functionality open to anyone who has access to the Prisma endpoint means that the client has the entire database exposed to them, and this is not good.

To address this, Prisma provides graphql-yoga and prisma-binding, which allow you to build a GraphQL server on top of the Prisma service by connecting the resolvers of the GraphQL server to Prisma’s GraphQL API using Prisma bindings.

In the scotch-prisma-demo folder initialize a node project using the command:

npm init -y

This automatically creates a package.json file for your project.

Installing the necessary modules

To continue building the GraphQL server, we will need to install the following modules by running the command:

npm install graphql-yoga prisma-binding

Reorganize File Structure

Run the following commands:

mkdir prisma
mv datamodel.graphql prisma/
mv prisma.yml prisma/
mkdir src
touch src/index.js
touch src/schema.graphql

Creating Application Schema

The GraphQL server needs to have a Schema that helps identify the operations that can be carried out on the server. For this application, edit the src/schema.graphql to look like this:

type Query {
    user(id: ID!): User
}

type Mutation {
    signup(name: String!): User!
}

In this case, the schema allows for the following:

  • Fetch a single user based on its id
  • Signup a user to the system
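As an illustration, here is roughly how these two operations would be invoked once the server is running (the id value is a made-up placeholder):

```graphql
# Fetch a single user by id (placeholder value).
query {
  user(id: "cjexample123") {
    id
    name
  }
}

# Sign up a new user.
mutation {
  signup(name: "Jane") {
    id
    name
  }
}
```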

Downloading Prisma Database Schema

If you look at the schema definition above, you can see that the User is not defined. Ideally, you could redefine the User type in the schema but since there already exists a definition in the Prisma service created earlier, we will use that instead to prevent us from having multiple locations for one definition.

Install the GraphQL CLI by running:

npm install -g graphql-cli

and then create a .graphqlconfig.yml in the root directory of the project:

touch .graphqlconfig.yml

Update it as follows:

projects:
  app:
    schemaPath: src/schema.graphql
    extensions:
      endpoints:
        default: http://localhost:4000
  prisma:
    schemaPath: src/generated/prisma.graphql
    extensions:
      prisma: prisma/prisma.yml

You can download the Prisma schema by running the following command:

graphql get-schema --project prisma

Then, update the src/schema.graphql to look like this:

# import User from './generated/prisma.graphql'

type Query {
    user(id: ID!): User
}

type Mutation {
    signup(name: String!): User!
}

You have the Prisma schema downloaded in the /src/generated/prisma.graphql file.

Instantiating Prisma bindings and Implementing Resolvers

The next thing to do is to ensure that Prisma’s GraphQL API can be accessed by the resolvers using the prisma-binding. To do this, update the index.js file to look like this:

// src/index.js
const { GraphQLServer } = require('graphql-yoga')
const { Prisma } = require('prisma-binding')

const resolvers = {
    Query: {
        user: (_, args, context, info) => {
            // ...
        },
    },
    Mutation: {
        signup: (_, args, context, info) => {
            // ...
        },
    },
}

const server = new GraphQLServer({
    typeDefs: 'src/schema.graphql',
    resolvers,
    context: req => ({
        ...req,
        prisma: new Prisma({
            typeDefs: 'src/generated/prisma.graphql',
            endpoint: 'PRISMA_ENDPOINT',
        }),
    }),
})

server.start(() => console.log(`GraphQL server is running on http://localhost:4000`))

Note that the PRISMA_ENDPOINT can be obtained from your /prisma/prisma.yml file.
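For reference, the prisma.yml for the demo service has roughly this shape (the endpoint URL below is a placeholder – yours points at the service you deployed to Prisma Cloud). Copy the endpoint value into the PRISMA_ENDPOINT placeholder:

```yml
endpoint: https://eu1.prisma.sh/your-workspace/scotch-prisma-demo/dev
datamodel: datamodel.graphql
```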

Update the resolvers in your application by adding the following into your index.js file:

// src/index.js

const resolvers = {
    Query: {
        user: (_, args, context, info) => {
            return context.prisma.query.user({
                where: {
                    id: args.id,
                },
            }, info)
        },
    },
    Mutation: {
        signup: (_, args, context, info) => {
            return context.prisma.mutation.createUser({
                data: {
                    name: args.name,
                },
            }, info)
        },
    },
}


Update your package.json file as follows:

    "name": "scotch-prisma-demo",
    "version": "1.0.0",
    "description": "",
    "main": "index.js",
    "scripts": {
    "start": "node src/index.js",
    "test": "echo "Error: no test specified" && exit 1"
    "keywords": [],
    "author": "",
    "license": "ISC",
    "dependencies": {
    "graphql-yoga": "^1.14.7",
    "prisma-binding": "^2.0.2"

Here, we added the start script to the project. Now, you can run the application by using this command:

npm start

This starts the server and the playground for use. When you head over to the browser and navigate to localhost:4000, you get the following:

Running a Mutation in the Playground

Running a Query in the Playground

Now that we have built the GraphQL server over the Prisma Service, the playground no longer connects directly to the Prisma Endpoint but to the application itself. With this, the client is restricted to only performing actions that have been defined in the new schema on the server.


In this article, we have seen how to create a Prisma service that converts a database to a GraphQL API, and how to use GraphQL Yoga and Prisma binding to build a GraphQL server on top of the service for use by client applications. Here’s a link to the GitHub repository if you’re interested. Feel free to leave a comment below.


Build Custom Pagination with React

By Glad Chinda

Demo App Screenshot

Oftentimes, we get involved in building web apps in which we are required to fetch large sets of data records from a remote server, API or some database sitting somewhere. If you are building a payment system, for example, it could be fetching thousands of transactions. If it is a social media app, it could be fetching tonnes of user comments, profiles or activities. Whichever it is, there are a couple of methods for handling the data such that it doesn’t become overwhelming to the end-user interacting with the app.

One of the popular methods for handling large datasets on the view is by using the infinite scrolling technique – where more data is loaded in chunks as the user continues scrolling very close to the end of the page. This is the technique used in displaying search results in Google Images. It is also used in so many social media platforms like Facebook – displaying posts on timeline, Twitter – showing recent tweets, etc.

Another known method for handling large datasets is pagination. Pagination works effectively when you already know the size of the dataset (the total number of records) upfront. You then load only the required chunk of data from the total dataset based on the end-user’s interaction with the pagination control. This is the technique used in displaying search results in Google Search.

In this tutorial, we will see how to build a custom pagination component with React for paginating large datasets. In order to keep things as simple as possible, we will build a paginated view of the countries in the world – of which we already have the data of all the countries upfront.

Here is a demo of what we will be building in this tutorial.


Before getting started, you need to ensure that you have Node already installed on your machine. I will also recommend that you install the Yarn package manager on your machine, since we will be using it for package management instead of npm that ships with Node. You can follow this Yarn installation guide to install yarn on your machine.

We will create the boilerplate code for our React app using the create-react-app command-line package. You also need to ensure that it is installed globally on your machine. If you are using npm >= 5.2 then you may not need to install create-react-app as a global dependency since we can use the npx command.

Finally, this tutorial assumes that you are already familiar with React. If that is not the case, you can check the React Documentation to learn more about React.

Getting Started

Create new Application

Start a new React application using the following command. You can name the application however you desire.

create-react-app react-pagination

npm >= 5.2

If you are using npm version 5.2 or higher, it ships with an additional npx binary. Using the npx binary, you don’t need to install create-react-app globally on your machine. You can start a new React application with this simple command:

npx create-react-app react-pagination

Install Dependencies

Next, we will install the dependencies we need for our application. Run the following command to install the required dependencies.

yarn add bootstrap prop-types react-flags countries-api
yarn add -D npm-run-all node-sass-chokidar

We have installed node-sass-chokidar as a development dependency for our application to enable us to use Sass. For more information about this, see this guide.

Now open the src directory and change the file extension of all the .css files to .scss. The required .css files will be compiled by node-sass-chokidar as we continue.
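If you have several stylesheets, the rename can be scripted. A minimal sketch, demonstrated on a scratch directory so it is safe to run as-is (in your project you would run the loop from the project root):

```shell
# Demonstrate the rename on a throwaway directory.
mkdir -p /tmp/scss-demo/src
touch /tmp/scss-demo/src/App.css /tmp/scss-demo/src/index.css
cd /tmp/scss-demo

# Rename every .css file under src/ to .scss.
for f in src/*.css; do mv -- "$f" "${f%.css}.scss"; done

ls src
```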

Modify the npm Scripts

Edit the package.json file and modify the scripts section to look like the following:

"scripts": {
  "start:js": "react-scripts start",
  "build:js": "react-scripts build",
  "start": "npm-run-all -p watch:css start:js",
  "build": "npm-run-all build:css build:js",
  "test": "react-scripts test --env=jsdom",
  "eject": "react-scripts eject",
  "build:css": "node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/",
  "watch:css": "npm run build:css && node-sass-chokidar --include-path ./src --include-path ./node_modules src/ -o src/ --watch --recursive"

Include Bootstrap CSS

We installed the bootstrap package as a dependency for our application since we will be needing some default styling. We will also be using styles from the Bootstrap pagination component. To include Bootstrap in the application, edit the src/index.js file and add the following line before every other import statement.

import "bootstrap/dist/css/bootstrap.min.css";

Setup Flag Icons

We installed react-flags as a dependency for our application. In order to get access to the flag icons from our application, we will need to copy the icon images to the public directory of our application. Run the following commands from your terminal to copy the flag icons.

mkdir -p public/img
cp -R node_modules/react-flags/vendor/flags public/img

If you are on a Windows machine, run the following commands instead:

mkdir public\img
xcopy node_modules\react-flags\vendor\flags public\img\flags /s /e /i

The components directory

We will create the following React components for our application.

  • CountryCard – This simply renders the name, region and flag of a given country.

  • Pagination – This contains the whole logic for building, rendering and switching pages on the pagination control.

Go ahead and create a components directory inside the src directory of the application to house all our components.

Start the Application

Start the application by running the following command with yarn:

yarn start

The application is now started and development can begin. Notice that a browser tab has been opened for you with live reloading functionality to keep in sync with changes in the application as you develop.

At this point, the application view should look like the following screenshot:

Initial View

The CountryCard Component

Create a new file CountryCard.js in the src/components directory and add the following code snippet to it.

import React from 'react';
import PropTypes from 'prop-types';
import Flag from 'react-flags';

const CountryCard = props => {
  const { cca2: code2 = '', region = null, name = {} } = props.country || {};

  return (
    <div className="col-sm-6 col-md-4 country-card">
      <div className="country-card-container border-gray rounded border mx-2 my-3 d-flex flex-row align-items-center p-0 bg-light">

        <div className="h-100 position-relative border-gray border-right px-2 bg-white rounded-left">
          <Flag country={code2} format="png" pngSize={64} basePath="./img/flags" className="d-block h-100" />
        </div>

        <div className="px-3">
          <span className="country-name text-dark d-block font-weight-bold">{ name.common }</span>
          <span className="country-region text-secondary text-uppercase">{ region }</span>
        </div>

      </div>
    </div>
  );
}

CountryCard.propTypes = {
  country: PropTypes.shape({
    cca2: PropTypes.string.isRequired,
    region: PropTypes.string.isRequired,
    name: PropTypes.shape({
      common: PropTypes.string.isRequired
    }).isRequired
  }).isRequired
};

export default CountryCard;

The CountryCard component requires a country prop that contains the data about the country to be rendered. As seen in the propTypes for the CountryCard component, the country prop object must contain the following data:

  • cca2 – the 2-letter country code
  • region – the country region e.g Africa
  • name.common – the common name of the country e.g Nigeria

Here is a sample country object:

{
  cca2: "NG",
  region: "Africa",
  name: {
    common: "Nigeria"
  }
}
Also notice how we render the country flag using the react-flags package. You can check the react-flags documentation to learn more about the required props and how to use the package.

The Pagination Component

Create a new file Pagination.js in the src/components directory and add the following code snippet to it.

import React, { Component, Fragment } from 'react';
import PropTypes from 'prop-types';

class Pagination extends Component {

  constructor(props) {
    super(props);
    const { totalRecords = null, pageLimit = 30, pageNeighbours = 0 } = props;

    this.pageLimit = typeof pageLimit === 'number' ? pageLimit : 30;
    this.totalRecords = typeof totalRecords === 'number' ? totalRecords : 0;

    // pageNeighbours can be: 0, 1 or 2
    this.pageNeighbours = typeof pageNeighbours === 'number'
      ? Math.max(0, Math.min(pageNeighbours, 2))
      : 0;

    this.totalPages = Math.ceil(this.totalRecords / this.pageLimit);

    this.state = { currentPage: 1 };
  }

}

Pagination.propTypes = {
  totalRecords: PropTypes.number.isRequired,
  pageLimit: PropTypes.number,
  pageNeighbours: PropTypes.number,
  onPageChanged: PropTypes.func
};

export default Pagination;

The Pagination component can take four special props as specified in the propTypes object.

  • totalRecords – indicates the total number of records to be paginated. It is required.

  • pageLimit – indicates the number of records to be shown per page. If not specified, it defaults to 30 as defined in the constructor().

  • pageNeighbours – indicates the number of additional page numbers to show on each side of the current page. The minimum value is 0 and the maximum value is 2. If not specified, it defaults to 0 as defined in the constructor(). The following image illustrates the effect of different values of the pageNeighbours prop:

Page Neighbours Illustration

  • onPageChanged – is a function that will be called with data of the current pagination state only when the current page changes.

In the constructor() function, we compute the total pages as follows:

this.totalPages = Math.ceil(this.totalRecords / this.pageLimit);

Notice that we use Math.ceil() here to ensure that we get an integer value for the total number of pages. This also ensures that the excess records are captured in the last page, especially in cases where the number of excess records is less than the number of records to be shown per page.
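To make the rounding behaviour concrete, here is the computation in isolation (the record counts are made-up):

```javascript
// Math.ceil guarantees a final page for any leftover records:
// 77 records at 30 per page -> pages 1-2 hold 30 each, page 3 holds 17.
const totalPages = (totalRecords, pageLimit) => Math.ceil(totalRecords / pageLimit);

console.log(totalPages(77, 30)); // 3
console.log(totalPages(60, 30)); // 2
console.log(totalPages(5, 30));  // 1
```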

Finally, we initialize the state with the currentPage property set to 1. We need this state property to internally keep track of the currently active page.

Next, we will go ahead and create the method for generating the page numbers. Modify the Pagination component as shown in the following code snippet:

const LEFT_PAGE = 'LEFT';
const RIGHT_PAGE = 'RIGHT';

/**
 * Helper method for creating a range of numbers
 * range(1, 5) => [1, 2, 3, 4, 5]
 */
const range = (from, to, step = 1) => {
  let i = from;
  const range = [];

  while (i <= to) {
    range.push(i);
    i += step;
  }

  return range;
}

class Pagination extends Component {

  // ...

  /**
   * Let's say we have 10 pages and we set pageNeighbours to 2
   * Given that the current page is 6
   * The pagination control will look like the following:
   * (1) < {4 5} [6] {7 8} > (10)
   * (x) => terminal pages: first and last page (always visible)
   * [x] => represents current page
   * {...x} => represents page neighbours
   */
  fetchPageNumbers = () => {

    const totalPages = this.totalPages;
    const currentPage = this.state.currentPage;
    const pageNeighbours = this.pageNeighbours;

    /**
     * totalNumbers: the total page numbers to show on the control
     * totalBlocks: totalNumbers + 2 to cover for the left(<) and right(>) controls
     */
    const totalNumbers = (this.pageNeighbours * 2) + 3;
    const totalBlocks = totalNumbers + 2;

    if (totalPages > totalBlocks) {

      const startPage = Math.max(2, currentPage - pageNeighbours);
      const endPage = Math.min(totalPages - 1, currentPage + pageNeighbours);

      let pages = range(startPage, endPage);

      /**
       * hasLeftSpill: has hidden pages to the left
       * hasRightSpill: has hidden pages to the right
       * spillOffset: number of hidden pages either to the left or to the right
       */
      const hasLeftSpill = startPage > 2;
      const hasRightSpill = (totalPages - endPage) > 1;
      const spillOffset = totalNumbers - (pages.length + 1);

      switch (true) {
        // handle: (1) < {5 6} [7] {8 9} (10)
        case (hasLeftSpill && !hasRightSpill): {
          const extraPages = range(startPage - spillOffset, startPage - 1);
          pages = [LEFT_PAGE, ...extraPages, ...pages];
          break;
        }

        // handle: (1) {2 3} [4] {5 6} > (10)
        case (!hasLeftSpill && hasRightSpill): {
          const extraPages = range(endPage + 1, endPage + spillOffset);
          pages = [...pages, ...extraPages, RIGHT_PAGE];
          break;
        }

        // handle: (1) < {4 5} [6] {7 8} > (10)
        case (hasLeftSpill && hasRightSpill):
        default: {
          pages = [LEFT_PAGE, ...pages, RIGHT_PAGE];
          break;
        }
      }

      return [1, ...pages, totalPages];
    }

    return range(1, totalPages);
  }

}



Here, we first define two constants: LEFT_PAGE and RIGHT_PAGE. These constants will be used to indicate points where we have page controls for moving left and right respectively.

We also define a helper range() function that can help us generate ranges of numbers. If you use a utility library like Lodash in your project, then you can use the _.range() function provided by Lodash instead. The following code snippet shows the difference between the range() function we just defined and the one from Lodash:

range(1, 5); // returns [1, 2, 3, 4, 5]
_.range(1, 5); // returns [1, 2, 3, 4]

Next, we define the fetchPageNumbers() method in the Pagination class. This method handles the core logic for generating the page numbers to be shown on the pagination control. We want the first page and last page to always be visible.

First, we define a couple of variables. totalNumbers represents the total page numbers that will be shown on the control. totalBlocks represents the total page numbers to be shown plus two additional blocks for left and right page indicators.

If totalPages is not greater than totalBlocks, we simply return a range of numbers from 1 to totalPages. Otherwise, we return the array of page numbers, with LEFT_PAGE and RIGHT_PAGE at points where we have pages spilling to the left and right respectively.

Notice however that our pagination control ensures that the first page and last page are always visible. The left and right page controls appear inwards.
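To see what the algorithm produces without rendering anything, here is the same logic extracted into a standalone sketch (a pure function mirroring fetchPageNumbers(), not the component itself):

```javascript
// Standalone sketch of the page-number logic; names mirror the component.
const LEFT_PAGE = 'LEFT';
const RIGHT_PAGE = 'RIGHT';

const range = (from, to) => {
  const result = [];
  for (let i = from; i <= to; i++) result.push(i);
  return result;
};

const pageNumbers = (totalPages, currentPage, pageNeighbours) => {
  const totalNumbers = (pageNeighbours * 2) + 3;
  const totalBlocks = totalNumbers + 2;

  if (totalPages <= totalBlocks) return range(1, totalPages);

  const startPage = Math.max(2, currentPage - pageNeighbours);
  const endPage = Math.min(totalPages - 1, currentPage + pageNeighbours);
  let pages = range(startPage, endPage);

  const hasLeftSpill = startPage > 2;
  const hasRightSpill = (totalPages - endPage) > 1;
  const spillOffset = totalNumbers - (pages.length + 1);

  if (hasLeftSpill && !hasRightSpill) {
    pages = [LEFT_PAGE, ...range(startPage - spillOffset, startPage - 1), ...pages];
  } else if (!hasLeftSpill && hasRightSpill) {
    pages = [...pages, ...range(endPage + 1, endPage + spillOffset), RIGHT_PAGE];
  } else {
    pages = [LEFT_PAGE, ...pages, RIGHT_PAGE];
  }

  return [1, ...pages, totalPages];
};

console.log(pageNumbers(10, 6, 2)); // [1, 'LEFT', 4, 5, 6, 7, 8, 'RIGHT', 10]
console.log(pageNumbers(10, 1, 2)); // [1, 2, 3, 4, 5, 6, 7, 'RIGHT', 10]
```

The first output matches the (1) < {4 5} [6] {7 8} > (10) illustration in the comment above.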

Now we will go ahead and add the render() method to enable us render the pagination control. Modify the Pagination component as shown in the following snippet:

class Pagination extends Component {

  // ...

  render() {

    if (!this.totalRecords || this.totalPages === 1) return null;

    const { currentPage } = this.state;
    const pages = this.fetchPageNumbers();

    return (
      <Fragment>
        <nav aria-label="Countries Pagination">
          <ul className="pagination">
            { pages.map((page, index) => {

              if (page === LEFT_PAGE) return (
                <li key={index} className="page-item">
                  <a className="page-link" href="#" aria-label="Previous" onClick={this.handleMoveLeft}>
                    <span aria-hidden="true">&laquo;</span>
                    <span className="sr-only">Previous</span>
                  </a>
                </li>
              );

              if (page === RIGHT_PAGE) return (
                <li key={index} className="page-item">
                  <a className="page-link" href="#" aria-label="Next" onClick={this.handleMoveRight}>
                    <span aria-hidden="true">&raquo;</span>
                    <span className="sr-only">Next</span>
                  </a>
                </li>
              );

              return (
                <li key={index} className={`page-item${ currentPage === page ? ' active' : ''}`}>
                  <a className="page-link" href="#" onClick={ this.handleClick(page) }>{ page }</a>
                </li>
              );

            }) }
          </ul>
        </nav>
      </Fragment>
    );
  }

}




Here, we generate the page numbers array by calling the fetchPageNumbers() method we created earlier. We then render each page number by mapping over the array. Notice that we register click event handlers on each rendered page number to handle clicks.

Also notice that the pagination control will not be rendered if the totalRecords prop was not passed in correctly to the Pagination component or in cases where there is only 1 page.

Finally, we will define the event handler methods. Modify the Pagination component as shown in the following snippet:

class Pagination extends Component {

  // ...

  componentDidMount() {
    this.gotoPage(1);
  }

  gotoPage = page => {
    const { onPageChanged = f => f } = this.props;

    const currentPage = Math.max(0, Math.min(page, this.totalPages));

    const paginationData = {
      currentPage,
      totalPages: this.totalPages,
      pageLimit: this.pageLimit,
      totalRecords: this.totalRecords
    };

    this.setState({ currentPage }, () => onPageChanged(paginationData));
  }

  handleClick = page => evt => {
    evt.preventDefault();
    this.gotoPage(page);
  }

  handleMoveLeft = evt => {
    evt.preventDefault();
    this.gotoPage(this.state.currentPage - (this.pageNeighbours * 2) - 1);
  }

  handleMoveRight = evt => {
    evt.preventDefault();
    this.gotoPage(this.state.currentPage + (this.pageNeighbours * 2) + 1);
  }

}

We define the gotoPage() method that modifies the state and sets the currentPage to the specified page. It ensures that the page argument has a minimum value of 1 and a maximum value of the total number of pages. It finally calls the onPageChanged() function that was passed in as prop, with data indicating the new pagination state.

When the component mounts, we simply go to the first page by calling this.gotoPage(1) as shown in the componentDidMount() lifecycle method.

Notice how we use (this.pageNeighbours * 2) in handleMoveLeft() and handleMoveRight() to slide the page numbers to the left and to the right respectively based on the current page number.
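The jump arithmetic can be checked in isolation. A small sketch (the clamping here stands in for what gotoPage() does in the component):

```javascript
// With pageNeighbours = 2, each arrow click slides the window by
// (2 * 2) + 1 = 5 pages, clamped to the valid page range.
const pageNeighbours = 2;
const totalPages = 10;
const clamp = page => Math.max(1, Math.min(page, totalPages));

const moveLeft = current => clamp(current - (pageNeighbours * 2) - 1);
const moveRight = current => clamp(current + (pageNeighbours * 2) + 1);

console.log(moveLeft(6));  // 1
console.log(moveRight(6)); // 10 (clamped from 11)
```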

Here is a demo of the interaction we have been able to achieve:

Left-Right Movement

The App Component

We will go ahead and modify the App.js file in the src directory. The App.js file should look like the following snippet:

import React, { Component } from 'react';
import Countries from 'countries-api';
import './App.css';

import Pagination from './components/Pagination';
import CountryCard from './components/CountryCard';

class App extends Component {

  state = { allCountries: [], currentCountries: [], currentPage: null, totalPages: null }

  componentDidMount() {
    const { data: allCountries = [] } = Countries.findAll();
    this.setState({ allCountries });
  }

  onPageChanged = data => {
    const { allCountries } = this.state;
    const { currentPage, totalPages, pageLimit } = data;

    const offset = (currentPage - 1) * pageLimit;
    const currentCountries = allCountries.slice(offset, offset + pageLimit);

    this.setState({ currentPage, currentCountries, totalPages });
  }

}


export default App;

Here we initialize the App component’s state with the following attributes:

  • allCountries – an array of all the countries in our app. Initialized to an empty array([]).

  • currentCountries – an array of all the countries to be shown on the currently active page. Initialized to an empty array([]).

  • currentPage – the page number of the currently active page. Initialized to null.

  • totalPages – the total number of pages for all the country records. Initialized to null.

Next, in the componentDidMount() lifecycle method, we fetch all the world countries using the countries-api package by invoking Countries.findAll(). We then update the app state, setting allCountries to contain all the world countries. You can see the countries-api documentation to learn more about the package.

Finally, we defined the onPageChanged() method, which will be called each time we navigate to a new page from the pagination control. This method will be passed to the onPageChanged prop of the Pagination component.

There are two lines that are worth paying attention to in this method. The first is this line:

const offset = (currentPage - 1) * pageLimit;

The offset value indicates the starting index for fetching the records for the current page. Using (currentPage - 1) ensures that the offset is zero-based. Let’s say for example that we are displaying 25 records per page and we are currently viewing page 5. Then the offset will be ((5 - 1) * 25 = 100).

In fact, if you are fetching records on demand from a database for example, this is a sample SQL query to show you how offset can be used:

SELECT * FROM `countries` LIMIT 100, 25

Since we are not fetching records on demand from a database or any external source, we need a way to extract the required chunk of records to be shown for the current page. That brings us to the second line:

const currentCountries = allCountries.slice(offset, offset + pageLimit);

Notice here that we use the Array.prototype.slice() method to extract the required chunk of records from allCountries by passing the offset as the starting index for the slice and (offset + pageLimit) as the index before which to end the slice.
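Putting both lines together outside of React (the 120-record array is made-up data):

```javascript
// 120 fake records, 25 per page, currently viewing page 5.
const records = Array.from({ length: 120 }, (_, i) => i + 1);
const pageLimit = 25;
const currentPage = 5;

// Zero-based starting index for the current page.
const offset = (currentPage - 1) * pageLimit;
console.log(offset); // 100

// slice() stops at the end of the array, so the last page simply
// holds the 20 leftover records.
const current = records.slice(offset, offset + pageLimit);
console.log(current.length);          // 20
console.log(current[0], current[19]); // 101 120
```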

Fetching records in real applications

In order to keep this tutorial as simple as possible, we did not fetch records from any external source. In a real application, you will probably be fetching records from a database or an API. The logic for fetching the records can easily go into the onPageChanged() method of the App component.

Let’s say we have a fictitious API endpoint /api/countries?page={current_page}&limit={page_limit}. The following snippet shows how we can fetch countries on demand from the API using the axios HTTP package:

onPageChanged = data => {
  const { currentPage, totalPages, pageLimit } = data;

  axios.get(`/api/countries?page=${currentPage}&limit=${pageLimit}`)
    .then(response => {
      const currentCountries = response.data.countries;
      this.setState({ currentPage, currentCountries, totalPages });
    });
}

Now we will go ahead and finish up the App component by adding the render() method. Modify the App component accordingly as shown in the following snippet:

class App extends Component {

  // other methods here ...

  render() {

    const { allCountries, currentCountries, currentPage, totalPages } = this.state;
    const totalCountries = allCountries.length;

    if (totalCountries === 0) return null;

    const headerClass = ['text-dark py-2 pr-4 m-0', currentPage ? 'border-gray border-right' : ''].join(' ').trim();

    return (
      <div className="container mb-5">
        <div className="row d-flex flex-row py-5">

          <div className="w-100 px-4 py-5 d-flex flex-row flex-wrap align-items-center justify-content-between">
            <div className="d-flex flex-row align-items-center">

              <h2 className={headerClass}>
                <strong className="text-secondary">{totalCountries}</strong> Countries
              </h2>

              { currentPage && (
                <span className="current-page d-inline-block h-100 pl-4 text-secondary">
                  Page <span className="font-weight-bold">{ currentPage }</span> / <span className="font-weight-bold">{ totalPages }</span>
                </span>
              ) }

            </div>

            <div className="d-flex flex-row py-4 align-items-center">
              <Pagination totalRecords={totalCountries} pageLimit={18} pageNeighbours={1} onPageChanged={this.onPageChanged} />
            </div>
          </div>

          { currentCountries.map(country => <CountryCard key={country.cca3} country={country} />) }

        </div>
      </div>
    );
  }

}




The render() method is quite straightforward. We render the total number of countries, the current page, the total number of pages, the Pagination control and then a CountryCard for each country in the current page.

Notice that we passed the onPageChanged() method we defined earlier to the onPageChanged prop of the Pagination control. This is very important for capturing page changes from the Pagination component. Also notice that we are displaying 18 countries per page.

At this point, the app should look like the following screenshot:

App Screenshot

Levelling up with some styles

You would have noticed that we have been adding some custom classes to the components we created earlier. We will go ahead and define some style rules for those classes in the src/App.scss file. The App.scss file should look like the following snippet:

/* Declare some variables */
$base-color: #ced4da;
$light-background: lighten(desaturate($base-color, 50%), 12.5%);

.current-page {
  font-size: 1.5rem;
  vertical-align: middle;
}

.country-card-container {
  height: 60px;
  cursor: pointer;
  position: relative;
  overflow: hidden;
}

.country-name {
  font-size: 0.9rem;
}

.country-region {
  font-size: 0.7rem;
  line-height: 1;
}

// Override some Bootstrap pagination styles
ul.pagination {
  margin-top: 0;
  margin-bottom: 0;
  box-shadow: 0 0 5px rgba(0, 0, 0, 0.1);

  { {
    color: saturate(darken($base-color, 50%), 5%) !important;
    background-color: saturate(lighten($base-color, 7.5%), 2.5%) !important;
    border-color: $base-color !important;
  }

  {
    padding: 0.75rem 1rem;
    min-width: 3.5rem;
    text-align: center;
    box-shadow: none !important;
    border-color: $base-color !important;
    color: saturate(darken($base-color, 30%), 10%);
    font-weight: 900;
    font-size: 1rem;

    &:hover {
      background-color: $light-background;
    }
  }
}

After adding the styles, the app should now look like the following screenshot:

App Screenshot with Styles


In this tutorial, we have been able to create a custom pagination widget in our React application. Although we didn’t make calls to any API or interact with any database back-end in this tutorial, your application may demand such interactions. You are not in any way limited to the approach used in this tutorial; you can extend it as you wish to suit the requirements of your application.

For the complete source code of this tutorial, check out the build-react-pagination-demo repository on GitHub. You can also get a live demo of this tutorial on Code Sandbox.


How to Integrate MongoDB Atlas and Segment using MongoDB Stitch

By Jesse Krasnostein

It can be quite difficult tying together multiple systems, APIs, and third-party services. Recently, we faced this exact problem in-house, when we wanted to get data from Segment into MongoDB so we could take advantage of MongoDB’s native analytics capabilities and rich query language. Using some clever tools we were able to make this happen in under an hour – the first time around.

While this post is detailed, the actual implementation should only take around 20 minutes. I’ll start off by introducing our cast of characters (what tools we used to do this) and then we will walk through how we went about it.

The Characters

To collect data from a variety of sources including mobile, web, cloud apps, and servers, developers have been turning to Segment since 2011. Segment consolidates all the events generated by multiple data sources into a single clickstream. You can then route the data to more than 200 integrations, all at the click of a button. Companies like DigitalOcean, New Relic, InVision, and Instacart all rely on Segment for different parts of their growth strategies.

To store the data generated by Segment, we turn to MongoDB Atlas – MongoDB’s database as a service. Atlas offers the best of MongoDB:

  • A straightforward query language that makes it easy to work with your data
  • Native replication and sharding to ensure data can live where it needs to
  • A flexible data model that allows you to easily ingest data from a variety of sources without needing to know precisely how the data will be structured (its shape)

All this is wrapped up in a fully managed service, engineered and run by the same team that builds the database, which means that as a developer you actually can have your cake and eat it too.

The final character is MongoDB Stitch, MongoDB’s serverless platform. Stitch streamlines application development and deployment with simple, secure access to data and services – getting your apps to market faster while reducing operational costs. Stitch allows us to implement server-side logic that connects third-party tools like Segment, with MongoDB, while ensuring everything from security to performance is optimized.

Order of Operations

We are going to go through the following steps. If you have completed any of these already, feel free to just cherry pick the relevant items you need assistance with:

  1. Setting up a Segment workspace
  2. Adding Segment’s JavaScript library to your frontend application – I’ve also built a ridiculously simple HTML page that you can use for testing
  3. Sending an event to Segment when a user clicks a button
  4. Signing up for MongoDB Atlas
  5. Creating a cluster, so your data has somewhere to live
  6. Creating a MongoDB Stitch app that accepts data from Segment and saves it to your MongoDB Atlas cluster

While this blog focuses on integrating Segment with MongoDB, the process we outline below will work with other APIs and web services. Join the community Slack and ask questions if you are trying to follow along with a different service.

Each time Segment sees new data, a webhook fires an HTTP POST request to Stitch. A Stitch function then handles the authentication of the request and, without performing any data manipulation, saves the body of the request directly to the database – ready for further analysis.

Setting up a Workspace in Segment

Head over to the Segment website and sign up for an account. Once complete, Segment will automatically create a Workspace for you. Workspaces allow you to collaborate with team members, control permissions, and share data sources across your whole team. Click through to the Workspace that you’ve just created.

To start collecting data in your Workspace, we need to add a source. In this case, I’m going to collect data from a website, so I’ll select that option, and on the next screen, Segment will have added a JavaScript source to my workspace. Any data that comes from our website will be attributed to this source. There is a blue toggle link I can click within the source that will give me the code I need to add to my website so it can send data to Segment. Take note of this as we will need it shortly.

Adding Segment to your Website

I mentioned a simple sample page I had created in case you want to test this implementation outside of other code you had been working on. You can grab it from this GitHub repo.

In my sample page, you’ll see I’ve copied and pasted the Segment code and dropped it in between my page’s <head> tags. You’ll need to do the equivalent with whatever code or language you are working in.

If you open that page in a browser, it should automatically start sending data to Segment. The easiest way to see this is by opening Segment in another window and clicking through to the debugger.

Clicking on the debugger button in the Segment UI takes you to a live stream of events sent by your application.

Customizing the events you send to Segment

The Segment library enables you to get as granular as you like with the data you send from your application.

As your application grows, you’ll likely want to expand the scope of what you track. Best practice requires you to put some thought into how you name events and what data you send. Otherwise different developers will name events differently and will send them at different times – read this post for more on the topic.

To get us started, I’m going to assume that we want to track every time someone clicks a favorite button on a web page. We are going to use some simple JavaScript to call Segment’s analytics tracking code and send an event called a “track” to the Segment API. That way, each time someone clicks our favorite button, we’ll know about it.

You’ll see at the bottom of my web page, that there is a jQuery function attached to the .btn class. Let’s add the following after the alert() function.

analytics.track("Favorited", {
  itemName: itemName
});
Now, refresh the page in your browser and click on one of the favorite buttons. You should see an alert box come up. If you head over to your debugger window in Segment, you’ll observe the track event streaming in as well. Pretty cool, right!

You probably noticed that the analytics code above is storing the data you want to send in a JSON document. You can add fields with more specific information anytime you like. Traditionally, this data would get sent to some sort of tabular data store, like MySQL or PostgreSQL, but then each time new information was added you would have to perform a migration to add a new column to your table. On top of that, you would likely have to update the object-relational mapping code that’s responsible for saving the event in your database. MongoDB is a flexible data store, which means there are no migrations or translations needed: we will store the data in the exact form you send it in.
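To illustrate the point, here are two hypothetical event documents (plain JavaScript objects standing in for stored documents; the field names are made up):

```javascript
// Two events captured at different stages of the app's life.
// In a tabular store, the second shape would require a schema migration;
// in MongoDB both can live in the same collection exactly as sent.
const earlyEvent = { event: "Favorited", itemName: "Italy" };
const laterEvent = { event: "Favorited", itemName: "Italy", userTier: "pro", position: 3 };

console.log(Object.keys(earlyEvent).length); // 2
console.log(Object.keys(laterEvent).length); // 4
```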

Getting Started with MongoDB Atlas and Stitch

As mentioned, we’ll be using two different services from MongoDB. The first, MongoDB Atlas, is a database as a service. It’s where all the data generated by Segment will live, long-term. The second, MongoDB Stitch, is going to play the part of our backend. We are going to use Stitch to set up an endpoint where Segment can send data; once received, Stitch validates that the request was sent from Segment and then coordinates all the logic to save this data into MongoDB Atlas for later analysis and other activities.

First Time Using MongoDB Atlas?

Click here to set up an account in MongoDB Atlas.

Once you’ve created an account, we are going to use Atlas’s Cluster Builder to set up our first cluster (every MongoDB Atlas deployment is made up of multiple nodes that help with high availability, that’s why we call it a cluster). For this demonstration, we can get away with an M0 instance – it’s free forever and great for sandboxing. It’s not on dedicated infrastructure, so for any production workloads, it’s worth investigating other instance sizes.

When the Cluster Builder appears on screen, the default cloud provider is AWS, and the selected region is North Virginia. Leave these as is. Scroll down and click on the Cluster Tier section, and this will expand to show our different sizing options. Select M0 at the top of the list.

You can also customize your cluster’s name, by clicking on the Cluster Name section.

Once complete, click Create Cluster. It takes anywhere from 7-10 minutes to set up your cluster so maybe go grab a drink, stretch your legs and come back… When you’re ready, read on.

Creating a Stitch Application

While the Cluster is building, on the left-hand menu, click Stitch Apps. You will be taken to the stitch applications page, from where you can click Create New Application.

Give your application a name, in this case, I call it “SegmentIntegration” and link it to the correct cluster. Click Create.

Once the application is ready, you’ll be taken to the Stitch welcome page. In this case, we can leave anonymous authentication off.

We do need to enable access to a MongoDB collection to store our data from Segment. For the database name I use “segment”, and for the collection, I use “events”. Click Add Collection.

Next, we will need to add a service. In this case, we will be manually configuring an HTTP service that can communicate over the web with Segment’s service. Scroll down and click Add Service.

You’ll jump one page and should see a big sign saying, “This application has no services”… not for long. Click Add a Service… again.

From the options now visible, select HTTP and then give the service a name. I’ll use “SegmentHTTP”. Click Add Service.

Next, we need to add an Incoming Webhook. A Webhook is an HTTP endpoint that will continuously listen for incoming calls from Segment, and when called, it will trigger a function in Stitch to run.

Click Add Incoming Webhook

Leave the default name as is and change the following fields:

  • Turn on Respond with Result as this will return the result of our insert operation
  • Change Request Validation to “Require Secret as Query Param”
  • Add a secret code to the last field on the page. Important Note: We will refer to this as our “public secret” as it is NOT protected from the outside world; it’s more of a simple validation that Stitch can use before running the Function we will create. Shortly, we will also define a “private secret” that will not be visible outside of Stitch and Segment.

Finally, click “Save”.

Define Request Handling Logic with Functions in Stitch

We define custom behavior in Stitch using functions: simple JavaScript (ES6) code that can be used to implement logic and work with all the different services integrated with Stitch.

Thankfully, we don’t need to do too much work here. Stitch already has the basics set up for us. We need to define logic that does the following things:

  1. Grabs the request signature from the HTTP headers
  2. Uses the signature to validate the request's authenticity (i.e., that it came from Segment)
  3. Writes the request to our collection in MongoDB Atlas

Getting an HTTP Header and Generating an HMAC Signature

Add the following to line 8, after the closing curly brace }.

const signature = payload.headers['X-Signature'];

And then use Stitch’s built-in Crypto library to generate a digest that we will compare with the signature.

const digest = utils.crypto.hmac(payload.body.text(), context.values.get("segment_shared_secret"), "sha1", "hex");

A lot is happening here so I’ll step through each part and explain. Segment signs requests with a signature that is a combination of the HTTP body and a shared secret. We can attempt to generate an identical signature using the utils.crypto.hmac function if we know the body of the request, the shared secret, the hash function Segment uses to create its signatures, and the output format. If we can replicate what is contained within the X-Signature header from Segment, we will consider this to be an authenticated request.

Note: This will be using a private secret, not the public secret we defined in the Settings page when we created the webhook. This secret should never be publicly visible. Stitch allows us to define values that we can use for storing variables like API keys and secrets. We will do this shortly.

Validating that the Request is Authentic and Writing to MongoDB Atlas

To validate the request, we simply need to compare the digest and the signature. If they’re equivalent, then we will write to the database. Add the following code directly after we generate the digest.

if (digest == signature) {
  // Request is valid
} else {
  // Request is invalid
  console.log("Request is invalid");
}

Finally, we will augment the if statement with the appropriate behavior needed to save our data. On the first line of the if statement, we will get our “mongodb-atlas” service. Add the following code:

let mongodb = context.services.get("mongodb-atlas");

Next, we will get our database collection so that we can write data to it.

let events = mongodb.db("segment").collection("events");

And finally, we write the data.

events.insertOne(body);
Click the Save button on the top left-hand side of the code editor. At the end of this, our entire function should look something like this:

exports = function(payload) {

  var queryArg = payload.query.arg || '';
  var body = {};

  if (payload.body) {
    body = JSON.parse(payload.body.text());
  }

  // Get x-signature header and create digest for comparison
  const signature = payload.headers['X-Signature'];
  const digest = utils.crypto.hmac(payload.body.text(),
    context.values.get("segment_shared_secret"), "sha1", "hex");

  // Only write the data if the digest matches Segment's x-signature!
  if (digest == signature) {

    let mongodb = context.services.get("mongodb-atlas");

    // Set the collection up to write data
    let events = mongodb.db("segment").collection("events");

    // Write the data
    events.insertOne(body);

  } else {
    console.log("Digest didn't match");
  }

  return queryArg + ' ' + body.msg;
};

Defining Rules for a MongoDB Atlas Collection

Next, we will need to update our rules that allow Stitch to write to our database collection. To do this, in the left-hand menu, click on “mongodb-atlas”.

Select the collection we created earlier, “segment.events”. This will display the Field Rules for our Top-Level Document. We can use these rules to define what conditions must exist for our Stitch function to be able to Read or Write to the collection.

We will leave the read rules as is for now, as we will not be reading directly from our Stitch application. We will, however, change the write rule to “evaluate” so our function can write to the database.

Change the contents of the “Write” box:

  • Specify an empty JSON document {} as the write rule at the document level.
  • Set Allow All Other Fields to Enabled, if it is not already set.

Click Save at the top of the editor.

Adding a Secret Value in MongoDB Stitch

As is common practice, API keys and passwords are stored as variables, meaning they are never committed to a code repo – visibility is reduced. Stitch allows us to create private variables (values) that may be accessed only by incoming webhooks, rules, and named functions.

We do this by clicking Values on the Stitch menu, clicking Create New Value, and giving our value a name – in this case segment_shared_secret (we will refer to this as our private secret). We enter the contents in the large text box. Make sure to click Save once you’re done.

Getting Our Webhook URL

To copy the webhook URL across to Segment from Stitch, navigate using the Control menu: Services > SegmentHTTP > webhook0 > Settings (at the top of the page). Now copy the “Webhook URL”.

In our case, the webhook URL looks something like this:

Adding the Webhook URL to Segment

Head over to Segment and log in to your workspace. In destinations, we are going to click Add Destination.

Search for Webhook in the destinations catalog and click Webhooks. Once through to the next page, click Configure Webhooks. Then select any sources from which you want to send data. Once selected, click Confirm Source.

Next, we will find ourselves on the destination settings page. We will need to configure our connection settings. Click the box that says Webhooks (max 5).

Copy your webhook URL from Stitch, and make sure you append your public secret to the end of it using the following syntax:

Initial URL:

Add the following to the end: ?secret=

Final URL:

Click Save

We also need to tell Segment what our private secret is so it can create a signature that we can verify within Stitch. Do this by clicking on the Shared Secret field and entering the same value you used for the segment_shared_secret. Click Save.

Finally, all we need to do is activate the webhook by clicking the switch at the top of the Destination Settings page:

Generate Events, and See Your Data in MongoDB

Now, all we need to do is use our test HTML page to generate a few events that get sent to Segment – we can use Segment’s debugger to ensure they are coming in. Once we see them flowing, they will also be going across to MongoDB Stitch, which will be writing the events to MongoDB Atlas.

We’ll take a quick look using Compass to ensure our data is visible. Once we connect to our cluster, we should see a database called “segment”. Click on segment and then you’ll see our collection called “events”. If you click into this you’ll see a sample of the data generated by our frontend!

The End

Thanks for reading through – hopefully you found this helpful. If you’re building new things with MongoDB Stitch we’d love to hear about it. Join the community slack and ask questions in the #stitch channel!


Debug JavaScript in Production with Source Maps

By Ryan Goldman

These days, the code you use to write your application isn’t usually the same code that’s deployed in production and interpreted by browsers. Perhaps you’re writing your source code in a language that “compiles” to JavaScript, like CoffeeScript, TypeScript, or the latest standards-body approved version of JavaScript, ECMAScript 2015 (ES6). Or, even more likely, you’re minifying your source code in order to reduce the file size of your deployed scripts. You’re probably using a tool like UglifyJS or Google Closure Compiler.

Such transformation tools are often referred to as transpilers — tools that transform source code from one language into either the same language or another similar high-level language. Their output is transpiled code that, while functional in the target environment (e.g., browser-compatible JavaScript), typically bears little resemblance to the code from which it was generated.

This presents a problem: when debugging code in the browser or inspecting stack traces generated from errors in your application, you are looking at transpiled and (typically) hard-to-read JavaScript, not the original source code you used to write your application. This can make JavaScript error tracking hard.
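As a quick illustration of why transpiled output is hard to read, here is a hand-made sketch of what a minifier might do to a trivial function (the exact output varies by tool and settings):

```javascript
// Original, readable source:
function greetUser(userName) {
  var greeting = "Hello, " + userName + "!";
  return greeting;
}

// Roughly what a minifier might emit for the same logic:
// identifier renamed, whitespace and the temporary variable removed.
function a(b){return"Hello, "+b+"!"}

console.log(greetUser("Ada")); // "Hello, Ada!"
console.log(a("Ada"));         // "Hello, Ada!"
```

A stack trace from the second version points at `a` and `b`, names that mean nothing to the developer debugging it.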

The solution to this problem is a nifty browser feature called source maps. Let’s learn more.

Source Maps

Source maps are JSON files that contain information on how to map your transpiled source code back to its original source. If you’ve ever done programming in a compiled language like Objective-C, you can think of source maps as JavaScript’s version of debug symbols.

Here’s an example source map:

{
    "version": 3,
    "file": "app.min.js",
    "sourceRoot": "",
    "sources": ["foo.js", "bar.js"],
    "names": ["src", "maps", "are", "fun"],
    "mappings": "AAgBC,SAAQ,CAAEA"
}

You’ll probably never have to create these files yourself, but it can’t hurt to understand what’s inside:

  • version: The version of the source map spec this file represents (should be “3”)
  • file: The generated filename with which this source map is associated
  • sourceRoot: The URL root from which all sources are relative (optional)
  • sources: An array of URLs to the original source files
  • names: An array of variable/method names found in your code
  • mappings: The actual source code mappings, represented as base64-encoded VLQ values

If this seems like a lot to remember, don’t worry. We’ll explain how to use tools to generate these files for you.
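Since a source map is just JSON, its fields can be inspected like any other object. Here we parse the sample map from above:

```javascript
// Parse the sample source map from above and inspect its fields.
const sourceMap = JSON.parse(`{
  "version": 3,
  "file": "app.min.js",
  "sourceRoot": "",
  "sources": ["foo.js", "bar.js"],
  "names": ["src", "maps", "are", "fun"],
  "mappings": "AAgBC,SAAQ,CAAEA"
}`);

console.log(sourceMap.version);        // 3
console.log(sourceMap.sources.length); // 2 original source files
console.log(sourceMap.names[0]);       // "src"
```

The heavy lifting is all in the `mappings` string, which tools decode for you.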


To indicate to browsers that a source map is available for a transpiled file, the sourceMappingURL directive needs to be added to the end of that file:

// app.min.js
$(function () {
    // your application ...
});
//# sourceMappingURL=/path/to/

When modern browsers see the sourceMappingURL directive, they download the source map from the provided location and use the mapping information inside to corroborate the running client-code with the original source code. Here’s what it looks like when we step through Sentry’s original ES6 + JSX code in Firefox using source maps (note: browsers only download and apply source maps when developer tools are open. There is no performance impact for regular users):

Generating the Source Map

Okay, we now roughly know how source maps work and how to get the browser to download and use them. But how do we go about actually generating them and referencing them from our transpiled files?

Good news! Basically every modern JavaScript transpiler has a command-line option for generating an associated source map. Let’s take a look at a few common options.


UglifyJS

UglifyJS is a popular tool for minifying your source code for production. It can dramatically reduce the size of your files by eliminating whitespace, rewriting variable names, removing dead code branches, and more.

If you are using UglifyJS to minify your source code, the following command from UglifyJS 3.3.x will additionally generate a source map mapping the minified code back to the original source:

$ uglifyjs app.js -o app.min.js --source-map

If you take a look at the generated output file, app.min.js, you’ll notice that the final line includes the sourceMappingURL directive pointing to our newly generated source map:

//# sourceMappingURL=

Note that this is a relative URL. In order for the browser to download the associated source map, it must be uploaded to and served from the same destination directory as the Uglified file, app.min.js. That means if app.min.js is served from a given directory on your server, your source map must be served from that same directory.

Relative URLs aren’t the only way of specifying sourceMappingURL. You can give Uglify an absolute URL via the --source-map-url option. Or you can even include the entire source map inline, although that is not recommended. Take a look at Uglify’s command-line options for more information.


Webpack

Webpack is a powerful build tool that resolves and bundles your JavaScript modules into files fit for running in the browser. The Sentry project itself uses Webpack (along with Babel) to assemble and transpile its ES6 + JSX codebase into browser-compatible JavaScript.

Generating source maps with Webpack is super simple. In Webpack 4, specify the devtool property in your config:

// webpack.config.js
module.exports = {
    // ...
    entry: {
      "app": "src/app.js"
    },
    output: {
      path: path.join(__dirname, 'dist'),
      filename: "[name].js",
      sourceMapFilename: "[name]"
    },
    devtool: "source-map"
    // ...
};

Now when you run the webpack command-line program, Webpack will assemble your files, generate a source map, and reference that source map in the built JavaScript file via the sourceMappingURL directive.

Private Source Maps

Until this point, all of our examples have assumed that your source maps are publicly available and served from the same server as your executing JavaScript code. In that case, any developer could use your source maps to obtain your original source code.

To prevent this, instead of providing a publicly-accessible sourceMappingURL, you can serve your source maps from a server that is only accessible to your development team. An example would be a server that is only reachable from your company’s VPN.

//# sourceMappingURL=http://company.intranet/app/static/

When a non-team member visits your application with developer tools open, they will attempt to download this source map but get a 404 (or 403) HTTP error, and the source map will not be applied.

Source Maps and Sentry

If you use Sentry to track exceptions in your client-side JavaScript applications, we have good news! Sentry automatically fetches and applies source maps to stack traces generated by errors. This means that you’ll see your original source code and not minified and/or transpiled code. Here’s how the stack trace of your unminified code looks in Sentry:

Basically, if you’ve followed the steps in this little guide, and your deployment now serves generated source maps alongside transpiled files whose sourceMappingURL directives point to those source maps, there’s nothing more to do. Sentry will do the rest.

Sentry and Offline Source Maps

Alternatively, instead of hosting source maps yourself, you can upload them directly to Sentry.

Why would you want to do that? A few reasons:

  • In case Sentry has difficulty reaching your servers (e.g., source maps are hosted on VPN)
  • Overcoming latency — source maps would be inside Sentry before an exception is thrown
  • You’re developing an application that runs natively on a device (e.g., using React Native or PhoneGap, whose source code/maps cannot be reached over the internet)
  • Avoiding version mismatches where a fetched source map doesn’t match the code where the error was thrown

Wrapping Up

You just learned how source maps can save your skin by making your transpiled code easier to debug in production. Since your build tools likely already support source map generation, it won’t take very long to configure, and the results are very much worth it.

That’s it! Happy error monitoring.


3 Useful TypeScript Tips for Angular

By Jecelyn Yeen

These are the 3 tips I found pretty handy while working with Typescript:

  1. Eliminating the need to import interfaces
  2. Making all interface properties optional
  3. Stop throwing me error, I know what I’m doing

Though I discovered these while working on an Angular application, none of these tips are Angular specific; they’re just Typescript.

Eliminating the need to import interfaces

I like interfaces. However, I don’t like to import them every time. Although Visual Studio Code has an auto-import feature, I don’t like my source files being “polluted” by multiple lines of imports just for the purpose of strong typing.

This is how we do it normally.

// api.model.ts
export interface Customer {
    id: number;
    name: string;
}

export interface User {
    id: number;
    isActive: boolean;
}

// using the interfaces
import { Customer, User } from './api.model'; // this line will grow longer if there's more interfaces used

export class MyComponent {
    cust: Customer;
}

Solution 1: Using namespace

By using namespace, we can eliminate the needs to import interfaces files.

// api.model.ts
namespace ApiModel {
    export interface Customer {
        id: number;
        name: string;
    }

    export interface User {
        id: number;
        isActive: boolean;
    }
}

// using the interfaces
export class MyComponent {
    cust: ApiModel.Customer;
}

Nice, right? Using a namespace also helps you better organize and group the interfaces. Please note that you can split the same namespace across many files.

Let’s say you have another file called api.v2.model.ts. You add in new interfaces, but you want to use the same namespace.

// api.v2.model.ts
namespace ApiModel {
    export interface Order {
        id: number;
        total: number;
    }
}
You can definitely do so. To use the newly created interface, just use them like the previous example.

// using the interfaces with same namespaces but different files
export class MyComponent {
    cust: ApiModel.Customer;
    order: ApiModel.Order;
}

Here is the detailed documentation on Typescript namespacing.

Solution 2: Using d file

The other way to eliminate imports is to create a Typescript file ending with .d.ts. “d” stands for declaration file in Typescript (more explanation here).

// api.model.d.ts
// you don't need to export the interface in a d file
interface Customer {
    id: number;
    name: string;
}
Use it as normal without the need to import it.

// using the interfaces of d file
export class MyComponent {
    cust: Customer;
}

I recommend solution 1 over solution 2 because:

  • d files are usually used for external, 3rd-party declarations
  • namespaces allow us to better organize the files

Making all interface properties optional

It’s quite common to use the same interface for CRUD. Let’s say you have a customer interface: during creation, all fields are mandatory, but during update, all fields are optional. Do you need to create two interfaces to handle this scenario?

Here is the interface

// api.model.ts
export interface Customer {
    id: number;
    name: string;
    age: number;
}
Solution: Use Partial

Partial is a type that makes all properties of an object optional. The declaration is included in the default d file, lib.es5.d.ts.

// lib.es5.d.ts
type Partial<T> = {
    [P in keyof T]?: T[P];
};
How can we use that? Look at the code below:

// using the interface but make all fields optional
import { Customer } from './api.model';

export class MyComponent {
    cust: Partial<Customer>;

    ngOnInit() {
        this.cust = { name: 'jane' }; // no error thrown because all fields are optional
    }
}
If you can’t find the Partial declaration, you may create a d file yourself (e.g. util.d.ts) and copy the code above into it.

For more advanced type usage in Typescript, you can read here.

Stop throwing me error, I know what I’m doing

As a Javascript-turned-Typescript developer, one might find Typescript errors annoying sometimes. In some scenarios, you just want to tell Typescript, “hey, I know what I am doing, please leave me alone.”

Solution: Use @ts-ignore comment

From Typescript version 2.6 onwards, you can do so by using the @ts-ignore comment to suppress errors.

For example, Typescript will throw error “Unreachable code detected” in this following code:

if (false) {
    console.log('unreachable'); // error: Unreachable code detected
}
You can suppress that by using comment @ts-ignore

if (false) {
    // @ts-ignore
    console.log('unreachable'); // no more error
}
Find out more details in the Typescript 2.6 release notes.

Of course, I will suggest you to always try fix the error before ignoring it!


Typescript is good for your (code) health. It has pretty decent documentation. I like the fact that they have comprehensive What’s New documentation for every release. It’s an open source project on GitHub if you would like to contribute. The longer I work with Typescript, the more I love and appreciate it.

That’s it, happy coding!