Build Your Own Giphy Clone with React and Cloudinary

By Chilezie Unachukwu

We’ve all seen and used them: animated GIFs. They are funny, distracting, and seem to be everywhere. We have Alex Chung and Jace Cooke to thank for this incredible distraction and waste of everyone’s time. Seriously. It’s a distraction. The initial idea came to them over breakfast. There are now over 250 million users, and the company raised $55 million in funding at a $300 million valuation.


Founder Alex Chung – 2016 SXSW

Wouldn’t it be awesome if you could build your own Giphy Alternative in about the same amount of time it takes to eat your frosted flakes?

What We’ll Build

You have been tasked with creating a Giphy alternative in record time and immediately wonder how to pull that off. The client can’t wait, and you need something to show them ASAP.
Well, look no further. In this tutorial, we will build our own version of Giphy, using Cloudinary to host and transform our files. We’ll also be able to create tiny GIFs that we can share with the world.
By the end of this tutorial, our app will have the following features:

  • Users should be able to sign up and log in.
  • Registered users should be able to upload animated GIFs (hosted GIFs included) via the platform.
  • Registered and unregistered users should be able to view all the animated GIFs on a dashboard.
  • Users should be able to share those GIFs on Facebook and Twitter.
  • Users will be able to convert videos to GIFs and download them.

Shall we begin?

Setting Up

We will build our app with React, setting it up using the create-react-app tool. Just navigate to your dev folder and run the commands below in your terminal.

create-react-app cliphy
cd cliphy 

Let’s use Bootstrap and the Bootstrap Material Design theme to style the application.
For Bootstrap, open up index.html in the public folder of the application and add the following line where you would normally put your styles:

<link href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" rel="stylesheet">

Grab the CSS for the bootstrap material design, copy it to your public/css folder and reference it in your index.html:

<link rel="stylesheet" href="%PUBLIC_URL%/css/bootstrap-material-design.min.css">

Authentication

Install auth0-js and a few supporting modules with the following command:

npm i auth0-js react-router@3.0.0 jwt-decode axios

  • auth0-js – for authentication
  • react-router – for routing within our app
  • jwt-decode – for decoding the JSON Web Token in our app
  • axios – for making network requests

Create components and utils folders under your src directory. In the utils folder, create a file AuthService.js and add the authentication code to it. (See the ReactJS Authentication Tutorial by unicodeveloper to ensure you are on the right track.)
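The linked tutorial walks through the full AuthService.js. As a rough sketch of the piece that matters most here, isLoggedIn usually boils down to checking whether the stored token’s exp claim is still in the future. This hypothetical helper shows that check using built-ins only (in the real service, jwt-decode does the decoding for you):

```javascript
// Hypothetical sketch of the expiry check behind isLoggedIn().
// Assumes the token was stored (e.g. in localStorage) after login.

// Decode a JWT's payload segment without verifying its signature;
// this is enough to read claims like `exp` on the client.
// (Node's Buffer is used here; in the browser you'd use atob.)
function decodeJwtPayload(token) {
  const payload = token.split('.')[1];
  return JSON.parse(Buffer.from(payload, 'base64').toString('utf8'));
}

// A token counts as valid while its `exp` claim (in seconds) is in the future.
function isTokenValid(token, nowMs = Date.now()) {
  try {
    const { exp } = decodeJwtPayload(token);
    return typeof exp === 'number' && exp * 1000 > nowMs;
  } catch (e) {
    return false;
  }
}
```

In the real AuthService, login() hands off to auth0-js and logout() clears the stored token; the tutorial linked above covers those pieces.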

Creating Our Components

First, let’s add navigation and then create the other components that we need in our application. Create a Nav.js file in the components folder and add the following piece of code to it.

import React, { Component } from 'react';
import { Link } from 'react-router';
import { login, logout, isLoggedIn } from '../utils/AuthService';
import { uploadWidget } from '../utils/WidgetHelper';

import '../App.css';

class Nav extends Component {

    render() {
        return (
            <nav className="navbar navbar-inverse">
                <div className="container-fluid">
                    <div className="navbar-header">
                        <Link className="navbar-brand" to="/">Cliphy</Link>
                    </div>
                    <div>
                        <ul className="nav navbar-nav">
                            <li>
                                <Link to="/">All Gifs</Link>
                            </li>
                            <li>
                                <Link to="/create">Create Gif</Link>
                            </li>
                        </ul>
                        <ul className="nav navbar-nav navbar-right">
                            <li>
                                {
                                    (isLoggedIn()) ? <button type="button" className="btn btn-raised btn-sm btn-default" >Upload Gif</button> : ''
                                }
                            </li>
                            <li>
                                {
                                    (isLoggedIn()) ?
                                        (
                                            <button type="button" className="btn btn-raised btn-sm btn-danger" onClick={() => logout()}>Log out </button>
                                        ) : (
                                            <button className="btn btn-sm btn-raised btn-default" onClick={() => login()}>Log In</button>
                                        )
                                }
                            </li>
                        </ul>
                    </div>
                </div>
            </nav>
        )
    }
}
export default Nav;

Go ahead and create a Dashboard.js file and add this code to it:

import React, { Component } from 'react';
import { Link } from 'react-router';
import axios from 'axios';

import Nav from './Nav';

class Dashboard extends Component {
    state = {
        gifs: []
    };

    render() {
        const { gifs } = this.state;
        return (
            <div>
                <Nav />
                <div className="row">
                    <h3 className="col-md-12"> The Dashboard</h3>
                </div>
            </div>
        );
    }
}

export default Dashboard;

Let’s create another component named Create.js. This component is just a placeholder for now; we’ll come back to it later.
components/Create.js

import React, { Component } from 'react';
import Nav from './Nav';

class Create extends Component {
    // Placeholder render so the /create route can mount before we flesh this out
    render() {
        return <Nav />;
    }
}

export default Create;

Uploading GIFs

As mentioned earlier, we will use Cloudinary to handle our images, including uploading and manipulating them. To start, create an account on cloudinary.com.
Uploading with Cloudinary is a breeze using the simple Upload Widget, which lets you upload files from various sources, so we are going to reference it in our application. Open your index.html file and add the following code to the bottom of the page.

<script src="//widget.cloudinary.com/global/all.js" type="text/javascript"></script> 

Now, create a file called WidgetHelper.js in the utils folder and add the following piece of code to it.

export function uploadWidget(cloudinarySettings, cb) {
    window.cloudinary
        .openUploadWidget(cloudinarySettings, (err, res) => {
            if (err) {
                console.error(err);
            }
            cb(res);
        });
}

We will make some changes to our Nav.js to trigger the upload widget. Modify your Nav.js file to look like this:

import React, { Component } from 'react';
import { Link } from 'react-router';
import { login, logout, isLoggedIn } from '../utils/AuthService';
import { uploadWidget } from '../utils/WidgetHelper';
import '../App.css';
import Create from './Create';
class Nav extends Component {
    uploadGif() {
        let cloudinarySettings = {
            cloud_name: '<CLOUD_NAME>',
            upload_preset: '<UPLOAD_PRESET>',
            tags: ['cliphy'],
            sources: ['local', 'url', 'google_photos', 'facebook'],
            client_allowed_formats: ['gif'],
            keep_widget_open: true,
            theme: 'minimal',
        }
        uploadWidget(cloudinarySettings, (res) => {
            console.log(res);
        });
    }
    render() {
        return (
            <nav className="navbar navbar-inverse">
                <div className="container-fluid">
                    <div className="navbar-header">
                        <Link className="navbar-brand" to="/">Cliphy</Link>
                    </div>
                    <div>
                        <ul className="nav navbar-nav">
                            <li>
                                <Link to="/">All Gifs</Link>
                            </li>
                            <li>
                                <Link to="/create">Create Gif</Link>
                            </li>
                        </ul>
                        <ul className="nav navbar-nav navbar-right">
                            <li>
                                {
                                    (isLoggedIn()) ? <button type="button" className="btn btn-raised btn-sm btn-default" onClick={this.uploadGif}>Upload Gif</button> : ''
                                }
                            </li>
                            <li>
                                {
                                    (isLoggedIn()) ?
                                        (
                                            <button type="button" className="btn btn-raised btn-sm btn-danger" onClick={() => logout()}>Log out </button>
                                        ) : (
                                            <button className="btn btn-sm btn-raised btn-default" onClick={() => login()}>Log In</button>
                                        )
                                }
                            </li>
                        </ul>
                    </div>
                </div>
            </nav>
        )
    }
}
export default Nav;

The cloud_name and upload_preset options are the only mandatory properties required to make uploads work, but we have set a few more. Most important to note for now is tags, which we will use to work some magic later.
Now, log in to the application and click the upload button to upload a new GIF file. Note that I have restricted the type of file that can be uploaded using the client_allowed_formats option of the upload widget.
You can create a new upload preset from your Cloudinary dashboard; make sure it is an unsigned one. You can also view your uploads from the dashboard.
We have added a callback to our uploadWidget helper, which takes in the result object, so we can work with it.
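That result is an array of upload-info objects, one per file, each carrying fields such as public_id and secure_url from Cloudinary’s upload response. As a small illustration (the helper name is mine), pulling out just the IDs could look like:

```javascript
// Hypothetical helper: reduce the upload widget's result array to the
// public IDs of the uploaded files. The field name matches Cloudinary's
// upload response (`public_id`).
function publicIdsFrom(results) {
  if (!Array.isArray(results)) return [];
  return results
    .filter((r) => r && r.public_id)
    .map((r) => r.public_id);
}
```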

Displaying Our GIFs

We have uploaded GIFs, but they are nowhere to be found on our dashboard. There are several ways to display them, but since we are using React, we are in luck.
Cloudinary provides a simple React component that handles the display of all our files; all we need to supply is the publicId of each file in the cloud. Remember the tags property we mentioned during upload? Here’s where we perform some magic with it.
First off, let us install the Cloudinary React component.

npm install cloudinary-react

With this installed, we can retrieve all the uploaded images that bear the tag cliphy.
There’s a simple way to do this: make a GET request to a URL such as http://res.cloudinary.com/[CLOUD_NAME]/image/list/cliphy.json. Doing so returns details about each GIF we have uploaded, including its publicId.
Note that the last part of the URL is the tag name we supplied during upload.
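Since that list URL is just the cloud name plus the tag, it is worth keeping the construction in a small pure function (a hypothetical helper, not part of any Cloudinary SDK) instead of hard-coding the string in the component:

```javascript
// Build the client-side resource list URL for a given tag.
// Pattern: http://res.cloudinary.com/<cloud_name>/image/list/<tag>.json
function gifListUrl(cloudName, tag) {
  return `http://res.cloudinary.com/${cloudName}/image/list/${tag}.json`;
}
```

You would then call, for example, axios.get(gifListUrl('my-cloud', 'cliphy')).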
Now let us modify components/Dashboard.js to look like this:

import React, { Component } from 'react';
import { Link } from 'react-router';
import { CloudinaryContext, Transformation, Image } from "cloudinary-react";
import axios from 'axios';
import { isLoggedIn } from '../utils/AuthService';
import Nav from './Nav';
class Dashboard extends Component {
    state = {
        gifs: []
    };
    getGifs() {
        axios.get('http://res.cloudinary.com/[CLOUD_NAME]/image/list/cliphy.json')
            .then(res => {
                this.setState({ gifs: res.data.resources })
            });
    }
    componentDidMount() {
        this.getGifs();
    }
    render() {
        const { gifs } = this.state;
        return (
            <div>
                <Nav />
                <div className="row">
                    <h3 className="col-md-12"> The Dashboard</h3>
                    <CloudinaryContext cloudName="[CLOUD_NAME]">
                        {
                            gifs.map((data, index) => (
                                <div className="col-md-4 col-sm-6 col-xs-12" key={index}>
                                    <div className="panel panel-default">
                                        <div className="panel-body">
                                            <div className="embed-responsive embed-responsive-16by9">
                                                <Image className="img-responsive" publicId={data.public_id}></Image>
                                            </div>
                                        </div>
                                        <div className="panel-footer">
                                        </div>
                                    </div>
                                </div>
                            ))
                        }
                    </CloudinaryContext>
                </div>
            </div>
        );
    }
}
export default Dashboard;

You should now see all the GIFs displayed in a very beautiful grid.

Sharing to Social Media

Now we will implement sharing on social media. First, install the react-share package:

npm install react-share

Now, copy and paste the following code at the top of the components/Dashboard.js file.

import { ShareButtons } from 'react-share';

const {
    FacebookShareButton,
    TwitterShareButton,
} = ShareButtons;

Now add the following piece of code within the div with class panel-footer.

Share on:
<TwitterShareButton className="label label-info" title={"Cliphy"} url={`http://res.cloudinary.com/[CLOUD_NAME]/image/upload/${data.public_id}.gif`}>
  Twitter
</TwitterShareButton>
<FacebookShareButton className="label label-default" url={`http://res.cloudinary.com/[CLOUD_NAME]/image/upload/${data.public_id}.gif`}>
  Facebook
</FacebookShareButton>

On reload, you should now have share buttons for both Facebook and Twitter.

Making Our Own GIFs

It has been nice creating our own lovely GIF platform so far, but it would be even more awesome to create our own GIFs from video files and save them to our computers.
Let’s build a GIF maker into our application and see how Cloudinary takes away much of the effort that would otherwise be required.
Go ahead and open the empty components/Create.js file we created earlier and add the following piece of code to it.

import React, { Component } from 'react';
import { uploadWidget } from '../utils/WidgetHelper';
import Nav from './Nav';
class Create extends Component {
    state = {
        gifUrl: "",
        startTime: 0,
        endTime: 0,
        isResult: false,
    };
    setStartTime = (e) => {
        this.setState({ startTime: e.target.value });
    }
    setEndTime = (e) => {
        this.setState({ endTime: e.target.value });
    }
    createGif = () => {
        let cloudinarySettings = {
            cloud_name: '<CLOUD_NAME>',
            upload_preset: '<UPLOAD_PRESET>',
            sources: ['local'],
            client_allowed_formats: ['mp4', 'webm'],
            keep_widget_open: false,
            multiple: false,
            theme: 'minimal',
        }
        uploadWidget(cloudinarySettings, (res) => {
          if (res && res[0] !== undefined) {
                this.setState({ isResult: true });
                this.setGifString(res[0].public_id);
            }
            console.log(res);
        });
    }
    setGifString = (uploadedVideoId) => {
        this.setState({
            gifUrl: `http://res.cloudinary.com/[CLOUD_NAME]/video/upload${(this.state.startTime > 0 && this.state.endTime > 0) ? '/so_' + this.state.startTime + ',eo_' + this.state.endTime : ''}/${uploadedVideoId}.gif`
        });
    }
    render() {
        return (
            <div>
                <Nav />
                <div className="col-md-6 col-md-offset-3">
                    <div className="well well-sm">
                        <form className="form-horizontal">
                            <legend>Enter start and stop time for animation and hit upload to select file</legend>
                            <div className="form-group">
                                <label htmlFor="start" className="col-md-2 control-label">Start</label>
                                <div className="col-md-10">
                                    <input type="number" value={this.state.startTime} onChange={this.setStartTime} className="form-control" id="start"></input>
                                </div>
                            </div>
                            <div className="form-group">
                                <label htmlFor="end" className="col-md-2 control-label">End</label>
                                <div className="col-md-10">
                                    <input type="number" value={this.state.endTime} onChange={this.setEndTime} className="form-control" id="end"></input>
                                </div>
                            </div>
                            <div className="form-group">
                                <div className="col-sm-offset-2 col-sm-10">
                                    <button type="button" className="btn btn-raised btn-primary" onClick={this.createGif}>Create</button>
                                </div>
                            </div>
                        </form>
                    </div>
                    <div className="panel panel-default">
                        <div className="panel-body">
                            {
                                (this.state.isResult) ?
                                    <img className="img-responsive" src={this.state.gifUrl}></img> : <span className="label label-info">Kindly upload an mp4 video to create Gif</span>
                            }
                        </div>
                        <div className="panel-footer">
                        </div>
                    </div>
                </div >
            </div >
        );
    }
}
export default Create;

And just like that, you have your own GIF maker.
Why does it seem so simple and straightforward? Let me explain the pieces of code I consider important.
First, our reuse of uploadWidget from utils/WidgetHelper.js: we import it into the component and trigger it when createGif is called. One notable change I’ve made in the options this time is to allow only mp4 and webm formats. Feel free to add other video formats if you need them.
Now take a look at this segment:

setGifString = (uploadedVideoId) => {
        this.setState({
            gifUrl: `http://res.cloudinary.com/[CLOUD_NAME]/video/upload${(this.state.startTime > 0 && this.state.endTime > 0) ? '/so_' + this.state.startTime + ',eo_' + this.state.endTime : ''}/${uploadedVideoId}.gif`
        });
    }

There is more magic going on here. Cloudinary can serve your videos as GIFs; all you have to do is append the extension .gif to the end of the resource URL from Cloudinary. The so and eo parameters specify the start offset and end offset, meaning the portion of the video between the specified start and stop times will be converted to a GIF.
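Extracted from setGifString, the URL construction reduces to a pure function that is easy to test on its own (the helper name is mine):

```javascript
// Build the video-to-GIF delivery URL. When both offsets are positive,
// inject the so_ (start offset) and eo_ (end offset) transformation
// parameters, mirroring the logic in setGifString.
function videoToGifUrl(cloudName, publicId, startTime, endTime) {
  const trim =
    startTime > 0 && endTime > 0 ? `/so_${startTime},eo_${endTime}` : '';
  return `http://res.cloudinary.com/${cloudName}/video/upload${trim}/${publicId}.gif`;
}
```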

Conclusion

Now you have a fully functional Giphy clone ready for a demo, and your very own GIF maker to go with it. What more could you ask for? Cloudinary gives us limitless possibilities for file management and a lot of other awesomeness when working with media files.
You should definitely play with more of the features found in its documentation.
The source code for the project is on GitHub.
Let me know in the comments below if you have any questions or challenges.

Source: scotch.io

Building an Events App with Meteor and React

By Joy Warugu

Meteor is an open-source, full-stack JavaScript platform, which lets you develop in a single language. It has some really cool perks, the best of which are:

  • It provides complete reactivity; your UI reflects the actual state of the world with little development effort
  • The server sends actual data, not HTML, and the client renders it

Meteor originally supported only Blaze for its view layer, but since Meteor 1.2 it has also supported other JS frameworks, i.e. Angular and React.

In this article, I’ll be showing off how Meteor and React play together. The point of this article is to get you interested in trying out these two technologies together and build something worthwhile in the future using them.

We will be creating an Events App, that encompasses the basic CRUD (Create, Read, Update, Delete) functions.

Pre-requisites

We need Meteor installed on our machines.

Windows: Install chocolatey and run the following command

> choco install meteor

OS X || Linux:

> curl https://install.meteor.com/ | sh

Some basic knowledge of Meteor and React will also go a long way. But have no fear; you can still follow along even if you are a newbie. I share some resources in the tutorial that will help you ramp up.

Setup

It’s really easy to set up Meteor projects. All we need to do is:

> meteor create events-sync

Voilà, we get our project all set up. There are a couple of files in the folder; let’s do a quick run-through of what each one is.

client
    main.js         // entry point on the client side
    main.html      // HTML that defines our template files
    main.css      // where we define our styles
server
    main.js      // entry point on the server side
package.json        // npm packages are stored in here
package-lock.json  // lock file for npm packages
.meteor           // meteor files
.gitignore       // git ignore stuff

To run the project we’ll run these commands:

> cd events-sync
> meteor

Then we can see our app running in the browser at http://localhost:3000.

Adding in React Components

We need to add in some npm packages for react.

> meteor npm install --save react react-dom

Replace the contents of /client/main.html with this piece of code:

<head>
  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-beta.3/css/bootstrap.min.css" integrity="sha384-Zug+QiDoJOrZ5t4lssLdxGhVrurbmBWopoEl+M6BdEfwnCJZtKxi1KgxUyJq13dy" crossorigin="anonymous">
  <title>Event Sync</title>
</head>
<body>
  <div id="app"> </div>
</body>

Then we replace /client/main.js:

import React from 'react';
import ReactDOM from 'react-dom';
import { Meteor } from 'meteor/meteor';

import App from '../imports/ui/App.js';

Meteor.startup(() => {
  ReactDOM.render(<App />, document.getElementById('app'));
});

We’ll add a folder, /imports/ui, to the root of our project, where our React code will live. We are using the app structure recommended by Meteor. The gist is that everything outside the /imports folder is loaded automatically on startup, while files inside /imports are lazy-loaded: only loaded when they are explicitly imported in another file. Find more literature on this here.

So inside /imports/ui we can add a file, App.js

import React, { Component } from 'react';

class App extends Component {
  render() {
    return (
      <div>
        Hello World
      </div>
    );
  }
}

export default App;

If we go to our browser now, we should see Hello World printed on the screen. Just like that, we have our first piece of React successfully rendering with Meteor.

Event App

Now that we have the most basic parts of the project up and running, let’s get into creating the events app.

We want our users to be able to:

  1. Create an event
  2. View these events
  3. Delete events
  4. Edit/Update event details

Let’s start by creating events.

Creating Events

React Components

We’ll create a form for entering new events into our app. Let’s create a new file inside /imports/ui called AddEvent.js.


> cd imports/ui
> touch AddEvent.js

import React, { Component } from 'react';

class AddEvent extends Component {
  constructor(props) {
    super(props);
    this.state = {
      title: "",
      description: "",
      date: ""
    }
  }

  handleChange = (event) => {
    const field = event.target.name;

    // we use square brackets around the key `field` because it's a variable (we are setting state with a dynamic key name)
    this.setState({
      [field]: event.target.value
    })
  }

  handleSubmit = (event) => {
    event.preventDefault();

    // TODO: Create backend Meteor methods to save created events
    alert("Will be Saved in a little bit :)")
  }

  render() {
    return (
      <div>
        <div className="text-center">
          <h4>Event Sync</h4>
        </div>
        <hr />

        <div className="jumbotron" style={{ margin: "0 500px" }}>
          <form onSubmit={this.handleSubmit}>

            <div className="form-group">
              <label>Title:</label>
              <input
                type="text"
                className="form-control"
                placeholder="Enter event title"
                name="title"
                value={this.state.title}
                onChange={this.handleChange}
              />
            </div>

            <div className="form-group">
              <label>Description:</label>
              <input
                type="text"
                className="form-control"
                placeholder="Enter event description"
                name="description"
                value={this.state.description}
                onChange={this.handleChange}
              />
            </div>

            <div className="form-group">
              <label>Event Date:</label>
              <input
                type="text"
                className="form-control"
                placeholder="Enter date in the format mm.dd.yyyy"
                name="date"
                value={this.state.date}
                onChange={this.handleChange}
              />
            </div>

            <button type="submit" className="btn btn-primary">Add Event</button>
          </form>
        </div>
      </div>
    );
  }
}

export default AddEvent;

Note: We are using Bootstrap, as you may have noticed already, plus some inline styling. Bootstrap works because we added the Bootstrap link tag to our /client/main.html file during setup.

We now have our AddEvent component: a simple form with three input fields and a submit button. On change, the input fields save their text into the component’s state. On submit (when we click the Add Event button) we get an alert in the browser. We need a way to persist data in order to save the events we create, so let’s switch gears and delve back into Meteor.
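The dynamic-key update inside handleChange is ordinary computed-property syntax; outside React it is just an object merge, which this tiny standalone sketch makes explicit:

```javascript
// Plain-JS equivalent of the handleChange update: merge one
// dynamically-named field into a copy of the previous state.
function updateField(state, field, value) {
  return { ...state, [field]: value };
}
```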

Meteor Collections

In Meteor, we access MongoDB through collections, i.e. we persist our data using collections. The cool thing about Meteor is that we have access to these collections on both the server and the client, which essentially means less server-side code is needed to access our data from the client. Yay, us!!

Server-side collections create a collection within MongoDB and an interface used to communicate with the server.

Client-side collections do not have a direct connection to the DB. A collection on the client side is a cache of the database; when accessing data on the client we can assume the client has an up-to-date copy of the full MongoDB collection.

If you want to read up more on how collections work, you can have a look here.

Now that we have some understanding of how collections work, let’s define our collections.

We’ll create a new folder, /imports/api, and inside it a file called events.js.

import { Mongo } from 'meteor/mongo';

// Creates a new Mongo collections and exports it
export const Events = new Mongo.Collection('events');

We then need to import our collection into our server. So inside /server/main.js add this line:

import '../imports/api/events.js';

Getting Data from a Collection into a React Component

To get data into a React component, we need to install a package that lets us take Meteor’s reactive data and feed it into a React component. There are a lot of cool npm packages out there, with an honorable mention to react-komposer (find a really cool article on how to set it up here). For this article we’ll use an Atmosphere package, react-meteor-data.

Note: I picked this package mainly because it was the most recently updated package on GitHub. Feel free to experiment with others and comment on your experience using them.

> meteor add react-meteor-data

We then use a HOC (Higher Order Component), withTracker, to wrap our component.

Let’s replace our /imports/ui/App.js file with this:

import React, { Component } from 'react';
import AddEvent from './AddEvent';
// we import withTracker and Events into our app file
import { withTracker } from 'meteor/react-meteor-data';
import { Events } from "../api/events";

// Create a new React Component `EventApp`
class EventApp extends Component {
  render() {
    return (
      <div>
        <AddEvent />
        {/*
          we have the pre tags to view contents of db just for verification
        */}
        <pre>DB Stuff: {JSON.stringify(this.props, null, ' ')} </pre>
      </div>
    );
  }
}

// Wrap `EventApp` with the HOC withTracker and call the new component we get `App`
const App = withTracker(() => {
  return {
    events: Events.find({}).fetch()
  }
})(EventApp);

// export the component `App`
export default App;

Feeling overwhelmed yet? Not to worry, we’ll go over what happens, keeping it short and as simple as possible.

Higher-order components are basically functions that take components and return new components. This is very useful because you can do cool things like:

  • Render hijacking
  • Code reuse
  • State manipulation
  • Props manipulation

(These are just some use cases of HOCs; don’t get too caught up in them.)

So what withTracker does is take the component you pass it, in our case EventApp, and return a new component that passes down events as a prop. This new component is what we have named App.

Long story short: we have two components, App and EventApp; App is the parent of EventApp. App reads data from our collection and exposes it as a variable, events, which it passes down to EventApp as a prop. Every time the database contents change, events gets new data and EventApp re-renders. That is how we get data from a Meteor collection into our React components reactively.
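Stripped of the React specifics, an HOC is just a function from component to component; this toy sketch (all names mine) mirrors what withTracker does by injecting extra props:

```javascript
// Toy model of an HOC: here a "component" is just a function from
// props to rendered output. withData(getData) wraps a component so it
// also receives an `events` prop, the way withTracker injects data.
function withData(getData) {
  return function (component) {
    return function (props) {
      return component({ ...props, events: getData() });
    };
  };
}

// A bare component that reports how many events it received.
const eventApp = (props) => `events: ${props.events.length}`;

// Wrapping it yields a new component with the data pre-injected.
const app = withData(() => [{ title: 'Hello world!' }])(eventApp);
```

Calling app({}) renders with the injected data; the real withTracker additionally re-runs the data function whenever the underlying collection changes.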

Phew! That’s that. Let’s test our code out. Switch to the browser at localhost:3000 and you’ll see that events is currently an empty array.

Let’s manually add something to our db.

> meteor mongo

The meteor mongo command opens a console. In that console, type:

> db.events.insert({ title: "Hello world!", description: "", date: new Date() });

If you check your web browser, events is now populated with the new entry. Amazing, right? We basically didn’t need to write much server-side code to get this data. You can keep playing with the database to experience the reactivity Meteor allows; any change to the db is automatically reflected on the client.

Let’s add this functionality to our submit button. Inside imports/ui/AddEvent.js, add the following code.

import React, { Component } from 'react';
// import Events collection
import { Events } from "../api/events";

class AddEvent extends Component {
  ...

  handleSubmit = (event) => {
      // prevents page from refreshing onSubmit
      event.preventDefault();

      const { title, description, date } = this.state;

      // TODO: Create backend Meteor methods to save created events
      // alert("Will be Saved in a little bit :)")

      // add method `insert` to db
      Events.insert({
        title,
        description,
        date
      });

      // clears input fields onSubmit
      this.setState({
        title: "",
        description: "",
        date: ""
      })
    }

  render() {
    ...
  }
}

export default AddEvent;

Now when we fill in the form we created and hit the Add Event button, we actually add data to our db and see it displayed in our pre tags under the form.

Displaying Events

Now that we have persisted our data, let's display it on our page. This calls for a new file, /imports/ui/ListEvents.js:

import React, { Component } from 'react';

class ListEvents extends Component {
  render() {
    return (
      <div>
        {this.props.events.length ? this.props.events.map((event) => (
          <div className="list-group" key={event._id} style={{ margin: "20px 100px" }}>
            <div className="list-group-item list-group-item-action flex-column align-items-start">
              <div className="d-flex w-100 justify-content-between">
                <h5 className="mb-1">{event.title}</h5>
                <small>{event.date}</small>
              </div>
              <p className="mb-1">{event.description}</p>
            </div>
          </div>
        )) : <div className="no-events">OOOPSY: NO EVENTS REGISTERED</div>}
      </div>
    );
  }
}

export default ListEvents;

Then we import this in our /imports/ui/App.js file.

import React, { Component } from 'react';
...
// Add ListEvents
import ListEvents from './ListEvents';

class EventApp extends Component {
  render() {
    return (
      <div>
        <AddEvent />
        {/*
          we have the pre tags to view contents of db just for verification
          <pre> DB STUFF: {JSON.stringify(this.props, null, ' ')} </pre>
        */}

        {/*
          pass in props into the component (this is where `events` live in)
        */}
        <ListEvents {...this.props}/>
      </div>
    );
  }
}

...

export default App;

If you jump back to your browser you will now see a list of the events printed under your Add Event form. Try adding some more events and notice that they reactively show up on the page.

Updating and Deleting Events

Given the knowledge we now have, deleting and updating events should be easy: all you need is to add some buttons and some db methods to your components, and that's it. Try writing these components and methods yourself; if you get stuck, have a look at the complete app in the repo here. Feel free to ask questions in the comments and/or on Twitter.
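As a starting point for the delete part of that exercise, a delete handler only needs the document's _id and the collection's remove method. Below is a minimal sketch; the handleDelete helper and the in-memory collection stub are hypothetical stand-ins for Meteor's Events collection, used so the logic can be shown on its own:

```javascript
// Hypothetical delete handler: works against anything exposing a
// Meteor-style remove(id) method, e.g. the Events collection.
function handleDelete(collection, eventId) {
  collection.remove(eventId);
}

// Tiny in-memory stand-in for a Meteor collection, for illustration only.
const fakeEvents = {
  docs: [
    { _id: 'a1', title: 'Hello world!' },
    { _id: 'b2', title: 'Goodbye' },
  ],
  remove(id) {
    this.docs = this.docs.filter((doc) => doc._id !== id);
  },
};

handleDelete(fakeEvents, 'a1');
console.log(fakeEvents.docs.length); // 1 document left
```

In a component you would wire this to a button, e.g. `<button onClick={() => handleDelete(Events, event._id)}>Delete</button>`.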

Code Refactor and Optimization

With the code we have running now, every time events changes in the database we re-render the EventApp component. This means both the AddEvent and ListEvents components re-render (and potentially every other component we render inside EventApp). Right now that isn't a big deal because we are running a tiny app. We do, however, need to keep in mind that our apps should run optimally for better performance.

All we need to re-render when the database fetch changes is the ListEvents component, as it's the only component using the events prop at the moment. This means we can move the withTracker HOC into the ListEvents component and have the app work as before, but more optimally.

So, let’s do this:

In /imports/ui/ListEvents.js we add this:

import React, { Component } from 'react';
// import necessary files
import { withTracker } from 'meteor/react-meteor-data';
import { Events } from "../api/events";

class ListEvents extends Component {
  render() {
    return (
      ...
    );
  }
}

// add `withTracker` HOC and pass in `ListEvents` and export the new component
export default withTracker(() => {
  return {
    events: Events.find({}).fetch()
  }
})(ListEvents);

Then remove the tracker from /imports/ui/App.js:

import React, { Component } from 'react';
import AddEvent from './AddEvent';
import ListEvents from "./ListEvents";

class App extends Component {
  render() {
    return (
      <div>
        <AddEvent />
        <ListEvents />
      </div>
    );
  }
}

// export the component `App`
export default App;

That's it! We now have a simple Event App that works optimally.

Replacing Blaze templates with React

Sometimes you may already have a whole Meteor project built with Blaze templates, and all you want is to migrate your app to React.

You can add in your React Components into Blaze easily by adding react-template-helper.

Displaying React in Blaze

First run:

> meteor add react-template-helper

Then in your template you can add in your React component:

  <template name="events">
    <div>Hello</div>
    <div>{{> React component=AddEvent }}</div>
  </template>

Lastly, you need to pass the component into the template using a helper:

  import { Template } from 'meteor/templating';
  import './events.html';
  import AddEvent from './AddEvent.js';

  Template.events.helpers({
    AddEvent() {
      return AddEvent;
    }
  })

Passing in props and/or callbacks:

  import { Template } from 'meteor/templating';
  import './events.html';
  import { Events } from '../api/events.js';
  import AddEvent from './AddEvent.js';
  import ListEvents from './ListEvents.js';

  Template.events.helpers({
    ...
    ListEvents() {
      return ListEvents;
    },

    events() {
      return Events.find({}).fetch();
    },

    onClick() {
      const instance = Template.instance();
      return () => {
        instance.hasBeenClicked.set(true);
      };
    }
  })

And in the corresponding template:

  <template name="events">
    <div>Hello</div>
    <div>{{> React component=AddEvent }}</div>
    <div>{{> React component=ListEvents events=events onClick=onClick }}</div>
  </template>

Those are the basic additions you need to have your React components up and running in Blaze.

Conclusion

There we have it: a working Event App using Meteor and React. This tutorial covers just the basics of set-up and how to get React up and running with Meteor; there is a whole lot more you can achieve using Meteor and React. I hope this article has sparked some interest in learning more and delving deeper into this world.

There are some great open-source projects that use Meteor and React that you can contribute to and learn from. Find the link to an example repo here.

Feel free to post questions and suggestions in the comments section!

Source:: scotch.io

Weekly Node.js Update - #3 - 01.19, 2018

By Tamas Kadlecsik


Below you can find RisingStack‘s collection of the most important Node.js news, updates, projects & tutorials from this week:

1, Learn Node.js — We created a directory of top articles from last year (v.2018)

Between January and December 2017, we compared nearly 12,000 Node.js articles to pick the Top 50 for this directory.

This directory makes it easier for you to find the best Node.js tutorials from last year, where experienced developers share their lessons, insights, and mistakes from working with Node.js. It could help solve your problems and advance your web development career in 2018.

This directory has 19 key topics as shown below:


2, A crash course on TypeScript with Node.js

What if I told you JavaScript could be viable in an enterprise environment? You’d think I was crazy. Isn’t JavaScript that toy of a language used for animating divs from left to right on a web page?


A couple of years ago, I'd have agreed with you. Not anymore. With the latest advancements in the ECMAScript standard and with the rise of TypeScript, I wouldn't be surprised if it took the interwebs by storm. We won't even know what hit us.

3, How does Node load built-in modules?

Foreword from author Safia Abdalla:

So, in the last blog post that I wrote, I started looking at how the Node main process was initialized. I quickly discovered that there was quite a lot going on in there (and rightfully so!). One of the things that caught my eye in particular was the reference to a function that seemed to be loading built-in modules during the initialization phase of the Node main process.

node::RegisterBuiltinModules();

I wanted to look into this a little bit more so I started snooping around the codebase to learn more.

4, Building Secure JavaScript Applications

This article goes through the most frequently asked questions about how one can make a JavaScript application more secure.


A few weeks back I attended SFNode, where Randall Degges gave a presentation on JWTs, mostly on why you should avoid using them. The talk was amazing, and it also reminded me of an article I had wanted to write for a long time: how to build secure JavaScript applications. Here we go!

5, Migrating your Node.js REST API to Serverless

For many use cases it makes sense to let the cloud provider handle server management, scaling, and uptime. You're a developer; why should you need to get your hands dirty with the horror of the command line? Ew, the terminal! How do you exit Vim again?


I’ll show you how to use the code you’re already used to, and apply it to a Serverless environment.

6, Handling Node.js Microservices with Kubernetes (February 22-23) – Barcelona:

Two days of training to master the usage of one of the most popular container management platforms, Kubernetes.

During the training, we’ll work with a microservices architecture and deploy the dockerized services into a Kubernetes cluster, set up application secrets, use load balancers, rate-limiters, take a look at some popular management tools and apply several design principles and best practices coming with microservices.

7, Meet Middy – The stylish Node.js middleware engine for AWS Lambda

Middy is a very simple middleware engine. If you are used to web frameworks like Express, then you will be familiar with the concepts adopted in Middy and you will be able to get started very quickly.

How does it work? Middy implements the classic onion-like middleware pattern, with some peculiar details.


When you attach a new middleware this will wrap the business logic contained in the handler in two separate steps.

When another middleware is attached, it wraps the handler again and is itself wrapped by all the previously added middlewares, in order, creating multiple layers for interacting with the request (event) and the response.

This way the request-response cycle flows through all the middlewares, the handler, and all the middlewares again, giving every step the opportunity to modify or enrich the current request, context, or response.
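The onion pattern described above can be sketched in a few lines of plain JavaScript. This is a hypothetical illustration of the pattern only, not Middy's actual API (the applyMiddlewares helper, the synchronous hooks, and the trace array are all made up for the demo):

```javascript
// Framework-free sketch of the onion-like middleware pattern.
// The first middleware attached is the outermost layer, so its
// `before` hook runs first and its `after` hook runs last.
function applyMiddlewares(handler, middlewares) {
  return middlewares.reduceRight(
    (inner, mw) => (request) => {
      if (mw.before) mw.before(request);
      const response = inner(request);
      return mw.after ? mw.after(request, response) : response;
    },
    handler
  );
}

const trace = [];
const handler = (request) => { trace.push('handler'); return 'ok'; };

const wrapped = applyMiddlewares(handler, [
  { before: () => trace.push('mw1 before'), after: (req, res) => { trace.push('mw1 after'); return res; } },
  { before: () => trace.push('mw2 before'), after: (req, res) => { trace.push('mw2 after'); return res; } },
]);

wrapped({});
console.log(trace);
// ['mw1 before', 'mw2 before', 'handler', 'mw2 after', 'mw1 after']
```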

8, Internationalizing Node.js

The effort to Internationalize Node.js was recently entrusted to its Community Committee, and we’re excited to pick up where the previous work left off.


I’ll be bringing you up to speed on:

  • The current international-language needs of Node.js
  • A Node.js Internationalization (i18n) & Localization (l10n) status report
  • Our proposal for moving Node’s i18n forward as a function of the Community Committee

9, Announcing The Node.js Application Showcase

The Node.js Application Showcase lists a variety of projects and products built with Node.js. It is sortable by development status (in development, in beta, in production), by application type, and by the other technologies used.


When clicked, each application “tile” reveals a short description of the app, its intended users, how the app helps them and how Node.js helped the app’s creator.

The program is open to anyone, anywhere via this simple form.

Previously Node.js Updates:

In the previous Weekly Node.js Update, we collected great articles, like

  • Simple and Robust Face Recognition using Node.js;
  • 8 Tips to Build Better Node.js Apps in 2018;
  • the npm operational incident of 6 Jan 2018;
  • Building Your First Node App Using Docker;
  • Running a Node app on Amazon ECS;

& more..

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Source:: risingstack.com

How to get latest entries from MongoDB collection with Mongoose

By Adrian Matei

I have a use case where I need to retrieve the latest added public codingmarks.
They are stored in a Mongo database, and I use Mongoose as the ODM.

I want to have two possibilities to achieve this:

  1. specify the number of days to look back
  2. specify the timestamp since when to look forward

I think the code is self-explanatory:

/**
 * Returns the codingmarks added recently.
 * 
 * The since query parameter is a timestamp which specifies the date since we want to look forward to present time.
 * If this parameter is present it has priority. If it is not present, we might specify the number of days to look back via
 * the query parameter numberOfDays. If not present it defaults to 7 days, last week.
 *
 */
router.get('/latest-entries', async (req, res) => {
  try
  {

    if(req.query.since) {
      const bookmarks = await Bookmark.find(
        {
          createdAt: { $gte: new Date(parseFloat(req.query.since)) }
        }).sort({createdAt: 'desc'}).lean().exec();

      res.send(bookmarks);
    } else {
      const numberOfDaysToLookBack = req.query.days ? req.query.days : 7;

      const bookmarks = await Bookmark.find(
        {
          createdAt: { $gte: new Date((new Date().getTime() - (numberOfDaysToLookBack * 24 * 60 * 60 * 1000))) }
        }).sort({createdAt: 'desc'}).lean().exec();

      res.send(bookmarks);
    }

  }
  catch (err)
  {
    return res.status(500).send(err);
  }
});
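Both branches of the route boil down to computing a single cutoff Date for the $gte filter. That computation can be isolated and checked on its own; the cutoffDate helper below is a hypothetical extraction for illustration, not part of the original route:

```javascript
// Compute the createdAt cutoff for the $gte filter, preferring an
// explicit `since` timestamp (ms since epoch) over a `days` look-back.
function cutoffDate({ since, days }) {
  if (since) {
    return new Date(parseFloat(since));
  }
  const numberOfDaysToLookBack = days ? days : 7; // default: last week
  return new Date(Date.now() - numberOfDaysToLookBack * 24 * 60 * 60 * 1000);
}

// Usage: Bookmark.find({ createdAt: { $gte: cutoffDate(req.query) } })
console.log(cutoffDate({ since: '1516147200000' }).toISOString());
// 2018-01-17T00:00:00.000Z
```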

How to get latest entries from MongoDB collection with Mongoose was originally published by Codingpedia Association at CodingpediaOrg on January 17, 2018.

Source:: codingpedia.org

All in One Authentication and Route Protection for a React + GraphQL App

By Chris Nwamba

Email, Facebook, Google, Twitter, GitHub: the list can go on as long as you wish. These are all possible options for authenticating users in your web apps. Apps built with React and GraphQL are no less candidates for such authentication.

In this article, we will learn how to add varieties of authentication providers to a GraphQL app using Graphcool (GraphQL backend as a service) and Auth0 (Authentication as a service). We will also learn how to protect React routes from being accessed if the user fails to be authenticated by our authentication server.

This tutorial assumes a fair knowledge of GraphQL and React, as well as of how authentication flows work. You can refer to my previous post on Building a Chat App with GraphQL and React to learn a few concepts about GraphQL. You can also learn about JWT authentication from one of our previous posts.

Getting Started

The plan is to set up a React project, then configure a Graphcool server in the same directory and tell our project how we want the server to behave. Start by installing the create-react-app and graphcool-framework CLI tools:

npm install -g create-react-app graphcool-framework

Then use the React CLI tool to scaffold a new React app:

# scotch-auth is the name of our project

create-react-app scotch-auth

Create the Graphcool server by moving into the React app you just created and running the Graphcool init command:

# Move into the project folder

cd scotch-auth

# Create a Graphcool server
# server is the name of the graphcool server folder
graphcool-framework init server

Create React Routes and UIs

We need a few routes — both public and protected:

  • A home page (public)
  • A profile page (protected)
  • An admin page (protected and available to only admins)
  • An about page (public)

Create a containers folder and add home.js, profile.js, admin.js, and about.js as files to represent each of these routes.

Home.js

import React from 'react';
import Hero from '../components/hero';

const Home = (props) => (
  <div>
    <Hero page="Home"></Hero>
    <h2>Home page</h2>
  </div>
);

export default Home;

About.js

import React from 'react';
import Hero from '../components/hero';

const About = (props) => (
  <div>
    <Hero page="About"></Hero>
    <h2>About page</h2>
  </div>
);

export default About;

Profile.js

import React from 'react';
import Hero from '../components/hero';

const Profile = (props) => (
  <div>
    <Hero page="Profile"></Hero>
    <h2>Profile page</h2>
  </div>
);

export default Profile;

Admin.js

import React from 'react';
import Hero from '../components/hero';

const Admin = (props) => (
  <div>
    <Hero page="Admin"></Hero>
    <h2>Admin page</h2>
  </div>
);

export default Admin;

Each of the pages imports and uses Hero to display a landing hero message. Create a components folder and add hero.js with the following:

import React from 'react';
import Nav from './nav';
import './hero.css';

const Hero = ({ page }) => (
  <section className="hero is-large is-dark">
    <div className="hero-body">
      <Nav></Nav>
      <div className="container">
        <h1 className="title">Scotch Auth</h1>
        <h2 className="subtitle">Welcome to the Scotch Auth {page}</h2>
      </div>
    </div>
  </section>
);

export default Hero;

You can add the navigation component as nav.js in the components folder as well. Before we do that, though, we need to set up routing for the React app and expose the pages as routes.

Start with installing the React Router library:

yarn add react-router-dom

Next, provide the router to the App through the index.js entry file:

    //...
import { BrowserRouter } from 'react-router-dom';
import App from './App';
    //...

ReactDOM.render(
    <BrowserRouter>
        <App />
      </BrowserRouter>,
      document.getElementById('root')
    );
    //...

Then configure the routes in the App component:

    import React, { Component } from 'react';
    import { Switch, Route } from 'react-router-dom';

    import Profile from './containers/profile';
    import About from './containers/about';
    import Admin from './containers/admin';
    import Home from './containers/home';

    class App extends Component {
      render() {
        return (
          <div className="App">
            <Switch>
              <Route exact path="/" component={Home} />
              <Route exact path="/about" component={About} />
              <Route exact path="/profile" component={Profile} />
              <Route exact path="/admin" component={Admin} />
            </Switch>
          </div>
        );
      }
    }
    export default App;

Back in the navigation component (nav.js), we use the Link component from react-router-dom to provide navigation:

    import React from 'react';
    import { Link } from 'react-router-dom'
    import './nav.css'
    const Nav = () => {
      return (
        <nav className="navbar">
          <div className="navbar-brand">
            <Link className="navbar-item" to="/">
              <strong>Scotch Auth</strong>
            </Link>
          </div>
          <div className="navbar-menu">
            <div className="navbar-end">
              <Link to="/about" className="navbar-item">
                About
              </Link>
              <Link to="/profile" className="navbar-item">
                Profile
              </Link>
              <div className="navbar-item join">
                Join
              </div>
            </div>
          </div>
        </nav>
      );
    };
    export default Nav;

This is what the app should look like at this stage:

Create a Graphcool Server

To create the server on Graphcool, deploy the project we initialized earlier by running the following inside the server folder:

graphcool-framework deploy

Auth0 for Authentication: Server

Let’s step away from the client app for a second and get back to the Graphcool server we created earlier. Graphcool has a serverless function concept that allows you to extend the functionalities of your server. This feature can be used to achieve a lot of 3rd party integrations including authentication.

Some functions for such integrations have been pre-packaged for you so you don’t have to create them from scratch. You only need to install the template, uncomment some configuration and types, then update or tweak the code as you wish.

Let’s add the Auth0 template. Make sure you are in the server folder and run the following:

graphcool-framework add-template graphcool/templates/auth/auth0

This will create an auth0 folder in server/src. This folder contains the function logic, the types, and the mutation definition that triggers this function.

Create and Add Credentials

We need to create an Auth0 API and then add the API's configuration to our server. Create an account first, then create a new API from your API dashboard. You can name the API whatever you like. Provide an identifier that is unique among all your existing APIs (URLs are mostly recommended):

Uncomment Config
Uncomment the template configuration in server/src/graphcool.yml and update it to look like this:

    authenticate:
        handler:
          code:
            src: ./src/auth0/auth0Authentication.js
            environment:
              AUTH0_DOMAIN: [YOUR AUTH0 DOMAIN]
              AUTH0_API_IDENTIFIER: [YOUR AUTH0 IDENTIFIER]
        type: resolver
        schema: ./src/auth0/auth0Authentication.graphql

AUTH0_DOMAIN and AUTH0_API_IDENTIFIER will be exposed on process.env in your function as environment variables.

Uncomment Model
The add template command also generates a type in server/src/types.graphql. It's commented out by default; you need to uncomment it:

    type User @model {
      # Required system field:
      id: ID! @isUnique # read-only (managed by Graphcool)
      # Optional system fields (remove if not needed):
      createdAt: DateTime! # read-only (managed by Graphcool)
      updatedAt: DateTime! # read-only (managed by Graphcool)
      email: String
      auth0UserId: String @isUnique
    }

You need to remove the User type that was generated when the server was created so that this auth User type can replace it.

Update Code
We need to make some tweaks to the authentication logic. Find the following code block:

    jwt.verify(
            token,
            signingKey,
            {
              algorithms: ['RS256'],
              audience: process.env.AUTH0_API_IDENTIFIER,
              ignoreExpiration: false,
              issuer: `https://${process.env.AUTH0_DOMAIN}/`
            },
            (err, decoded) => {
              if (err) throw new Error(err)
              return resolve(decoded)
            }
          )

And update the audience property in the verify method to aud:

    jwt.verify(
            token,
            signingKey,
            {
              algorithms: ['RS256'],
              aud: process.env.AUTH0_API_IDENTIFIER,
              ignoreExpiration: false,
              issuer: `https://${process.env.AUTH0_DOMAIN}/`
            },
            (err, decoded) => {
              if (err) throw new Error(err)
              return resolve(decoded)
            }
          )

Lastly, the auth token will always encode an email, so there is no need to do the following to get the email:

    let email = null
    if (decodedToken.scope.includes('email')) {
      email = await fetchAuth0Email(accessToken)
    }

We can get the email from the decodedToken right away:

const email = decodedToken.email

This makes the fetchAuth0Email function useless — you can remove it.

Deploy the server to Graphcool by running the following:

graphcool-framework deploy

If it’s your first time using Graphcool, you should be taken to the following page to create a Graphcool account:

Auth0 for Authentication: Client

Our server is set up to receive tokens for authentication. You can test this in the Graphcool playground by running the following command:

graphcool-framework playground

We supply the authenticateUser mutation an Auth0 token, and Graphcool returns a token for the user node. Let's see how we can get tokens from Auth0.
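In the playground, that test looks something like the following; the token value is a placeholder you replace with a real id_token obtained from Auth0, and authenticateUser is the mutation the Auth0 template added:

```graphql
mutation {
  # paste a real Auth0 id_token in place of the placeholder
  authenticateUser(accessToken: "<AUTH0_ID_TOKEN>") {
    id
    token
  }
}
```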

Create a Client
Just like creating an API, we also need to create a client for the project. The client is used to trigger authentication from the browser.

On your Auth0 dashboard navigation, click Clients and create a new client:

The application type should be set to Single Page Web Applications, which is what a routed React app is.

Configure callback url
When auth is initiated, the user is redirected to your Auth0 domain to be verified. Once the user is verified, Auth0 needs to redirect them back to your app; the callback URL is where it comes back to. Go to the Settings tab of the client you just created and set the callback URL:

Auth Service
We are done setting up the client configuration on the Auth0 dashboard. The next thing we want to do is create a service in our React app that exposes a few methods. These methods will handle utility tasks like triggering authentication, handling the response from Auth0, logging out, etc.

First install the Auth0 JS library:

yarn add auth0-js

Then create a services folder in src. Add an auth.js in the new folder with the following content:

    import auth0 from 'auth0-js';

    export default class Auth {
      auth0 = new auth0.WebAuth({
        domain: '[Auth0 Domain]',
        clientID: '[Auth0 Client ID]',
        redirectUri: 'http://localhost:3000/callback',
        audience: '[Auth0 Client Audience]',
        responseType: 'token id_token',
        scope: 'openid profile email'
      });
      handleAuthentication(cb) {
        this.auth0.parseHash({hash: window.location.hash}, (err, authResult) => {
          if (authResult && authResult.accessToken && authResult.idToken) {
            this.auth0.client.userInfo(authResult.accessToken, (err, profile) => {
              this.storeAuth0Cred(authResult, profile);
              cb(false, {...authResult, ...profile})
            });
          } else if (err) {
            console.log(err);
            cb(true, err)
          }
        });
      }
      storeAuth0Cred(authResult, profile) {
        // Set the time that the access token will expire at
        let expiresAt = JSON.stringify(
          authResult.expiresIn * 1000 + new Date().getTime()
        );
        localStorage.setItem('scotch_auth_access_token', authResult.accessToken);
        localStorage.setItem('scotch_auth_id_token', authResult.idToken);
        localStorage.setItem('scotch_auth_expires_at', expiresAt);
        localStorage.setItem('scotch_auth_profile', JSON.stringify(profile));
      }
      storeGraphCoolCred(authResult) {
        localStorage.setItem('scotch_auth_gcool_token', authResult.token);
        localStorage.setItem('scotch_auth_gcool_id', authResult.id);
      }
      login() {
        this.auth0.authorize();
      }
      logout(history) {
        // Clear access token and ID token from local storage
        localStorage.removeItem('scotch_auth_access_token');
        localStorage.removeItem('scotch_auth_id_token');
        localStorage.removeItem('scotch_auth_expires_at');
        localStorage.removeItem('scotch_auth_profile');
        localStorage.removeItem('scotch_auth_gcool_token');
        localStorage.removeItem('scotch_auth_gcool_id');
        // navigate to the home route
        history.replace('/');
      }
      isAuthenticated() {
        // Check whether the current time is past the
        // access token's expiry time
        const expiresAt = JSON.parse(localStorage.getItem('scotch_auth_expires_at'));
        return new Date().getTime() < expiresAt;
      }
      getProfile() {
        return JSON.parse(localStorage.getItem('scotch_auth_profile'));
      }
    }

Let me take some time to walk you through what’s going on:

  • First, we create an instance of the Auth0 SDK and configure it with our Auth0 client credentials. This instance is stored in the instance property auth0.
  • handleAuthentication will be called by one of our components when authentication completes. Auth0 passes the tokens back to us through the URL hash; this method parses that hash.
  • storeAuth0Cred and storeGraphCoolCred persist our credentials to localStorage for future use.
  • We can call isAuthenticated to check if the token stored in localStorage is still valid.
  • getProfile returns the JSON payload of the user's profile.
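The expiry bookkeeping behind storeAuth0Cred and isAuthenticated is plain arithmetic, so it can be sanity-checked in isolation. The computeExpiresAt and isStillValid helpers below are hypothetical names extracted for illustration; they mirror the logic of the service without touching localStorage:

```javascript
// Auth0 reports expiresIn in seconds; store an absolute ms timestamp.
function computeExpiresAt(expiresInSeconds, now = Date.now()) {
  return expiresInSeconds * 1000 + now;
}

// A token is valid while the current time is before its expiry time.
function isStillValid(expiresAt, now = Date.now()) {
  return now < expiresAt;
}

const now = Date.now();
const expiresAt = computeExpiresAt(7200, now); // a typical 2-hour token

console.log(isStillValid(expiresAt, now));                   // true
console.log(isStillValid(expiresAt, now + 7200 * 1000 + 1)); // false
```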

Callback page
We want to redirect the user to the profile page if they are authenticated, or send them back to the home page (the default page) if they are not. The Callback page is the best candidate for this. First, add another route in App.js:

    //...
    import Home from './containers/home';
    import Callback from './containers/callback'
    class App extends Component {
      render() {
        return (
          <div className="App">
            <Switch>
              <Route exact path="/" component={Home} />
              {/* Callback route */}
              <Route exact path="/callback" component={Callback} />
              ...
            </Switch>
          </div>
        );
      }
    }
    export default App;

/callback uses the Callback component, which we need to create in src/containers/callback.js:

    import React from 'react';
    import { graphql } from 'react-apollo';
    import gql from 'graphql-tag';
    import Auth from '../services/auth'
    const auth = new Auth();
    class Callback extends React.Component {
      componentDidMount() {
        auth.handleAuthentication(async (err, authResult) => {
          // Failed. Send back home
          if (err) return this.props.history.push('/');
          // Send mutation to Graphcool with idToken
          // as the accessToken
          const result = await this.props.authMutation({
            variables: {
              accessToken: authResult.idToken
            }
          });
          // Save response to localStorage
          auth.storeGraphCoolCred(result.data.authenticateUser);
          // Redirect to profile page
          this.props.history.push('/profile');
        });
      }
      render() {
        // Show a loading text while the app validates the user
        return <div>Loading...</div>;
      }
    }

    // Mutation query
    const AUTH_MUTATION = gql`
      mutation authMutation($accessToken: String!) {
        authenticateUser(accessToken: $accessToken) {
          id
          token
        }
      }
    `;
// Expose mutation query on `this.props` eg: this.props.authMutation(...)
export default graphql(AUTH_MUTATION, { name: 'authMutation' })(Callback);

It uses the handleAuthentication method exposed by Auth and then sends a mutation to the Graphcool server. We are yet to set up a connection to the server, but once we do and attempt to authenticate a user, this callback page will send a write mutation to the Graphcool server telling it that the user exists and is allowed to access resources.

Setup Apollo and Connect to Server

In the Callback component, we used the graphql function (from react-apollo, which we are yet to install) to connect the component to a mutation. This doesn't imply that there is a connection to the Graphcool server yet; we need to set up this connection using Apollo and then provide the Apollo instance at the top level of our app.

Start with installing the required dependencies:

yarn add apollo-client-preset react-apollo graphql-tag graphql

Update the src/index.js entry file:

    import React from 'react';
    import ReactDOM from 'react-dom';
    import { BrowserRouter } from 'react-router-dom';
    import './index.css';
    import App from './App';
    import registerServiceWorker from './registerServiceWorker';

    // Import modules
    import { ApolloProvider } from 'react-apollo';
    import { ApolloClient } from 'apollo-client';
    import { HttpLink } from 'apollo-link-http';
    import { InMemoryCache } from 'apollo-cache-inmemory';

    // Create connection link
    const httpLink = new HttpLink({ uri: '[SIMPLE API URL]' });

    // Configure client with link
    const client = new ApolloClient({
      link: httpLink,
      cache: new InMemoryCache()
    });

    // Render App component with Apollo provider
    ReactDOM.render(
      <BrowserRouter>
        <ApolloProvider client={client}>
          <App />
        </ApolloProvider>
      </BrowserRouter>,
      document.getElementById('root')
    );

    registerServiceWorker();

First we imported all the dependencies, then created a link using HttpLink; the argument passed is an object containing the URI. You can get your server’s URI by running the following command in the server folder:

graphcool-framework list

Use the Simple URI to replace the placeholder in the code above.

Next we created and configured an Apollo client instance with this link as well as a cache. This client instance is then passed as a prop to the Apollo provider, which wraps the App component.

Test Authentication Flow

Now that everything seems intact, let’s add an event to the button on our navigation bar to trigger the auth process:

    //...
    import Auth from '../services/auth'
    const auth = new Auth();
    const Nav = () => {
      return (
        <nav className="navbar">
          ...
          <div className="navbar-menu">
            <div className="navbar-end">
              ...
              <div className="navbar-item join" onClick={() => {auth.login()}}>
                Join
              </div>
            </div>
          </div>
        </nav>
      );
    };
    export default Nav;

When the Join button is clicked, we trigger the auth.login method which would redirect us to our Auth0 domain for authentication:

After the user logs in, Auth0 asks the user to confirm that your app may access their profile information:

After authentication, watch the page redirect to /callback and then to /profile if authentication was successful.

You can also confirm that the user is created by going to the data view in your Graphcool dashboard and opening the Users table:

Conditional Buttons

We should hide the login button when the user is authenticated and show a logout button instead. Back in nav.js, replace the Join button element with this conditional logic:

     {auth.isAuthenticated() ? (
        <div
          className="navbar-item join"
          onClick={() => {
            auth.logout();
          }}
        >
          Logout
        </div>
      ) : (
        <div
          className="navbar-item join"
          onClick={() => {
            auth.login();
          }}
        >
          Join
        </div>
      )}

The Auth service exposes a method called isAuthenticated that checks whether the token is stored in localStorage and has not expired. Here is an image of the page showing that the user is logged in:
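The tutorial doesn’t show the body of isAuthenticated itself, but a minimal sketch of such an expiry check could look like the following (assuming login stored the token’s expiry timestamp under a key like `expires_at`; the key name and storage shape are illustrative, not the article’s exact implementation):

```javascript
// Minimal sketch of an isAuthenticated-style check.
// Assumes login saved the token expiry (ms since epoch) under 'expires_at';
// that key name is an assumption for illustration.
function isAuthenticated(storage) {
  const expiresAt = JSON.parse(storage.getItem('expires_at') || 'null');
  // The user counts as authenticated only while the token is unexpired
  return expiresAt !== null && new Date().getTime() < expiresAt;
}

// In the browser you would simply pass window.localStorage:
// isAuthenticated(window.localStorage)
```

Passing the storage object in, rather than referencing localStorage directly, keeps the check easy to exercise outside a browser.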

Displaying User’s Profile

You can also use the Auth service to retrieve a logged-in user’s profile. This profile is already available in localStorage:

    //...
    import Auth from '../services/auth';
    const auth = new Auth();
    const Profile = props => (
      <div>
        //...
        <h2 className="title">Nickname: {auth.getProfile().nickname}</h2>
      </div>
    );
    export default Profile;

The nickname gets printed in the browser:

Securing Graphcool Endpoint

We only accounted for generating tokens, but what happens when a user tries to access a restricted backend route? We haven’t told the server about the token, so it has no way to distinguish authenticated users from anonymous ones.

We can send the token as a Bearer token in the header of our requests. Update index.js with the following:

    //...
    import { ApolloLink } from 'apollo-client-preset'
    const httpLink = new HttpLink({ uri: '[SIMPLE URL]' });

    const middlewareAuthLink = new ApolloLink((operation, forward) => {
      const token = localStorage.getItem('scotch_auth_gcool_token')
      const authorizationHeader = token ? `Bearer ${token}` : null
      operation.setContext({
        headers: {
          authorization: authorizationHeader
        }
      })
      return forward(operation)
    })
    const httpLinkWithAuthToken = middlewareAuthLink.concat(httpLink)

Instead of creating the Apollo client with just the HTTP link, we update it to use the middleware we just created, which adds the token to every server request we make:

    const client = new ApolloClient({
      link: httpLinkWithAuthToken,
      cache: new InMemoryCache()
    })

You can then use the token at the server’s end to validate the request.
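Graphcool performs that validation for you, but conceptually the server side just has to pull the token out of the `Authorization` header before verifying it. A hypothetical sketch of that first step (the actual signature/expiry verification is left to the server framework):

```javascript
// Sketch: extract a Bearer token from an Authorization header value.
// Verifying the token itself (signature, expiry) is the server's job;
// Graphcool handles that for us.
function extractBearerToken(authorizationHeader) {
  if (!authorizationHeader) return null;
  const parts = authorizationHeader.split(' ');
  // Expect exactly "Bearer <token>"
  if (parts.length !== 2 || parts[0] !== 'Bearer') return null;
  return parts[1];
}
```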

Protecting Routes

Much as you have done a great job securing the most important part of your project (the server and its data), it doesn’t make sense to leave the user hanging at a route where there will be no content. The /profile route needs to be protected from being accessed when the user is not authenticated.

Update App.js to redirect to the home page if the user goes to /profile while not authenticated:

    //...
    import { Switch, Route, Redirect } from 'react-router-dom';
    //...
    import Auth from './services/auth';
    const auth = new Auth();
    class App extends Component {
      render() {
        return (
          <div className="App">
            <Switch>
              <Route exact path="/" component={Home} />
              <Route exact path="/callback" component={Callback} />
              <Route exact path="/about" component={About} />
              <Route
                exact
                path="/profile"
                render={props =>
                  auth.isAuthenticated() ? (
                    <Profile />
                  ) : (
                    <Redirect
                      to={{
                        pathname: '/'
                      }}
                    />
                  )
                }
              />
            </Switch>
          </div>
        );
      }
    }
    export default App;

We are still using auth.isAuthenticated to check for authentication. If it returns true, /profile wins, else, / wins.

Conclusion

We just learned how to authenticate a user using Auth0 in a GraphQL project. You can now go to your Auth0 dashboard and add a few more social authentication options, like Twitter. You will be asked for your Twitter developer credentials, which you can get from the Twitter Developer website.

Source:: scotch.io

Serverless Development with Node, MongoDB Atlas, and AWS Lambda

By Chris Sevilleja

The developer landscape has dramatically changed in recent years. It used to be fairly common for us developers to run all of our tools (databases, web servers, development IDEs…) on our own machines, but cloud services such as GitHub, MongoDB Atlas and AWS Lambda are drastically changing the game. They make it increasingly easier for developers to write and run code anywhere and on any device with no (or very few) dependencies.

A few years ago, if you crashed your machine, lost it or simply ran out of power, it would have probably taken you a few days before you got a new machine back up and running with everything you need properly set up and configured the way it previously was.

With developer tools in the cloud, you can now switch from one laptop to another with minimal disruption. However, it doesn’t mean everything is rosy. Writing and debugging code in the cloud is still challenging; as developers, we know that having a local development environment, although more lightweight, is still very valuable.

What We’ll Learn

And that’s exactly what I’ll try to show you in this blog post: how to easily integrate an AWS Lambda Node.js function with a MongoDB database hosted in MongoDB Atlas, the DBaaS (database as a service) for MongoDB. More specifically, we’ll write a simple Lambda function that creates a single document in a collection stored in a MongoDB Atlas database. I’ll guide you through this tutorial step-by-step, and you should be done with it in less than an hour.

Requirements

Let’s start with the necessary requirements to get you up and running:

  1. An Amazon Web Services account available with a user having administrative access to the IAM and Lambda services. If you don’t have one yet, sign up for a free AWS account.
  2. A local machine with Node.js (I told you we wouldn’t get rid of local dev environments so easily…). We will use Mac OS X in the tutorial below but it should be relatively easy to perform the same tasks on Windows or Linux.
  3. A MongoDB Atlas cluster alive and kicking. If you don’t have one yet, sign up for a free MongoDB Atlas account and create a cluster in just a few clicks. You can even try our M0 free cluster tier, perfect for small-scale development projects!

Now that you know about the requirements, let’s talk about the specific steps we’ll take to write, test and deploy our Lambda function:

  1. MongoDB Atlas is by default secure, but as application developers, there are steps we should take to ensure that our app complies with least privilege access best practices. Namely, we’ll fine-tune permissions by creating a MongoDB Atlas database user with only read/write access to our app database.
  2. We will set up a Node.js project on our local machine, and we’ll make sure we test our lambda code locally end-to-end before deploying it to Amazon Web Services.
  3. We will then create our AWS Lambda function and upload our Node.js project to initialize it.
  4. Last but not least, we will make some modifications to our Lambda function to encrypt some sensitive data (such as the MongoDB Atlas connection string) and decrypt it from the function code.

A short note about VPC Peering

I’m not delving into the details of setting up VPC Peering between our MongoDB Atlas cluster and AWS Lambda for 2 reasons: 1) we already have a detailed VPC Peering documentation page and a VPC Peering in Atlas post that I highly recommend and 2) M0 clusters (which I used to build that demo) don’t support VPC Peering.

Here’s what happens if you don’t set up VPC Peering though:

  1. You will have to add the infamous 0.0.0.0/0 CIDR block to your MongoDB Atlas cluster IP Whitelist because you won’t know which IP address AWS Lambda is using to make calls to your Atlas database.
  2. You will be charged for the bandwidth usage between your Lambda function and your Atlas cluster.

If you’re only trying to get this demo code to work, these 2 caveats are probably fine, but if you’re planning to deploy a production-ready Lambda-Atlas integration, setting up VPC Peering is a security best practice we highly recommend. M0 is our current free offering; check out our MongoDB Atlas pricing page for the full range of available instance sizes.

As a reminder, for development environments and low traffic websites, M0, M10 and M20 instance sizes should be fine. However, for production environments that support high traffic applications or large datasets, M30 or larger instance sizes are recommended.

Setting up security in your MongoDB Atlas cluster

Making sure that your application complies with least privilege access policies is crucial to protect your data from nefarious threats. This is why we will set up a specific database user that will only have read/write access to our travel database. Let’s see how to achieve this in MongoDB Atlas:

On the Clusters page, select the Security tab, and press the Add New User button

Cluster Security

In the pop-up window that opens, add a user name of your choice (such as lambdauser):

lambdauser

In the User Privileges section, select the Show Advanced Options link. This allows us to assign read/write on a specific database, not any database.

User Privileges

You will then have the option to assign more fine-grained access control privileges:

Access Control

In the Select Role dropdown list, select readWrite and fill out the Database field with the name of the database you’ll use to store documents. I have chosen to name it travel.

Select Role

In the Password section, use the Autogenerate Secure Password button (and make a note of the generated password) or set a password of your liking. Then press the Add User button to confirm this user creation.

Let’s grab the cluster connection string while we’re at it since we’ll need it to connect to our MongoDB Atlas database in our Lambda code:

Assuming you already created a MongoDB Atlas cluster, press the Connect button next to your cluster:

Connect Cluster

Copy the URI Connection String value and store it safely in a text document. We’ll need it later in our code, along with the password you just set.

URI Connection String

Additionally, if you aren’t using VPC Peering, navigate to the IP Whitelist tab and add the 0.0.0.0/0 CIDR block or press the Allow access from anywhere button. As a reminder, this setting is strongly NOT recommended for production use and potentially leaves your MongoDB Atlas cluster vulnerable to malicious attacks.

Add Whitelist Entry

Create a local Node.js project

Though Lambda functions are supported in multiple languages, I have chosen to use Node.js thanks to the growing popularity of JavaScript as a versatile programming language and the tremendous success of the MEAN and MERN stacks (acronyms for MongoDB, Express.js, Angular/React, Node.js – check out Andrew Morgan’s excellent developer-focused blog series on this topic). Plus, to be honest, I love the fact it’s an interpreted, lightweight language which doesn’t require heavy development tools and compilers.

Time to write some code now, so let’s go ahead and use Node.js as our language of choice for our Lambda function.

Start by creating a folder such as lambda-atlas-create-doc

mkdir lambda-atlas-create-doc && cd lambda-atlas-create-doc

Next, run the following command from a Terminal console to initialize our project with a package.json file

npm init

You’ll be prompted to configure a few fields. I’ll leave them to your creativity but note that I chose to set the entry point to app.js (instead of the default index.js) so you might want to do so as well.

We’ll need to use the MongoDB Node.js driver so that we can connect to our MongoDB database (on Atlas) from our Lambda function, so let’s go ahead and install it by running the following command from our project root:

npm install mongodb --save

We’ll also want to write and test our Lambda function locally to speed up development and ease debugging, since instantiating a lambda function every single time in Amazon Web Services isn’t particularly fast (and debugging is virtually non-existent, unless you’re a fan of the console.log() function). I’ve chosen to use the lambda-local package because it provides support for environment variables (which we’ll use later):

(sudo) npm install lambda-local -g

Create an app.js file. This will be the file that contains our lambda function:

touch app.js

Now that you have imported all of the required dependencies and created the Lambda code file, open the app.js file in your code editor of choice (Atom, Sublime Text, Visual Studio Code…) and initialize it with the following piece of code:

'use strict'

var MongoClient = require('mongodb').MongoClient;

let atlas_connection_uri;
let cachedDb = null;

exports.handler = (event, context, callback) => {
  var uri = process.env['MONGODB_ATLAS_CLUSTER_URI'];

  if (atlas_connection_uri != null) {
    processEvent(event, context, callback);
  }
  else {
    atlas_connection_uri = uri;
    console.log('the Atlas connection string is ' + atlas_connection_uri);
    processEvent(event, context, callback);
  }
};

function processEvent(event, context, callback) {
  console.log('Calling MongoDB Atlas from AWS Lambda with event: ' + JSON.stringify(event));
}

Let’s pause a bit and comment on the code above, since you might have noticed a few peculiar constructs:

  • The file is written exactly as the Lambda code Amazon Web Services expects (e.g. with an “exports.handler” function). This is because we’re using lambda-local to test our lambda function locally, which conveniently lets us write our code exactly the way AWS Lambda expects it. More about this in a minute.
  • We are declaring the MongoDB Node.js driver that will help us connect to and query our MongoDB database.
  • Note also that we are declaring a cachedDb object OUTSIDE of the handler function. As the name suggests, it’s an object that we plan to cache for the duration of the underlying container AWS Lambda instantiates for our function. This allows us to save some precious milliseconds (and even seconds) to create a database connection between Lambda and MongoDB Atlas. For more information, please read my follow-up blog post on how to optimize Lambda performance with MongoDB Atlas.
  • We are using an environment variable called MONGODB_ATLAS_CLUSTER_URI to pass the uri connection string of our Atlas database, mainly for security purposes: we obviously don’t want to hardcode this uri in our function code, along with very sensitive information such as the username and password we use. AWS Lambda has supported environment variables since November 2016 (as does the lambda-local NPM package), so we would be remiss not to use them.
  • The function code looks a bit convoluted with the seemingly useless if-else statement and the processEvent function but it will all become clear when we add decryption routines using AWS Key Management Service (KMS). Indeed, not only do we want to store our MongoDB Atlas connection string in an environment variable, but we also want to encrypt it (using AWS KMS) since it contains highly sensitive data (note that you might incur charges when you use AWS KMS even if you have a free AWS account).

Now that we’re done with the code comments, let’s create an event.json file (in the root project directory) and fill it with the following data:

{
   "address" : {
      "street" : "2 Avenue",
      "zipcode" : "10075",
      "building" : "1480",
      "coord" : [ -73.9557413, 40.7720266 ]
   },
   "borough" : "Manhattan",
   "cuisine" : "Italian",
   "grades" : [
      {
         "date" : "2014-10-01T00:00:00Z",
         "grade" : "A",
         "score" : 11
      },
      {
         "date" : "2014-01-16T00:00:00Z",
         "grade" : "B",
         "score" : 17
      }
   ],
   "name" : "Vella",
   "restaurant_id" : "41704620"
}

(in case you’re wondering, that JSON file is what we’ll send to MongoDB Atlas to create our BSON document)

Next, make sure that you’re set up properly by running the following command in a Terminal console:

lambda-local -l app.js -e event.json -E {"MONGODB_ATLAS_CLUSTER_URI":"mongodb://lambdauser:$PASSWORD@lambdademo-shard-00-00-7xh22.mongodb.net:27017,lambdademo-shard-00-01-7xh22.mongodb.net:27017,lambdademo-shard-00-02-7xh22.mongodb.net:27017/$DATABASE?ssl=true&replicaSet=lambdademo-shard-0&authSource=admin"}

If you want to test it with your own cluster URI Connection String (as I’m sure you do), don’t forget to escape the double quotes, commas and ampersand characters in the E parameter, otherwise lambda-local will throw an error (you should also replace the $PASSWORD and $DATABASE keywords with your own values).

After you run it locally, you should get the following console output:

console output

If you get an error, check your connection string and the double quotes/commas/ampersand escaping (as noted above).

Now, let’s get down to the meat of our function code by customizing the processEvent() function and adding a createDoc() function:

function processEvent(event, context, callback) {
  console.log('Calling MongoDB Atlas from AWS Lambda with event: ' + JSON.stringify(event));
  var jsonContents = JSON.parse(JSON.stringify(event));

  //date conversion for grades array
  if (jsonContents.grades != null) {
    for (var i = 0, len = jsonContents.grades.length; i < len; i++) {
      //use the following line if you want to preserve the original dates
      //jsonContents.grades[i].date = new Date(jsonContents.grades[i].date);

      //the following line assigns the current date so we can more easily differentiate
      //between similar records
      jsonContents.grades[i].date = new Date();
    }
  }

  //the following line is critical for performance reasons to allow re-use of database
  //connections across calls to this Lambda function and avoid closing the database
  //connection. The first call to this lambda function takes about 5 seconds to complete,
  //while subsequent, closely spaced calls will only take a few hundred milliseconds.
  context.callbackWaitsForEmptyEventLoop = false;

  try {
    if (cachedDb == null) {
      console.log('=> connecting to database');
      MongoClient.connect(atlas_connection_uri, function (err, db) {
        cachedDb = db;
        return createDoc(db, jsonContents, callback);
      });
    }
    else {
      createDoc(cachedDb, jsonContents, callback);
    }
  }
  catch (err) {
    console.error('an error occurred', err);
  }
}

function createDoc(db, json, callback) {
  db.collection('restaurants').insertOne(json, function (err, result) {
    if (err != null) {
      console.error('an error occurred in createDoc', err);
      callback(null, JSON.stringify(err));
    }
    else {
      console.log('Kudos! You just created an entry into the restaurants collection with id: ' + result.insertedId);
      callback(null, 'SUCCESS');
    }
    //we don't need to close the connection thanks to context.callbackWaitsForEmptyEventLoop = false (above)
    //this will let our function re-use the connection on the next call (if it can re-use the same Lambda container)
    //db.close();
  });
}

Note how easy it is to connect to a MongoDB Atlas database and insert a document, as well as the small piece of code I added to translate JSON dates (formatted as ISO-compliant strings) into real JavaScript dates that MongoDB can store as BSON dates.
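To make the date handling concrete, here is that conversion in isolation: an ISO-8601 string coming out of JSON is just a string, but wrapping it in `new Date()` produces a real JavaScript Date, which the MongoDB driver then stores as a BSON date rather than a plain string:

```javascript
// A grade object as parsed from event.json: the date is an ISO-8601 string
const grade = { date: '2014-10-01T00:00:00Z', grade: 'A', score: 11 };

// Convert the string into a real Date object so MongoDB stores a BSON date
grade.date = new Date(grade.date);

console.log(grade.date instanceof Date); // true
```

This is what the commented-out "preserve the original dates" line in processEvent() does for every entry in the grades array.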

You might also have noticed my performance optimization comments and the call to context.callbackWaitsForEmptyEventLoop = false. If you’re interested in understanding what they mean (and I think you should!), please refer to my follow-up blog post on how to optimize Lambda performance with MongoDB Atlas.

You’re now ready to fully test your Lambda function locally. Use the same lambda-local command as before and hopefully you’ll get a nice “Kudos” success message:

kudos message

If all went well on your local machine, let’s publish our local Node.js project as a new Lambda function!

Create the Lambda function

The first step we’ll want to take is to zip our Node.js project, since we won’t write the Lambda code function in the Lambda code editor. Instead, we’ll choose the zip upload method to get our code pushed to AWS Lambda.

I’ve used the zip command line tool in a Terminal console, but any method works (as long as you zip the files inside the top folder, not the top folder itself!) :

zip -r archive.zip node_modules/ app.js package.json

Next, sign in to the AWS Console and navigate to the IAM Roles page and create a role (such as LambdaBasicExecRole) with the AWSLambdaBasicExecutionRole permission policy:

AWSLambdaBasicExecutionRole

Let’s navigate to the AWS Lambda page now. Click on Get Started Now (if you’ve never created a Lambda function) or on the Create a Lambda function button. We’re not going to use any blueprint and won’t configure any trigger either, so select Configure function directly in the left navigation bar:

AWS Lambda Configure

In the Configure function page, enter a Name for your function (such as MongoDB_Atlas_CreateDoc). The runtime is automatically set to Node.js 4.3, which is perfect for us, since that’s the language we’ll use. In the Code entry type list, select Upload a .ZIP file, as shown in the screenshot below:

Configure Function

Click on the Upload button and select the zipped Node.js project file you previously created.

In the Lambda function handler and role section, modify the Handler field value to app.handler (why? here’s a hint: I’ve used an app.js file, not an index.js file for my Lambda function code…) and choose the existing LambdaBasicExecRole role we just created:

Lambda function handler

In the Advanced Settings section, you might want to increase the Timeout value to 5 or 10 seconds, but that’s always something you can adjust later on. Leave the VPC and KMS key fields to their default value (unless you want to use a VPC and/or a KMS key) and press Next.

Last, review your Lambda function and press Create function at the bottom. Congratulations, your Lambda function is live and you should see a page similar to the following screenshot:

Lambda Create Function

But do you remember our use of environment variables? Now is the time to configure them and use the AWS Key Management Service to secure them!

Configure and secure your Lambda environment variables

Scroll down in the Code tab of your Lambda function and create an environment variable with the following properties:

Name:  MONGODB_ATLAS_CLUSTER_URI
Value: YOUR_ATLAS_CLUSTER_URI_VALUE

environment variables

At this point, you could press the Save and test button at the top of the page, but for additional (and recommended) security, we’ll encrypt that connection string.

Check the Enable encryption helpers check box and if you already created an encryption key, select it (otherwise, you might have to create one – it’s fairly easy):

encryption key

Next, select the Encrypt button for the MONGODB_ATLAS_CLUSTER_URI variable:

select encrypt

Back in the inline code editor, add the following line at the top:

const AWS = require('aws-sdk');

and replace the contents of the “else” statement in the “exports.handler” method with the following code:

const kms = new AWS.KMS();
kms.decrypt({ CiphertextBlob: new Buffer(uri, 'base64') }, (err, data) => {
  if (err) {
    console.log('Decrypt error:', err);
    return callback(err);
  }
  atlas_connection_uri = data.Plaintext.toString('ascii');
  processEvent(event, context, callback);
});

(hopefully the convoluted code we originally wrote makes sense now!)

If you want to check the whole function code I’ve used, check out the following Gist. And for the Git fans, the full Node.js project source code is also available on GitHub.

Now press the Save and test button and in the Input test event text editor, paste the content of our event.json file:

input test event

Scroll and press the Save and test button.

If you configured everything properly, you should receive the following success message in the Lambda Log output:

Lambda Log Output

Kudos! You can savor your success a few minutes before reading on.

What’s next?

I hope this AWS Lambda-MongoDB Atlas integration tutorial provides you with the right steps for getting started in your first Lambda project. You should now be able to write and test a Lambda function locally and store sensitive data (such as your MongoDB Atlas connection string) securely in AWS KMS.

So what can you do next?

And of course, don’t hesitate to ask us any questions or leave your feedback in a comment below. Happy coding!

Source:: scotch.io

Understand The JavaScript Ternary Operator like ABC

By William Imoh

If you made it this far, then either you already know of ternary operators and want to know more, you have no idea what they are, or you’re just somewhere in-between. Keep on.

From my time writing JavaScript, and of course looking through the JavaScript of others, especially beginner developers, I have noticed the age-long trend of using if/else statements, and really long chains of them too! As an advocate of DRY code (Don’t Repeat Yourself), I ensure my code stays DRY, and maybe cold-hearted, lol.

In this post, we’ll be explaining ternary operators in the simplest way and how they make coding and life in general, easier.

Operators in JavaScript

In the JavaScript theatre, various operations require operators, which are basically denoted with symbols: + - / = * %. Various symbols are used for operations like arithmetic and assignment. In their usage, these operators are split into 3 types:

  • Unary Operators – Requires one operand either before or after the operator.
  • Binary Operators – Requires two operands on either side of the operator.
  • Ternary Operator – Requires three operands and is a conditional operator.
time++ //Unary operator (increment)
2 + 3 //Binary Operator (addition)
a ? 'hello' : 'world' //Ternary/Conditional operator

We will focus on the ternary operator as earlier stated.

Ternary Operator

The ternary operator has been around for a while now but isn’t widely used, maybe because of the syntax or some form of ambiguity I’m unaware of. The ternary operator is a conditional operator and can effectively and efficiently replace several lines of IF statements. It simply verifies whether a condition is true or false and returns an expression or carries out an operation based on the state of the condition, in probably one line of code. Using an IF statement we have:

var day = true;
if(day){
  alert('It is day-time')
} else{
  alert('It is night-time')
}
// It is day-time

Using the ternary operator:

var day = true; //conditon
alert(day ? 'It is day-time' : 'It is night-time') // It is day-time

This reduced the syntax of the IF statement to:

- ? means IF
- : means ELSE

So, if day is true, alert 'It is day-time', else alert 'It is night-time'. Simple!

Let’s get to more details.

Variable Assignment

As stated earlier, the result of the condition evaluation can be an expression or an operation, and in this case a variable assignment.

var myName = false;
var age = false;
var message = myName ? "I have a name" : "I don't have a name, duh!"
console.log(message) // I don't have a name, duh!

myName = true;
message = myName ? age = true : 'Get out of here, nameless'
console.log(message) // true

Notice we assigned the result of a ternary operation to a global variable message, and later on reassigned it when the condition changed. Also notice the reassignment of the global variable age in the true branch of the ternary operation. Reassignment operations can occur inside a ternary operation, so much done in one line, yeah? The ELSE part of the operation can also be an expression or an operation of its own, just like in conventional IF statements.

Usage in Functions

Usually, the next use case for IF statements is in functions; basically, ternary operations make a function ‘syntactically sweet’. Just as variables are assigned the result of ternary operations, functions can return the result of ternary operations. With IF statements we have:

var dog = true;
function myPet(){
  if(dog){
    return 'How do i get it?';
  } else {
    return 'Get me a dog!';
  }
}
console.log(myPet()) // How do i get it?

with a ternary operator:

var dog = false;
function myPet(){
  return dog ? 'How do i get it?' : 'Get me a dog!';
}
console.log(myPet()) // Get me a dog!

Imagine if we had quite a number of IF statements, each with a host of return expressions in them; now think of how these could be shortened using ternary operators. Next, we will see how we can chain multiple conditions together, and you can have these conditions in functions as well!

Multiple Conditions

Just like good old IF/ELSE IF statements, multiple conditions can be chained in ternary operations to give one robust operation. Normally we would write:

var myName = true;
var myAge = true;
var message = '';
if (myName){
  if(myAge){
    message = "I'll like to know your age and name"
  }else{
    message = "I'll stick with just your name then"
  }
} else {
  "Oh, I'll call you JavaScript then, cus you fine and mysterious"
}
console.log(message) //I'll like to know your age and name

but with a ternary operator we have:

var myName = true;
var myAge = true;
var message = myName? (myAge ? "I'll like to know your age and name" : "I'll stick with just your name") : "Oh, I'll call you JavaScript then, cus you fine and mysterious"
console.log(message) // I'll like to know your age and name

This is just a simple illustration of having two conditions in one IF statement. Here’s a lighter one:

var email = false;
var phoneNumber = true;
var message = email ? 'Thanks for reaching out to us' : phoneNumber ? 'Thanks for reaching out to us' : 'Please fill in your email or phone-number'
console.log(message) //Thanks for reaching out to us

Here we simply have multiple conditions chained to one another: if the first condition doesn’t pass, the next one is tried, and if that doesn’t pass either (now you cannot be offered any further assistance, lol), the final expression is returned.
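A chained ternary reads naturally when formatted like an if/else-if ladder. Here is a hypothetical grading example (the score thresholds are made up for illustration):

```javascript
// Each ': condition ?' pair acts like an 'else if' branch;
// the final ':' is the catch-all 'else'.
var score = 75;
var grade = score >= 90 ? 'A'
          : score >= 70 ? 'B'
          : score >= 50 ? 'C'
          : 'F';
console.log(grade); // B
```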

Multiple Operations per Condition

A friend of mine would say, “in code, you read A + B, but later you are required to pull B from a remote server, make A a ninja, and minify B before they can be added together”. So far we have seen multiple conditions chained together; what about multiple operations per condition? Say we would like to make a request to an API when a condition passes as true. Here’s a simple example with an if statement:

var home = true;
if(home){
  alert('Welcome 127.0.0.1');
  var port = prompt('What port do you like?');
  alert('Serving you a cool dish on ' + port)
} else {
  alert('Check back when you get home' )
}
//serving you a cool dish on ???

but with a ternary operator, simply wrap the operations in parentheses and separate each one with a comma:

var home = true;
var port = '';
home ? (
  alert('Welcome 127.0.0.1'),
  port = prompt('What port do you like?'),
  alert('Serving you a cool dish on ' + port)
) : alert('Check back when you get home' )
//serving you a cool dish on ???

Syntactic sugar!
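Since alert and prompt only exist in the browser, here is a hypothetical console-only sketch of the same comma-separated pattern that you can run in Node as well:

```javascript
// Each comma-separated operation runs left to right when the
// condition is true; console.log stands in for alert here.
var loggedIn = true;
var greeting = '';
loggedIn ? (
  greeting = 'Welcome back',
  console.log(greeting)
) : console.log('Please sign in');
```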

Note: All variables used in a ternary operation should be declared before the operation runs. Also, when a branch contains several comma-separated operations, the value of the last expression is used, i.e.

var home = true;
var myLocation = 'Lagos';
myLocation = home ? ('Brussels', 'London', 'Rio de Janeiro', 'Newark') : 'Kinshasa'
console.log(myLocation) // Newark
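That “last one wins” behavior is not special to ternaries; it comes from JavaScript’s comma operator, which evaluates its operands left to right and yields the final one:

```javascript
// A parenthesized comma sequence evaluates to its last operand,
// with or without a ternary around it.
var trip = ('Brussels', 'London', 'Newark');
console.log(trip); // Newark
```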

Conclusion

So far, you have seen how invaluable ternary operators are for writing conditional statements: they make code plain and effortless, from simple one-line conditionals to large chunks of chained operations in or out of functions. Keep using ternary operators, keep writing better code, …and DRY code.

Source:: scotch.io