Monthly Archives: April 2017

Node.js Weekly Update - 21 April, 2017

By Gergely Németh


Below you can find RisingStack's collection of the most important Node.js news, projects, updates & security leaks from this week:

1. Hard-won lessons: Five years with Node.js

Scott Nonnenberg shared his 5 years of Node.js knowledge on topics like Classes, NaN, the Event Loop, Testing, Dependencies, and on failing to use New Relic to monitor Node.js apps.

After five years working with Node.js, I’ve learned a lot. I’ve already shared a few stories, but this time I wanted to focus on the ones I learned the hard way. Bugs, challenges, surprises, and the lessons you can apply to your own projects!

2. The Definitive Guide to Object Streams in Node.js

Node.js Streams come with a great power: You have an asynchronous way of dealing with input and output, and you can transform data in independent steps.

In this tutorial, I’ll walk you through the theory, and teach you how to use object stream transformables, just like Gulp does.

3. Improving Startup Time at Atom

Over the last months, the Atom team has been working hard on improving one of the aspects of the editor our users care about the most: startup time.

We will first provide the reader with some background about why reducing startup time is a non-trivial task, then illustrate the optimizations we have shipped in Atom 1.17 (currently in beta) and, finally, describe what other improvements to expect in the future.

4. Announcing Free Node.js Monitoring & Debugging with Trace

Today, we’re excited to announce that Trace, our Node.js monitoring & debugging tool is now free for open-source projects.


We know from experience that developing an open-source project is hard work, which requires a lot of knowledge and persistence. Trace will save a lot of time for those who use Node for their open-source projects.

5. PSA: Node.js 8 will be delayed until May.

As we’ve mentioned in the previous Node.js Weekly Update, V8 5.9 will be the first version with TurboFan + Ignition (TF+I) turned on by default.

As parts of the Node.js codebase have been tuned to CrankShaft, there will be a non-trivial amount of churn to adapt to the new pipeline. This also creates a security risk, as CrankShaft and FullCodeGen are no longer maintained by the V8 team or tested by the Chrome security team. If TF+I lands in Node.js 9.x, backporting any changes to Node.js 8.x is going to prove extremely difficult and time-consuming.

The Node.js Core Team decided to target V8 5.8 in the 8.x release. The release will be delayed by 3-4 weeks to allow for an ABI that is forward-compatible with V8 6.0, so that the upgrade to TF+I can later land as a semver-minor change.

6. Meet Awaiting – the async/await utility for browsers and Node.js

Code written with async functions benefits from superior readability, improved terseness and expressiveness, and unified error handling. No more nested callbacks, opaque Promise chains, and if (err) checks littering your code.

However, this pattern isn’t a panacea. It’s easy to do some things: iterate through single items, wait on a single result, run an array of promises in parallel. Other workflows require abstraction or state. I kept finding myself writing the same utility functions in each project: delays, throttled maps, skipping try/catch on optional operations, adapting to events or callbacks. Await, combined with these simple abstractions, yields readable yet powerful async workflows.
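
For a sense of what such helpers look like, here is a minimal, hand-rolled sketch of a delay helper and an "optional" wrapper. It illustrates the pattern the author describes, not the Awaiting library's actual API:

// Resolve after `ms` milliseconds, so you can `await delay(500)` instead of nesting setTimeout callbacks.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

// Run an optional operation without a try/catch at the call site; fall back to a default value on failure.
const optional = (promise, fallback = null) => promise.catch(() => fallback);

async function main() {
  await delay(500);                                                                       // wait half a second
  const result = await optional(Promise.reject(new Error('optional step')), 'fallback');  // failure is swallowed
  console.log(result);                                                                    // 'fallback'
}

main();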

7. Call for Papers (NodeTalk Proposals)

Node Summit 2017 will host the fifth annual NodeTalks. The conference will host leading technology and business experts from across the Node.js ecosystem who will present real-world case studies and talks that highlight the rapidly growing number of high profile companies and critical applications that rely on the Node.js ecosystem.

Submit your talk here!

Security Vulnerabilities Discovered:

High severity

  • ReDoS: decamelize package, versions >=1.1.0 <1.1.2
  • ReDoS: useragent package, versions <2.1.12
  • ReDoS: uri-js package, versions <3.0.0
  • DoS: nes package, versions <6.4.1


Previously in the Node.js Weekly Update

In the previous Node.js Weekly Update we read interviews with Matt Loring & Mark Hinkle, and read about tracking the growth of Open-Source & mastering the Node CLI.

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Source:: risingstack.com

Creating Twitter Single Sign On App with Monaca using Angular 1 and Onsen UI v2

By Khemry Khourn

Authentication and Profile Pages

In this article, you will learn how to use Twitter Single Sign On (SSO) with Monaca Cloud IDE using Angular 1 and Onsen UI. The authentication is done using the twitter-connect-plugin. This plugin uses Twitter's Fabric SDK to enable SSO with your Android and iOS apps. After a successful authentication, the user's basic information will be displayed in the app.

Tested Environments: Android 6.2 & iOS 10.1

Prerequisite

Getting Twitter Consumer Key and Consumer Secret

You are required to obtain a Consumer Key and Consumer Secret by registering your Monaca app on the Twitter Apps page. Please do as follows:

  1. Go to Twitter Apps page and sign in with a valid Twitter account.
  2. Click on Create New App button.
  3. Fill in the information of your app such as: Name, Description, Website and Callback URL (optional). Then, tick "Yes, I have read and agree to the Twitter Developer Agreement" and click the Create your Twitter application button.
  4. Go to the Settings tab and tick Allow this application to be used to Sign in with Twitter. Then, click the Update Settings button.
  5. Go to the Keys and Access Tokens tab, where you will find the Consumer Key and Consumer Secret.

Getting Fabric API Key

In order to get the Fabric API key, please do as follows:

  1. Log in to your Fabric account and open the Crashlytics page. If you are new to Fabric, please sign up here.
  2. Find your API Key inside the code block in the AndroidManifest.xml file (see the screenshot below).

Fabric API Key

Step 1: Creating a Project

From Monaca Cloud IDE, create a new project with a template called Onsen UI v2 Angular 1 Minimum.

Step 2: Adding the Plugin

  1. From Monaca Cloud IDE, please import twitter-connect-plugin. For more details, please refer to Add/Import Cordova Plugins.
  2. Add the Fabric API Key to the plugin’s configuration.

Plugin Configuration

Step 3: Editing config.xml File

  1. Open the config.xml file and add the following code within the <widget> tag. Please remember to replace the placeholders with your own Twitter Consumer Key and Twitter Consumer Secret.

    <preference name="TwitterConsumerKey" value="<Twitter Consumer Key>" />
    <preference name="TwitterConsumerSecret" value="<Twitter Consumer Secret>" />
    
  2. Save the file.

NOTE: For iOS, the deployment target needs to be at least 7.0. You can set this in the config.xml file like so:

<preference name="deployment-target" value="7.0" />

Step 4: It’s Coding Time!

File Tree

In this sample application, there are four main files:

  • index.html and home.html: Each one of these pages is a combination of HTML, Onsen UI v2 and Angular 1 elements.
  • style.css: This is a stylesheet for the application.
  • app.js: An Angular 1 file containing a service and a controller used in the application.

index.html

This is, of course, the main page of the application, the one loaded by default. Replace the content of the index.html file with the following code:

<!DOCTYPE HTML>
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
    <meta http-equiv="Content-Security-Policy" content="default-src * data:; style-src * 'unsafe-inline'; script-src * 'unsafe-inline' 'unsafe-eval'">
    <script src="components/loader.js"></script>
    <script src="lib/angular/angular.min.js"></script>
    <script src="lib/onsenui/js/onsenui.min.js"></script>
    <script src="lib/onsenui/js/angular-onsenui.min.js"></script>
    <script src="js/app.js"></script>

    <link rel="stylesheet" href="components/loader.css">
    <link rel="stylesheet" href="lib/onsenui/css/onsenui.css">
    <link rel="stylesheet" href="lib/onsenui/css/onsen-css-components.css">
    <link rel="stylesheet" href="css/style.css">
</head>

<body >
    <ons-navigator id="myNavigator" page="home.html"></ons-navigator>
</body>
</html>

As you can see, inside the body there is only an <ons-navigator> component. It provides page stack management and navigation. The attribute page is used to identify the first page in the stack. Since we have only one main page in this sample application, home.html (described in the next section) is of course the first in the page stack and will be loaded as soon as index.html has finished loading.

home.html

Please create a home.html file with the following content:

<ons-page ng-controller="HomeCtrl as home" ng-init="CheckLoginStatus()">
    <ons-toolbar>
        <div class="center">Twitter Demo</div>
        <div class="right" ng-show="login_status">
            <ons-toolbar-button ng-click="Logout()">
                <ons-icon icon="fa-sign-out"></ons-icon>
            </ons-toolbar-button>
        </div>
    </ons-toolbar>
    <div class="page">
        <div ng-hide="login_status">
            <p class="center">
                Welcome to Twitter Demo with Monaca using Onsen UI and AngularJS!
            </p>
            <ons-button ng-click="Login()">                    
                Connect to Twitter
            </ons-button>        
        </div>
        <div ng-show="login_status">
            <p class="center">
                <p>Currently, logged in as <b>{{user.name}}</b></p>
                <img src="{{user.profile_url}}" class="profile">
                <p>(@{{user.screen_name}})</p>
                <p>{{user.description}}</p>
                <p><ons-icon icon="fa-map-marker"></ons-icon> {{user.location}}</p>
            </p>
        </div>
    </div>
</ons-page>

This page contains two sections which are shown based on the login status (the login_status variable) of the user in the application:

  1. Login section: This section is shown if there is no existing login information found in the device.

    Login Section

  2. Profile section: When the existing login info is found, this section will be displayed.

    Profile Section

style.css

A default, empty style.css file is created along with the project template. Add the following code to it. The main purpose of this stylesheet is simply to style the page and the Twitter profile image.

div.page {
   padding: 5%; 
   text-align: center;
}

p.center {
    text-align: center;
}

img.profile {
    width: 40%;
    border: solid 1px #1da1f2;
    border-radius: 5px;
}

.navigation-bar {
    background-color: #1da1f2;
}

.button {
    background-color: #1da1f2;
}

app.js

This is an Angular 1 file containing a service and a controller used in the application. Add the following code to the app.js file:

ons.bootstrap()
.service('StorageService', function() {
    var setLoginUser = function(user_info) {
        window.localStorage.login_user = JSON.stringify(user_info);
    };

    var getLoginUser = function(){
        return JSON.parse(window.localStorage.login_user || '{}');
    };

    return {
        getLoginUser: getLoginUser,
        setLoginUser: setLoginUser
    };
})

.controller('HomeCtrl', function($scope, StorageService, $http) {
    $scope.CheckLoginStatus = function(){
        $scope.user = StorageService.getLoginUser();
        console.log(JSON.stringify($scope.user));
        //check if there is any stored information of a login user
        if(JSON.stringify($scope.user) === "{}"){
            console.log('No login info!');
            $scope.login_status = 0;
        } else {
            console.log('Login info is found!');
            $scope.login_status = 1;

        }
    }

    $scope.Login = function(){
        TwitterConnect.login(
            function(result) {
            console.log('Successful login!');

            TwitterConnect.showUser(
                function(user) {
                    //Get larger profile picture
                    user.profile_url = user.profile_image_url.replace("_normal", "");
                    StorageService.setLoginUser({
                        name: user.name,
                        screen_name: user.screen_name,
                        location: user.location,
                        description: user.description,
                        profile_url: user.profile_image_url.replace("_normal", "")
                    });
                    myNavigator.pushPage('home.html');
                }, function(error) {
                    console.log('Error retrieving user profile');
                    console.log(error);
                }
            );

            }, function(error) {
                console.log('Error logging in');
                console.log(error);
            }
        );   
    }

    var LogoutFromTwitter = function(){
        TwitterConnect.logout(
            function() {
                console.log('Successful logout!');
                StorageService.setLoginUser({});
                myNavigator.pushPage("home.html");
            },
            function(error) {
                console.log('Error logging out: ' + JSON.stringify(error));
            }
        );  
    }

    $scope.Logout = function(){
        ons.notification.confirm({
            message: "Are you sure you want to log out?",
            title: 'Twitter Demo',
            buttonLabels: ["Yes", "No"],
            callback: function(idx) {
            switch (idx) {
                case 0:
                    LogoutFromTwitter();
                    break;
                case 1:
                    break;
            }
          }
        });
    }
});

Inside this file, there is a service called StorageService to store the login information of the user using the device's Local Storage.

There is also one controller called HomeCtrl which consists of two main functions: Login() and Logout(). Inside the Login() function, TwitterConnect.login() is called, asking the user to log in with a valid Twitter account.

NOTE: If you have logged in with a Twitter application on your device, the information of that account will be shown automatically (see the screenshot below as an example). If you want to log in with a different account, please go to your Twitter application and change the account there.

Authentication

After a successful login, StorageService is called to store the login information and you will be directed back to the home.html page, showing the profile information of the logged-in user. Inside the Logout() function, a confirmation dialog is shown to confirm the activity (see the screenshot below). If the user selects Yes, the TwitterConnect.logout() function is called and StorageService is also called to remove the login information.

NOTE: This Logout() function can only log the user out of this application, not the Twitter application.

Confirmation

Conclusion

At this point, you have completed a Twitter SSO app with Monaca. Now you can start testing your app with the Monaca Debugger. Then, try to build it and install it on your device.

Source:: https://onsen.io/

The Definitive Guide to Object Streams in Node.js

By Stefan Baumgartner


Node.js Streams come with a great power: You have an asynchronous way of dealing with input and output, and you can transform data in independent steps. In this tutorial, I’ll walk you through the theory, and teach you how to use Gulp as a streaming build system.


When I was researching for my book Front-End Tooling with Gulp, Bower and Yeoman, I decided to not just explain APIs and use cases, but also focus on the concepts underneath.

You know that especially in JavaScript, tools and frameworks come and go faster than you can register domains and GitHub groups for them. For Gulp.js, one of the most crucial concepts is streams!

Some 50 years of streams

With Gulp, you want to read input files and transform them into the desired output, loading lots of JavaScript files and combining them into one. The Gulp API provides some methods for reading, transforming, and writing files, all using streams under the hood.
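
As a rough illustration of what that looks like in practice, here is a minimal Gulp task that concatenates JavaScript files. It assumes the gulp-concat plugin is installed and that the sources live in src/:

const gulp = require('gulp');
const concat = require('gulp-concat'); // assumed to be installed next to gulp

// Read all JavaScript files, combine them into a single bundle, and write the result to dist/.
gulp.task('scripts', () =>
  gulp.src('src/**/*.js')        // readable stream of virtual file objects
    .pipe(concat('bundle.js'))   // transformation step: concatenate everything into one file
    .pipe(gulp.dest('dist'))     // writeable destination on disk
);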

Streams are a fairly old concept in computing, originating from the early Unix days in the 1960s.

The source can be of multiple types: files, the computer’s memory, or input devices like a keyboard or a mouse.

Once a stream is opened, data flows in chunks from its origin to the process consuming it. Coming from a file, every character or byte would be read one at a time; coming from the keyboard, every keystroke would transmit data over the stream.

The biggest advantage compared to loading all the data at once is that, in theory, the input can be endless and without limits.

Coming from a keyboard, that makes total sense – why should anybody close the input stream you’re using to control your computer?

Input streams are also called readable streams, indicating that they’re meant to read data from a source. On the other hand, there are outbound streams or destinations; they can also be files or some place in memory, but also output devices like the command line, a printer, or your screen.

They’re also called writeable streams, meaning that they’re meant to store the data that comes over the stream. The figure below illustrates how streams work.

[Figure: how streams work]

The data is a sequence of elements made available over time (like characters or bytes).

Readable streams can originate from different sources, such as input devices (keyboards), files, or data stored in memory. Writeable streams can also end in different places, such as files and memory, as well as the command line.

Not only is it possible to have an endless amount of input, but you also can combine different readable and writeable streams. Key input can be directly stored into a file, or you can print file input out to the command line or even a connected printer. The interface stays the same no matter what the sources or destinations are.

The easiest program in Node.js involving streams is piping the standard key input to the standard output, the console:

process.stdin.pipe(process.stdout);  

We take our readable (process.stdin) and pipe it to a writeable (process.stdout). As said before, we can stream any content from any readable source to any writeable destination.

Take the request package for example, where you can make an HTTP request to a URL. Why not fetch some page on the web and print it out to process.stdout?

const request = require('request');

request('https://fettblog.eu').pipe(process.stdout);  

The output of an HTML page might not be particularly useful on a console, but think of it being piped to a file for a web scraper.
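
For instance, swapping the destination is a one-line change. This sketch assumes you want the page saved as page.html in the current directory:

const fs = require('fs');
const request = require('request');

// Stream the HTTP response straight into a file instead of the console.
request('https://fettblog.eu').pipe(fs.createWriteStream('page.html'));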

Transforming data

Streams aren’t just good for transferring data between different input sources and output destinations.

With the data exposed once a stream is opened, developers can transform the data that comes from the stream before it reaches its destination, such as by transforming all lowercase characters in a file to uppercase characters.

This is one of the greatest powers of streams. Once a stream is opened and you can read the data piece by piece, you can slot different programs in between. The figure below illustrates this process.

[Figure: transformation steps between input and output]

To modify data, you add transformation blocks between the input and the output.

In this example, you get your input data from different sources and channel it through a toUpperCase transformation. This changes lowercase characters to their uppercase equivalent. Those blocks can be defined once and reused for different input origins and outputs.

In the following listing, we define a toUpperCase function that — well — transforms every letter to its uppercase equivalent. There are many ways to create this functionality, but I’ve always been a huge fan of the Node.js streaming packages like through2. They define a good wrapper that makes creating new transformables a breeze:

const through2 = require('through2');

const toUpperCase = through2((data, enc, cb) => {      /* 1 */  
  cb(null, new Buffer(data.toString().toUpperCase())); /* 2 */
});

process.stdin.pipe(toUpperCase).pipe(process.stdout);  /* 3 */  
  1. The through2 package takes a function for the first parameter. This function passes data (in a Buffer), some encoding information and a callback we can call once we’re done with our transformation.
  2. Usually, in Node.js streams, we pass Buffers with the data from the stream. Coming from process.stdin this is most likely the current line before we press Return. Coming from a file, this can be actually anything. We transform the current Buffer to a string, create the uppercase version, and convert it back to a Buffer again. The callback takes two arguments. The first one is a possible error. The stream will crash and the program will stop execution if you are not listening to an error event to catch it. Pass null if everything is okay. The second parameter is the transformed data.
  3. We can use this transformable and pipe our input data from the readable to it. The transformed data is piped to our writeable.

This is totally in the vein of functional programming. We can use and reuse the same transformable for every other input or output, as long as it’s coming from a readable stream. We don’t care about the input source or the output. Also, we are not limited to one single transformable. We can chain as many transformables as we like:

const through2 = require('through2');

const toUpperCase = through2((data, enc, cb) => {  
  cb(null, new Buffer(data.toString().toUpperCase()));
});

const dashBetweenWords = through2((data, enc, cb) => {  
  cb(null, new Buffer(data.toString().split(' ').join('-')));
});

process.stdin  
  .pipe(toUpperCase)
  .pipe(dashBetweenWords)
  .pipe(process.stdout);

If you are familiar with Gulp, the code above should ring a bell. Very similar, isn’t it? However, Gulp streams are different in one specific respect: We don’t pass data in Buffers, we use plain, old JavaScript objects.

Object streams

In standard streams, it’s usual to see the file just as a possible input source for the real data, which has to be processed. All information on the origin, like the path or filename, is lost once the stream has opened up.

In Gulp, you’re not just working with the contents of one or a few files, you need the filename and the origin on the file system as well.

Think of having 20 JavaScript files and wanting to minify them. You’d have to remember each filename separately and keep track of which data belongs to which file to restore a connection once the output (the minified files of the same name) must be saved.

Luckily, Gulp takes care of that for you by creating both a new input source and a data type that can be used for your streams: virtual file objects.

Once a Gulp stream is opened, all the original, physical files are wrapped in such a virtual file object and handled in the virtual file system, or Vinyl, as the corresponding software is called in Gulp.

Vinyl objects, the file objects of your virtual file system, contain two types of information: the path where the file originated, which becomes the file’s name, as well as a stream exposing the file’s contents. Those virtual files are stored in your computer’s memory, known for being the fastest way to process data.

All the modifications that would usually be made on your hard disk are done there. By keeping everything in memory and not having to perform expensive read and write operations in between processes, Gulp can make changes extraordinarily quickly.
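
To make this a bit more concrete, here is a small sketch of such a virtual file object created by hand with the vinyl package. The path and contents are made up for illustration:

const File = require('vinyl');

// A virtual file object: it knows where it came from and keeps its contents in memory.
const file = new File({
  cwd: '/projects/app',
  base: '/projects/app/src',
  path: '/projects/app/src/index.js',
  contents: Buffer.from('console.log("hello");')
});

console.log(file.relative);            // 'index.js' -- derived from base and path
console.log(file.contents.toString()); // the contents, held in memory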

Internally, Gulp is using object streams to emit file by file into the processing pipeline. Object streams behave just like normal streams, but instead of Buffers and strings, we pass through plain old JavaScript objects.

We can create our own readable object stream using the readable-stream package:

const through2 = require('through2');  
const Readable = require('readable-stream').Readable;

const stream = Readable({objectMode: true});   /* 1 */  
stream._read = () => {};                       /* 2 */

setInterval(() => {                            /* 3 */  
  stream.push({
    x: Math.random()
  });
}, 100);

const getX = through2.obj((data, enc, cb) => { /* 4 */  
  cb(null, `${data.x.toString()}\n`);
});

stream.pipe(getX).pipe(process.stdout);        /* 5 */  
  1. Important for creating an object readable is to set the objectMode flag to true. In doing so, the stream is capable of passing JavaScript objects through the pipeline. It would expect Buffers or Strings otherwise.
  2. Every stream needs a _read function. This function gets called when the stream checks for data. This is the proper place to start other mechanisms and push new content to the stream. Since we push data from outside, we don’t need it here and can keep the function empty. However, readable streams need to implement it, otherwise we would get an error.
  3. Here we are filling the stream with demo data. Every 100 milliseconds, we push another object with a random number to our stream.
  4. Since we want to pipe the results of the object stream to process.stdout, and process.stdout just accepts strings, we have a small transformable where we extract the property from our passed through JavaScript object.
  5. We create a pipeline. Our readable object stream pipes all its data to the getX transformable, and finally to the writeable process.stdout.

A note on stream packages in Node.js

You might have noticed that we use different stream packages that are installable via NPM. Isn’t that odd?

The reason is that the streaming core was constantly subject to change back in the old 0.x days of Node; that’s why the community stepped in and created a solid and stable API around the basic packages. With semantic versioning, you can be sure that the streaming ecosystem moves nicely along with your application.

Enough demos. Let’s do something real

Alright! Let’s go for a small app that reads CSV data and stores it as JSON. We want to use object streams because at some point we might want to change the data depending on the use case. Since streams are awesome, we want to be able to push the result to different output formats.

First things first, we install a few packages and require them:

const through2 = require('through2');  
const fs = require('fs');  
const split = require('split2');  
  1. We know through2 already. We use this one to create all our transformables.
  2. The fs package is obviously for reading and writing files. Cool thing: It allows you to create a readable! Exactly what we need.
  3. Since you never know how the data from fs.createReadStream is pulled into your memory, the split2 package makes sure that you can process data line by line. Note the “2” in the name of this transformable. It tells you that it’s part of the semantically versioned wrapper ecosystem.

Parse CSV!

CSV is great for parsing because it follows a very easy-to-understand format: A comma means a new cell. A line means a new row.

Easy.

In this example, the first line is always the heading for our data. So we want to treat the first line in a special way: It will provide the keys for our JSON objects.

const parseCSV = () => {  
  let templateKeys = [];
  let parseHeadline = true;
  return through2.obj((data, enc, cb) => {       /* 1 */
    if (parseHeadline) {
      templateKeys = data.toString().split(',');
      parseHeadline = false;
      return cb(null, null);                     /* 2 */
    }

    const entries = data.toString().split(',');
    const obj = {};

    templateKeys.forEach((el, index) => {       /* 3 */
      obj[el] = entries[index];
    });

    return cb(null, obj);                       /* 4 */
  });
};
  1. We create a transformable for object streams. Notice the .obj method. Even if your input data is just strings, you need an object stream transformable if you want to emit objects further on.
  2. In this block, we parse the headline (comma separated). This is going to be our template for the keys. We remove this line from the stream, that’s why we pass null both times.
  3. For all other lines, we create an object each through the help of the template keys we parsed earlier.
  4. We pass this object through to the next stage.

That’s all it needs to create JavaScript objects out of a CSV file!

Changing and adapting data

Once we have everything available in objects, we can transform the data much more easily. Delete properties, add new ones; filter, map and reduce. Anything you like. For this example, we want to keep it easy: Pick the first 10 entries:

const pickFirst10 = () => {  
  let cnt = 0;
  return through2.obj((data, enc, cb) => {
    if (cnt++ < 10) {
      return cb(null, data);
    }
    return cb(null, null);
  });
};

Again, like in the previous example: Passing data for the second argument of a callback means that we keep the element in the stream. Passing null means that we throw the data away. This is crucial for filters!

Flushing to a JSON

You know what JSON stands for? JavaScript Object Notation. This is great, because we have JavaScript objects, and we can note them down in a string representation!

So, what we want to do with the objects in our stream is to collect all of them that are passing through, and store them in a single string representation. JSON.stringify comes to mind.

One important thing you have to know when working with streams is that once the object (or Buffer data for that matter) passes through your transformable to the next stage, it’s gone for this stage.

This also means that you can pass objects just to one writeable, not more. There is, however, a way of collecting data and doing something different with it. If there’s no more data coming through a stream, each transformable calls a flush method.

Think of a sink that’s getting filled with fluids.

You are not able to pick every single drop of it and analyze it again. But you can flush the whole thing to the next stage. This is what we’re doing with the next transformable toJSON:

const toJSON = () => {  
  let objs = [];
  return through2.obj(function(data, enc, cb) {
    objs.push(data);                              /* 1 */
    cb(null, null);
  }, function(cb) {                               /* 2 */
    this.push(JSON.stringify(objs));
    cb();
  });
};
  1. We collect all data that’s passing through in an array. We remove the objects from our stream.
  2. In the second callback method, the flush method, we are transforming the collected data to a JSON string. With this.push (note the classic function notation there), we push this new object to our stream into the next stage. In this example, the new “object” is merely a string. Something that’s compatible with standard writeables!

Gulp, for example, uses this behavior when working with concatenation plugins. Reading all files in stage one, and then flushing one single file to the next stage.

Combining everything

Functional programming comes to mind again: Each transformable that we’ve written in the last couple of lines is completely separated from the others. And they are perfectly reusable for different scenarios, regardless of input data or output format.

The only constraints are in the format of CSV (the first line is the headline) and that pickFirst10 and toJSON need JavaScript objects as input. Let’s combine them and put the first ten entries as JSON on our standard console output:

const stream = fs.createReadStream('sample.csv');

stream  
  .pipe(split())
  .pipe(parseCSV())
  .pipe(pickFirst10())
  .pipe(toJSON())
  .pipe(process.stdout);

Perfect! But we can also pipe the whole lot to different writeables. In Node.js, the core IO is all compatible with streams. So let’s use a quick HTTP server and pipe everything out into the internet:

const http = require('http');

// All from above
const server = http.createServer((req, res) => {
  // Build a fresh pipeline for each request -- a stream can only be consumed once
  fs.createReadStream('sample.csv')
    .pipe(split())
    .pipe(parseCSV())
    .pipe(pickFirst10())
    .pipe(toJSON())
    .pipe(res);
});

server.listen(8000);  

This is the great power of Node.js streams. You have an asynchronous way of dealing with input and output, and you can transform data in independent steps. With object streams, you can leverage JavaScript objects that you know and love to transform your data.

This is the foundation of Gulp as a streaming build system, but also a great tool for your everyday development.

Further reading

If you are hooked on streams, I can recommend a few resources:

Source:: risingstack.com

Handle Mouse And Touch Input With The Pointer Events API

By Danny Markov


With more and more people using their mobile phones and tablets for web browsing, we as developers have to make sure that our web interfaces are fully accessible via touch. Setting up click and hover event listeners sort of works, but it is clearly a leftover solution from the mouse era.

Thankfully, there is a new API in town that accommodates the needs of mouse, touch, and stylus devices. It’s called Pointer Events (not to be confused with the CSS property of the same name) and it allows us to add event listeners that are better suited for working with all types of input.

Meet The New Events

The new Pointer Event API is an evolved version of the Mouse Event interface we’ve all been using so far. It extends the functionality of the old API and adds support for multi-touch gestures, precise pen input, and overall smoother touchscreen interaction.

Most of the Pointer Events have direct alternatives among the old mouse events. Once the new API gets full browser support, we can directly substitute the old events with their more modern alternatives:

const button = document.querySelector("button");

// Instead of mouseover
button.addEventListener('mouseover', doSomething);

// We can use pointerover
button.addEventListener('pointerover', doSomething);

Interacting with a mouse should be the same in both cases. Using fingers or a stylus, however, will be easier to program with the new API.

Recognizing Input Type

An awesome feature of the Pointer Events API is that it can tell which type of input has been used. This can be helpful when you want to ignore some of the input methods or provide special feedback for each one.

button.addEventListener('pointerover', function(ev){
  switch(ev.pointerType) {
    case 'mouse':
      // The used device is a mouse or trackpad.
      break;
    case 'touch':
      // Input via touchscreen.
      break;
    case 'pen':
      // Stylus input.
      break;
    default:
      // Browser can't recognize the used device.
      break;
  }
});

Other Properties

The Pointer Events interface provides some other interesting data as well. It includes all the MouseEvent properties plus the following:

  • pointerId – Unique ID for the pointer causing the event.
  • width and height – Size of the contact area in pixels.
  • pressure – Pressure of touch, if available.
  • tiltX and tiltY – The angle at which a stylus is touching the screen.
  • isPrimary – Determines whether an event has been emitted by the primary pointer device.

Mouse clicks always have a width and height of 1 while touch size varies.
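
As a quick sketch, the listener below simply logs a few of these properties on every pointerdown. The #canvas selector is just an assumed element on the page:

const canvas = document.querySelector('#canvas'); // assumed element

canvas.addEventListener('pointerdown', function(ev) {
  console.log('Pointer:', ev.pointerId, ev.pointerType);
  console.log('Contact size:', ev.width, 'x', ev.height);
  console.log('Pressure:', ev.pressure);         // 0 to 1, if the hardware reports it
  console.log('Tilt:', ev.tiltX, ev.tiltY);      // stylus angle; 0 for mouse and touch
  console.log('Primary pointer?', ev.isPrimary);
});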

Browser Support

Pointer Events are fairly new, so browser compatibility isn’t perfect yet. Chrome (desktop and mobile), Edge, IE, and Opera have full support; Firefox and Safari don’t.

Pointer Events Browser Compatibility on Can I Use

To check whether a browser has the Pointer Events API you can use the window object:

if (window.PointerEvent) {
  // Pointer Events enabled.
} else {
  // Pointer Events not supported
}

A popular open-source polyfill is also available for those who don’t want to wait for full browser adoption.

Conclusion

Although it doesn’t have full browser support yet, the Pointer Events API is eventually going to take over the old mouse events. It provides a lot of cool features that will increase web accessibility and enable developers to create more advanced touch and stylus-based apps.

If you want to learn more about the Pointer Events API, we recommend checking out these resources:

Source:: Tutorialzine.com