
Tutorial: Creating an NPM-driven Website

By Nick Anastasov


A few weeks ago, the jQuery plugins website, which developers used to find and download plugins for the popular client-side library, was switched to read-only mode. Developers are now encouraged to use npm to publish and search for jQuery plugins.

This demonstrates how central npm has become to the JavaScript community. It originally arose as the package manager for node.js, but quickly proved itself versatile for any kind of JavaScript code, and developers started using it to host client-side libraries as well.

In this tutorial, we will show you how you can use npm to develop a small web application. We will start from scratch – we will set up a package.json, install a bunch of libraries and make things work nicely together.

How to install and use client-side libraries with npm

There are two ways to do this. We are going to use the simpler one, which is a great fit for small websites and apps:

  1. We will install the libraries that we need with npm. There are plenty of jQuery plugins and other client-side libraries available in the npm registry. To install one of them, run the command npm install <package name> --save.
  2. npm creates the node_modules folder and places the libraries there.
  3. In our HTML file, we include the scripts and CSS files directly from the node_modules folder with <script> and <link> tags.
  4. When the time comes to put your web site/app online, just upload the node_modules folder together with the other files.

This is similar to how Bower works, but has the benefit that we are only using npm without installing additional package managers.
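
In practice, steps 1 and 3 boil down to something like this (the package here is just an illustration; any library published on npm works the same way):

npm install jquery --save

<!-- somewhere in your HTML -->
<script src="node_modules/jquery/dist/jquery.min.js"></script>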

Setting things up

We are ready to start coding! However, there are a few things that you have to do first:

  1. Make sure that you have node.js installed. If you don’t, download an installer for your OS and run it. This will also set up npm for you.
  2. Create a new empty folder for your new website. As an example, we will use project-folder throughout this tutorial.
  3. Open a terminal (or a command prompt if you are on Windows) and navigate to the project folder (cd is your friend!).
  4. Type npm init. This will create a package.json file for your project. Press enter to accept the defaults if you don’t know what info to supply.

Great! Now you’ve got an otherwise empty folder with a valid package.json inside it. package.json is a special file which npm uses to record the libraries you’ve installed so far, along with details about your project.

Let’s install some libraries

We are going to make a simple web app that will visualize addresses using Google Maps, and will let people save addresses in their browser’s localStorage. For this purpose, we will need a bunch of libraries which are available on npm:

npm install bootswatch gmaps jquery moment --save

This will download Bootswatch (Bootstrap with pretty themes applied), gmaps (an easy way of working with Google Maps), jQuery and moment.js (a library for working with dates and times in JavaScript) into the node_modules folder. The --save flag records them in package.json in addition to downloading them.
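
After the command finishes, the dependencies section of your package.json will look roughly like this (the version numbers below are only an illustration; you will get whatever is current at the time you run the install):

{
  "name": "project-folder",
  "version": "1.0.0",
  "dependencies": {
    "bootswatch": "^3.3.2",
    "gmaps": "^0.4.15",
    "jquery": "^2.1.3",
    "moment": "^2.9.0"
  }
}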

All that is left is to include these libraries in your HTML.

(Screenshot: the finished Tutorialzine NPM-driven website)

The HTML

We have a basic HTML5 document with a few Bootstrap components. Notice how we’ve included the bootswatch stylesheet and the libraries by directly specifying their path inside the node_modules folder.

index.html

<!DOCTYPE html>
<html>
<head lang="en">
  <meta charset="UTF-8">
  <title>Creating an Npm-driven Website</title>
  <link href="node_modules/bootswatch/flatly/bootstrap.min.css" type="text/css" rel="stylesheet" />
  <link href="assets/css/styles.css" type="text/css" rel="stylesheet" />
</head>
<body>

  <div class="container">

    <h1>Your Google Maps Locations</h1>

    <form id="geocoding_form" class="form-horizontal">
      <div class="form-group">
        <div class="col-xs-12 col-md-6 col-md-offset-3">
          <div class="input-group">
            <input type="text" class="form-control" id="address" placeholder="Enter your location...">
            <span class="input-group-btn">
              <span class="glyphicon glyphicon-search" aria-hidden="true"></span>
            </span>
          </div>
        </div>
      </div>
    </form>

    <div class="map-overlay">
      <p>Loading...</p>
      <div id="map"></div>
    </div>
    <div class="col-xs-12 col-md-6 col-md-offset-3 save-container">
      <h4 id="save-location"></h4>
      <span class="glyphicon glyphicon-star-empty" aria-hidden="true"></span>
    </div>

    <div class="list-group col-xs-12 col-md-6 col-md-offset-3">
      <span class="list-group-item active">Saved Locations</span>
    </div>

  </div>


  <!-- The jQuery library -->
  <script src="node_modules/jquery/dist/jquery.min.js"></script>

  <!-- The Moment.js library -->
  <script src="node_modules/moment/moment.js"></script>

  <!-- Including the Google Maps API and the GMaps library -->
  <script src="http://maps.google.com/maps/api/js?sensor=true"></script>
  <script src="node_modules/gmaps/gmaps.js"></script>

  <!-- Including our own JavaScript file -->
  <script src="assets/js/script.js"></script>

</body>
</html>

I have chosen the modern-looking Flatly theme from Bootswatch, which we installed a moment ago. In the HTML you can also see some of Bootstrap’s grid classes, along with a list group for presenting the favorite locations.

The JavaScript

Our JavaScript file will handle saving to and reading from localStorage, creating Google Maps using the Gmaps library and converting from addresses to geographic coordinates. You can see the entire file below.

assets/js/script.js

$(function(){

	var saveContainer = $('.save-container'),
		favoriteIcon = saveContainer.find('.glyphicon'),
		favoriteLocationsListGroup = $('.list-group');

	var hasFavoriteLocations = false;

	// Initialize a Google Map using the gmaps library.

	var map = new GMaps({
		el: '#map',
		lat: '0',
		lng: '0',
		zoom: 1
	});

	// Initialize the favorite locations array which is kept in localStorage

	if(!localStorage.hasOwnProperty('favorite-locations')) {
		localStorage.setItem('favorite-locations', JSON.stringify([]));
	}

	hasFavoriteLocations = JSON.parse(localStorage.getItem('favorite-locations')).length ? true : false;

	// Form submit and Search icon handlers
	$('.glyphicon-search').click(showLocationByAddress);
	$('#geocoding_form').submit(showLocationByAddress);

	// Click handler on any of the favorite locations
	$(document).on('click','a.list-group-item', showLocationByCoordinates);

	// Click handlers on the favorite (star) icon, which save or remove the current location
	$(document).on('click', '.glyphicon-star', removeFavoriteLocation);
	$(document).on('click', '.glyphicon-star-empty', saveFavoriteLocation);

	// If there are any favorite locations, append them to the favorite location list

	if(hasFavoriteLocations) {

		var array = JSON.parse(localStorage.getItem('favorite-locations'));

		favoriteLocationsListGroup.empty();
		favoriteLocationsListGroup.append('<span class="list-group-item active">Saved Locations</span>');

		array.forEach(function(item){
			favoriteLocationsListGroup.append('<a class="list-group-item" data-lat="'+item.lat+'" data-lng="'+item.lng+'" data-createdAt="'+item.createdAt+'">'+item.address+'<span class="createdAt">'+moment(item.createdAt).fromNow()+'</span><span class="glyphicon glyphicon-menu-right"></span></a>');
		});

		favoriteLocationsListGroup.show();

	}

	// This function presents the address which was entered in the text field in the map

	function showLocationByAddress(e) {

		e.preventDefault();

		// Getting the coordinates of the entered address

		GMaps.geocode({
			address: $('#address').val().trim(),
			callback: function(results, status) {

				if (status !== 'OK') return;


				var latlng = results[0].geometry.location,
					fullAddress = results[0].formatted_address,
					isLocationFavorite = false,
					locationsArray = JSON.parse(localStorage.getItem('favorite-locations')),
					saveLocation = $('#save-location');

				var map = new GMaps({
					el: '#map',
					lat: latlng.lat(),
					lng: latlng.lng()
				});

				// Adding a marker on the wanted location
				
				map.addMarker({
					lat: latlng.lat(),
					lng: latlng.lng()
				});

				// Checking if this address exists in the favorites array

				if(locationsArray.length) {
					locationsArray.forEach(function (item) {
						if (item.lat == latlng.lat() && item.lng == latlng.lng()) {
							isLocationFavorite = true;
						}
					});
				}

				// Adding the address to the html and setting data attributes with the coordinates
				saveLocation.text(fullAddress).attr({'data-lat': latlng.lat(), 'data-lng': latlng.lng()});

				// Removing the active class from all favorite locations
				favoriteLocationsListGroup.find('a.list-group-item').removeClass('active-location');

				// Changing the icon to become non-favorite
				
				if(!isLocationFavorite) {
					favoriteIcon.removeClass('glyphicon-star').addClass('glyphicon-star-empty');
				}
				else {
					
					// Adding the active class and add the favorite icon on the given favorite location
					favoriteIcon.removeClass('glyphicon-star-empty').addClass('glyphicon-star');

					// Find the entry in the favorite locations list that corresponds 
					// to the current location, and mark it as active.

					favoriteLocationsListGroup.find('a.list-group-item[data-lat="'+latlng.lat()+'"][data-lng="'+latlng.lng()+'"]').addClass('active-location');
				}

				// Show the html of the given location
				saveContainer.show();

			}

		});
	}

	// This function is called when a favorite location is clicked.
	// It reads the coordinates and shows the location on a map

	function showLocationByCoordinates(e) {

		e.preventDefault();

		var elem = $(this),
			location = elem.data();

		// Getting the address from the location's coordinates

		GMaps.geocode({
			location: {lat: location.lat, lng: location.lng},
			callback: function(results, status) {

				if (status !== 'OK') return;

				var fullAddress = results[0].formatted_address,
					saveLocation = $('#save-location');

				var map = new GMaps({
					el: '#map',
					lat: location.lat,
					lng: location.lng
				});

				map.addMarker({
					lat: location.lat,
					lng: location.lng
				});

				// Adding the address to the html and setting
				// data attributes with the location's coordinates

				saveLocation.text(fullAddress);
				saveLocation.attr({
					'data-lat': location.lat,
					'data-lng': location.lng
				});

				// Adding colored background to the active favorite location and
				// removing the old active location

				favoriteLocationsListGroup.find('a.list-group-item').removeClass('active-location');
				favoriteLocationsListGroup.find('a.list-group-item[data-lat="'+location.lat+'"][data-lng="'+location.lng+'"]').addClass('active-location');

				// Add the favorite icon on the given location
				favoriteIcon.removeClass('glyphicon-star-empty').addClass('glyphicon-star');

				// Show the html of the given location
				saveContainer.show();

				// Clear the search field
				$('#address').val('');

			}

		});

	}

	// This function saves a location to favorites and adds it to localStorage

	function saveFavoriteLocation(e){

		e.preventDefault();

		var saveLocation = $('#save-location'),
			locationAddress = saveLocation.text(),
			isLocationFavorite = false,
			locationsArray = JSON.parse(localStorage.getItem('favorite-locations'));

		var location = {
			lat: saveLocation.attr('data-lat'),
			lng: saveLocation.attr('data-lng'),
			createdAt: moment().format()
		};

		// Checking if this location is in the favorites array

		if(locationsArray.length) {
			locationsArray.forEach(function (item) {
				if (item.lat == location.lat && item.lng == location.lng) {
					isLocationFavorite = true;
				}
			});
		}

		// If the given location is not in favorites,
		// add it to the HTML and to localStorage's array

		if(!isLocationFavorite) {

			favoriteLocationsListGroup.append(
				'<a class="list-group-item active-location" data-lat="'+location.lat+'" data-lng="'+location.lng+'" data-createdAt="'+location.createdAt+'">'+
				locationAddress+'<span class="createdAt">'+moment(location.createdAt).fromNow()+'</span>' +
				'<span class="glyphicon glyphicon-menu-right"></span>' +
				'</a>');

			favoriteLocationsListGroup.show();

			// Adding the given location to the localStorage's array
			locationsArray.push({
				address: locationAddress,
				lat: location.lat,
				lng: location.lng,
				createdAt: moment().format()
			});

			localStorage.setItem('favorite-locations', JSON.stringify(locationsArray));

			// Make the star icon full, to signify that this location is now favorite
			favoriteIcon.removeClass('glyphicon-star-empty').addClass('glyphicon-star');

			// Now we have at least one favorite location
			hasFavoriteLocations = true;
		}

	}

	// This function removes a favorite location from the favorites list
	// and removes it from localStorage
	
	function removeFavoriteLocation(e){

		e.preventDefault();

		var saveLocation = $('#save-location'),
			isLocationDeleted = false,
			locationsArray = JSON.parse(localStorage.getItem('favorite-locations'));

		var location = {
			lat: saveLocation.attr('data-lat'),
			lng: saveLocation.attr('data-lng')
		};

		// Removing the given location from the localStorage's Array
		if(locationsArray.length) {
			locationsArray.forEach(function (item, index) {
				if (item.lat == location.lat && item.lng == location.lng) {
					locationsArray.splice(index,1);
					isLocationDeleted = true;
				}
			});
		}

		if(isLocationDeleted) {

			// Remove the given location from the favorites list

			favoriteLocationsListGroup.find('a.list-group-item[data-lat="'+location.lat+'"][data-lng="'+location.lng+'"]').remove();

			localStorage.setItem('favorite-locations', JSON.stringify(locationsArray));

			// Removing the favorite icon from the html
			favoriteIcon.removeClass('glyphicon-star').addClass('glyphicon-star-empty');

			if(!locationsArray.length) {
				
				// There are no more favorite locations

				hasFavoriteLocations = false;
				favoriteLocationsListGroup.hide();
			}
			else {
				hasFavoriteLocations = true;
			}

		}

	}

});

The CSS

We mostly rely on Bootstrap with the Flatly theme to do the styling for us. However, I did write a few additional CSS rules, which you can see in assets/css/styles.css in the downloadable zip with the source code.

To wrap it up

This concludes our tutorial! npm hosts a huge number of JavaScript libraries, many of which can be used directly in the browser (for the rest we have Browserify, but that is a topic for another article). Do you think you will use npm in your client-side development? Share your thoughts in our comment section.

Source: Tutorialzine.com

ES6 generators in depth

By Axel Rauschmayer

Generators, a new feature of ECMAScript 6 [4], are functions that can be paused and resumed. This enables many intriguing applications: iterators, asynchronous programming, etc. This blog post explains how generators work and gives an overview of their applications.

Overview

Two important applications of generators are:

  • Implementing iterables
  • Blocking on asynchronous function calls

The following subsections give brief overviews of these applications; more thorough explanations are provided later (plus discussions of other topics).

Implementing iterables via generators

The following function returns an iterable over the properties of an object, one [key,value] pair per property:

    // The asterisk after `function` means that
    // `objectEntries` is a generator
    function* objectEntries(obj) {
        let propKeys = Reflect.ownKeys(obj);
    
        for (let propKey of propKeys) {
            // `yield` returns a value and then pauses
            // the generator. Later, execution continues
            // where it was previously paused.
            yield [propKey, obj[propKey]];
        }
    }

How exactly objectEntries() works is explained later. It is used like this:

    let jane = { first: 'Jane', last: 'Doe' };
    for (let [key,value] of objectEntries(jane)) {
        console.log(`${key}: ${value}`);
    }
    // Output:
    // first: Jane
    // last: Doe

Blocking on asynchronous function calls

In the following code, I use the control flow library co to asynchronously retrieve two JSON files. Note how, in line (A), execution blocks (waits) until the result of Promise.all() is ready. That means that the code looks synchronous while performing asynchronous operations.

    co(function* () {
        try {
            let [croftStr, bondStr] = yield Promise.all([  // (A)
                getFile('http://localhost:8000/croft.json'),
                getFile('http://localhost:8000/bond.json'),
            ]);
            let croftJson = JSON.parse(croftStr);
            let bondJson = JSON.parse(bondStr);
    
            console.log(croftJson);
            console.log(bondJson);
        } catch (e) {
            console.log('Failure to read: ' + e);
        }
    });

getFile(url) retrieves the file pointed to by url. Its implementation is shown later. I’ll also explain how co works.

What are generators?

Generators are functions that can be paused and resumed, which enables a variety of applications.

As a first example, consider the following generator function whose name is genFunc:

    function* genFunc() {
        console.log('First');
        yield; // (A)
        console.log('Second'); // (B)
    }

Two things distinguish genFunc from a normal function declaration:

  • It starts with the “keyword” function*.
  • It is paused in the middle via yield.

Calling genFunc does not execute it. Instead, it returns a so-called generator object that lets us control genFunc’s execution:

    > let genObj = genFunc();

genFunc() is initially suspended at the beginning of its body. The method genObj.next() continues the execution of genFunc, until the next yield:

    > genObj.next()
    First
    { value: undefined, done: false }

As you can see in the last line, genObj.next() also returns an object. Let’s ignore that for now. It will matter once we look at generators as iterators.

genFunc is now paused in line (A). If we call next() again, execution resumes and line (B) is executed:

    > genObj.next()
    Second
    { value: undefined, done: true }

Afterwards, the function is finished, execution has left the body and further calls of genObj.next() have no effect.
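
Calling next() again at that point simply keeps reporting the end of the sequence:

    > genObj.next()
    { value: undefined, done: true }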

Ways of creating generators

There are four ways in which you can create generators:

  1. Via a generator function declaration:

        function* genFunc() { ··· }
        let genObj = genFunc();
    
  2. Via a generator function expression:

        const genFunc = function* () { ··· };
        let genObj = genFunc();
    
  3. Via a generator method definition in an object literal:

        let obj = {
            * generatorMethod() {
                ···
            }
        };
        let genObj = obj.generatorMethod();
    
  4. Via a generator method definition in a class definition (which can be a class declaration or a class expression [11]):

        class MyClass {
            * generatorMethod() {
                ···
            }
        }
        let myInst = new MyClass();
        let genObj = myInst.generatorMethod();
    

Roles played by generators

Generators can play three roles:

  1. Iterators (data producers): Each yield can return a value via next(), which means that generators can produce sequences of values via loops and recursion. Due to generator objects implementing the interface Iterable [5], these sequences can be processed by any ECMAScript 6 construct that supports iterables. Two examples are: for-of loops and the spread operator (...).

  2. Observers (data consumers): yield can also receive a value from next() (via a parameter). That means that generators become data consumers that pause until a new value is pushed into them via next().

  3. Coroutines (data producers and consumers): Given that generators are pausable and can be both data producers and data consumers, not much work is needed to turn them into coroutines (cooperatively multitasked tasks).

The next sections provide deeper explanations of these roles.

Generators as iterators (data production)

As explained before, generator objects can be data producers, data consumers or both. This section looks at them as data producers, where they implement both the interfaces Iterable and Iterator (shown below). Note that this means that the result of a generator function is both an iterable and an iterator. The full interface of generator objects will be shown later.

    interface Iterable {
        [Symbol.iterator]() : Iterator;
    }
    interface Iterator {
        next() : IteratorResult;
        return?(value? : any) : IteratorResult;
    }
    interface IteratorResult {
        value : any;
        done : boolean;
    }
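
One consequence of implementing both interfaces is that a generator object is its own iterator: its Symbol.iterator method simply returns this. A quick check (the generator function here is only for illustration):

    function* emptyGenFunc() { }
    let emptyGenObj = emptyGenFunc();
    console.log(emptyGenObj[Symbol.iterator]() === emptyGenObj); // true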

A generator function produces a sequence of values via yield; a data consumer consumes those values via the iterator method next(). For example, the following generator function produces the values 'a' and 'b':

    function* genFunc() {
        yield 'a';
        yield 'b';
    }

This interaction shows how to retrieve the yielded values via the generator object genObj:

    > let genObj = genFunc();
    > genObj.next()
    { value: 'a', done: false }
    > genObj.next()
    { value: 'b', done: false }
    > genObj.next() // end of sequence reached
    { value: undefined, done: true }

Ways of iterating over a generator

As generator objects are iterable, ES6 language constructs that support iterables can be applied to them. The following three are especially important.

First, the for-of loop:

    for (let x of genFunc()) {
        console.log(x);
    }
    // Output:
    // a
    // b

Second, the spread operator (...), which turns iterated sequences into elements of an array (consult [7] for more information on this operator):

    let arr = [...genFunc()]; // ['a', 'b']

Third, destructuring [6]:

    > let [x, y] = genFunc();
    > x
    'a'
    > y
    'b'

Returning from a generator

The previous generator function did not contain an explicit return. An implicit return is equivalent to returning undefined. Let’s examine a generator with an explicit return:

    function* genFuncWithReturn() {
        yield 'a';
        yield 'b';
        return 'result';
    }

The returned value shows up in the last object returned by next(), whose property done is true:

    > let genObjWithReturn = genFuncWithReturn();
    > genObjWithReturn.next()
    { value: 'a', done: false }
    > genObjWithReturn.next()
    { value: 'b', done: false }
    > genObjWithReturn.next()
    { value: 'result', done: true }

However, most constructs that work with iterables ignore the value inside the done object:

    for (let x of genFuncWithReturn()) {
        console.log(x);
    }
    // Output:
    // a
    // b
    
    let arr = [...genFuncWithReturn()]; // ['a', 'b']

yield*, an operator for making recursive generator calls, does consider values inside done objects. It is explained later.

Example: iterating over properties

Let’s look at an example that demonstrates how convenient generators are for implementing iterables. The following function, objectEntries(), returns an iterable over the properties of an object:

    function* objectEntries(obj) {
        // In ES6, you can use strings or symbols as property keys,
        // Reflect.ownKeys() retrieves both
        let propKeys = Reflect.ownKeys(obj);
    
        for (let propKey of propKeys) {
            yield [propKey, obj[propKey]];
        }
    }

This function enables you to iterate over the properties of an object jane via the for-of loop:

    let jane = { first: 'Jane', last: 'Doe' };
    for (let [key,value] of objectEntries(jane)) {
        console.log(`${key}: ${value}`);
    }
    // Output:
    // first: Jane
    // last: Doe

For comparison – an implementation of objectEntries() that doesn’t use generators is much more complicated:

    function objectEntries(obj) {
        let index = 0;
        let propKeys = Reflect.ownKeys(obj);
    
        return {
            [Symbol.iterator]() {
                return this;
            },
            next() {
                if (index < propKeys.length) {
                    let key = propKeys[index];
                    index++;
                    return { value: [key, obj[key]] };
                } else {
                    return { done: true };
                }
            }
        };
    }

Recursion via yield* (for output)

The yield* operator lets you call another generator from within a generator, as if you made a function call. For now, I only explain how this works if both generators produce output, I’ll later explain how things work if input is involved.

How can one generator recursively call another generator? Let’s assume you have written a generator function foo:

    function* foo() {
        yield 'a';
        yield 'b';
    }

How would you call foo from another generator function bar? The following approach does not work!

    function* bar() {
        yield 'x';
        foo(); // does nothing!
        yield 'y';
    }

Calling foo() returns an object, but does not actually execute foo(). That’s why ECMAScript 6 has the operator yield* for making recursive generator calls:

    function* bar() {
        yield 'x';
        yield* foo();
        yield 'y';
    }
    
    // Collect all values yielded by bar() in an array
    let arr = [...bar()];
        // ['x', 'a', 'b', 'y']

Internally, yield* works roughly as follows:

    function* bar() {
        yield 'x';
        for (let value of foo()) {
            yield value;
        }
        yield 'y';
    }

The operand of yield* does not have to be a generator object, it can be any iterable:

    function* bla() {
        yield 'sequence';
        yield* ['of', 'yielded'];
        yield 'values';
    }
    
    let arr = [...bla()];
        // ['sequence', 'of', 'yielded', 'values']

yield* considers end-of-iteration values

Most constructs that support iterables ignore the value included in the end-of-iteration object (whose property done is true). Generators provide that value via return. The result of yield* is the end-of-iteration value:

    function* genFuncWithReturn() {
        yield 'a';
        yield 'b';
        return 'The result';
    }
    function* logReturned(genObj) {
        let result = yield* genObj;
        console.log(result); // (A)
    }

If we want to get to line (A), we first must iterate over all values yielded by logReturned():

    > [...logReturned(genFuncWithReturn())]
    The result
    [ 'a', 'b' ]

Iterating over trees

Iterating over a tree with recursion is simple; writing an iterator for a tree with traditional means is complicated. That’s why generators shine here: they let you implement an iterator via recursion. As an example, consider the following data structure for binary trees. It is iterable, because it has a method whose key is Symbol.iterator. That method is a generator method and returns an iterator when called.

    class BinaryTree {
        constructor(value, left=null, right=null) {
            this.value = value;
            this.left = left;
            this.right = right;
        }
    
        /** Prefix iteration */
        * [Symbol.iterator]() {
            yield this.value;
            if (this.left) {
                yield* this.left;
            }
            if (this.right) {
                yield* this.right;
            }
        }
    }

The following code creates a binary tree and iterates over it via for-of:

    let tree = new BinaryTree('a',
        new BinaryTree('b',
            new BinaryTree('c'),
            new BinaryTree('d')),
        new BinaryTree('e'));
    
    for (let x of tree) {
        console.log(x);
    }
    // Output:
    // a
    // b
    // c
    // d
    // e

You can only yield in generators

A significant limitation of generators is that you can only yield while you are (statically) inside a generator function. That is, yielding in callbacks doesn’t work:

    function genFunc() {
        ['a', 'b'].forEach(x => yield x); // SyntaxError
    }

yield is not allowed inside non-generator functions, which is why the previous code causes a syntax error. In this case, it is easy to rewrite the code so that it doesn’t use callbacks (as shown below). But unfortunately that isn’t always possible.

    function genFunc() {
        for (let x of ['a', 'b']) {
            yield x; // OK
        }
    }

The upsides of this limitation are explained later: they make generators easier to implement and compatible with event loops.

Generators as observers (data consumption)

As consumers of data, generator objects conform to the second half of the generator interface, Observer:

    interface Observer {
        next(value? : any) : void;
        return(value? : any) : void;
        throw(error) : void;
    }

As an observer, a generator pauses until it receives input. There are three kinds of input, transmitted via the methods specified by the interface:

  • next() sends normal input.
  • return() terminates the generator.
  • throw() signals an error.

Sending values via next()

If you use a generator as an observer, you send values to it via next() and it receives those values via yield:

    function* dataConsumer() {
        console.log('Started');
        console.log(`1. ${yield}`); // (A)
        console.log(`2. ${yield}`);
        return 'result';
    }

Let’s use this generator interactively. First, we create a generator object:

    > let genObj = dataConsumer();

We now call genObj.next(), which starts the generator. Execution continues until the first yield, which is where the generator pauses. The result of next() is the value yielded in line (A) (undefined, because yield doesn’t have an operand). In this section, we are not interested in what next() returns, because we only use it to send values, not to retrieve values.

    > genObj.next()
    Started
    { value: undefined, done: false }

We call next() two more times, in order to send the value 'a' to the first yield and the value 'b' to the second yield:

    > genObj.next('a')
    1. a
    { value: undefined, done: false }
    
    > genObj.next('b')
    2. b
    { value: 'result', done: true }

The result of the last next() is the value returned from dataConsumer(). done being true indicates that the generator is finished.

Unfortunately, next() is asymmetric, but that can’t be helped: It always sends a value to the currently suspended yield, but returns the operand of the following yield.
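
The following sketch makes this asymmetry visible (the names are made up for illustration):

    function* asymmetryDemo() {
        const input = yield 'operand of the first yield';
        console.log('received: ' + input);
        yield 'operand of the second yield';
    }

Interacting with it:

    > let demoObj = asymmetryDemo();
    > demoObj.next() // start; nothing can be sent yet
    { value: 'operand of the first yield', done: false }
    > demoObj.next('hello') // 'hello' goes to the first yield
    received: hello
    { value: 'operand of the second yield', done: false }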

The first next()

When using a generator as an observer, it is important to note that the only purpose of the first invocation of next() is to start the observer. It is only ready for input afterwards, because this first invocation has advanced execution to the first yield. Therefore, you can’t send input via the first next() – you even get an error if you do:

    > function* g() { yield }
    > g().next('hello')
    TypeError: attempt to send 'hello' to newborn generator

The following utility function fixes this issue:

    /**
     * Returns a function that, when called,
     * returns a generator object that is immediately
     * ready for input via `next()`
     */
    function coroutine(generatorFunction) {
        return function (...args) {
            let generatorObject = generatorFunction(...args);
            generatorObject.next();
            return generatorObject;
        };
    }

To see how coroutine() works, let’s compare a wrapped generator with a normal one:

    const wrapped = coroutine(function* () {
        console.log(`First input: ${yield}`);
        return 'DONE';
    });
    const normal = function* () {
        console.log(`First input: ${yield}`);
        return 'DONE';
    };

The wrapped generator is immediately ready for input:

    > wrapped().next('hello!')
    First input: hello!

The normal generator needs an extra next() until it is ready for input:

    > let genObj = normal();
    > genObj.next()
    { value: undefined, done: false }
    > genObj.next('hello!')
    First input: hello!
    { value: 'DONE', done: true }

yield binds loosely

yield binds very loosely, so that we don’t have to put its operand in parentheses:

    yield a + b + c;

This is treated as:

    yield (a + b + c);

Not as:

    (yield a) + b + c;

As a consequence, many operators bind more tightly than yield and you have to put yield in parentheses if you want to use it as an operand. For example, you get a SyntaxError if you make an unparenthesized yield an operand of plus:

    console.log('Hello' + yield); // SyntaxError
    console.log('Hello' + yield 123); // SyntaxError
    
    console.log('Hello' + (yield)); // OK
    console.log('Hello' + (yield 123)); // OK

You do not need parens if yield is a direct argument in a function or method call:

    foo(yield 'a', yield 'b');

You also don’t need parens if you use yield on the right-hand side of an assignment:

    let input = yield;

Grammar

The need for parens can be seen in the following grammar rules in the ECMAScript 6 specification. These rules describe how expressions are parsed. I list them here from general (loose binding, lower precedence) to specific (tight binding, higher precedence). Wherever a certain kind of expression is demanded, you can also use more specific ones. The opposite is not true. The hierarchy ends with ParenthesizedExpression, which means that you can mention any expression anywhere, if you put it in parentheses.

    Expression :
        AssignmentExpression
        Expression , AssignmentExpression
    AssignmentExpression :
        ConditionalExpression
        YieldExpression
        ArrowFunction
        LeftHandSideExpression = AssignmentExpression
        LeftHandSideExpression AssignmentOperator AssignmentExpression
    
    ···
    
    AdditiveExpression :
        MultiplicativeExpression
        AdditiveExpression + MultiplicativeExpression
        AdditiveExpression - MultiplicativeExpression
    MultiplicativeExpression :
        UnaryExpression
        MultiplicativeExpression MultiplicativeOperator UnaryExpression
    
    ···
    
    PrimaryExpression :
        this
        IdentifierReference
        Literal
        ArrayLiteral
        ObjectLiteral
        FunctionExpression
        ClassExpression
        GeneratorExpression
        RegularExpressionLiteral
        TemplateLiteral
        ParenthesizedExpression
    ParenthesizedExpression :
        ( Expression )

The operands of an AdditiveExpression are an AdditiveExpression and a MultiplicativeExpression. Therefore, using a (more specific) ParenthesizedExpression as an operand is OK, but using a (more general) YieldExpression isn’t.

return() and throw()

Let’s recap how next(x) works (after the first invocation):

  1. The generator is currently suspended at a yield operator.
  2. Send the value x to that yield, which means that it evaluates to x.
  3. Proceed to the next yield or return:
    • yield x leads to next() returning with { value: x, done: false }
    • return x leads to next() returning with { value: x, done: true }

return() and throw() work similarly to next(), but they do something different in step 2:

  • return(x) executes return x at the location of yield.
  • throw(x) executes throw x at the location of yield.

return() terminates the generator

return() performs a return at the location of the yield that led to the last suspension of the generator. Let’s use the following generator function to see how that works.

    function* genFunc1() {
        try {
            console.log('Started');
            yield; // (A)
        } finally {
            console.log('Exiting');
        }
    }

In the following interaction, we first use next() to start the generator and to proceed until the yield in line (A). Then we return from that location via return().

    > let genObj1 = genFunc1();
    > genObj1.next()
    Started
    { value: undefined, done: false }
    > genObj1.return('Result')
    Exiting
    { value: 'Result', done: true }

Preventing termination

You can prevent return() from terminating the generator if you yield inside the finally clause (using a return statement in that clause is also possible):

    function* genFunc2() {
        try {
            console.log('Started');
            yield;
        } finally {
            yield 'Not done, yet!';
        }
    }

This time, return() does not exit the generator function. Accordingly, the property done of the object it returns is false.

    > let genObj2 = genFunc2();
    
    > genObj2.next()
    Started
    { value: undefined, done: false }
    
    > genObj2.return('Result')
    { value: 'Not done, yet!', done: false }

You can invoke next() one more time. Similarly to non-generator functions, the return value of the generator function is the value that was queued prior to entering the finally clause.

    > genObj2.next()
    { value: 'Result', done: true }

Returning from a newborn generator

Returning a value from a newborn generator (that hasn’t started yet) is allowed:

    > function* genFunc() {}
    > genFunc().return('yes')
    { value: 'yes', done: true }

throw() signals an error

throw() throws an exception at the location of the yield that led to the last suspension of the generator. Let’s examine how that works via the following generator function.

    function* genFunc1() {
        try {
            console.log('Started');
            yield; // (A)
        } catch (error) {
            console.log('Caught: ' + error);
        }
    }

In the following interaction, we first use next() to start the generator and proceed until the yield in line (A). Then we throw an exception from that location.

    > let genObj1 = genFunc1();
    
    > genObj1.next()
    Started
    { value: undefined, done: false }
    
    > genObj1.throw(new Error('Problem!'))
    Caught: Error: Problem!
    { value: undefined, done: true }

The result of throw() (shown in the last line) stems from us leaving the function with an implicit return.

Uncaught exceptions

If you don’t catch the exception inside the generator, it is thrown by throw(). For example, the following generator function does not catch exceptions:

    function* genFunc2() {
        console.log('Started');
        yield; // (A)
    }

If we use throw() to throw an instance of Error at line (A), the method itself throws that error:

    > let genObj2 = genFunc2();
    > genObj2.next()
    Started
    { value: undefined, done: false }
    > genObj2.throw(new Error('Problem!'))
    Error: Problem!

Throwing from a newborn generator

Throwing an exception in a newborn generator (that hasn’t started yet) is allowed:

    > function* genFunc() {}
    > genFunc().throw(new Error('Problem!'))
    Error: Problem!

Example: processing asynchronously pushed data

The fact that generators-as-observers pause while they wait for input makes them perfect for on-demand processing of data that is received asynchronously. The pattern for setting up a chain of generators for processing is as follows:

  • First chain member: a normal function that has a parameter target, which is the generator object of the next element in the chain of generators. The function makes an asynchronous request and pushes the results to the target via target.next().

  • Intermediate chain members: generators that have a parameter target. They receive data via yield and send data via target.next().

  • Last chain member: a generator that has no parameter target and only receives data.

As an example, let’s chain generators to process a file that is read asynchronously. The code shown in this section is a Node.js script that is executed via babel-node [8].

The following code sets up the chain, which starts with the normal function readFile, continues with the generators splitLines and numberLines and ends with the generator printLines:

    let fileName = process.argv[2];
    readFile(fileName, chain(splitLines, numberLines, printLines));

I’ll explain what these functions do when I show their code. The following helper function sets up a chain of generators: Starting with the last generator, each generator function is called and the resulting generator object is used to start the generator via an initial next(). If a generator has a successor, it receives the successor’s generator object via the parameter target. The result of chain() is the generator object of the first generator function (in our example: splitLines).

    function chain(...generatorFuncs) {
        if (generatorFuncs.length < 1) {
            throw new Error('Need at least 1 argument');
        }
        let generatorObject = generatorFuncs[generatorFuncs.length-1]();
        generatorObject.next(); // generator is now ready for input
        for (let i=generatorFuncs.length-2; i >= 0; i--) {
            let generatorFunction = generatorFuncs[i];
            // Link current generator to successor
            generatorObject = generatorFunction(generatorObject);
            // Start the generator
            generatorObject.next();
        }
        return generatorObject;
    }

readFile() is the non-generator function that starts everything.

    import {createReadStream} from 'fs';
    
    /**
     * Create an asynchronous ReadStream for the file whose name
     * is `fileName` and feed it to the generator object `target`.
     *
     * @see ReadStream https://nodejs.org/api/fs.html#fs_class_fs_readstream
     */
    function readFile(fileName, target) {
        let readStream = createReadStream(fileName,
            { encoding: 'utf8', bufferSize: 1024 });
        readStream.on('data', buffer => {
            let str = buffer.toString('utf8');
            target.next(str);
        });
        readStream.on('end', () => {
            // Signal end of output sequence
            target.return();
        });
    }

The chain of generators starts with splitLines:

    /**
     * Turns a sequence of text chunks into a sequence of lines
     * (where lines are separated by newlines)
     */
    function* splitLines(target) {
        let previous = '';
        try {
            while (true) {
                previous += yield;
                let eolIndex;
                while ((eolIndex = previous.indexOf('\n')) >= 0) {
                    let line = previous.slice(0, eolIndex);
                    target.next(line);
                    previous = previous.slice(eolIndex+1);
                }
            }
        } finally {
            // Handle the end of the input sequence
            // (signaled via `return()`)
            if (previous.length > 0) {
                target.next(previous);
            }
            // Signal end of output sequence
            target.return();
        }
    }

Note an important pattern:

  • readFile uses the generator object method return() to signal the end of the sequence of chunks that it sends.
  • readFile sends that signal while splitLines is waiting for input via yield, inside an infinite loop. return() breaks from that loop.
  • splitLines uses a finally clause to handle the end-of-sequence.

The next generator is numberLines:

    /**
     * Prefixes numbers to a sequence of lines
     */
    function* numberLines(target) {
        try {
            for (let lineNo = 0; ; lineNo++) {
                let line = yield;
                target.next(`${lineNo}: ${line}`);
            }
        } finally {
            // Signal end of output sequence
            target.return();
        }
    }

The last generator is printLines:

    /**
     * Receives a sequence of lines (without newlines)
     * and logs them (adding newlines).
     */
    function* printLines() {
        while (true) {
            let line = yield;
            console.log(line);
        }
    }

The neat thing about this code is that everything happens lazily (on demand): lines are split, numbered and printed as they arrive; we don’t have to wait for all of the text before we can start printing.

yield*: the full story

So far, we have only seen one aspect of yield*: it propagates yielded values from the callee to the caller. Now that we are interested in generators receiving input, another aspect becomes relevant: yield* also forwards input received by the caller to the callee.

I’ll first explain the complete semantics of yield* by showing how you’d implement it in JavaScript. Then I give simple examples where input received by the caller via next(), return() and throw() is forwarded to the callee.

The following statement:

    let yieldStarResult = yield* calleeFunc();

is roughly equivalent to:

    let yieldStarResult;
    
    let calleeObj = calleeFunc();
    let prevReceived = undefined;
    while (true) {
        try {
            // Forward input previously received
            let item = calleeObj.next(prevReceived);
            if (item.done) {
                yieldStarResult = item.value;
                break;
            }
            prevReceived = yield item.value;
        } catch (e) {
            // Pretend `return` can be caught like an exception
            if (e instanceof Return) {
                // Forward input received via return()
                calleeObj.return(e.returnedValue);
                return e.returnedValue; // “re-throw”
            } else {
                // Forward input received via throw()
                calleeObj.throw(e); // may throw
            }
        }
    }

To keep things simple, several things are missing in this code:

  • The operand of yield* can be any iterable value.

  • return() and throw() are optional iterator methods. We should only call them if they exist.

  • If an exception is received and throw() does not exist, but return() does then return() is called (before throwing an exception) to give calleeObject the opportunity to clean up.

  • calleeObj can refuse to close, by returning an object whose property done is false. Then the caller also has to refuse to close and yield* must continue to iterate.

Example: yield* forwards next()

The following generator function caller() invokes the generator function callee() via yield*.

    function* callee() {
        console.log('callee: ' + (yield));
    }
    function* caller() {
        while (true) {
            yield* callee();
        }
    }

callee logs values received via next(), which allows us to check whether it receives the values 'a' and 'b' that we send to caller.

    > let callerObj = caller();
    
    > callerObj.next() // start
    { value: undefined, done: false }
    
    > callerObj.next('a')
    callee: a
    { value: undefined, done: false }
    
    > callerObj.next('b')
    callee: b
    { value: undefined, done: false }

Example: yield* forwards throw()

Let’s use the following code to demonstrate how throw() works while yield* is delegating to another generator.

    function* callee() {
        try {
            yield 'b'; // (A)
            yield 'c';
        } finally {
            console.log('finally callee');
        }
    }
    function* caller() {
        try {
            yield 'a';
            yield* callee();
            yield 'd';
        } catch (e) {
            console.log('[caller] ' + e);
        }
    }

We first create a generator object and advance until line (A).

    > let genObj = caller();
    
    > genObj.next().value
    'a'
    > genObj.next().value
    'b'

In that line, we throw an exception:

    > genObj.throw(new Error('Problem!'))
    finally callee
    [caller] Error: Problem!
    { value: undefined, done: true }

callee doesn’t catch the exception, which is why it is propagated into caller, where it is logged before caller finishes.

Example: yield* forwards return()

Let’s use the following code to demonstrate how return() works while yield* is delegating to another generator.

    function* callee() {
        try {
            yield 'b';
            yield 'c';
        } finally {
            console.log('finally callee');
        }
    }
    function* caller() {
        try {
            yield 'a';
            yield* callee();
            yield 'd';
        } finally {
            console.log('finally caller');
        }
    }

Destructuring closes an iterator via return() if one doesn’t iterate until the end:

    let [x, y] = caller(); // ['a', 'b']
    
    // Output:
    // finally callee
    // finally caller

Interestingly, the return() is sent to caller and forwarded to callee (which it terminates early), but then also terminates caller (which is what someone invoking return() would expect). That is, return is propagated much like an exception.

How to think about yield*

There are two ways to think about yield*:

  • As a function call from generator to generator.
  • In order to understand return(), it helps to ask yourself: what should happen if I copy-pasted the code of the callee function into the code of the caller function.

Generators as coroutines (cooperative multitasking)

We have seen generators being used as either sources or sinks of data. For many applications, it’s good practice to strictly separate these two roles, because it keeps things simpler. This section describes the full generator interface (which combines both roles) and one application where both roles are needed: cooperative multitasking, where tasks must be able to both send and receive information.

The full generator interface

The full interface of generator objects, Generator, handles both output and input and combines two interfaces that we have seen previously: Iterator for output and Observer for input.

    interface Iterator { // data producer
        next() : IteratorResult;
        return?(value? : any) : IteratorResult;
    }
    
    interface Observer { // data consumer
        next(value? : any) : void;
        return(value? : any) : void;
        throw(error) : void;
    }

This is the full interface of generator objects (as described by the ECMAScript language specification):

    interface Generator {
        next(value? : any) : IteratorResult;
        throw(value? : any) : IteratorResult;
        return(value? : any) : IteratorResult;
    }
    interface IteratorResult {
        value : any;
        done : boolean;
    }

Cooperative multitasking

Cooperative multitasking is an application of generators where we need them to handle both output and input. Before we get into how that works, let’s first review the current state of parallelism in JavaScript [9].

JavaScript runs in a single process. There are two ways in which this limitation is being abolished:

  • Multiprocessing: Web Workers let you run JavaScript in multiple processes. Shared access to data is one of the biggest pitfalls of multiprocessing. Web Workers avoid it by not sharing any data. That is, if you want a Web Worker to have a piece of data, you must send it a copy or transfer your data to it (after which you can’t access it anymore).

  • Cooperative multitasking: There are various patterns and libraries that experiment with cooperative multitasking. Multiple tasks are run, but only one at a time. Each task must explicitly suspend itself, giving it full control over when a task switch happens. In these experiments, data is often shared between tasks. But due to explicit suspension, there are few risks.

Two use cases benefit from cooperative multitasking, because they involve control flows that are mostly sequential, anyway, with occasional pauses:

  • Asynchronous computations: A task blocks (pauses) until it receives the result of a long-running computation.
  • Streams: A task sequentially processes a stream of data and pauses if there is no data available.

For binary streams, WHATWG is currently working on a standard proposal that is based on callbacks and promises.

For streams of data, Communicating Sequential Processes (CSP) are an interesting solution. A generator-based CSP library is covered later in this blog post.

For asynchronous computations, Promises [10] have become popular and are included in ECMAScript 6.

Simplifying asynchronous code via generators

Several promise-based libraries simplify asynchronous code via generators. Generators are ideal as clients of promises, because they can be suspended until a result arrives.

The following example demonstrates what that looks like if one uses the library co by T.J. Holowaychuk. We need two libraries (if we run Node.js code via babel-node [8]):

    require('isomorphic-fetch'); // polyfill
    let co = require('co');

co is the actual library for cooperative multitasking; isomorphic-fetch is a polyfill for the new promise-based fetch API (a replacement for XMLHttpRequest; read “That’s so fetch!” by Jake Archibald for more information). fetch makes it easy to write a function getFile that returns the text of a file at a url via a Promise:

    function getFile(url) {
        return fetch(url)
            .then(request => request.text());
    }

We now have all the ingredients to use co. The following task reads the texts of two files, parses the JSON inside them and logs the result.

    co(function* () {
        try {
            let [croftStr, bondStr] = yield Promise.all([  // (A)
                getFile('http://localhost:8000/croft.json'),
                getFile('http://localhost:8000/bond.json'),
            ]);
            let croftJson = JSON.parse(croftStr);
            let bondJson = JSON.parse(bondStr);
    
            console.log(croftJson);
            console.log(bondJson);
        } catch (e) {
            console.log('Failure to read: ' + e);
        }
    });

Note how nicely synchronous this code looks, even though it makes an asynchronous call in line (A). A generator-as-task makes an async call by yielding a promise to the scheduler function co. The yielding pauses the generator. Once the promise returns a result, the scheduler resumes the generator by passing it the result via next(). A simple version of co looks as follows.

    function co(genFunc) {
        let genObj = genFunc();
        run();
    
        function run(promiseResult = undefined) {
            let item = genObj.next(promiseResult);
            if (!item.done) {
                // A promise was yielded
                item.value
                .then(result => run(result))
                .catch(error => {
                    genObj.throw(error);
                });
            }
        }
    }

The limitations of cooperative multitasking via generators

Coroutines are cooperatively multitasked tasks that have no limitations: Inside a coroutine, any function can suspend the whole coroutine (the function activation itself, the activation of the function’s caller, the caller’s caller, etc.).

In contrast, you can only suspend a generator from directly within a generator and only the current function activation is suspended. Due to these limitations, generators are occasionally called shallow coroutines [3].

The benefits of the limitations of generators

The limitations of generators have two main benefits:

  • Generators are compatible with event loops, which provide simple cooperative multitasking in browsers. I’ll explain the details momentarily.

  • Generators are relatively easy to implement, because only a single function activation needs to be suspended and because browsers can continue to use event loops.

JavaScript already has a very simple style of cooperative multitasking: the event loop [9], which schedules the execution of tasks in a queue. Each task is started by calling a function and finished once that function is finished (that’s a simplification of how things actually work, but it’ll do for now). Events, setTimeout() and other mechanisms add tasks to the queue.

This style of multitasking makes one important guarantee: run to completion; every function can rely on not being interrupted by another task until it is finished. Functions become transactions and can perform complete algorithms without anyone seeing the data they operate on in an intermediate state. Concurrent access to shared data makes multitasking complicated and is not allowed by JavaScript’s concurrency model. That’s why run to completion is a good thing.

Alas, coroutines prevent run to completion, because any function could suspend its caller. For example, the following algorithm consists of multiple steps:

    step1(sharedData);
    step2(sharedData);
    lastStep(sharedData);

If step2 was to suspend the algorithm, other tasks could run before the last step of the algorithm is performed. Those tasks could contain other parts of the application which would see sharedData in an unfinished state.

Generators preserve run to completion: they only suspend themselves and return to their caller. Nevertheless, co and similar libraries give you most of the power of coroutines:

  • They provide schedulers for tasks defined via generators.
  • Tasks start with generators and can thus be fully suspended.
  • A recursive (generator) function call is only suspendable if it is done via yield*. That gives callers control over suspension.
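
To illustrate the third point, here is a hedged sketch of mine (not from the original post): only the invocation made via yield* lets the sub-generator suspend its caller.

    function* subTask() {
        yield; // suspends subTask – and, via yield*, whoever delegated to it
    }
    function* task() {
        yield* subTask(); // task is suspended while subTask is suspended
        subTask(); // a plain call only creates a generator object; nothing runs
    }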

Examples of generators

This section gives several examples of what generators can be used for.

Implementing iterables via generators

In the blog post “Iterables and iterators in ECMAScript 6”, I implemented several iterables “by hand”. In this section, I use generators, instead.

The iterable combinator take()

take() converts a (potentially infinite) sequence of iterated values into a sequence of length n:

    function* take(n, iterable) {
        for (let x of iterable) {
            if (n <= 0) return;
            n--;
            yield x;
        }
    }

The following is an example of using it:

    let arr = ['a', 'b', 'c', 'd'];
    for (let x of take(2, arr)) {
        console.log(x);
    }
    // Output:
    // a
    // b

An implementation of take() without generators is more complicated:

    function take(n, iterable) {
        let iter = iterable[Symbol.iterator]();
        return {
            [Symbol.iterator]() {
                return this;
            },
            next() {
                if (n > 0) {
                    n--;
                    return iter.next();
                } else {
                    iter.return()
                    return { done: true };
                }
            },
            return() {
                n = 0;
                iter.return();
            }
        };
    }

Note that the iterable combinator zip() does not profit much from being implemented via a generator, because multiple iterables are involved (and for-of can’t be used).
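
For completeness, here is a hedged sketch of what such a zip() might look like (my own code, not from the original post); the generator brings little benefit because the iterators have to be managed by hand.

    function* zip(...iterables) {
        let iterators = iterables.map(iterable => iterable[Symbol.iterator]());
        try {
            while (true) {
                // for-of can’t be used: all iterators advance in lockstep
                let items = iterators.map(iterator => iterator.next());
                if (items.some(item => item.done)) return;
                yield items.map(item => item.value);
            }
        } finally {
            // Close the remaining iterators (e.g. if a consumer stops early)
            for (let iterator of iterators) {
                if (typeof iterator.return === 'function') {
                    iterator.return();
                }
            }
        }
    }
    
    // [...zip(['a', 'b'], naturalNumbers())]
    // [ [ 'a', 0 ], [ 'b', 1 ] ]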

Infinite iterables

naturalNumbers() returns an iterable over all natural numbers:

    function* naturalNumbers() {
        for (let n=0;; n++) {
            yield n;
        }
    }

This function is often used in conjunction with a combinator:

    for (let x of take(3, naturalNumbers())) {
        console.log(x);
    }
    // Output
    // 0
    // 1
    // 2

One last time, I show the non-generator implementation, so you can compare:

    function naturalNumbers() {
        let n = 0;
        return {
            [Symbol.iterator]() {
                return this;
            },
            next() {
                return { value: n++ };
            }
        }
    }

Array-inspired iterable combinators: map, filter

Arrays can be transformed via the methods map and filter. Those methods can be generalized to have iterables as input and iterables as output.

This is the generalized version of map:

    function* map(iterable, mapFunc) {
        for (let x of iterable) {
            yield mapFunc(x);
        }
    }
    
    // Works with infinite iterables!
    let arr = [...take(4, map(naturalNumbers(), x => x * x))];
        // [0, 1, 4, 9]

This is the generalized version of filter:

    function* filter(iterable, filterFunc) {
        for (let x of iterable) {
            if (filterFunc(x)) {
                yield x;
            }
        }
    }
    
    // Works with infinite iterables!
    let arr = [...take(4, filter(naturalNumbers(), x => (x % 2) === 0))];
        // [0, 2, 4, 6]

Generators for lazy evaluation

The next two examples show how generators can be used to process a stream of characters.

  • The input is a stream of characters.

  • Step 1 – tokenize (characters → words): The characters are grouped into words – sequences of characters that each match the regular expression /^[A-Za-z0-9]$/. Non-word characters are ignored, but they separate words. The input of this step is a stream of characters, the output a stream of words.

  • Step 2 – extract numbers (words → numbers): only keep words that match the regular expression /^[0-9]+$/ and convert them to numbers.

  • Step 3 – add numbers (numbers → number): produce a single number by computing the total of all numbers in a stream.

The neat thing is that everything is computed lazily (incrementally and on demand): computation starts as soon as the first character arrives. For example, we don’t have to wait until we have all characters to get the first word.

Lazy pull (generators as iterators)

Step 1 – tokenize. The following trick makes the code a bit simpler: the end-of-sequence iterator result (whose property done is true) is converted into the sentinel value END_OF_SEQUENCE.

    function* tokenize(chars) {
        let iterator = chars[Symbol.iterator]();
        let ch;
        do {
            ch = getNextItem(iterator);
            if (isWordChar(ch)) {
                let word = '';
                do {
                    word += ch;
                    ch = getNextItem(iterator);
                } while (isWordChar(ch));
                yield word;
            }
            // Ignore all other characters
        } while (ch !== END_OF_SEQUENCE);
    }
    const END_OF_SEQUENCE = Symbol();
    function getNextItem(iterator) {
        let item = iterator.next();
        return item.done ? END_OF_SEQUENCE : item.value;
    }
    function isWordChar(ch) {
        return typeof ch === 'string' && /^[A-Za-z0-9]$/.test(ch);
    }

Let’s try out tokenization. Note that the spaces and the dot are non-words. They are ignored, but they separate words. We use the fact that strings are iterables over characters (Unicode code points). The final result is an iterable over words, which we turn into an array via the spread operator (...).

    > [...tokenize('2 apples and 5 oranges.')]
    [ '2', 'apples', 'and', '5', 'oranges' ]

Step 2 – extract numbers: This step is relatively simple: we only yield words that contain nothing but digits, after converting them to numbers via Number().

    function* extractNumbers(words) {
        for (let word of words) {
            if (/^[0-9]+$/.test(word)) {
                yield Number(word);
            }
        }
    }

The following example shows the transformation steps: characters → words → numbers.

    > [...extractNumbers(tokenize('2 apples and 5 oranges.'))]
    [ 2, 5 ]

Step 3 – add numbers: This last step is performed by a normal function that pulls the results and reports their total.

    function summarize(numbers) {
        let result = 0;
        for (let n of numbers) {
            result += n;
        }
        return result;
    }

The final result shows us that there are 7 things in the input sentence.

    > summarize(extractNumbers(tokenize('2 apples and 5 oranges.')))
    7
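
Because every stage is a generator, the pipeline really is lazy: pulling a single value from the end only consumes as much of the input as is needed. A small illustration (my own, not from the original post):

    let numbers = extractNumbers(tokenize('2 apples and 5 oranges.'));
    console.log(numbers.next()); // { value: 2, done: false }
        // Only the first word has been tokenized so far
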
Lazy push (generators as observables)

Not much work is needed to convert the previous pull-based algorithm into a push-based one. The steps are the same. But instead of finishing via pulling, we start via pushing (in both cases, non-generator functions are used).

As previously explained, if generators receive input via yield, the first invocation of next() on the generator object only starts it by advancing it to the first yield; any value passed to that first next() is ignored. That’s why we again use the following helper function:

    /**
     * Returns a function that, when called,
     * returns a generator object that is immediately
     * ready for input via `next()`
     */
    function coroutine(generatorFunction) {
        return function (...args) {
            let generatorObject = generatorFunction(...args);
            generatorObject.next();
            return generatorObject;
        };
    }

The following function send() wraps the chain of generators. Its parameter receiver holds the generator object of the first generator in the chain. send() pushes the contents of iterable to the receiver, via next().

    function send(iterable, receiver) {
        for (let x of iterable) {
            receiver.next(x);
        }
        receiver.return(); // signal end of stream
    }

When a generator processes a stream, it needs to be aware of the end of the stream, so that it can clean up properly. For pull, we did this via a special end-of-stream sentinel. For push, the end-of-stream is signaled via return().

Let’s test send() via a generator that simply outputs everything it receives:

    const logItems = coroutine(function* () {
        try {
            while (true) {
                let item = yield;
                console.log(item);
            }
        } finally {
            console.log('DONE');
        }
    });

Let’s send logItems() three “characters” via a string (which is an iterable over Unicode code points).

    > send('abc', logItems());
    a
    b
    c
    DONE

Step 1 – tokenize. Note how this generator reacts to the end of the stream (as signaled via return()) in two finally clauses. We depend on return() being sent to either one of the two yields. Otherwise, the generator would never terminate, because the infinite loop starting in line (A) would never terminate.

    const tokenize = coroutine(function* (receiver) {
        try {
            while (true) { // (A)
                let ch = yield;
                if (isWordChar(ch)) {
                    // A word has started
                    let word = '';
                    try {
                        do {
                            word += ch;
                            ch = yield;
                        } while (isWordChar(ch));
                    } finally {
                        // The word is finished.
                        // We get here if
                        // (a) the loop terminates normally
                        // (b) the loop is terminated via `return()`
                        receiver.next(word);
                    }
                }
                // Ignore all other characters
            }
        } finally {
            // We only get here if the infinite loop is terminated
            // via `return()`. Forward `return()` to `receiver` so
            // that it is also aware of the end of stream.
            receiver.return();
        }
    });
    
    function isWordChar(ch) {
        return /^[A-Za-z0-9]$/.test(ch);
    }

tokenize() demonstrates that generators work well as implementations of linear state machines. In this case, the machine has two states: “inside a word” and “not inside a word”.

Let’s tokenize a string:

    > send('2 apples and 5 oranges.', tokenize(logItems()));
    2
    apples
    and
    5
    oranges
    DONE

Step 2 – extract numbers: This step is straightforward.

    const extractNumbers = coroutine(function* (receiver) {
        try {
            while (true) {
                let word = yield;
                if (/^[0-9]+$/.test(word)) {
                    receiver.next(Number(word));
                }
            }
        } finally {
            // Only reached via `return()`, forward.
            receiver.return();
        }
    });

Let’s log the numbers appearing in a string:

    > send('2 apples and 5 oranges.', tokenize(extractNumbers(logItems())));
    2
    5
    DONE

Step 3 – add numbers: This time, we react to the end of the stream by pushing a single value and then closing the receiver.

    const addNumbers = coroutine(function* (receiver) {
        let result = 0;
        try {
            while (true) {
                result += yield;
            }
        } finally {
            // We received an end-of-stream
            receiver.next(result);
            receiver.return(); // signal end of stream
        }
    });

Let’s sum up the numbers appearing inside a string:

    > send('2 apples and 5 oranges.', tokenize(extractNumbers(addNumbers(logItems()))));
    7
    DONE

As mentioned before, one benefit of a push-based approach is that it allows you to process data that you receive asynchronously.
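
For example, the following sketch (mine; the chunking is made up) simulates asynchronously arriving data via setTimeout() and pushes each chunk into the same chain of generators as it comes in:

    const receiver = tokenize(extractNumbers(addNumbers(logItems())));
    const chunks = ['2 app', 'les and 5 ', 'oranges.'];
    chunks.forEach((chunk, i) => {
        setTimeout(() => {
            for (let ch of chunk) {
                receiver.next(ch);
            }
            if (i === chunks.length - 1) {
                receiver.return(); // signal end of stream
            }
        }, i * 100);
    });
    // Eventually logs: 7, then DONE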

Cooperative multi-tasking via generators

Pausing long-running tasks

In this example, we create a counter that is displayed on a web page. We improve an initial version until we have a cooperatively multitasked version that doesn’t block the main thread and the user interface.

This is the part of the web page in which the counter should be displayed:

    <body>
        Counter: <span id="counter"></span>
    </body>

This function displays a counter that counts up forever (well, until the number overflows):

    function countUp(start = 0) {
        const counterSpan = document.querySelector('#counter');
        while (true) {
            counterSpan.textContent = String(start);
            start++;
        }
    }

If you ran this function, it would completely block the user interface thread in which it runs and its tab would become unresponsive.

Let’s implement the same functionality via a generator that periodically pauses via yield (a scheduling function for running this generator is shown at the end):

    function* countUp(start = 0) {
        const counterSpan = document.querySelector('#counter');
        while (true) {
            counterSpan.textContent = String(start);
            start++;
            yield; // pause
        }
    }

Let’s add one small improvement. We move the update of the user interface to another generator, displayCounter, which we call via yield*. As it is a generator, it can also take care of pausing.

    function* countUp(start = 0) {
        while (true) {
            start++;
            yield* displayCounter(start);
        }
    }
    function* displayCounter(counter) {
        const counterSpan = document.querySelector('#counter');
        counterSpan.textContent = String(counter);
        yield; // pause
    }

Lastly, this is a scheduling function that we can use to run countUp(). Each execution step of the generator is handled by a separate task, which is created via setTimeout(). That means that the user interface can schedule other tasks in between and will remain responsive.

    function run(generatorObject) {
        if (!generatorObject.next().done) {
            // Add a new task to the event queue
            setTimeout(function () {
                run(generatorObject);
            }, 1000);
        }
    }

With the help of run, we get a (nearly) infinite count-up that doesn’t block the user interface:

    run(countUp());

Cooperative multitasking with generators and Node.js-style callbacks

If you call a generator function (or method), it does not have access to its generator object; its this is the this it would have if it were a non-generator function. A work-around is to pass the generator object into the generator function via yield.

The following Node.js script uses this technique, but wraps the generator object in a callback (next, line (A)). It must be run via babel-node [8].

    import {readFile} from 'fs';
    
    let fileNames = process.argv.slice(2);
    
    console.log(fileNames, readFile);
    
    run(function* () {
        let next = yield; // (A)
        for (let f of fileNames) {
            let contents = yield readFile(f, { encoding: 'utf8' }, next);
            console.log('-------------', f);
            console.log(contents);
        }
    });

In line (A), we get a callback that we can use with functions that follow Node.js callback conventions. The callback uses the generator object to wake up the generator, as you can see in the implementation of run():

    function run(generatorFunction) {
        let generatorObject = generatorFunction();
    
        // Step 1: Proceed to first `yield`
        generatorObject.next();
    
        // Step 2: Pass in a function that the generator can use as a callback
        let nextFunction = createNextFunction(generatorObject);
        generatorObject.next(nextFunction);
    
        // Subsequent invocations of `next()` are triggered by `nextFunction`
    }
    
    function createNextFunction(generatorObject) {
        return function(error, result) {
            if (error) {
                generatorObject.throw(error);
            } else {
                generatorObject.next(result);
            }
        };
    }

Communicating Sequential Processes (CSP)

The library js-csp brings Communicating Sequential Processes (CSP) to JavaScript, a style of cooperative multitasking that is similar to ClojureScript’s core.async and Go’s goroutines. js-csp has two abstractions:

  • Processes: are cooperatively multitasked tasks and implemented by handing a generator function to the scheduling function go().
  • Channels: are queues for communication between processes. Channels are created by calling chan().

As an example, let’s use CSP to handle DOM events, in a manner reminiscent of Functional Reactive Programming. The following code uses the function listen() (which is shown later) to create a channel that outputs mousemove events. It then continuously retrieves the output via take, inside an infinite loop. Thanks to yield, the process blocks until the channel has output.

    import csp from 'js-csp';
    
    csp.go(function* () {
        let element = document.querySelector('#uiElement1');
        let channel = listen(element, 'mousemove');
        while (true) {
            let event = yield csp.take(channel);
            let x = event.layerX || event.clientX;
            let y = event.layerY || event.clientY;
            element.textContent = `${x}, ${y}`;
        }
    });

listen() is implemented as follows.

    function listen(element, type) {
        let channel = csp.chan();
        element.addEventListener(type,
            event => {
                csp.putAsync(channel, event);
            });
        return channel;
    }

This example is taken from the blog post “Taming the Asynchronous Beast with CSP Channels in JavaScript” by James Long. Consult this blog post for more information on CSP.

Inheritance

This is a diagram of how various objects are connected in ECMAScript 6 (it is based on Allen Wirfs-Brock’s diagram in the ECMAScript specification):

Legend:

  • The white (hollow) arrows express the has-prototype relationship (inheritance) between objects. In other words: a white arrow from x to y means that Object.getPrototypeOf(x) === y.
  • Parentheses indicate that an object exists, but is not accessible via a global variable.
  • An instanceof arrow from x to y means that x instanceof y.
    • Remember that o instanceof C is equivalent to C.prototype.isPrototypeOf(o).
  • A prototype arrow from x to y means that x.prototype === y.

The diagram reveals two interesting facts:

First, a generator function g works very much like a constructor (you can even invoke it via new): The generator objects it creates are instances of it, methods added to g.prototype become prototype methods, etc.:

    > function* g() {}
    > g.prototype.hello = function () { return 'hi!'};
    > let obj = g();
    > obj instanceof g
    true
    > obj.hello()
    'hi!'

Second, if you want to make methods available for all generator objects, it’s best to add them to (Generator.prototype). One way of accessing that object is as follows:

    > let Generator_prototype = Object.getPrototypeOf(function* () {}).prototype;
    > Generator_prototype.hello = function () { return 'hi!'};
    > let generatorObject = (function* () {})();
    > generatorObject.hello()
    'hi!'

IteratorPrototype

There is no (Iterator) in the diagram, because no such object exists. But, given how instanceof works and because (IteratorPrototype) is in the prototype chain of every generator object, you could still say that each generator object is an instance of Iterator.

All iterators in ES6 have (IteratorPrototype) in their prototype chain. That object is iterable, because it has the following method. Therefore, all ES6 iterators are iterable (as a consequence, you can apply for-of etc. to them).

    [Symbol.iterator]() {
        return this;
    }

The specification recommends using the following code to access (IteratorPrototype):

    const proto = Object.getPrototypeOf.bind(Object);
    let IteratorPrototype = proto(proto([][Symbol.iterator]()));

You could also use:

    let IteratorPrototype = proto(proto(function* () {}.prototype));

Quoting the ECMAScript 6 specification:

ECMAScript code may also define objects that inherit from IteratorPrototype. The IteratorPrototype object provides a place where additional methods that are applicable to all iterator objects may be added.

IteratorPrototype will probably become directly accessible in an upcoming version of ECMAScript and contain tool methods such as map() and filter() (source).
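
Until then, you could attach such a tool method yourself. The following is a hedged sketch (my own code, not part of ES6): it adds a map() method to (IteratorPrototype), which then becomes available on every ES6 iterator.

    const proto = Object.getPrototypeOf.bind(Object);
    const IteratorPrototype = proto(proto([][Symbol.iterator]()));
    
    IteratorPrototype.map = function* (mapFunc) {
        for (let x of this) {
            yield mapFunc(x);
        }
    };
    
    let iter = ['a', 'b', 'c'][Symbol.iterator]();
    console.log([...iter.map(s => s.toUpperCase())]); // [ 'A', 'B', 'C' ]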

The value of this in generators

A generator function combines two concerns:

  1. It is a function that sets up and returns a generator object.
  2. It contains the code that the generator object steps through.

That’s why it’s not immediately obvious what the value of this should be inside a generator.

In function calls and method calls, this is what it would be if gen() wasn’t a generator function, but a normal function:

    function* gen() {
        'use strict'; // just in case
        yield this;
    }
    
    // Retrieve the yielded value via destructuring
    let [functionThis] = gen();
    console.log(functionThis); // undefined
    
    let obj = { method: gen };
    let [methodThis] = obj.method();
    console.log(methodThis === obj); // true

If you access this in a generator that was invoked via new, you get a ReferenceError (source: ES6 spec):

    function* gen() {
        console.log(this); // ReferenceError
    }
    new gen();

We have previously seen a simple work-around: wrap the generator in a normal function that hands the generator its generator object via yield.
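
As a minimal sketch of that work-around (the helper name wrapGenerator is made up for this example), a normal function can create the generator object and immediately send it into the generator via next():

    function wrapGenerator(generatorFunction) {
        return function (...args) {
            let generatorObject = generatorFunction(...args);
            generatorObject.next(); // advance to the first yield
            generatorObject.next(generatorObject); // hand over the generator object
            return generatorObject;
        };
    }
    
    const selfAware = wrapGenerator(function* () {
        const self = yield; // receives its own generator object
        console.log(typeof self.next); // 'function'
    });
    selfAware();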

Style consideration: whitespace before and after the asterisk

Reasonable – and legal – variations of formatting the asterisk are:

  • A space before and after it:
    function * foo(x, y) { ... }

  • A space before it:
    function *foo(x, y) { ... }

  • A space after it:
    function* foo(x, y) { ... }

  • No whitespace before and after it:
    function*foo(x, y) { ... }

Let’s figure out which of these variations make sense for which constructs and why.

Generator function declarations and expressions

Here, the star is only used because generator (or something similar) isn’t available as a keyword. If it were, then a generator function declaration would look like this:

    generator foo(x, y) {
        ...
    }

Instead of generator, ECMAScript 6 marks the function keyword with an asterisk. Thus, function* can be seen as a synonym for generator, which suggests writing generator function declarations as follows.

    function* foo(x, y) {
        ...
    }

Anonymous generator functions would be formatted like this:

    const foo = function* (x, y) {
        ...
    }

Concise generator method definitions

When writing concise generator method definitions, I recommend formatting the asterisk as follows.

    let obj = {
        * generatorMethod(x, y) {
            ...
        }
    };

There are three arguments in favor of writing a space after the asterisk.

First, the asterisk shouldn’t be part of the method name. On one hand, it isn’t part of the name of a generator function. On the other hand, the asterisk is only mentioned when defining a generator, not when using it.

Second, a concise generator method definition is an abbreviation for the following syntax. (To make my point, I’m redundantly giving the function expression a name, too.)

    let obj = {
        generatorMethod: function* generatorMethod(x, y) {
            ...
        }
    };

If concise method definitions are about omitting the function keyword, then the asterisk should probably be followed by a space.

Third, generator method definitions are syntactically similar to getters and setters (which are already available in ECMAScript 5):

    let obj = {
        get foo() {
            ...
        },
        set foo(value) {
            ...
        }
    };

The keywords get and set can be seen as modifiers of a normal concise method definition. Arguably, an asterisk is also such a modifier.

Recursive yield

The following is an example of a generator function yielding its own yielded values recursively:

    function* foo(x) {
        ...
        yield* foo(x - 1);
        ...
    }

The asterisk marks a different kind of yield operator, which is why the above way of writing it makes sense.

Documenting generator functions and methods

Kyle Simpson (@getify) proposed something interesting: Given that we often append parentheses when we write about functions and methods such as Math.max(), wouldn’t it make sense to prepend an asterisk when writing about generator functions and methods? For example: should we write *foo() to refer to the generator function in the previous subsection? Let me argue against that.

When it comes to writing a function that returns an iterable, a generator is only one of several options. I think it is better not to give away this implementation detail via marked function names.

Furthermore, you don’t use the asterisk when calling a generator function, but you do use parentheses.

Lastly, the asterisk doesn’t provide useful information – yield* can also be used with functions that return an iterable. But it may make sense to mark the names of functions and methods that return iterables (including generators). For example, via the suffix Iter.

Conclusion

I hope that this blog post convinced you that generators are a useful and versatile tool.

I like that generators let you implement cooperatively multitasked tasks that block while making asynchronous function calls. In my opinion that’s the right mental model for async calls. I hope that JavaScript goes further in this direction in the future. If one generator async-calls another generator, the indirection via promises is not needed. It could be avoided by the generator-based async functions that have been proposed for ECMAScript 2016.

References

Acknowledgement: items 1–3 are sources of this blog post.

  1. “Async Generator Proposal” by Jafar Husain
  2. “A Curious Course on Coroutines and Concurrency” by David Beazley
  3. “Why coroutines won’t work on the web” by David Herman
  4. “Exploring ES6: Upgrade to the next version of JavaScript”, book by Axel
  5. “Iterables and iterators in ECMAScript 6”
  6. “Destructuring and parameter handling in ECMAScript 6”
  7. “The spread operator (...)”, a section in the blog post “Destructuring and parameter handling in ECMAScript 6”
  8. “Using the ES6 transpiler Babel on Node.js”
  9. “ECMAScript 6 promises (1/2): foundations” [explains the event loop and more]
  10. “ECMAScript 6 promises (2/2): the API”
  11. “Classes in ECMAScript 6 (final semantics)”

Source:: 2ality

15 Must-Know Chrome DevTools Tips and Tricks

By Danny Markov

Google Chrome is the most popular web browser used by web developers today. A quick six-week release cycle and a powerful set of ever-expanding developer features have turned the browser into a must-have tool. Most of you are probably familiar with many of its features, like live-editing CSS, using the console and the debugger. In this article we’re going to share with you 15 cool tips and tricks that will improve your workflow even more.

1. Quick file switching

If you’ve used Sublime Text, you probably can’t live without its “Go to anything” overlay. You will be happy to hear that DevTools has it too. Press Ctrl + P (Cmd + P on Mac) when DevTools is open to quickly search for and open any file in your project.

2. Search within source code

But what if you wish to search within the source code itself? To search all files loaded on the page for a specific string, hit Ctrl + Shift + F (Cmd + Opt + F). This method of searching supports regular expressions as well.

3. Go to line

After you’ve opened a file in the Sources tab, DevTools allows you to easily jump to any line in it. To do so, press Ctrl + G on Windows and Linux (or Cmd + L on Mac), and type in your line number.

Another way to do this is to press Ctrl + O, and instead of searching for a file, enter “:” followed by a line number.

4. Selecting elements in console

The DevTools console supports some handy magic variables and functions for selecting DOM elements:

  • $() – Short for document.querySelector(). Returns the first element, matching a CSS selector ( e.g. $(‘div’) will return the first div element in the page).
  • $$() – Short for document.querySelectorAll(). Returns an array of elements that match the given CSS selector.
  • $0 – $4 – A history of the five most recent DOM elements that you’ve selected in the elements panel, $0 being the latest.
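
For example, with DevTools open, you could type the following into the Console (the selectors here are just examples):

    $('.header')      // the first element with class "header"
    $$('a').length    // how many links are on the page
    $0                // the element currently selected in the Elements panel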

To learn more Console commands, read the Command Line API documentation.

5. Use multiple carets & selections

Another killer Sublime Text feature makes its appearance. While editing a file you can set multiple carets by holding Ctrl (Cmd for Mac) and clicking where you want them to be, thus allowing you to write in many places at once.

6. Preserve Log

By checking the Preserve Log option in the Console Tab, you can make the DevTools Console persist the log instead of clearing it on every page load. This is handy when you want to investigate bugs that show up just before the page is unloaded.

7. Pretty Print {}

Chrome’s Developer Tools have a built-in code beautifier that returns minified code to a human-readable format. The Pretty Print button is located in the bottom-left corner of the currently opened file in the Sources tab.

8. Device mode

DevTools includes a powerful mode for developing mobile-friendly pages. This video from Google goes through most of its main features, such as screen resizing, touch emulation and simulating bad network connections.

9. Device emulation sensors

Another cool feature of Device Mode is the option to simulate mobile devices’ sensors like touch screens and accelerometers. You can even spoof your geographic location. The feature is located in the bottom part of the Elements tab under Emulation -> Sensors.

10. Color Picker

When selecting a color in the Styles editor you can click on the color preview and a picker will pop up. While the color picker is opened, if you hover over your page, your mouse pointer will transform into a magnifying glass for selecting colors with pixel accuracy.

11. Force element state

DevTools has a feature that simulates CSS states like :hover and :focus on elements, making it easy to style them. It is available from the CSS editor.

12. Visualize the shadow DOM

Web browsers construct things like textboxes, buttons and inputs out of other basic elements which are normally hidden from view. However, you can go to Settings -> General and toggle Show user agent shadow DOM, which will display them in the elements tab. You can even style them individually, which gives you a great deal of control.

13. Select next occurrence

If you press Ctrl + D (Cmd + D) while editing files in the Sources Tab, the next occurrence of the current word will be selected as well, helping you edit them simultaneously.

14. Change color format

Use Shift + Click on the color preview to cycle between rgba, hsl and hexadecimal formats.

15. Editing local files through workspaces

Workspaces are a powerful Chrome DevTools feature that turns the browser into a real IDE. Workspaces match the files in the Sources tab to your local project files, so you can edit and save directly, without having to copy/paste your changes into an external text editor.

To configure Workspaces, go to the Sources tab, right-click anywhere in the left panel (where the sources are) and choose Add Folder To Workspace, or just drag and drop your whole project folder into Developer Tools. Now the chosen folder, its subdirectories and all the files in them will be available for editing no matter what page you are on. To make it even more useful, you can then map files in your folder to those used by the page, allowing for live editing and easy saving.

You can learn more about Workspaces here.

Further reading

Chrome Keyboard Shortcuts

A long list of tips and tricks in the Google Chrome docs

Source:: Tutorialzine.com

How Your Favorite Websites Changed Over the Years

By Danny Markov

Web design has come a long way since the internet came into existence in the 1990s. With only a handful of web-safe fonts, rudimentary CSS, and tables for layout, designers were very limited in what they could do. Fast forward to today, and the situation is quite different. Now multi-megabyte pages with large cover photos are the norm, and we have plenty of frameworks, icons and fonts to choose from.

It is fun to compare how popular websites looked all those years ago. Thanks to the magic of the Wayback Machine, we have gathered a timeline of vintage screenshots for your enjoyment.

Apple.com

Apple has been around for quite a long time and their web page has been up for almost two decades. The company is known for its great design, but you wouldn’t guess it by looking at the early versions of its website.

Youtube.com

YouTube was founded in 2005, and only a year later it was bought by Google for $1.65 billion. Once the home of the best cat videos, YouTube now features 4K playback and channels from big entertainment companies, and being a “youtuber” can make you a ton of money or get you a place in a South Park episode.

Microsoft.com

One of the most highly ranked websites on the net, Microsoft.com has always adhered to a flat look. The recent iterations of its design feature large images and lots of whitespace.

Twitter.com

Twitter’s success has been rapid: since its launch in 2006, it has reached an active user base of more than 250 million people. Around 10,000 tweets get sent every second! Some of Twitter’s popularity is due to us being on it – @Tutorialzine

Skype.com

Skype’s homepage has had major overhauls almost every year, and judging by the oldest version of their site, it certainly needed them.

Yahoo.com

Yahoo! was founded more than 20 years ago and has been one of the most popular websites in the English-speaking world ever since. Its homepage has always maintained a text-heavy look.

Google.com

Today Google is an enormous multinational corporation but in 1998 it was just a search engine. And their HTML wasn’t even aligned properly! Through the years Google kept the original simplicity of their webpage as well as their brand colors. On special occasions, they turn their logo into playful animated doodles.

Facebook.com

A billion people log in to Facebook every month. With so many eyeballs, every little bug is noticed and thoroughly complained about. The first version of the social network went live in 2004 and has gradually evolved into what it is today.

Ebay.com

Ebay’s homepage has had the same layout for about 15 years – categories on the left, ads and pictures on the right. Only now, in 2015, have they come up with a new design: a drop-down menu at the top of the page, with big photos and banners taking up the rest.

Myspace

Ten years ago, Myspace was the most popular social network in the world, with a yearly revenue of more than half a billion dollars. However, with the rise of Facebook and changes in the company’s management, Myspace’s popularity plummeted. Redesigns and relaunches haven’t been able to save the company, and its most recent sale price was a fraction of the original 2005 deal.

Pizzahut

Pizzahut.com welcomed 1999 with a surprisingly modern-looking design. Today, its website is adorned with lots of tasty product shots.

Lego

Lego is a company that is loved by generations of fans from around the world. Every year they release sets, games and movies. Their website reflects their playful character and features games and animations.

IGN

IGN is a popular video game website that has gone through a lot of transformations since it was founded in 1996. These days it covers much more than just games.

Guess

Fashion websites are a good way to judge the popular trends in web design of the time. Here is one of the oldest.

Lamborghini

Every boy’s dream car maker has had a surprisingly disappointing web presence for most of the internet’s existence. However things are looking great today!

Nike

Nike is a multi-billion dollar sports goods manufacturer. But you wouldn’t have guessed that from their 1998 homepage.

Gizmodo

One of the first gadget websites that is still around, Gizmodo, has had frequent redesigns over the years, but has kept its overall aesthetic.

Further reading

Web Design Trends 2004-2015
The Evolution of the Web
Internet Archive: Wayback Machine

Source:: Tutorialzine.com

Using the ES6 transpiler Babel on Node.js

By Axel Rauschmayer

This blog post explains how to use the ES6 transpiler Babel with Node.js. You can download the code shown in this post on GitHub. For further information on ECMAScript 6, consult the ebook “Exploring ES6”.

Warning: The approach explained in this post is convenient for experiments and development. But it uses on-the-fly transpilation, which may be too slow for your production code. In that case, you can transpile as a build step instead (as explained in the Babel documentation).

Running normal Node.js code via Babel

The npm package babel brings Babel support to Node.js:

    $ npm install --global babel

This package contains the shell script babel-node, which is a Babel-ified version of node. It compiles everything that is run or required from ES6 to ES5. For example, you can start a REPL via the following shell command:

    $ babel-node

In the REPL, you can use ES6:

    > [1,2,3].map(x => x * x)
    [ 1, 4, 9 ]

babel-node also lets you run Node.js scripts such as the following one.

    // point.js
    export class Point {
        constructor(x, y) {
            this.x = x;
            this.y = y;
        }
    }
    if (require.main === module) {
        let pt = new Point(7,4);
        console.log(`My point: ${JSON.stringify(pt)}`);
    }

The following shell command runs point.js:

    $ babel-node point.js 
    My point: {"x":7,"y":4}

The package babel has many more features, which are all documented on the Babel website. For example, from within a normal Node module, you can install a “require hook”, which compiles all required modules via Babel (except, by default, modules in node_modules).
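
As a sketch (assuming Babel 5.x, the version current at the time, installed locally via npm install babel), an untranspiled entry file could install the hook and then load the ES6 module point.js from above:

    // main.js – run with plain `node main.js`
    require('babel/register'); // from now on, require() transpiles ES6 on the fly
    
    var Point = require('./point').Point;
    var pt = new Point(7, 4);
    console.log(pt.x, pt.y); // 7 4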

Running Jasmine unit tests via Babel

Another npm package, babel-jest, is a preprocessor for the Jasmine-based unit testing tool Jest.

One way to install babel-jest is by mentioning it in the devDependencies of your package.json:

    {
      "devDependencies": {
        "babel-jest": "*",
        "jest-cli": "*"
      },
      "scripts": {
        "test": "jest"
      },
      "jest": {
        "scriptPreprocessor": "<rootDir>/node_modules/babel-jest",
        "testFileExtensions": ["js"],
        "moduleFileExtensions": ["js", "json"],
        "testDirectoryName": "spec"
      }
    }

Afterwards, you only need to execute the following command inside the directory of package.json and both babel-jest and a command line interface (CLI) for Jest will be installed.

    npm install

The configuration options for Jest are documented on its website. I have used testDirectoryName to specify that the tests are inside the directory spec (the default is __tests__). Let’s add the following test file to that directory:

    // spec/point.spec.js
    jest.autoMockOff();
    import { Point } from '../point';
    
    describe('Point', function() {
        it('sets up instance properties correctly', function() {
            let p = new Point(1, 5);
            console.log(JSON.stringify(p));
            expect(p.x).toBe(1);
            expect(p.y).toBe(5);
        });
    });

Because we have specified scripts.test in package.json, we can run all tests inside spec/ via the following command:

    npm test

Source:: 2ality

The destructuring algorithm in ECMAScript 6

By Axel Rauschmayer

This blog post looks at destructuring from a different angle: as a recursive matching algorithm. At the end, I’ll use this new knowledge to explain one especially tricky case of destructuring.

You may want to read the blog post “Destructuring and parameter handling in ECMAScript 6” beforehand.

Destructuring assignment

The following is a destructuring assignment.

    pattern = value

We want to use pattern to extract data from value. In the following sections, I describe an algorithm for doing so. It is known in functional programming as matching. The previous destructuring assignment is processed via

    pattern ← value

That is, the operator ← (“match against”) matches pattern against value. The algorithm is specified via recursive rules that take apart both operands of the operator. The declarative notation may take some getting used to, but it makes the specification of the algorithm more concise. Each rule has two parts:

  • The head specifies which operands are handled by the rule.
  • The body specifies what to do next.

I only show the algorithm for destructuring assignment. Destructuring variable declarations and destructuring parameter definitions work similarly.

Patterns

A pattern is either:

  • A variable: x
  • An object pattern: {«properties»}
  • An array pattern: [«elements»]

Each of the following sections covers one of these three cases.

Variables

  • x ← value (including undefined and null)

        x = value
    

Object patterns

  • {"properties»} ← undefined

        throw new TypeError();
    
  • {"properties»} ← null

        throw new TypeError();
    
  • {key: pattern, «properties»} ← obj

        pattern ← obj.key
        {"properties»} ← obj
    
  • {key: pattern = default_value, «properties»} ← obj

        let tmp = obj.key;
        if (tmp !== undefined) {
            pattern ← tmp
        } else {
            pattern ← default_value
        }
        {"properties»} ← obj
    
  • {} ← obj (done)

Array patterns

The sub-algorithm in this section starts with an array pattern and an iterable and continues with the elements of the pattern and an iterator (obtained from the iterable). The helper functions isIterable() and getNext() are defined at the end of this section.

  • ["elements»] ← iterable

        if (!isIterable(iterable)) {
            throw new TypeError();
        }
        let iterator = iterable[Symbol.iterator]();
        "elements» ← iterator
    
  • pattern, «elements» ← iterator

        pattern ← getNext(iterator)
        "elements» ← iterator
    
  • pattern = default_value, «elements» ← iterator

        let tmp = getNext(iterator);
        if (tmp !== undefined) {
            pattern ← tmp
        } else {
            pattern ← default_value
        }
        "elements» ← iterator
    
  • , "elements» ← iterator (hole, elision)

        getNext(iterator); // skip
        "elements» ← iterator
    
  • ...pattern ← iterator (always last part!)

        let tmp = [];
        for (let elem of iterator) {
            tmp.push(elem);
        }
        pattern ← tmp
    
  • ← iterator (no elements left, nothing to do)

    function getNext(iterator) {
        let n = iterator.next();
        if (n.done) {
            return undefined;
        } else {
            return n.value;
        }
    }
    function isIterable(value) {
        return (value !== null
            && typeof value === 'object'
            && typeof value[Symbol.iterator] === 'function');
    }

Example

The following function definition is used to make sure that both of the named parameters x and y have default values and can be omitted. Additionally, = {} enables us to omit the object literal, too (see last function call below).

    function move({x=0, y=0} = {}) {
        return [x, y];
    }
    move({x: 3, y: 8}); // [3, 8]
    move({x: 3}); // [3, 0]
    move({}); // [0, 0]
    move(); // [0, 0]

But why would you define the parameters as in the previous code snippet? Why not as follows – which is also completely legal ECMAScript 6?

    function move({x, y} = { x: 0, y: 0 }) {
        return [x, y];
    }

Using solution 2

Actual parameters (inside function calls) are matched against formal parameters (inside function definitions). Therefore, move() sets up the parameters x and y as follows:

    [{x, y} = { x: 0, y: 0 }] ← []

The only array element on the left-hand side does not have a match on the right-hand side, which is why the default value is used:

    {x, y} ← { x: 0, y: 0 }

The left-hand side is a property value shorthand, an abbreviation for {x: x, y: y}:

    {x: x, y: y} ← { x: 0, y: 0 }

This destructuring leads to the following two assignments.

    x = 0;
    y = 0;

However, this is the only case in which the default value is used. As soon as there is an array element at index 0 on the right-hand side, the default value is ignored:

    [{x, y} = { x: 0, y: 0 }] ← [{z:3}]

Afterwards, the next step is:

    {x, y} ← { z: 3 }

That leads to both x and y being set to undefined, which is not what we want.
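
In code, with solution 2 from above:

    function move({x, y} = { x: 0, y: 0 }) {
        return [x, y];
    }
    move({z: 3}); // [undefined, undefined]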

Using solution 1

Let’s try solution 1:

    [{x=0, y=0} = {}] ← []

Again, we don’t have an array element at index 0 on the right-hand side and use the default value:

    {x=0, y=0} ← {}

The left-hand side is a property value shorthand, which means that this destructuring is equivalent to:

    {x: x=0, y: y=0} ← {}

Neither the property x nor the property y have a match on the right-hand side. Therefore, the following destructurings are performed next:

    x ← 0
    y ← 0

That leads to the following assignments:

    x = 0
    y = 0
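
In code, solution 1 behaves as desired even when the argument has neither an x nor a y property:

    function move({x=0, y=0} = {}) {
        return [x, y];
    }
    move({z: 3}); // [0, 0]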

Source:: 2ality

HTML5 Music Player

By Danny Markov

This time we want to share with you a cool experiment that we made. It is a music player that lives in your browser. It uses the powerful HTML5 File Reader and Audio APIs. As a result, you can just drag and drop mp3 files from your computer into the browser, and they are automatically added to your playlist.

Features

  • You can load mp3 files from your computer by dragging them and dropping them in the browser. Chrome users can drop whole folders as well.
  • It doesn’t use any kind of server code (so no need for PHP or node.js) – the player is a single HTML file.
  • Nothing is uploaded – the mp3 files are kept in your browser.
  • Cool audio visualization and audio playback thanks to Wavesurfer.js.
  • Select and search songs in a playlist.
  • Cover art and ID3 tags with JavaScript ID3 Reader.
  • Shuffle and Repeat options.
  • No internet dependencies – works just as well if run locally offline.
  • Responsive design.

How to use it

The application is a simple HTML file that you open in your browser. You only need to download our zip file from the button near the beginning of the article, and unzip it somewhere on your computer. Unfortunately, due to security restrictions in modern browsers it won’t work if you just double click the index.html file. You will have to open it through a locally running web server like Apache or Nginx and access it through localhost. Or you can just use our demo, nothing is uploaded so your music is safe.

How it works

The app listens for JavaScript drag and drop events. When you drop an mp3 file, it extracts information like song and artist name, if available, from the file’s ID3 tags. Each song is placed in an array, which represents our playlist. The application then initializes the Wavesurfer.js audio player, which generates the awesome wave visualization for every song and plays it.

From there on we can do everything you would expect from a native audio player – play next/previous, pause, pick songs and so on. Our playlist section also gives users the option to remove songs from the player or search for a particular track, album or artist.

You can learn more about how the player works by reading the /assets/js/script.js file in our source code. It is well commented and easy to follow.
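
To give you a rough idea, here is a simplified sketch (not the actual code from script.js) of how dropped files can be read with drag and drop events and the File Reader API:

    var dropZone = document.body;
    
    dropZone.addEventListener('dragover', function (e) {
        e.preventDefault(); // allow dropping
    });
    
    dropZone.addEventListener('drop', function (e) {
        e.preventDefault();
    
        // e.dataTransfer.files holds the dropped files
        Array.prototype.forEach.call(e.dataTransfer.files, function (file) {
            if (!/\.mp3$/i.test(file.name)) return;
    
            var reader = new FileReader();
            reader.onload = function () {
                // reader.result is an ArrayBuffer with the song’s bytes,
                // ready for an ID3 parser or an audio player
                console.log('Loaded ' + file.name + ' (' + reader.result.byteLength + ' bytes)');
            };
            reader.readAsArrayBuffer(file);
        });
    });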

Design

  • The main layout of the app is made via flexbox. This allowed us to evenly position all bars and buttons without having to worry about responsiveness. Read more about flexbox here.
  • The pop up effect for the playlist and other highlights and small animations were done via CSS by manipulating classes with jQuery.
  • All of the icons we needed for this music player were already available in Font Awesome – thanks guys!

Further Reading

Source:: Tutorialzine.com

No promises: asynchronous JavaScript with only generators

By Axel Rauschmayer

Two ECMAScript 6 [1] features enable an intriguing new style of asynchronous JavaScript code: promises [2] and generators [3]. This blog post explains this new style and presents a way of using it without promises.

Overview

Normally, you make a function call like this:

    let result = asyncFunc('http://example.com');
    "next_steps»

If asyncFunc() makes an asynchronous computation (such as downloading a file from the internet), you want execution to pause until asyncFunc() returns with a result. Before ECMAScript 6, you couldn’t pause and resume execution, but you could simulate it, by putting next_steps into a callback (a so-called continuation [4]), which is triggered by asyncFunc(), once it is done:

    asyncFunc('http://example.com', result => {
        "next_steps»
    });

Promises [2] are basically a smarter way of managing callbacks:

    asyncFunc('http://example.com')
    .then(result => {
        "next_steps»
    });

In ECMAScript 6, you can use generator functions [3], which can be paused and resumed. With a library, a generator-based solution looks almost like our ideal code:

    Q.spawn(function* () {
        let result = yield asyncFunc('http://example.com');
        "next_steps»
    });

However, asyncFunc() needs to be implemented using promises:

    function asyncFunc(url) {
        return new Promise((resolve, reject) => {
            otherAsyncFunc(url,
                result => resolve(result));
        });
    }

With the small library shown later, however, you can run the initial code like Q.spawn() does, but implement asyncFunc() like this:

    function* asyncFunc(url) {
        const caller = yield; // (A)
        otherAsyncFunc(url,
            result => caller.success(result));
    }

Line A is how the library provides asyncFunc() with callbacks.

Code

Let’s look at two examples before looking at the code of the library.

Example 1: echo()

echo() is an asynchronous function, implemented via a generator:

    function* echo(text, delay = 0) {
        const caller = yield;
        setTimeout(() => caller.success(text), delay);
    }

In the following code, echo() is used three times, sequentially:

    run(function* echoes() {
        console.log(yield echo('this'));
        console.log(yield echo('is'));
        console.log(yield echo('a test'));
    });

The parallel version of this code looks as follows.

    run(function* parallelEchoes() {
        let startTime = Date.now();
        let texts = yield [
            echo('this', 1000),
            echo('is', 900),
            echo('a test', 800)
        ];
        console.log(texts); // ['this', 'is', 'a test']
        console.log('Time: '+(Date.now()-startTime));
    });

As you can see, the library performs the asynchronous calls in parallel if you yield an array of generator invocations.

This code takes about 1000 milliseconds.

Example 2: httpGet()

The following code demonstrates how you can implement a function that gets a file via XMLHttpRequest:

    function* httpGet(url) {
        const caller = yield;
    
        var request = new XMLHttpRequest();
        request.onreadystatechange = function () {
            // Only act once the request has completed
            if (this.readyState !== XMLHttpRequest.DONE) return;
            if (this.status === 200) {
                caller.success(this.response);
            } else {
                // Something went wrong (404 etc.)
                caller.failure(new Error(this.statusText));
            }
        }
        request.onerror = function () {
            caller.failure(new Error(
                'XMLHttpRequest Error: '+this.statusText));
        };
        request.open('GET', url);
        request.send();    
    }

Let’s use httpGet() sequentially:

    run(function* downloads() {
        let text1 = yield httpGet('https://localhost:8000/file1.html');
        let text2 = yield httpGet('https://localhost:8000/file2.html');
        console.log(text1, text2);
    });

Using httpGet() in parallel looks like this:

    run(function* parallelDownloads() {
        let [text1,text2] = yield [
            httpGet('https://localhost:8000/file1.html'),
            httpGet('https://localhost:8000/file2.html')
        ];
        console.log(text1, text2);
    });

The library

The library profits from the fact that calling a generator function does not execute its body, but returns a generator object.

    /**
     * Run the generator object `genObj`,
     * report results via the callbacks in `callbacks`.
     */
    function runGenObj(genObj, callbacks = null) {
        handleOneNext();
    
        /**
         * Handle one invocation of `next()`:
         * If there was a `prevResult`, it becomes the parameter.
         * What `next()` returns is what we have to run next.
         * The `success` callback triggers another round,
         * with the result assigned to `prevResult`.
         */
        function handleOneNext(prevResult = null) {
            try {
                let yielded = genObj.next(prevResult); // may throw
                if (!yielded.done) {
                    setTimeout(runYieldedValue, 0, yielded.value);
                }
            }
            // Catch unforeseen errors in genObj
            catch (error) {
                if (callbacks) {
                    callbacks.failure(error);
                } else {
                    throw error;
                }
            }
        }
        function runYieldedValue(yieldedValue) {
            if (yieldedValue === undefined) {
                // If code yields `undefined`, it wants callbacks
                handleOneNext(callbacks);
            } else if (Array.isArray(yieldedValue)) {
                runInParallel(yieldedValue);
            } else {
                // Yielded value is a generator object
                runGenObj(yieldedValue, {
                    success(result) {
                        handleOneNext(result);
                    },
                    failure(err) {
                        genObj.throw(err);
                    },
                });
            }
        }
    
        function runInParallel(genObjs) {
            let resultArray = new Array(genObjs.length);
            let resultCountdown = genObjs.length;
            for (let [i,genObj] of genObjs.entries()) {
                runGenObj(genObj, {
                    success(result) {
                        resultArray[i] = result;
                        resultCountdown--;
                        if (resultCountdown <= 0) {
                            handleOneNext(resultArray);
                        }
                    },
                    failure(err) {
                        genObj.throw(err);
                    },
                });
            }
        }
    }
    
    function run(genFunc) {
        runGenObj(genFunc());
    }

Conclusion: asynchronous JavaScript via coroutines

Coroutines [5] are a single-threaded version of multi-tasking: Each coroutine is a thread, but all coroutines run in a single thread and they explicitly relinquish control via yield. Due to the explicit yielding, this kind of multi-tasking is also called cooperative (versus the usual preemptive multi-tasking).

Generators are shallow coroutines [6]: their execution state is only preserved within the generator function itself; it doesn’t extend further backwards than that, and recursively called functions can’t yield.

The code for asynchronous JavaScript without promises that you have seen in this blog post is purely a proof of concept. It is completely unoptimized and may have other flaws preventing it from being used in practice. But I think that coroutines are the right mental model when thinking about asynchronous computation in JavaScript. They seem like an interesting avenue to explore in ECMAScript 2016 (ES7) or later. As we have seen, not much would need to be added to generators to make this work:

  • let caller = yield is a kludge.
  • Similarly, having to report results and errors via callbacks is unfortunate. It’d be nice if mechanisms as elegant as return and throw could be used, but those don’t work inside callbacks.

What about streams?

When it comes to asynchronous computation, there are two fundamentally different needs:

  1. The result of a single computation: one popular way of handling it is promises.
  2. A series of results: Asynchronous Generators [7] have been proposed for ECMAScript 2016 for this use case.

For #1, coroutines are an interesting alternative. For #2, David Nolen has suggested [8] that CSP (Communicating Sequential Processes) work well. For binary data, WHATWG is working on Streams [9].

Further reading

  1. “Exploring ES6: Upgrade to the next version of JavaScript”, book by Axel
  2. “ECMAScript 6 promises (2/2): the API”
  3. “Iterators and generators in ECMAScript 6”
  4. “Asynchronous programming and continuation-passing style in JavaScript”
  5. “Coroutine” on Wikipedia
  6. “Why coroutines won’t work on the web” by David Herman
  7. “Async Generator Proposal” by Jafar Husain
  8. “ES6 Generators Deliver Go Style Concurrency” by David Nolen
  9. “Streams: Living Standard”, edited by Domenic Denicola and Takeshi Yoshino

Source:: 2ality