Monthly Archives: January 2017

Node.js Weekly Update - 27 Jan, 2017

By Ferenc Hamori

Read the most important Node.js weekly news & updates: Event Sourcing, fixing npm’s package hell, parsing anything in JavaScript using the Earley algorithm, plus the Certified Developer program + Individual Membership Director election from the Node.js Foundation.

If you’d like to stay up-to-date on a daily basis, I recommend checking out our hand-curated Node.js news page and its Twitter feed!

The 6 must-read Node.js articles/projects of this week:

○ Event Sourcing with Examples – Node.js at Scale

Event Sourcing is a powerful architectural pattern to handle complex application states that may need to be rebuilt, re-played, audited or debugged.

From this article you can learn what Event Sourcing is and when you should use it. We’ll also take a look at some Event Sourcing examples with code snippets.

○ The wheels of open-source: we’ve got many of them

In 2015 I published an npm package called “sister”. Sister is an event emitter: you can attach event listeners and emit events. If it sounds like a familiar pattern, that’s because it is. In the README.md I included a list of 200 similar libraries. The package is a satire, of course.

According to Gajus, the change needs to happen from the top down. As an established open-source community, we have to improve collaboration. We couldn’t agree more.

○ Parsing absolutely anything in JavaScript using Earley algorithm

Let me start by saying that I was surprised how easy it was to write a grammar for an Earley parser. I have been using regular expressions for over a decade, and I am used to parsing things with them. It’s fragile and not always possible, but it is fast and for the most part it serves the purpose.

Familiarising myself with parsing algorithms changed this attitude forever.

○ Volunteers Needed for the Next Phase of the Node.js Certified Developer Program

The certification program aims to establish a baseline competency in Node.js. Developers who pass the certification will not be experts in every area, but they will be able to hit the ground running with Node.js.

Currently we are working with the community to determine specific questions that will be asked on the exam. To contribute to the Node.js Foundation Certification Development Item Writing Workshop Sessions, fill out this application.

○ Node.js Foundation Individual Membership Director election closes on January 30.

The Node.js Foundation is a member-supported organization. The Node.js Foundation Individual Director is the Node.js project’s community voice on the board. There are two individual directors that sit on the Node.js Foundation board and they serve a two-year term.

The Individual Membership Director is responsible for soliciting feedback and data that represents the wishes of other individual members and the community at large. They have been entrusted with the duty to make decisions based on the information they receive to best represent the community, and can gather input for proposals when relevant and granted permission to do so.


Previously in the Node.js Weekly Update

In the previous Node.js Weekly Update we read fantastic articles about performance optimization killers, going serverless, async best practices, preventing ReDoS attacks, and many more.

Source:: risingstack.com

Quick Tip: Display Dates Relatively in Laravel

By johnkariuki

We have previously looked at the Carbon package by Brian Nesbitt as well as all the functionality it adds to PHP’s DateTime class.

In this article, we will take a look at how Laravel takes advantage of this package.

Introduction

As already mentioned, the Carbon class does not rebuild PHP’s DateTime class from scratch; it builds upon it.

<?php
namespace Carbon;

class Carbon extends DateTime
{
    // code here
}

This means that you can access the default functionality of PHP’s DateTime class on top of the awesomeness that is Carbon.
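As a quick illustration of that inheritance (the output values naturally depend on the current time):

<?php

use Carbon\Carbon;

$date = Carbon::now();

// format() is inherited straight from PHP's DateTime class
echo $date->format('Y-m-d H:i:s');

// addDay() and diffForHumans() are Carbon's own additions
echo $date->addDay()->diffForHumans(); // 1 day from now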

Laravel already includes the Carbon class by default so we do not need to install it separately. To get started with Carbon in Laravel, simply create a new project using the laravel command.

$ laravel new scotch-dates

Carbon Dating in Laravel

See what I did there? Turns out Brian Nesbitt had the same idea in mind while creating the Carbon package.

Now by default, if the timestamps variable in a Laravel model class is not explicitly set to false, then it is expected that its corresponding table should have the created_at and updated_at columns.

We can however go ahead and add our own date columns such as activated_at, dob or any other depending on the type of application or the nature of the Laravel model we are working on.

But how does Laravel know that these fields should be cast to dates?

Simple. Add all the date fields to the protected $dates property in the model class.

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class User extends Model
{
    /**
     * The attributes that should be mutated to dates.
     *
     * @var array
     */
    protected $dates = [
        'created_at',
        'updated_at',
        'activated_at',
        'dob'
    ];
}

Setup

In the scotch-dates application that we just created, we already have our first migration set up for the users table. We will need some user records to work with, so let’s seed some data into our database with the factory helper method on tinker.

$ php artisan tinker
>>> factory('App\User', 10)->create()

To get started, we’ll go ahead and create a new UserController class to get all the users from the users table into a view, and add a /users route.

/routes/web.php

Route::get('/', function () {
    return view('welcome');
});

Route::get('/users', 'UserController@users');

I have gone ahead and added a few helper variables, courtesy of the Carbon class, to give me access to a few dates such as now, yesterday, today and tomorrow.

/app/Http/Controllers/UserController.php

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use App\User;
use Carbon\Carbon;

class UserController extends Controller
{
    public function users()
    {
        return view('users', [
            'users' => User::all(),
            'now' => Carbon::now(),
            'yesterday' => Carbon::yesterday(),
            'today' => Carbon::today(),
            'tomorrow' => Carbon::tomorrow()
        ]);
    }
}

Let’s play around with Carbon dates on the users view.

Displaying Absolute Dates

Having set up our users’ details and passed them on to the users view, we can now display each of the users with the following blade template.

<table>
    <tr>
        <th>Name</th>
        <th>Created</th>
    </tr>

    @foreach($users as $user)
        <tr>
            <td>{{ $user->name }}</td>
            <td>{{ $user->created_at }}</td>
        </tr>
    @endforeach
</table>

With this, we should have the following.

Displaying Dates Relatively

Displaying dates relatively is quite popular, since it is easier for humans to read a post as created 30 minutes ago as opposed to 2017-01-08 19:15:20.

Let’s play around with the Carbon class to see how we can display the dates relatively in different ways.

When comparing a value in the past to default now:

This comes in handy when you want to display a date in the past with reference to the current time. This would be something like:

  • A few seconds ago
  • 30 minutes ago
  • 2 days ago
  • 1 year ago

To achieve this, we simply use the diffForHumans method.

$user->created_at->diffForHumans()
// 1 hour ago

When comparing a value in the future to default now:

You’d probably want to use this in cases where you need to publish a post in the future or show an expiration date.

  • 1 hour from now
  • 5 months from now
$user->created_at->addDays(5)->diffForHumans() 
//5 days from now

When comparing a value in the past to another value:

  • 1 hour before
  • 5 months before
$yesterday->diffForHumans($today)
//1 day before

When comparing a value in the future to another value:

  • 1 hour after
  • 5 months after
$tomorrow->diffForHumans($today)
//1 day after

More About Diffs

While it may be nice to display to the user fully qualified diffs such as 2 hours ago, you may at times simply want to show the user the value without the text.

This may come in handy where you have many comments coming in and the text is just too repetitive. diffInSeconds() in particular can be used in cases where the difference in time between two entities, say, lap times, is of significance.

This can be achieved with the diffInYears(), diffInMonths(), diffInWeeks(), diffInDays(), diffInWeekdays(), diffInWeekendDays(), diffInHours(), diffInMinutes() and diffInSeconds() methods.

$user->created_at->diffInHours(); //2
$user->created_at->diffInMinutes(); //134
$user->created_at->diffInSeconds(); //8082

The Carbon class also comes with methods that return the number of seconds since midnight or until the end of the day (midnight), which can be used to create a countdown, say, for a product sale.

$now->secondsSinceMidnight() //77825
$now->secondsUntilEndOfDay() //8574
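As a rough sketch of such a countdown (the sale here is hypothetical, and the breakdown is plain PHP arithmetic rather than anything Carbon-specific):

// hypothetical flash sale that ends at midnight tonight
$secondsLeft = Carbon::now()->secondsUntilEndOfDay();

// break the remaining seconds down for display
$hours = floor($secondsLeft / 3600);
$minutes = floor(($secondsLeft % 3600) / 60);
$seconds = $secondsLeft % 60;

echo "Sale ends in {$hours}h {$minutes}m {$seconds}s";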

Conclusion

There’s a lot you can achieve with the Carbon class, and sometimes you will not discover a piece of functionality it provides until you need it.

Take a look at the documentation here. Happy Carbon dating!

Source:: scotch.io

Working with JSON in MySQL

By nomanurrehman

SQL databases tend to be rigid.

If you have worked with them, you would agree that database design, though it seems easy, is a lot trickier in practice. SQL databases believe in structure; that is why it’s called Structured Query Language.

On the other side of the horizon, we have NoSQL databases, also called schema-less databases, which encourage flexibility. In schema-less databases, there is no imposed structural restriction, only data to be saved.

Though every tool has its use case, sometimes things call for a hybrid approach.

What if you could structure some parts of your database and leave others to be flexible?

MySQL version 5.7.8 introduces a JSON data type that allows you to accomplish that.
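Since the feature is version-gated, it is worth confirming what your server is running before trying any of the examples below:

/* native JSON support requires 5.7.8 or later */
SELECT VERSION();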

In this tutorial, you are going to learn.

  1. How to design your database tables using JSON fields.
  2. The various JSON based functions available in MYSQL to create, read, update, and delete rows.
  3. How to work with JSON fields using the Eloquent ORM in Laravel.

Why Use JSON

At this moment, you are probably asking yourself why would you want to use JSON when MySQL has been catering to a wide variety of database needs even before it introduced a JSON data type.

The answer lies in the use-cases where you would probably use a make-shift approach.

Let me explain with an example.

Suppose you are building a web application where you have to save a user’s configuration/preferences in the database.

Generally, you can create a separate database table with the id, user_id, key, and value fields or save it as a formatted string that you can parse at runtime.

This works well, but only for a small number of users. If you have about a thousand users and five configuration keys, you are looking at a table with five thousand records that addresses a very small feature of your application.

And if you take the formatted string route, you end up with extraneous parsing code that only compounds your server load.

Using a JSON data type field to save a user’s configuration in such a scenario can spare you a database table and bring the number of records, which were being saved separately, down to the same as the number of users.

And you get the added benefit of not having to write any JSON parsing code; the ORM or the language runtime takes care of it.
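As a rough sketch of that idea (the table and preference keys below are hypothetical, not part of this tutorial’s schema), the whole configuration collapses into one JSON column with one row per user:

/* hypothetical users table with a JSON preferences column */
CREATE TABLE `users`(
    `id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
    `preferences` JSON NOT NULL ,
    PRIMARY KEY(`id`)
);

INSERT INTO `users`(`preferences`)
VALUES ('{"theme": "dark", "notifications": true, "language": "en"}');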

The Schema

Before we dive into using all the cool JSON stuff in MySQL, we are going to need a sample database to play with.

So, let’s get our database schema out of the way first.

We are going to consider the use case of an online store that houses multiple brands and a variety of electronics.

Since different electronics have different attributes (compare a MacBook with a vacuum cleaner) that buyers are interested in, the entity-attribute-value (EAV) pattern is typically used.

However, since we now have the option to use a JSON data type, we are going to drop EAV.

For a start, our database will be named e_store and will have just three tables: brands, categories, and products.

Our brands and categories tables will be pretty similar, each having an id and a name field.

CREATE DATABASE IF NOT EXISTS `e_store`
DEFAULT CHARACTER SET utf8
DEFAULT COLLATE utf8_general_ci;

SET default_storage_engine = INNODB;

CREATE TABLE `e_store`.`brands`(
    `id` INT UNSIGNED NOT NULL auto_increment ,
    `name` VARCHAR(250) NOT NULL ,
    PRIMARY KEY(`id`)
);

CREATE TABLE `e_store`.`categories`(
    `id` INT UNSIGNED NOT NULL auto_increment ,
    `name` VARCHAR(250) NOT NULL ,
    PRIMARY KEY(`id`)
);

The objective of these two tables will be to house the product categories and the brands that provide these products.

While we are at it, let us go ahead and seed some data into these tables to use later.

/* Brands */
INSERT INTO `e_store`.`brands`(`name`)
VALUES
    ('Samsung');

INSERT INTO `e_store`.`brands`(`name`)
VALUES
    ('Nokia');

INSERT INTO `e_store`.`brands`(`name`)
VALUES
    ('Canon');

/* Types of electronic device */
INSERT INTO `e_store`.`categories`(`name`)
VALUES
    ('Television');

INSERT INTO `e_store`.`categories`(`name`)
VALUES
    ('Mobilephone');

INSERT INTO `e_store`.`categories`(`name`)
VALUES
    ('Camera');

The brands table

The categories table

Next is the business area of this tutorial.

We are going to create a products table with the id, name, brand_id, category_id, and attributes fields.

CREATE TABLE `e_store`.`products`(
    `id` INT UNSIGNED NOT NULL AUTO_INCREMENT ,
    `name` VARCHAR(250) NOT NULL ,
    `brand_id` INT UNSIGNED NOT NULL ,
    `category_id` INT UNSIGNED NOT NULL ,
    `attributes` JSON NOT NULL ,
    PRIMARY KEY(`id`) ,
    INDEX `CATEGORY_ID`(`category_id` ASC) ,
    INDEX `BRAND_ID`(`brand_id` ASC) ,
    CONSTRAINT `brand_id` FOREIGN KEY(`brand_id`) REFERENCES `e_store`.`brands`(`id`) ON DELETE RESTRICT ON UPDATE CASCADE ,
    CONSTRAINT `category_id` FOREIGN KEY(`category_id`) REFERENCES `e_store`.`categories`(`id`) ON DELETE RESTRICT ON UPDATE CASCADE
);

Our table definition specifies foreign key constraints for the brand_id and category_id fields, specifying that they reference the brands and categories tables respectively. We have also specified that the referenced rows should not be allowed to be deleted, and that if they are updated, the changes should be reflected in the references as well.

The attributes field’s column type has been declared to be JSON which is the native data type now available in MySQL. This allows us to use the various JSON related constructs in MySQL on our attributes field.

Here is an entity relationship diagram of our created database.

The e_store database

Our database design is not the best in terms of efficiency and accuracy. There is no price column in the products table and we could do with putting a product into multiple categories. However, the purpose of this tutorial is not to teach database design but rather how to model objects of different nature in a single table using MySQL’s JSON features.

The CRUD Operations

Let us look at how to create, read, update, and delete data in a JSON field.

Create

Creating a record in the database with a JSON field is pretty simple.

All you need to do is add valid JSON as the field value in your insert statement.

/* Let's sell some televisions */
INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Prime' ,
    '1' ,
    '1' ,
    '{"screen": "50 inch", "resolution": "2048 x 1152 pixels", "ports": {"hdmi": 1, "usb": 3}, "speakers": {"left": "10 watt", "right": "10 watt"}}'
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Octoview' ,
    '1' ,
    '1' ,
    '{"screen": "40 inch", "resolution": "1920 x 1080 pixels", "ports": {"hdmi": 1, "usb": 2}, "speakers": {"left": "10 watt", "right": "10 watt"}}'
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Dreamer' ,
    '1' ,
    '1' ,
    '{"screen": "30 inch", "resolution": "1600 x 900 pixles", "ports": {"hdmi": 1, "usb": 1}, "speakers": {"left": "10 watt", "right": "10 watt"}}'
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Bravia' ,
    '1' ,
    '1' ,
    '{"screen": "25 inch", "resolution": "1366 x 768 pixels", "ports": {"hdmi": 1, "usb": 0}, "speakers": {"left": "5 watt", "right": "5 watt"}}'
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Proton' ,
    '1' ,
    '1' ,
    '{"screen": "20 inch", "resolution": "1280 x 720 pixels", "ports": {"hdmi": 0, "usb": 0}, "speakers": {"left": "5 watt", "right": "5 watt"}}'
);

The products table after adding televisions

Instead of laying out the JSON object yourself, you can also use the built-in JSON_OBJECT function.

The JSON_OBJECT function accepts a list of key/value pairs in the form JSON_OBJECT(key1, value1, key2, value2, ... key(n), value(n)) and returns a JSON object.

/* Let's sell some mobilephones */
INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Desire' ,
    '2' ,
    '2' ,
    JSON_OBJECT(
        "network" ,
        JSON_ARRAY("GSM" , "CDMA" , "HSPA" , "EVDO") ,
        "body" ,
        "5.11 x 2.59 x 0.46 inches" ,
        "weight" ,
        "143 grams" ,
        "sim" ,
        "Micro-SIM" ,
        "display" ,
        "4.5 inches" ,
        "resolution" ,
        "720 x 1280 pixels" ,
        "os" ,
        "Android Jellybean v4.3"
    )
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Passion' ,
    '2' ,
    '2' ,
    JSON_OBJECT(
        "network" ,
        JSON_ARRAY("GSM" , "CDMA" , "HSPA") ,
        "body" ,
        "6.11 x 3.59 x 0.46 inches" ,
        "weight" ,
        "145 grams" ,
        "sim" ,
        "Micro-SIM" ,
        "display" ,
        "4.5 inches" ,
        "resolution" ,
        "720 x 1280 pixels" ,
        "os" ,
        "Android Jellybean v4.3"
    )
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Emotion' ,
    '2' ,
    '2' ,
    JSON_OBJECT(
        "network" ,
        JSON_ARRAY("GSM" , "CDMA" , "EVDO") ,
        "body" ,
        "5.50 x 2.50 x 0.50 inches" ,
        "weight" ,
        "125 grams" ,
        "sim" ,
        "Micro-SIM" ,
        "display" ,
        "5.00 inches" ,
        "resolution" ,
        "720 x 1280 pixels" ,
        "os" ,
        "Android KitKat v4.3"
    )
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Sensation' ,
    '2' ,
    '2' ,
    JSON_OBJECT(
        "network" ,
        JSON_ARRAY("GSM" , "HSPA" , "EVDO") ,
        "body" ,
        "4.00 x 2.00 x 0.75 inches" ,
        "weight" ,
        "150 grams" ,
        "sim" ,
        "Micro-SIM" ,
        "display" ,
        "3.5 inches" ,
        "resolution" ,
        "720 x 1280 pixels" ,
        "os" ,
        "Android Lollypop v4.3"
    )
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Joy' ,
    '2' ,
    '2' ,
    JSON_OBJECT(
        "network" ,
        JSON_ARRAY("CDMA" , "HSPA" , "EVDO") ,
        "body" ,
        "7.00 x 3.50 x 0.25 inches" ,
        "weight" ,
        "250 grams" ,
        "sim" ,
        "Micro-SIM" ,
        "display" ,
        "6.5 inches" ,
        "resolution" ,
        "1920 x 1080 pixels" ,
        "os" ,
        "Android Marshmallow v4.3"
    )
);

The products table after adding mobilephones

Notice the JSON_ARRAY function which returns a JSON array when passed a set of values.

If you specify a single key multiple times, only the first key/value pair will be retained. This is called normalizing the JSON in MySQL’s terms. Also, as part of normalization, the object keys are sorted and the extra white-space between key/value pairs is removed.
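A quick way to see this normalization in action (on MySQL 5.7, where the first duplicate wins) is to cast a string with unsorted, duplicated keys to JSON:

/* output: {"a": 2, "b": 1} -- keys sorted, first "b" kept */
SELECT CAST('{"b": 1, "a": 2, "b": 3}' AS JSON);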

Another function that we can use to create JSON objects is the JSON_MERGE function.

The JSON_MERGE function takes multiple JSON objects and produces a single, aggregate object.

/* Let's sell some cameras */
INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Explorer' ,
    '3' ,
    '3' ,
    JSON_MERGE(
        '{"sensor_type": "CMOS"}' ,
        '{"processor": "Digic DV III"}' ,
        '{"scanning_system": "progressive"}' ,
        '{"mount_type": "PL"}' ,
        '{"monitor_type": "LCD"}'
    )
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Runner' ,
    '3' ,
    '3' ,
    JSON_MERGE(
        JSON_OBJECT("sensor_type" , "CMOS") ,
        JSON_OBJECT("processor" , "Digic DV II") ,
        JSON_OBJECT("scanning_system" , "progressive") ,
        JSON_OBJECT("mount_type" , "PL") ,
        JSON_OBJECT("monitor_type" , "LED")
    )
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Traveler' ,
    '3' ,
    '3' ,
    JSON_MERGE(
        JSON_OBJECT("sensor_type" , "CMOS") ,
        '{"processor": "Digic DV II"}' ,
        '{"scanning_system": "progressive"}' ,
        '{"mount_type": "PL"}' ,
        '{"monitor_type": "LCD"}'
    )
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Walker' ,
    '3' ,
    '3' ,
    JSON_MERGE(
        '{"sensor_type": "CMOS"}' ,
        '{"processor": "Digic DV I"}' ,
        '{"scanning_system": "progressive"}' ,
        '{"mount_type": "PL"}' ,
        '{"monitor_type": "LED"}'
    )
);

INSERT INTO `e_store`.`products`(
    `name` ,
    `brand_id` ,
    `category_id` ,
    `attributes`
)
VALUES(
    'Jumper' ,
    '3' ,
    '3' ,
    JSON_MERGE(
        '{"sensor_type": "CMOS"}' ,
        '{"processor": "Digic DV I"}' ,
        '{"scanning_system": "progressive"}' ,
        '{"mount_type": "PL"}' ,
        '{"monitor_type": "LCD"}'
    )
);

The products table after adding cameras

There is a lot happening in these insert statements and it can get a bit confusing. However, it is pretty simple.

We are only passing objects to the JSON_MERGE function. Some of them have been constructed using the JSON_OBJECT function we saw previously whereas others have been passed as valid JSON strings.

In the case of the JSON_MERGE function, if a key is repeated multiple times, its values are retained as an array in the output.

A proof of concept is in order I suppose.

/* output: {"network": ["GSM", "CDMA", "HSPA", "EVDO"]} */
SELECT JSON_MERGE(
    '{"network": "GSM"}' ,
    '{"network": "CDMA"}' ,
    '{"network": "HSPA"}' ,
    '{"network": "EVDO"}'
);

We can confirm all our queries were run successfully using the JSON_TYPE function which gives us the field value type.

/* output: OBJECT */
SELECT JSON_TYPE(attributes) FROM `e_store`.`products`;

All attributes are JSON objects

Read

Right, we have a few products in our database to work with.

For typical MySQL values that are not of type JSON, a where clause is pretty straightforward. Just specify the column, an operator, and the values you need to work with.

When working with JSON columns, however, this does not work.

/* It's not that simple */
SELECT
    *
FROM
    `e_store`.`products`
WHERE
    attributes = '{"ports": {"usb": 3, "hdmi": 1}, "screen": "50 inch", "speakers": {"left": "10 watt", "right": "10 watt"}, "resolution": "2048 x 1152 pixels"}';

When you wish to narrow down rows using a JSON field, you should be familiar with the concept of a path expression.

The simplest definition of a path expression (think jQuery selectors) is that it is used to specify which parts of the JSON document to work with.

The second piece of the puzzle is the JSON_EXTRACT function which accepts a path expression to navigate through JSON.

Let us say we are interested in the range of televisions that have at least one USB and one HDMI port.

SELECT
    *
FROM
    `e_store`.`products`
WHERE
    `category_id` = 1
AND JSON_EXTRACT(`attributes` , '$.ports.usb') > 0
AND JSON_EXTRACT(`attributes` , '$.ports.hdmi') > 0;

Selecting records by JSON attributes

The first argument to the JSON_EXTRACT function is the JSON to apply the path expression to, which in this case is the attributes column. The $ symbol represents the JSON document being worked on. The $.ports.usb and $.ports.hdmi path expressions translate to “take the usb key under ports” and “take the hdmi key under ports” respectively.

Once we have extracted the keys we are interested in, it is pretty simple to use the MySQL operators such as > on them.

Also, the JSON_EXTRACT function has the alias -> that you can use to make your queries more readable.

Revising our previous query.

SELECT
    *
FROM
    `e_store`.`products`
WHERE
    `category_id` = 1
AND `attributes` -> '$.ports.usb' > 0
AND `attributes` -> '$.ports.hdmi' > 0;

Update

In order to update JSON values, we are going to use the JSON_INSERT, JSON_REPLACE, and JSON_SET functions. These functions also require a path expression to specify which parts of the JSON object to modify.

The output of these functions is a valid JSON object with the changes applied.

Let us modify all mobilephones to have a chipset property as well.

UPDATE `e_store`.`products`
SET `attributes` = JSON_INSERT(
    `attributes` ,
    '$.chipset' ,
    'Qualcomm'
)
WHERE
    `category_id` = 2;

Updated mobilephones

The $.chipset path expression identifies the position of the chipset property to be at the root of the object.

Let us update the chipset property to be more descriptive using the JSON_REPLACE function.

UPDATE `e_store`.`products`
SET `attributes` = JSON_REPLACE(
    `attributes` ,
    '$.chipset' ,
    'Qualcomm Snapdragon'
)
WHERE
    `category_id` = 2;

Updated mobilephones

Easy peasy!

Lastly, we have the JSON_SET function which we will use to specify our televisions are pretty colorful.

UPDATE `e_store`.`products`
SET `attributes` = JSON_SET(
    `attributes` ,
    '$.body_color' ,
    'red'
)
WHERE
    `category_id` = 1;

Updated televisions

All of these functions seem identical but there is a difference in the way they behave.

The JSON_INSERT function will only add the property to the object if it does not already exist.

The JSON_REPLACE function substitutes the property only if it is found.

The JSON_SET function will add the property if it is not found, else replace it.
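A small side-by-side comparison makes the distinction concrete; in the object below, the $.a path already exists while $.b does not:

/* output: {"a": 1, "b": 2} -- existing "a" untouched, "b" added */
SELECT JSON_INSERT('{"a": 1}', '$.a', 10, '$.b', 2);

/* output: {"a": 10} -- existing "a" replaced, "b" ignored */
SELECT JSON_REPLACE('{"a": 1}', '$.a', 10, '$.b', 2);

/* output: {"a": 10, "b": 2} -- "a" replaced and "b" added */
SELECT JSON_SET('{"a": 1}', '$.a', 10, '$.b', 2);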

Delete

There are two parts to deleting that we will look at.

The first is to delete a certain key/value from your JSON columns whereas the second is to delete rows using a JSON column.

Let us say we are no longer providing the mount_type information for cameras and wish to remove it for all cameras.

We will do it using the JSON_REMOVE function which returns the updated JSON after removing the specified key based on the path expression.

UPDATE `e_store`.`products`
SET `attributes` = JSON_REMOVE(`attributes` , '$.mount_type')
WHERE
    `category_id` = 3;

Cameras after removing mount_type property

For the second case, let us say we no longer carry mobilephones that have the Jellybean version of the Android OS.

DELETE FROM `e_store`.`products`
WHERE `category_id` = 2
AND JSON_EXTRACT(`attributes` , '$.os') LIKE '%Jellybean%';

We do not sell Jellybeans anymore!

As stated previously, working with a specific attribute requires the use of the JSON_EXTRACT function, so in order to apply the LIKE operator, we have first extracted the os property of mobilephones (with the help of category_id) and deleted all records that contain the string Jellybean.

A Primer for Web Applications

The old days of directly working with a database are way behind us.

These days, frameworks insulate developers from lower-level operations and it almost feels alien for a framework fanatic not to be able to translate his/her database knowledge into an object relational mapper.

For the purpose of not leaving such developers heartbroken and wondering about their existence and purpose in the universe, we are going to look at how to go about the business of JSON columns in the Laravel framework.

We will only be focusing on the parts that overlap with our subject matter which deals with JSON columns. An in-depth tutorial on the Laravel framework is beyond the scope of this piece.

Creating the Migrations

Make sure to configure your Laravel application to use a MySQL database.

We are going to create three migrations for brands, categories, and products respectively.

$ php artisan make:migration create_brands
$ php artisan make:migration create_categories
$ php artisan make:migration create_products

The create_brands and create_categories migrations are pretty similar, and routine for Laravel developers.

/* database/migrations/create_brands.php */

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateBrands extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('brands', function(Blueprint $table){
            $table->engine = 'InnoDB';
            $table->increments('id');
            $table->string('name');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('brands');
    }
}

/* database/migrations/create_categories.php */

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateCategories extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('categories', function(Blueprint $table){
            $table->engine = 'InnoDB';
            $table->increments('id');
            $table->string('name');
            $table->timestamps();
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('categories');
    }
}

The create_products migration will also have the directives for indexes and foreign keys.

/* database/migrations/create_products.php */

<?php

use Illuminate\Support\Facades\Schema;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateProducts extends Migration
{
    /**
     * Run the migrations.
     *
     * @return void
     */
    public function up()
    {
        Schema::create('products', function(Blueprint $table){
            $table->engine = 'InnoDB';
            $table->increments('id');
            $table->string('name');
            $table->unsignedInteger('brand_id');
            $table->unsignedInteger('category_id');
            $table->json('attributes');
            $table->timestamps();
            // foreign key constraints
            $table->foreign('brand_id')->references('id')->on('brands')->onDelete('restrict')->onUpdate('cascade');
            $table->foreign('category_id')->references('id')->on('categories')->onDelete('restrict')->onUpdate('cascade');
            // indexes
            $table->index('brand_id');
            $table->index('category_id');
        });
    }

    /**
     * Reverse the migrations.
     *
     * @return void
     */
    public function down()
    {
        Schema::drop('products');
    }
}

Pay attention to the $table->json('attributes'); statement in the migration.

Just like creating any other table field using the appropriate data type named method, we have created a JSON column using the json method with the name attributes.

Also, this only works for database engines that support the JSON data type.

Engines such as older versions of MySQL will not be able to carry out these migrations.

Creating the Models

Other than associations, there is not much needed to set up our models so let’s run through them quickly.

/* app/Brand.php */

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Brand extends Model
{
    // A brand has many products
    public function products(){
        return $this->hasMany('App\Product');
    }
}

/* app/Category.php */

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Category extends Model
{
    // A category has many products
    public function products(){
        return $this->hasMany('App\Product');
    }
}

/* app/Product.php */

<?php

namespace App;

use Illuminate\Database\Eloquent\Model;

class Product extends Model
{
    // Cast attributes JSON to array
    protected $casts = [
        'attributes' => 'array'
    ];

    // Each product has a brand
    public function brand(){
        return $this->belongsTo('App\Brand');
    }

    // Each product has a category
    public function category(){
        return $this->belongsTo('App\Category');
    }
}

Again, our Product model needs a special mention.

The $casts array, which has the attributes key set to array, makes sure that whenever a product is fetched from the database, its attributes JSON is converted to an associative array.

We will see later in the tutorial how this makes it easier for us to update records from our controller actions.
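As a quick sketch (assuming the televisions seeded earlier in this tutorial), the cast lets us read the JSON attributes like any other PHP array:

// the 'array' cast decodes the JSON column automatically
$product = \App\Product::find(1);

echo $product->attributes['screen'];       // 50 inch
echo $product->attributes['ports']['usb']; // 3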

Resource Operations

Creating a Product

On the admin panel side of such a store, the parameters to create a product may be coming in through different routes, since we have a number of product categories. You may also have different views to create, edit, show, and delete a product.

For example, a form to add a camera requires different input fields than a form to add a mobilephone so they warrant separate views.

Moreover, once you have the user input data, you will most probably run it through a request validator, a separate one each for the camera and the mobilephone.

The final step would be to create the product through Eloquent.

We will be focusing on the camera resource for the rest of this tutorial. Other products can be handled with similar code.

Assuming we are saving a camera and the form fields are named as the respective camera attributes, here is the controller action.

// creates product in database
// using form fields
public function store(Request $request){
    // create object and set properties
    $camera = new \App\Product();
    $camera->name = $request->name;
    $camera->brand_id = $request->brand_id;
    $camera->category_id = $request->category_id;
    // assign an array directly; the 'array' cast on the model
    // takes care of the JSON encoding on save
    $camera->attributes = [
        'processor' => $request->processor,
        'sensor_type' => $request->sensor_type,
        'monitor_type' => $request->monitor_type,
        'scanning_system' => $request->scanning_system,
    ];
    // save to database
    $camera->save();
    // show the created camera
    return view('product.camera.show', ['camera' => $camera]);
}

Fetching Products

Recall the $casts array we declared earlier in the Product model. It will help us read and edit a product by treating attributes as an associative array.

// fetches a single product
// from database
public function show($id){
    $camera = \App\Product::find($id);
    return view('product.camera.show', ['camera' => $camera]);
}

Your view would use the $camera variable in the following manner.

<table>
    <tr>
        <td>Name</td>
        <td>{{ $camera->name }}</td>
    </tr>
    <tr>
        <td>Brand ID</td>
        <td>{{ $camera->brand_id }}</td>
    </tr>
    <tr>
        <td>Category ID</td>
        <td>{{ $camera->category_id }}</td>
    </tr>
    <tr>
        <td>Processor</td>
        <td>{{ $camera->attributes['processor'] }}</td>
    </tr>
    <tr>
        <td>Sensor Type</td>
        <td>{{ $camera->attributes['sensor_type'] }}</td>
    </tr>
    <tr>
        <td>Monitor Type</td>
        <td>{{ $camera->attributes['monitor_type'] }}</td>
    </tr>
    <tr>
        <td>Scanning System</td>
        <td>{{ $camera->attributes['scanning_system'] }}</td>
    </tr>
</table>

Editing a Product

As shown in the previous section, you can easily fetch a product and pass it to the view, which in this case would be the edit view.

You can use the product variable to pre-populate form fields on the edit page.

Updating the product based on the user input will be pretty similar to the store action we saw earlier, only that instead of creating a new product, you will fetch it first from the database before updating it.
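Here is a minimal sketch of what such an update action could look like; the route parameter and form field names are assumptions mirroring the store action above:

// updates an existing camera from form fields
public function update(Request $request, $id)
{
    // fetch the existing product first
    $camera = \App\Product::findOrFail($id);

    $camera->name = $request->name;

    // re-assign the attributes array; the 'array' cast
    // re-encodes it as JSON when saving
    $camera->attributes = [
        'processor' => $request->processor,
        'sensor_type' => $request->sensor_type,
        'monitor_type' => $request->monitor_type,
        'scanning_system' => $request->scanning_system,
    ];

    $camera->save();

    // show the updated camera
    return view('product.camera.show', ['camera' => $camera]);
}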

Searching Based on JSON Attributes

The last piece of the puzzle that remains to discuss is querying JSON columns using the Eloquent ORM.

If you have a search page that allows cameras to be searched based on their specifications provided by the user, you can do so with the following code.

// searches cameras by user provided specifications
public function search(Request $request){
    $cameras = \App\Product::where([
        ['attributes->processor', 'like', $request->processor],
        ['attributes->sensor_type', 'like', $request->sensor_type],
        ['attributes->monitor_type', 'like', $request->monitor_type],
        ['attributes->scanning_system', 'like', $request->scanning_system]
    ])->get();
    return view('product.camera.search', ['cameras' => $cameras]);
}

The retrieved records will now be available to the product.camera.search view as a $cameras collection.

Deleting a Product

Using a non-JSON column attribute, you can delete products by specifying a where clause and then calling the delete method.

For example, in the case of an ID.

\App\Product::where('id', $id)->delete();

For JSON columns, specify a where clause using a single or multiple attributes and then call the delete method.

// deletes all cameras with the sensor_type attribute as CMOS
\App\Product::where('attributes->sensor_type', 'CMOS')->delete();

Curtains

We have barely scratched the surface when it comes to using JSON columns in MySQL.

Whenever you need to save data as key/value pairs in a separate table or work with flexible attributes for an entity, you should consider using a JSON data type field instead, as it can go a long way toward simplifying your database design.

If you are interested in diving deeper, the MySQL documentation is a great resource to explore JSON concepts further.

I hope you found this tutorial interesting and informative. Until my next piece, happy coding!

Source:: scotch.io

Laravel Random Keys with Keygen

By gladchinda

When developing applications, it is usually common to see randomness come into play – and as a result, many programming languages have built-in random generation mechanisms. Some common applications include:

  • Generating a random numeric code for email confirmation or phone number verification service.
  • Password generation service that generates random alphanumeric password strings.
  • Generating random base64-encoded tokens or strings as API keys.
  • Generating random strings as password salts to hash user passwords.

When your application is required to generate very simple random character sequences like those enumerated above, then the Keygen package is a good option to go for.

Introducing the Keygen Package

Keygen is a PHP package for generating simple random character sequences of any desired length and it ships with four generators, namely: numeric, alphanumeric, token and bytes. It has a very simple interface and supports method chaining – making it possible to generate simple random keys with just one line of code. The Keygen package can save you some time trying to implement a custom random generation mechanism for your application. Here are some added benefits of the Keygen package:

  • Seamless key affixes: It’s very easy to add a prefix or suffix to the randomly generated string.
  • Key Transformations: You can process the randomly generated string through a queue of callables before it is finally output.
  • Key Mutations: You can control manipulations and mutations of multiple Keygen instances.

This tutorial provides a quick guide on how you can get started with the Keygen package and use it in your Laravel applications. For complete documentation and a usage guide for the Keygen package, see the README document on GitHub.

Getting Started

In this tutorial, we will be creating a simple REST API service. The API provides endpoints for creating a user record, showing a user record, and generating a random password.

This tutorial assumes you already have a Laravel application running and that the Composer tool is installed on your system and added to your system PATH. In this tutorial, I am using Laravel 5.3, which is the latest stable version at the time of writing. You can refer to the Laravel Installation guide if you don’t have Laravel installed.

Next, we will install the Keygen package as a dependency for our project using Composer. The Keygen package is available on the Packagist repository as gladcodes/keygen.

composer require gladcodes/keygen

If it installed correctly, you should see a screen like the following screenshot.

Keygen Installation Screenshot

Creating an alias for the Keygen package

The functionality of the Keygen package is encapsulated in the Keygen\Keygen class. For convenience, we will register an alias for this class so that we can easily use it anywhere in our application. To create the alias, we edit the config/app.php file and add a record for our alias after the last record in the aliases array, as shown in the following snippet.

// config/app.php

'aliases' => [
    // ... other alias records
    'Keygen' => Keygen\Keygen::class,
],

Now we can use the Keygen package anywhere in our application. Add the use Keygen directive in your code to use the Keygen package as shown in the following usage example code.

// usage example

<?php

use Keygen;

$id = Keygen::numeric(10)->generate();
echo $id; //2542831057

Creating the User Model

Next, we will create a database table called users to store our user records. The schema for the table is as follows:

  • id INT(11) NOT NULL PRIMARY
  • code CHAR(24) NOT NULL UNIQUE
  • firstname VARCHAR(32) NOT NULL
  • lastname VARCHAR(32) NOT NULL
  • email VARCHAR(80) NOT NULL UNIQUE
  • password_salt CHAR(64) NOT NULL
  • password_hash CHAR(60) NOT NULL

What about autoincrement?
For this tutorial, the id of our users table will be a unique randomly generated integer, just to demonstrate the Keygen package. This choice is based on preference, and does not in any way discourage the use of auto-incremented IDs.
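For reference, here is a minimal SQL sketch matching the schema above (you could equally express it as a Laravel migration):

CREATE TABLE `users` (
    `id` INT(11) NOT NULL,
    `code` CHAR(24) NOT NULL,
    `firstname` VARCHAR(32) NOT NULL,
    `lastname` VARCHAR(32) NOT NULL,
    `email` VARCHAR(80) NOT NULL,
    `password_salt` CHAR(64) NOT NULL,
    `password_hash` CHAR(60) NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `code` (`code`),
    UNIQUE KEY `email` (`email`)
);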

If created correctly, it should be as shown in the following screenshot.

Users table Screenshot

Before you proceed, check the config/database.php file and .env file of your application to ensure that you have the correct configuration for your database.

Next, we will create a model for the users table using Laravel’s artisan command-line interface. Laravel ships with a built-in User model, so we have to create our custom User model in a different location, the app/Models folder, as shown in the following command.

php artisan make:model Models/User

We would modify the created User class in the app/Models/User.php file as shown in the following code to configure our model as required.

// app/Models/User.php

<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;

class User extends Model
{
    protected $table = 'users';

    public $timestamps = false;

    public $incrementing = false;

    public function setEmailAttribute($email)
    {
        // Ensure valid email
        if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
            throw new Exception("Invalid email address.");
        }

        // Ensure email does not exist
        elseif (static::whereEmail($email)->count() > 0) {
            throw new Exception("Email already exists.");
        }

        $this->attributes['email'] = $email;
    }
}

In the preceding code, we set the timestamps property to false to disable Laravel’s timestamp features in our model. We also set the incrementing property to false to disable auto-incrementing of the primary key field.

Finally, we defined a mutator for the email attribute of our model, with checks to validate the email address and avoid duplicate email entries.

Defining Routes for the API

Next, we will define routes for the API endpoints. There are basically four endpoints:

  • GET /api/users
  • POST /api/users
  • GET /api/user/{id}
  • GET /api/password

// routes/web.php
// Add the following route definitions for API

Route::group(['prefix' => 'api'], function() {
    Route::get('/users', 'ApiController@showAllUsers');
    Route::post('/users', 'ApiController@createNewUser');
    Route::get('/user/{id}', 'ApiController@showOneUser');
    Route::get('/password', 'ApiController@showRandomPassword');
});

Next, we will create our ApiController using Laravel’s artisan command-line interface and then add the methods registered in the routes.

php artisan make:controller ApiController

The above command creates a new file app/Http/Controllers/ApiController.php that contains the ApiController class. We can go ahead to edit the class and add the methods registered in the routes.

// app/Http/Controllers/ApiController.php

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class ApiController extends Controller
{
    public function showAllUsers(Request $request)
    {}

    public function createNewUser(Request $request)
    {}

    public function showOneUser(Request $request, $id)
    {}

    public function showRandomPassword(Request $request)
    {}
}

Laravel ships with a middleware for CSRF verification on all web routes. We won’t require this for our API, so we will exclude our api routes from the CSRF verification service in the app/Http/Middleware/VerifyCsrfToken.php file.

// app/Http/Middleware/VerifyCsrfToken.php

<?php

namespace App\Http\Middleware;

use Illuminate\Foundation\Http\Middleware\VerifyCsrfToken as BaseVerifier;

class VerifyCsrfToken extends BaseVerifier
{
    /**
     * The URIs that should be excluded from CSRF verification.
     *
     * @var array
     */
    protected $except = [
        '/api/*',
    ];
}

Generate Unique ID for User

The Keygen package will be used to generate a unique 8-digit integer ID for the user. We will implement the unique ID generation mechanism in a new generateID() method. We will also add use directives for the Hash, Keygen and App\Models\User classes in our controller.

First, let’s add a new generateNumericKey() method for generating random numeric keys that are 8 digits long.

// app/Http/Controllers/ApiController.php

<?php

namespace App\Http\Controllers;

use Hash;
use Keygen;
use App\Models\User;
use Illuminate\Http\Request;

class ApiController extends Controller
{
    // ... other methods

    protected function generateNumericKey()
    {
        return Keygen::numeric(8)->generate(); 
    }
}

The Keygen package generates numeric keys by statically calling the numeric() method of the KeygenKeygen class. It takes an optional length argument which specifies the length of the numeric key and defaults to 16 if omitted or not a valid integer. In our case, the length of the generated numeric key is 8. The generate() method must be called to return the generated key.

Usually it is not desirable to have leading zeros in integers that will be stored in the database, especially IDs. The following snippet modifies the generation mechanism of the generateNumericKey() method by using the prefix() method provided by the Keygen package to add a non-zero integer at the beginning of the numeric key. This is known as an affix. The Keygen package also provides a suffix() method for adding characters at the end of generated keys.


// modified generateNumericKey() method
// Ensures non-zero integer at beginning of key

protected function generateNumericKey()
{
    // prefixes the key with a random integer between 1 - 9 (inclusive)
    return Keygen::numeric(7)->prefix(mt_rand(1, 9))->generate(true);
}

In the preceding code, observe how we called numeric() with a length of 7. This is because we are adding a random non-zero integer as a prefix, bringing the length of the final generated numeric key to 8, as required.

And now let’s implement the generateID() method to generate unique user IDs.


// generateID() method

protected function generateID()
{
    $id = $this->generateNumericKey();

    // Ensure ID does not exist
    // Generate new one if ID already exists
    while (User::whereId($id)->count() > 0) {
        $id = $this->generateNumericKey();
    }

    return $id;
}

Generate Code for User

Now we will generate a random code of the form XXXX-XXXX-XXXX-XXXX-XXXX for the user such that X is a hexadecimal character and always in uppercase. We will use a feature provided by the Keygen package called Key Transformation to transform randomly generated bytes to our desired code.

What is a Key Transformation?
A transformation is simply a callable that can take the generated key as the first argument and returns a string. Each transformation is added to a queue and executed on the generated key before the key is returned.
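For a minimal illustration of the idea (using the same generate() API shown in the snippet below), a transformation can be as small as the name of a built-in function:

<?php

use Keygen;

// 'strtoupper' is queued as a transformation and applied
// to the generated key before it is returned
$key = Keygen::alphanum(8)->generate('strtoupper');
echo $key; // e.g. 4B9QK2XZ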

Let’s create a new generateCode() method to handle the code generation logic.


protected function generateCode()
{
    return Keygen::bytes()->generate(
        function($key) {
            // Generate a random numeric key
            $random = Keygen::numeric()->generate();

            // Manipulate the random bytes with the numeric key
            return substr(md5($key . $random . strrev($key)), mt_rand(0,8), 20);
        },
        function($key) {
            // Add a (-) after every fourth character in the key
            return join('-', str_split($key, 4));
        },
        'strtoupper'
    );
}

Here we generated some random bytes by calling the bytes() method of the Keygen package and then added three transformations to the randomly generated bytes as follows:

  • The first is a custom function that manipulates the randomly generated bytes, computes an MD5-hash and returns a substring of the hash that is 20 characters long.
  • The second is a custom function that adds a hyphen (-) after every fourth character of the substring from the previous transformation.
  • The last is the built-in strtoupper PHP function that makes the resulting string uppercase.

Creating a New User

Let’s write the implementation of the createNewUser() method in our ApiController to create a record for a new user.


public function createNewUser(Request $request)
{
    $user = new User;

    // Generate unique ID
    $user->id = $this->generateID();

    // Generate code for user
    $user->code = $this->generateCode();

    // Collect data from request input
    $user->firstname = $request->input('firstname');
    $user->lastname = $request->input('lastname');
    $user->email = $request->input('email');

    $password = $request->input('password');

    // Generate random base64-encoded token for password salt
    $salt = Keygen::token(64)->generate();

    $user->password_salt = $salt;

    // Create a password hash with user password and salt
    $user->password_hash = Hash::make($password . $salt . str_rot13($password));

    // Save the user record in the database
    $user->save();

    return $user;
}

In the preceding snippet, we have used Keygen::token() to generate a random base64-encoded token for our password salt, 64 characters long. We also used Laravel’s built-in Hash facade to make a bcrypt password hash using the user password and the password salt.
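Although a login endpoint is not part of this tutorial, a sketch of the corresponding password check would simply mirror the same salting scheme (the variable names here are hypothetical):

// hypothetical login verification for the scheme above
$user = \App\Models\User::whereEmail($request->input('email'))->first();
$password = $request->input('password');

// rebuild the salted string and let Hash::check verify the bcrypt hash
$valid = $user && Hash::check(
    $password . $user->password_salt . str_rot13($password),
    $user->password_hash
);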

You can now create a user record through the route POST /api/users. I am using Postman to test the API endpoints. This is the JSON payload of my POST request:

{
    "firstname": "Jack",
    "lastname": "Bauer",
    "email": "jackbauer@movie24.net",
    "password": "f1gHtTerr0rIsts"
}

Here is the screenshot from Postman.

Creating a user

Implementing the remaining methods

Let’s write the implementation for the remaining methods in our controller.


// app/Http/Controllers/ApiController.php

public function showAllUsers(Request $request)
{
    // Return a collection of all user records
    return User::all();
}

public function showOneUser(Request $request, $id)
{
    // Return a single user record by ID
    return User::find($id);
}

public function showRandomPassword(Request $request)
{
    // Set length to 12 if not specified in request
    $length = (int) $request->input('length', 12);

    // Generate a random alphanumeric combination
    $password = Keygen::alphanum($length)->generate();

    return ['length' => $length, 'password' => $password];
}

In the showRandomPassword() method implementation, we are using Keygen::alphanum() to create a random combination of alphanumeric characters as the generated password. The length of the generated password is taken from the length query parameter of the request if provided; otherwise, it defaults to 12.

Testing the API

Let’s create another user record with the endpoint POST /api/users. I am using Postman to test the API endpoints. This is the JSON payload of my POST request:

{
    "firstname": "Glad",
    "lastname": "Chinda",
    "email": "gladxeqs@gmail.com",
    "password": "l0VeKOd1Ng"
}

Here is the screenshot from Postman.

Creating a new user

Now let’s get all the user records using the endpoint GET /api/users. Here is the screenshot from Postman.

Getting all users

Next, we would get the record for one user. I want to get the record for the user Glad Chinda, so I will use the endpoint GET /api/user/93411315. Here is the screenshot from Postman.

Getting one user

Finally, we would test the password generation endpoint to generate random passwords. First, we would call the endpoint without a length parameter to generate a password of length 12 i.e GET /api/password. Here is the screenshot from Postman.

Getting password default length

Next, we would call the endpoint with a length parameter, GET /api/password?length=8 to generate a password of length 8. Here is the screenshot from Postman.

Getting password of length 8 chars

Conclusion

In this article, we have explored the basic random key generation techniques of the Keygen package and wired them into our Laravel application. For a detailed usage guide and documentation of the Keygen package, see the Keygen repository on GitHub. For a code sample of this tutorial, check out the laravel-with-keygen-demo repository on GitHub.

Source:: scotch.io

Build a Music Player with Angular & Electron II : Making the UI

By chris92

In the previous post on building a music player with Angular and Electron, we were able to successfully set up an environment where our app can live. Angular was bootstrapped, Electron was loaded, and the app opens up displaying test content.

We also discussed the different types of components, namely presentation and container components. In this part of the series, we will build our presentation components, which include the following:

  1. Search
  2. Details
  3. Player
  4. Progress
  5. Footer

App Structure

Based on the Angular Style Guide, we will structure our app such that every presentation component lives in its own folder. This folder will also contain the component’s HTML and CSS files. Starting from the app directory, our app structure should look like the following:

|--app
|----music
|------music-details
|--------music-details.component.css
|--------music-details.component.html
|--------music-details.component.ts
|------music-footer
|--------music-footer.component.css
|--------music-footer.component.html
|--------music-footer.component.ts
|------music-player
|--------music-player.component.ts
|--------...
|------music-progress
|--------music-progress.component.ts
|--------...
|------music-search
|--------music-search.component.ts
|--------...
|------shared
|--------api.service.ts
|--------music.service.ts
|------music.module.ts
|----app.component.css
|----app.component.html
|----app.component.ts

We will not touch the shared folder and the app component in this part of the post. What we will do is build the UI components and assemble them for export using the MusicModule.

UI Wireframe

The music player’s design will take the same structure as that of the React article, and the diagram below shows a rough sketch of what we are up to:

Global Styles

Some styles, like the app background color, resets, and tweaks for the music search text box, need to go into the global stylesheet, which is located in the src folder:

/*
./src/styles.css
*/
*, *:before, *:after {
  box-sizing: border-box;
}

body, html {
  margin: 0;
  padding: 0;
}

body{
  background: #000;
}

.ui-autocomplete, .ui-inputtext {
  width: 100%;
  margin: 0;
}

.ui-inputtext {
  border-radius: 0;
  margin: 0;
  border: none;
  border-bottom: 2px solid rgb(21,96,150);
  outline: none;
  background: rgba(255, 255, 255, 0.8);
}

We need font-awesome fonts for our player controls. You can install the font via npm:

npm install --save font-awesome

…then add the font awesome url to the angular-cli.json styles array:

"styles": [
        "../node_modules/font-awesome/css/font-awesome.css",
        "styles.css"
      ],

Restart the build process by running npm start after adding the styles so they can be loaded.

Components

We listed the UI components we need to create above. Let’s do that right away, one after the other.

Below is the MusicModule, which imports all the members of the music section:

// ./src/app/music/music.module.ts

// Third party imports
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from "@angular/http";
import { CommonModule } from '@angular/common';
// PrimeNG autocomplete for search
import { AutoCompleteModule } from 'primeng/primeng';

// Custom imports
import { MusicSearchComponent } from './music-search/music-search.component';
import { MusicPlayerComponent } from './music-player/music-player.component';
import { MusicDetailsComponent } from './music-details/music-details.component';
import { MusicProgressComponent } from './music-progress/music-progress.component';
import { MusicFooterComponent } from './music-footer/music-footer.component';
import { MusicService } from './shared/music.service';
import { ApiService } from './shared/api.service';

@NgModule({
    imports: [
      // Define imports
      FormsModule,
      AutoCompleteModule,
      HttpModule,
      CommonModule
    ],
    exports: [
      // Expose components
      MusicSearchComponent,
      MusicDetailsComponent,
      MusicPlayerComponent,
      MusicProgressComponent,
      MusicFooterComponent
    ],
    declarations: [
      // Declare components
      MusicSearchComponent,
      MusicDetailsComponent,
      MusicPlayerComponent,
      MusicProgressComponent,
      MusicFooterComponent
    ],
    providers: [
      // Services
      ApiService,
      MusicService
    ],
})
export class MusicModule { }

Do not panic at the errors: we are yet to create these members, and we will start doing that right away.

1. Search Component

The search component is an autocomplete control, so rather than walk through the painful stress of building one ourselves, we can just make use of what PrimeNG offers. PrimeNG’s autocomplete is a drop-in component and very easy to set up.

First, we have to install primeng:

# Install PrimeNG
npm install primeng --save

When npm is done with the installation, import the AutoCompleteModule into the MusicModule. We already did that:

import { AutoCompleteModule } from 'primeng/primeng';
// ...

@NgModule({
    imports: [
      // ...
      AutoCompleteModule,
      ]
})

One more thing to get done with installing PrimeNG is to add its theme and global CSS. You can do this the same way we installed font-awesome:

"styles": [
"../node_modules/primeng/resources/themes/omega/theme.css",
        "../node_modules/primeng/resources/primeng.min.css",
        "../node_modules/font-awesome/css/font-awesome.css",
        "styles.css"
      ],

Next, we add the search template using p-autoComplete, which is the AutoComplete component’s selector:

SEARCH COMPONENT TEMPLATE

<!-- ./src/app/music/music-search/music-search.component.html -->
<p-autoComplete
  [ngModel]="track"
  [suggestions]="tracks"
  (completeMethod)="search($event)"
  (onSelect)="select($event)"
  field="title"
>    
  <template let-track>
    <div class="ui-helper-clearfix" style="border-bottom:1px solid #D5D5D5">
      <img src="{{track.artwork_url}}" class="artwork"/>
      <div class="text truncate">{{track.title}}</div>
    </div>
  </template>
</p-autoComplete>

The autocomplete uses the [ngModel] property to keep track of the value of the text box. [suggestions] is the list it should search in, which is tracks. completeMethod is the event raised on keystrokes, while onSelect is the event raised when an item in the autocomplete suggestions is clicked.

The values of these events and properties are passed down from the container component, and we will see how that is done when we discuss the container component.
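As a rough preview of that wiring (it is fully covered in the next post; the tracks property and the handler names here are hypothetical), the container's template might bind to this component like so:

<!-- Sketch: binding music-search from a container component's template -->
<music-search
  [tracks]="tracks"
  (query)="onQuery($event)"
  (update)="onTrackSelect($event)">
</music-search>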

template is used to provide a custom view for our tracks.

SEARCH COMPONENT CLASS
The following is the search component class:

// ./src/app/music/music-search/music-search.component.ts
import { Component, Output, EventEmitter, Input } from '@angular/core';

@Component({
  selector: 'music-search',
  templateUrl: './music-search.component.html',
  styleUrls: ['./music-search.component.css']
})
export class MusicSearchComponent {

  track: string;
  @Input() tracks: any[];

  @Output() update = new EventEmitter();
  @Output() query = new EventEmitter();

  search(event) {
    this.query.emit(event.query);
  }

  select(track) {
    this.update.emit(track);
  }
}

See how this component is completely unaware of how the tracks are created or even how the search and select events are handled. All it knows is that, someday, its parent will send tracks down to it using the Input decorator, and that when an event is raised within it, the event is delegated to the parent component via Output to handle.

We are beginning to see how UI/presentation components are dumb and unaware of a lot of things about our application.

SEARCH COMPONENT STYLES

/*
./src/app/music/music-search/music-search.component.css
*/
.truncate {
  width: 400px;
  white-space: nowrap;
  overflow: hidden;
  text-overflow: ellipsis;
}

.artwork{
  width:32px;
  display:inline-block;
  margin:5px 0 2px 5px;
}

.text {
  font-size:18px;
  float:right;
  margin:10px 10px 0 0;
}

2. Details Component

This component is a simple one because it has just one Input property, a title, which displays the song title:

DETAILS COMPONENT CLASS

// ./src/app/music/music-details/music-details.component.ts
import {Component, Input} from '@angular/core';

@Component({
  selector: 'music-details',
  templateUrl: './music-details.component.html',
  styleUrls: ['./music-details.component.css'],
})
export class MusicDetailsComponent {
  @Input() title: string;
}

The component expects just one value from its parent container for the title property.

DETAILS COMPONENT TEMPLATE

<!-- ./src/app/music/music-details/music-details.component.html -->
<div class="details">
  <h3>{{title}}</h3>
</div>

DETAILS COMPONENT STYLES

/*
./src/app/music/music-details/music-details.component.css
*/
.details h3{
  text-align: center;
  padding: 50px 10px;
  margin: 0;
  color: white;
}

3. Player Component

The player component has the most controls and the most events: the buttons that control sound are play, pause, stop, forward, backward and random. The component also receives a boolean Input property indicating whether a song is playing, so it can toggle between the play and pause buttons.

PLAYER COMPONENT CLASS

// ./src/app/music/music-player/music-player.component.ts
import { Component, Output, EventEmitter, Input } from '@angular/core';

@Component({
  selector: 'music-player',
  templateUrl: './music-player.component.html',
  styleUrls: ['./music-player.component.css'],
})
export class MusicPlayerComponent {
  // If song is paused or playing    
  @Input() paused;
  // Controls
  @Output() backward = new EventEmitter();
  @Output() pauseplay = new EventEmitter();
  @Output() forward = new EventEmitter();
  @Output() random = new EventEmitter();
  @Output() stop = new EventEmitter();
}

PLAYER COMPONENT TEMPLATE

<!-- ./src/app/music/music-player/music-player.component.html -->
<div class="player">
  <div class="player__backward">
    <button (click)="backward.emit()"><i class="fa fa-backward"></i></button>
  </div>

  <div class="player__main">
    <button *ngIf="paused" (click)="pauseplay.emit()"><i class="fa fa-pause"></i></button>
    <button *ngIf="!paused" (click)="pauseplay.emit()"><i class="fa fa-play"></i></button>
    <button (click)="stop.emit()"><i class="fa fa-stop"></i></button>
    <button (click)="random.emit()"><i class="fa fa-random"></i></button>
  </div>

  <div class="player__forward">
    <button (click)="forward.emit()"><i class="fa fa-forward"></i></button>
  </div>
</div>

The snippet above shows how the events on the component class are emitted when each of the buttons is clicked. The paused property is used to toggle between the pause and the play buttons when the song is playing and paused respectively.

PLAYER COMPONENT STYLES

/*
./src/app/music/music-player/music-player.component.css
*/
.player{
  text-align: center;
  margin-top: 60px;
}

.player div{
  display: inline-block;
  margin-left: 10px;
  margin-right: 10px;
}

.player .player__backward button, .player .player__forward button{
  background: transparent;
  border: 1px solid rgb(21,96,150);
  color: rgb(24,107,160);
  width: 75px;
  height: 75px;
  border-radius: 100%;
  font-size: 35px;
  outline: none;
}

.player .player__backward button{
  border-left: none;
}

.player .player__forward button{
  border-right: none;
}

.player .player__main button:hover, .player .player__backward button:hover, .player .player__forward button:hover{
  color: rgba(24,107,160,0.7);
  border: 1px solid rgba(21,96,150,0.7);
}

.player .player__main {
  border: 1px solid rgb(21,96,150);
}

.player .player__main button {
  color: rgb(21,96,150);
  background: transparent;
  width: 75px;
  height: 75px;
  border: none;
  font-size: 35px;
  outline: none;
}

4. Progress Component

The progress component is responsible for displaying how far into a song we have played, as well as the elapsed time and the total time it takes to play the song. This one has no events to emit, just 3 Input properties to keep track of time.

PROGRESS COMPONENT CLASS

// ./src/app/music/music-progress/music-progress.component.ts
import {Component, Input} from '@angular/core';

@Component({
  selector: 'music-progress',
  templateUrl: './music-progress.component.html',
  styleUrls: ['./music-progress.component.css'],
})
export class MusicProgressComponent {
  // Played
  @Input() elapsed: string;
  // Total time
  @Input() total: string;
  // Current time for the progress bar
  @Input() current: number;
}

PROGRESS COMPONENT TEMPLATE

<!--./src/app/music/music-progress/music-progress.component.html-->
<div class="progress">
  <span class="player__time-elapsed">{{elapsed}}</span>
  <progress
    value="{{current}}"
    max="1"></progress>
  <span class="player__time-total">{{total}}</span>
</div>

PROGRESS COMPONENT STYLE

/*
./src/app/music/music-progress/music-progress.component.css
*/
.progress{
  text-align: center;
  margin-top: 100px;
  color: white;
}

.progress progress[value] {
  /* Reset the default appearance */
  -webkit-appearance: none;
  appearance: none;

  width: 390px;
  height: 20px;
  margin-left: 4px;
  margin-right: 4px;
}

.progress progress[value]::-webkit-progress-bar {
  background-color: #eee;
  border-radius: 2px;
  box-shadow: 0 2px 5px rgba(0, 0, 0, 0.25) inset;
}

.progress progress[value]::-webkit-progress-value {
  background-color: rgb(24,107,160);
  border-radius: 2px;
  background-size: 35px 20px, 100% 100%, 100% 100%;
}

5. Footer Component

This one exists just for branding’s sake: all that matters is the HTML content, which displays a tiny text plus the Scotch and SoundCloud images.

FOOTER COMPONENT CLASS

// ./src/app/music/music-footer/music-footer.component.ts
import { Component } from '@angular/core';

@Component({
  selector: 'music-footer',
  templateUrl: './music-footer.component.html',
  styleUrls: ['./music-footer.component.css'],
})
export class MusicFooterComponent {}

FOOTER COMPONENT TEMPLATE

<!-- ./src/app/music/music-footer/music-footer.component.html -->
<div class="footer">
  <p>Love from <img src="/assets/img/logo.png" class="logo"/>
    & <img src="/assets/img/soundcloud.png" class="soundcloud"/></p>
</div>

FOOTER COMPONENT STYLES

/*
./src/app/music/music-footer/music-footer.component.css
*/
.footer{
  color: white;
  position: absolute;
  bottom: 0px;
  width: 100%;
  background: #030C12;
}

.footer p{
  text-align: center;
}

.footer .logo{
  height: 25px;
  width: auto;
}
.footer .soundcloud{
  height: 25px;
  width: auto;
}

Up Next

We are making great progress. We have our UI almost done and in the next (and last) post, we will tie everything together using the container component.

Source:: scotch.io

ES proposal: Shared memory and atomics

By Axel Rauschmayer

The ECMAScript proposal “Shared memory and atomics” by Lars T. Hansen has reached stage 4 this week and will be part of ECMAScript 2017. It introduces a new constructor SharedArrayBuffer and a namespace object Atomics with helper functions. This blog post explains the details.

Parallelism vs. concurrency

Before we begin, let’s clarify two terms that are similar, yet distinct: “parallelism” and “concurrency”. Many definitions for them exist; I’m using them as follows:

  • Parallelism (parallel vs. serial): execute multiple tasks simultaneously
  • Concurrency (concurrent vs. sequential): execute several tasks during overlapping periods of time (and not one after another).

Both are closely related, but not the same:

  • Parallelism without concurrency: single instruction, multiple data (SIMD). Multiple computations happen in parallel, but only a single task (instruction) is executed at any given moment.
  • Concurrency without parallelism: multitasking via time-sharing on a single-core CPU.

However, it is difficult to use these terms precisely, which is why interchanging them is usually not a problem.

Models of parallelism

Two models of parallelism are:

  • Data parallelism: The same piece of code is executed several times in parallel. The instances operate on different elements of the same dataset. For example: MapReduce is a data-parallel programming model.

  • Task parallelism: Different pieces of code are executed in parallel. Examples: web workers and the Unix model of spawning processes.

A history of JS parallelism

  • JavaScript started as being executed in a single thread. Some tasks could be performed asynchronously: browsers usually ran those tasks in separate threads and later fed their results back into the single thread, via callbacks.

  • Web workers brought task parallelism to JavaScript: They are relatively heavyweight processes. Each worker has its own global environment. By default, nothing is shared. Communication between workers (or between workers and the main thread) evolved:

    • At first, you could only send and receive strings.
    • Then, structured cloning was introduced: copies of data could be sent and received. Structured cloning works for most data (JSON data, Typed Arrays, regular expressions, Blob objects, ImageData objects, etc.). It can even handle cyclic references between objects correctly. However, error objects, function objects and DOM nodes cannot be cloned.
    • Transferables move data between workers: the sending party loses access as the receiving party gains access to data.
  • Computing on GPUs (which tend to do data parallelism well) via WebGL: It’s a bit of a hack and works as follows.

    • Input: your data, converted into an image (pixel by pixel).
    • Processing: OpenGL pixel shaders can perform arbitrary computations on GPUs. Your pixel shader transforms the input image.
    • Output: again an image that you can convert back to your kind of data.
  • SIMD (low-level data parallelism): is supported via the ECMAScript proposal SIMD.js. It allows you to perform operations (such as addition and square root) on several integers or floats at the same time.

  • PJS (codenamed River Trail): the plan of this ultimately abandoned project was to bring high-level data parallelism (think map-reduce via pure functions) to JavaScript. However, there was not enough interest from developers and engine implementers. Without implementations, one could not experiment with this API, because it can’t be polyfilled. On 2015-01-05, Lars T. Hansen announced that an experimental implementation was going to be removed from Firefox.

The next step: SharedArrayBuffer

What’s next? For low-level parallelism, the direction is quite clear: support SIMD and GPUs as well as possible. However, for high-level parallelism, things are much less clear, especially after the failure of PJS.

What is needed is a way to try out many approaches, to find out how to best bring high-level parallelism to JavaScript. Following the principles of the extensible web manifesto, the proposal “shared memory and atomics” (a.k.a. “Shared Array Buffers”) does so by providing low-level primitives that can be used to implement higher-level constructs.

Shared Array Buffers

Shared Array Buffers are a primitive building block for higher-level concurrency abstractions. They allow you to share the bytes of a SharedArrayBuffer object between multiple workers and the main thread (the buffer is shared, to access the bytes, wrap it in a Typed Array). This kind of sharing has two benefits:

  • You can share data between workers more quickly.
  • Coordination between workers becomes simpler and faster (compared to postMessage()).

Creating and sending a Shared Array Buffer

    // main.js
    
    const worker = new Worker('worker.js');
    
    // To be shared
    const sharedBuffer = new SharedArrayBuffer( // (A)
        10 * Int32Array.BYTES_PER_ELEMENT); // 10 elements
    
    // Share sharedBuffer with the worker
    worker.postMessage({sharedBuffer}); // clone
    
    // Local only
    const sharedArray = new Int32Array(sharedBuffer); // (B)

You create a Shared Array Buffer the same way you create a normal Array Buffer: by invoking the constructor and specifying the size of the buffer in bytes (line A). What you share with workers is the buffer. For your own, local use, you normally wrap Shared Array Buffers in Typed Arrays (line B).

Warning: Cloning a Shared Array Buffer is the correct way of sharing it, but some engines still implement an older version of the API and require you to transfer it:

    worker.postMessage({sharedBuffer}, [sharedBuffer]); // transfer (deprecated)

In the final version of the API, transferring a Shared Array Buffer means that you lose access to it.

Receiving a Shared Array Buffer

The implementation of the worker looks as follows.

    // worker.js
    
    self.addEventListener('message', function (event) {
        const {sharedBuffer} = event.data;
        const sharedArray = new Int32Array(sharedBuffer); // (A)
    
        // ···
    });

We first extract the Shared Array Buffer that was sent to us and then wrap it in a Typed Array (line A), so that we can use it locally.

Accessing Shared Array Buffers

If you access shared memory like you access normal memory, you are facing two problems.

Problem 1: you may read intermediate results

Consider the following code where main.js and worker.js were set up like shown previously.

    // Initialization before sharing the Array
    sharedArray[0] = 1;
    
    // main.js
    sharedArray[0] = 2;
    
    // worker.js
    while (sharedArray[0] === 1) ; // (A)
    console.log(sharedArray[0]); // (B)

Propagating shared state takes time, which is why the values we read in lines A and B may be neither 1 nor 2.

Problem 2: writes may be reordered

Let’s assume reading intermediate results is not an issue. Then you face the problem that readers (such as worker.js in the following example) cannot rely on writes (by main.js) happening in a deterministic order.

    // Initialization before sharing the Array
    sharedArray[0] = 1;
    
    // main.js
    sharedArray[0] = 2;
    sharedArray[0] = 3;
    
    // worker.js
    let v;
    while ((v = sharedArray[0]) === 1);
    console.log(v); // either 2 or 3

worker.js cannot rely on main.js writing 2 first and then 3. The reason for that is that within the same thread, compilers (and even CPUs) may reorder writes if no (local) read depends on them.

How do we fix these two problems?

Solution: Atomics

You can’t use normal operations to read and write data to Shared Array Buffers, you need to use the functions provided via the namespace object Atomics. For example, if we rewrite the first of the previous two code fragments, we get:

    // Initialization before sharing the Array
    Atomics.store(sharedArray, 0, 1);
    
    // main.js
    Atomics.store(sharedArray, 0, 2);
    
    // worker.js
    while (Atomics.load(sharedArray, 0) === 1) ;
    console.log(Atomics.load(sharedArray, 0)); // 2

Atomics ensures two things:

  • Its operations are like database transactions in that they happen atomically: all of the writing is done in a single step; you can’t observe intermediate states.

  • Additionally, the order of writes is fixed; they will never be reordered.

Shared Array Buffers and the run-to-completion semantics of JavaScript

JavaScript has so-called run-to-completion semantics: every function can rely on not being interrupted by another thread until it is finished. Functions become transactions and can perform complete algorithms without anyone seeing the data they operate on in an intermediate state.

Shared Array Buffers break run to completion (RTC): data a function is working on can be changed by another thread during the runtime of the function. However, code has complete control over whether or not this violation of RTC happens: if it doesn’t use Shared Array Buffers, it is safe.

This is loosely similar to how async functions violate RTC. There, you explicitly opt into being interrupted, via the keyword await.
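For example (sharedState and fetchSomething are hypothetical names used only for illustration):

    const sharedState = { counter: 0 };
    const fetchSomething = () => Promise.resolve();
    
    async function main() {
        const before = sharedState.counter;
        await fetchSomething(); // explicit interruption point:
        // other code may have run in the meantime and
        // changed sharedState.counter
        const after = sharedState.counter;
        return [before, after];
    }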

Shared Array Buffers and asm.js and WebAssembly

Shared Array Buffers enable emscripten to compile pthreads to asm.js. Quoting an emscripten documentation page:

[Shared Array Buffers allow] Emscripten applications to share the main memory heap between web workers. This along with primitives for low level atomics and futex support enables Emscripten to implement support for the Pthreads (POSIX threads) API.

That is, you can compile multithreaded C and C++ code to asm.js.

Discussion on how to best bring multi-threading to WebAssembly is ongoing. Given that web workers are relatively heavyweight, it is possible that WebAssembly will introduce lightweight threads. You can also see that threads are on the roadmap for WebAssembly’s future.

Sharing data other than integers

At the moment, only Arrays of integers (up to 32 bits long) can be shared. That means that the only way of sharing other kinds of data is by encoding them as integers. Tools that may help include:

  • TextEncoder and TextDecoder: The former converts strings to instances of Uint8Array. The latter does the opposite. (A short sketch follows this list.)
  • stringview.js: a library that handles strings as arrays of characters. Uses Array Buffers.

  • FlatJS: enhances JavaScript with ways of storing complex data structures (structs, classes and arrays) in flat memory (ArrayBuffer and SharedArrayBuffer). JavaScript+FlatJS is compiled to plain JavaScript. JavaScript dialects (TypeScript etc.) are supported.

  • TurboScript: is a JavaScript dialect for fast parallel programming. It compiles to asm.js and WebAssembly.
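Here is the sketch for the TextEncoder/TextDecoder item (our own illustrative code, not from the proposal): a string is encoded into bytes, copied into shared memory, and decoded again from a non-shared copy, since some TextDecoder implementations reject views on shared memory:

    // Sender: encode a string and copy its bytes into shared memory
    const sab = new SharedArrayBuffer(100);
    const bytes = new TextEncoder().encode('hello');
    new Uint8Array(sab).set(bytes);
    // (In a real setup, the byte length itself would also need to be shared.)
    
    // Receiver: copy the bytes out of shared memory, then decode the copy
    const copy = new Uint8Array(bytes.length);
    copy.set(new Uint8Array(sab, 0, bytes.length));
    const text = new TextDecoder().decode(copy); // 'hello'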

Eventually, there will probably be additional – higher-level – mechanisms for sharing data. And experiments will continue to figure out what these mechanisms should look like.

How much faster is code that uses Shared Array Buffers?

Lars T. Hansen has written two implementations of the Mandelbrot algorithm (as documented in his article “A Taste of JavaScript’s New Parallel Primitives”, where you can try them out online): a serial version and a parallel version that uses multiple web workers. For up to 4 web workers (and therefore processor cores), the speed-up improves almost linearly, from 6.9 frames per second (1 web worker) to 25.4 frames per second (4 web workers). More web workers bring additional performance improvements, but more modest ones.

Hansen notes that the speed-ups are impressive, but going parallel comes at the cost of the code being more complex.

Example

Let’s look at a more comprehensive example. Its code is available on GitHub, in the repository shared-array-buffer-demo. And you can run it online.

Using a shared lock

In the main thread, we set up shared memory so that it encodes a closed lock and send it to a worker (line A). Once the user clicks, we open the lock (line B).

    // main.js
    
    // Set up the shared memory
    const sharedBuffer = new SharedArrayBuffer(
        1 * Int32Array.BYTES_PER_ELEMENT);
    const sharedArray = new Int32Array(sharedBuffer);
    
    // Set up the lock
    Lock.initialize(sharedArray, 0);
    const lock = new Lock(sharedArray, 0);
    lock.lock(); // writes to sharedBuffer
    
    worker.postMessage({sharedBuffer}); // (A)
    
    document.getElementById('unlock').addEventListener(
        'click', event => {
            event.preventDefault();
            lock.unlock(); // (B)
        });

In the worker, we set up a local version of the lock (whose state is shared with the main thread via a Shared Array Buffer). In line B, we wait until the lock is unlocked. In lines A and C, we send text to the main thread, which displays it on the page for us (how it does that is not shown in the previous code fragment). That is, we are using self.postMessage() much like console.log() in these two lines.

    // worker.js
    
    self.addEventListener('message', function (event) {
        const {sharedBuffer} = event.data;
        const lock = new Lock(new Int32Array(sharedBuffer), 0);
    
        self.postMessage('Waiting for lock...'); // (A)
        lock.lock(); // (B) blocks!
        self.postMessage('Unlocked'); // (C)
    });

It is noteworthy that waiting for the lock in line B stops the complete worker. That is real blocking, which hasn’t existed in JavaScript until now (await in async functions is an approximation).

Implementing a shared lock

Next, we’ll look at an ES6-ified version of a Lock implementation by Lars T. Hansen that is based on SharedArrayBuffer.

In this section, we’ll need (among others) the following Atomics function:

  • Atomics.compareExchange(ta : TypedArray, index, expectedValue, replacementValue) : T
    If the current element of ta at index is expectedValue, replace it with replacementValue. Return the previous (or unchanged) element at index.

The implementation starts with a few constants and the constructor:

    const UNLOCKED = 0;
    const LOCKED_NO_WAITERS = 1;
    const LOCKED_POSSIBLE_WAITERS = 2;
    
    // Number of shared Int32 locations needed by the lock.
    const NUMINTS = 1;
    
    class Lock {
    
        /**
         * @param iab an Int32Array wrapping a SharedArrayBuffer
         * @param ibase an index inside iab, leaving enough room for NUMINTS
         */
        constructor(iab, ibase) {
            // OMITTED: check parameters
            this.iab = iab;
            this.ibase = ibase;
        }

The constructor mainly stores its parameters in instance properties.

The method for locking looks as follows.

    /**
     * Acquire the lock, or block until we can. Locking is not recursive:
     * you must not hold the lock when calling this.
     */
    lock() {
        const iab = this.iab;
        const stateIdx = this.ibase;
        var c;
        if ((c = Atomics.compareExchange(iab, stateIdx, // (A)
        UNLOCKED, LOCKED_NO_WAITERS)) !== UNLOCKED) {
            do {
                if (c === LOCKED_POSSIBLE_WAITERS // (B)
                || Atomics.compareExchange(iab, stateIdx,
                LOCKED_NO_WAITERS, LOCKED_POSSIBLE_WAITERS) !== UNLOCKED) {
                    Atomics.wait(iab, stateIdx, // (C)
                        LOCKED_POSSIBLE_WAITERS, Number.POSITIVE_INFINITY);
                }
            } while ((c = Atomics.compareExchange(iab, stateIdx,
            UNLOCKED, LOCKED_POSSIBLE_WAITERS)) !== UNLOCKED);
        }
    }

In line A, we change the lock to LOCKED_NO_WAITERS if its current value is UNLOCKED. We only enter the then-block if the lock is already locked (in which case compareExchange() did not change anything).

In line B (inside a do-while loop), we check if the lock is locked with waiters or not unlocked. Given that we are about to wait, the compareExchange() also switches to LOCKED_POSSIBLE_WAITERS if the current value is LOCKED_NO_WAITERS.

In line C, we wait if the lock value is LOCKED_POSSIBLE_WAITERS. The last parameter, Number.POSITIVE_INFINITY, means that waiting never times out.

After waking up, we continue the loop if we are not unlocked. compareExchange() also switches to LOCKED_POSSIBLE_WAITERS if the lock is UNLOCKED. We use LOCKED_POSSIBLE_WAITERS and not LOCKED_NO_WAITERS, because we need to restore this value after unlock() temporarily set it to UNLOCKED and woke us up.

The method for unlocking looks as follows.

    
        /**
         * Unlock a lock that is held.  Anyone can unlock a lock that
         * is held; nobody can unlock a lock that is not held.
         */
        unlock() {
            const iab = this.iab;
            const stateIdx = this.ibase;
            var v0 = Atomics.sub(iab, stateIdx, 1); // A
    
            // Wake up a waiter if there are any
            if (v0 !== LOCKED_NO_WAITERS) {
                Atomics.store(iab, stateIdx, UNLOCKED);
                Atomics.wake(iab, stateIdx, 1);
            }
        }
    
        // ···
    }

In line A, v0 gets the value that iab[stateIdx] had before 1 was subtracted from it. The subtraction means that we go (e.g.) from LOCKED_NO_WAITERS to UNLOCKED and from LOCKED_POSSIBLE_WAITERS to LOCKED_NO_WAITERS.

If the value was previously LOCKED_NO_WAITERS then it is now UNLOCKED and everything is fine (there is no one to wake up).

Otherwise, the value was either LOCKED_POSSIBLE_WAITERS or UNLOCKED. In the former case, we are now unlocked and must wake up someone (who will usually lock again). In the latter case, we must fix the illegal value created by subtraction and the wake() simply does nothing.

Conclusion for the example

This gives you a rough idea how locks based on SharedArrayBuffer work. Keep in mind that multithreaded code is notoriously difficult to write, because things can change at any time. Case in point: lock.js is based on a paper documenting a futex implementation for the Linux kernel. And the title of that paper is “Futexes are tricky” (PDF).

If you want to go deeper into parallel programming with Shared Array Buffers, take a look at synchronic.js and the document it is based on (PDF).

The API for shared memory and atomics

SharedArrayBuffer

Constructor:

  • new SharedArrayBuffer(length)
    Create a buffer for length bytes.

Static property:

  • get SharedArrayBuffer[Symbol.species]
    Returns this by default. Override to control what slice() returns.

Instance properties:

  • get SharedArrayBuffer.prototype.byteLength()
    Returns the length of the buffer in bytes.

  • SharedArrayBuffer.prototype.slice(start, end)
    Create a new instance of this.constructor[Symbol.species] and fill it with the bytes at the indices from (including) start to (excluding) end.

Atomics

The main operand of Atomics functions must be an instance of Int8Array, Uint8Array, Int16Array, Uint16Array, Int32Array or Uint32Array. It must wrap a SharedArrayBuffer.

All functions perform their operations atomically. The ordering of store operations is fixed and can’t be reordered by compilers or CPUs.

Loading and storing
  • Atomics.load(ta : TypedArray, index) : T
    Read and return the element of ta at index.

  • Atomics.store(ta : TypedArray, index, value : T) : T
    Write value to ta at index and return value.

  • Atomics.exchange(ta : TypedArray, index, value : T) : T
    Set the element of ta at index to value and return the previous value at that index.

  • Atomics.compareExchange(ta : TypedArray, index, expectedValue, replacementValue) : T
    If the current element of ta at index is expectedValue, replace it with replacementValue. Return the previous (or unchanged) element at index.
Simple modification of Typed Array elements

Each of the following functions changes a Typed Array element at a given index: It applies an operator to the element and a parameter and writes the result back to the element. It returns the original value of the element.

  • Atomics.add(ta : TypedArray, index, value) : T
    Perform ta[index] += value and return the original value of ta[index].

  • Atomics.sub(ta : TypedArray, index, value) : T
    Perform ta[index] -= value and return the original value of ta[index].

  • Atomics.and(ta : TypedArray, index, value) : T
    Perform ta[index] &= value and return the original value of ta[index].

  • Atomics.or(ta : TypedArray, index, value) : T
    Perform ta[index] |= value and return the original value of ta[index].

  • Atomics.xor(ta : TypedArray, index, value) : T
    Perform ta[index] ^= value and return the original value of ta[index].
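For example, assuming sharedArray is an Int32Array wrapping a SharedArrayBuffer, as set up earlier:

    Atomics.store(sharedArray, 0, 10); // returns 10; sharedArray[0] is now 10
    Atomics.add(sharedArray, 0, 5);    // returns 10 (the original value); sharedArray[0] is now 15
    Atomics.sub(sharedArray, 0, 3);    // returns 15; sharedArray[0] is now 12
    Atomics.load(sharedArray, 0);      // returns 12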

Waiting and waking

Waiting and waking requires the parameter ta to be an instance of Int32Array.

  • Atomics.wait(ta: Int32Array, index, value, timeout=Number.POSITIVE_INFINITY) : ('not-equal' | 'ok' | 'timed-out')
    If the current value at ta[index] is not value, return 'not-equal'. Otherwise go to sleep until we are woken up via Atomics.wake() or until sleeping times out. In the former case, return 'ok'. In the latter case, return 'timed-out'. timeout is specified in milliseconds. Mnemonic for what this function does: “wait if ta[index] is value”.

  • Atomics.wake(ta : Int32Array, index, count)
    Wake up count number of workers who are waiting at index of ta.

Miscellaneous
  • Atomics.isLockFree(size)
    This function lets you ask the JavaScript engine if operands with the given size (in bytes) can be manipulated without locking. That can inform algorithms whether they want to rely on built-in primitives (compareExchange() etc.) or use their own locking. Atomics.isLockFree(4) always returns true, because that’s what all currently relevant platforms support.

FAQ

What browsers support Shared Array Buffers?

At the moment, I’m aware of:

  • Firefox (50.1.0+): go to about:config and set javascript.options.shared_memory to true
  • Safari Technology Preview (Release 21+): enabled by default.
  • Chrome Canary (58.0+): There are two ways to switch it on.
    • Via chrome://flags/ (“Experimental enabled SharedArrayBuffer support in JavaScript”)
    • --js-flags=--harmony-sharedarraybuffer --enable-blink-feature=SharedArrayBuffer

Further reading

Background on parallelism:

  • “Concurrency is not parallelism” by Rob Pike [Pike uses the terms “concurrency” and “parallelism” slightly differently than I do in this blog post, providing an interesting complementary view]

Source:: 2ality

How I wrote the world's fastest JavaScript memoization library

By Caio Gondim


In this article, I’ll show you how I wrote the world’s fastest JavaScript memoization library called fast-memoize.js – which is able to do 50 million operations / second.

We’re going to discuss all the steps and decisions I took in a detailed way, and I’ll also show you the code and benchmarks as proof.

As fast-memoize.js is an open source project, I’ll be delighted to read your comments and suggestions for this library!


A while ago I was playing around with some soon to be released features in V8 using the Fibonacci algorithm as a basis for a benchmark.

One of the benchmarks compared a memoized version of the Fibonacci algorithm against a vanilla implementation, and the results showed a huge gap in performance between them.

After realizing this, I started poking around with different memoization libraries and benchmarking them (because… why not?). I was quite surprised to see a huge performance gap between them, since the memoization algorithm is quite straightforward.

But why?


While taking a look at the lodash and underscore source code, I also realized that, by default, they could only memoize functions that accept one argument (arity one). I was curious once again, wondering if I could make a fast enough memoization library that would accept N arguments.

(And, maybe, creating one more npm package in the world?)

Below I explain all the steps and decisions I took while creating the fast-memoize.js library.

Understanding the problem

From the Haskell language wiki:

“Memoization is a technique for storing values of a function instead of recomputing them each time.”

In other words, memoization is a cache for functions. It only works for deterministic algorithms though, i.e. those that will always generate the same output for a given input.

Let’s break the problem into smaller pieces for better understanding and testability.

Breaking down the JavaScript memoization problem

I broke the memoization algorithm into 3 different pieces:

  1. cache: stores the previously computed values.
  2. serializer: takes the arguments as inputs and generates a string as an output that represents the given input. Think of it as a fingerprint for the arguments.
  3. strategy: glues together cache and serializer, and outputs the memoized function.

Now the idea is to implement each piece in different ways, benchmark each one, and make the final algorithm a combination of the fastest cache, serializer, and strategy.

The goal here is to let the computer do the heavy lifting for us!

#1 – Cache

As I just mentioned, the cache stores previously computed values.

Interface

To abstract implementation details, a similar interface to Map was created:

  • has(key)
  • get(key)
  • set(key, value)
  • delete(key)

This way we can replace the inner cache implementation without breaking anything for consumers, as long as we implement the same interface.
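As a sketch (illustrative code, not necessarily the library's exact implementation), a plain-object cache honoring this interface could look like this:

function createObjectCache () {
  // Object without prototype, so keys never clash with inherited properties
  const store = Object.create(null)
  return {
    has (key) { return key in store },
    get (key) { return store[key] },
    set (key, value) { store[key] = value },
    delete (key) { delete store[key] }
  }
}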

Implementations

One thing that needs to be done every time a memoized function is executed is to check if the output for the given input was already computed.

A good data structure for that is a hash table, which has O(1) time complexity in Big-O notation for checking the presence of a value. Under the hood, a JavaScript object is a hash table (or something similar), so we can leverage this by using the input as the key for the hash table and the function output as the value.

// Keys represent the input of fibonacci function
// Values represent the output
const cache = {  
  5: 5,
  6: 8,
  7: 13
}

I used those different algorithms as a cache:

  1. Vanilla object
  2. Object without prototype (to avoid prototype lookup)
  3. lru-cache package
  4. Map

Below you can see a benchmark of all cache implementations. To run locally, do npm run benchmark:cache. The source for all different implementations can be found on the project’s GitHub page.

[Chart: benchmark of the cache implementations]

The need for a serializer

There is a problem when a non-primitive argument is passed, since its default string representation is not unique.

function foo(arg) { return String(arg) }

foo({a: 1}) // => '[object Object]'  
foo({b: 'lorem'}) // => '[object Object]'  

That is why we need a serializer: to create a fingerprint of the arguments that will serve as a key for the cache. It needs to be as fast as possible as well.

#2 – Serializer

The serializer outputs a string based on the given inputs. It has to be a deterministic algorithm, meaning that it will always produce the same output for the same input.

The serializer is used to create a string that will serve as a key for the cache and represent the inputs for the memoized functions.
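In its simplest form, the serializer can delegate to JSON.stringify, which produces a unique fingerprint where String() did not (a sketch, not the library's exact code):

const serialize = (args) => JSON.stringify(args)

serialize([{a: 1}])       // => '[{"a":1}]'
serialize([{b: 'lorem'}]) // => '[{"b":"lorem"}]'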

Unfortunately, I could not find any library that came close, performance wise, to JSON.stringify — which makes sense, since it’s implemented in native code.

I tried using JSON.stringify and a bound JSON.stringify (hoping there would be one less lookup to be made), but there were no gains here.

To run locally, do npm run benchmark:serializer. The code for both implementations can be found on the project’s GitHub page.

[Chart: benchmark of the serializer implementations]

There is one piece left: the strategy.

#3 – Strategy

The strategy is the consumer of both the serializer and the cache; it orchestrates all the pieces. For the fast-memoize.js library, I spent most of the time here. Although it is a very simple algorithm, some gains were made in each iteration.

Those were the iterations I did in chronological order:

  1. Naive (first try)
  2. Optimize for single argument
  3. Infer arity
  4. Partial application

Let’s explore them one by one. I will try to explain the idea behind each approach, with as little code as possible. If my explanation is not enough and you want to dive deeper, the code for each iteration can be found in the project’s GitHub page.

To run locally, do npm run benchmark:strategy.

Naive

This was the first iteration and the simplest one. The steps:

  1. Serialize arguments
  2. Check if output for given input was already computed
  3. If true, get result from cache
  4. If false, compute and store value on cache

[Chart: benchmark of the naive strategy]

With that first try, we could generate around 650,000 operations per second. That will serve as a basis for next iterations.
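A minimal sketch of this naive strategy (again illustrative, not the library's exact code):

function memoizeNaive (fn) {
  const cache = {}
  const serialize = (args) => JSON.stringify(args)
  return function () {
    // 1. Serialize the arguments into a cache key
    const key = serialize(Array.prototype.slice.call(arguments))
    // 2-4. On a miss, compute and store; then serve from the cache
    if (!(key in cache)) {
      cache[key] = fn.apply(this, arguments)
    }
    return cache[key]
  }
}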

Optimize for single argument

One simple and effective technique for improving performance is to optimize the hot path. Our hot path here is a function that accepts only one argument (arity one) with a primitive value, because then we don’t need to run the serializer.

  1. Check if arguments.length === 1 and argument is a primitive value
  2. If true, no need to run serializer, as a primitive value already works as a key for the cache
  3. Check if output for given input was already computed
  4. If true, get result from cache
  5. If false, compute and store value on cache

[Chart: benchmark of the single-argument optimization]

By removing the unnecessary call to the serializer, we can go much faster (on the hot path). Now running at 5.5 million operations per second.
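A sketch of that hot-path check (illustrative):

function memoizeSingleArg (fn) {
  const cache = {}
  return function () {
    const arg = arguments[0]
    // Hot path: a single primitive argument works as the cache key directly
    const key = (arguments.length === 1 &&
      (typeof arg === 'number' || typeof arg === 'string'))
      ? arg
      : JSON.stringify(Array.prototype.slice.call(arguments))
    if (!(key in cache)) {
      cache[key] = fn.apply(this, arguments)
    }
    return cache[key]
  }
}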

Infer arity

function.length returns the number of expected arguments of a defined function. We can leverage this to remove the dynamic check for arguments.length === 1 and provide different strategies for monadic functions (functions that receive one argument) and non-monadic functions.

function foo(a, b) {
  return a + b
}
foo.length // => 2

[Chart: benchmark of the infer-arity strategy]

An expected small gain, since we are only removing one check in the if condition. We are now running at 6 million operations per second.
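A sketch of hoisting that decision to definition time, so the returned function never re-checks the arity (illustrative):

function memoizeInferArity (fn) {
  const cache = {}
  const monadic = function (arg) {
    const key = (typeof arg === 'number' || typeof arg === 'string')
      ? arg
      : JSON.stringify(arg)
    if (!(key in cache)) cache[key] = fn.call(this, arg)
    return cache[key]
  }
  const variadic = function () {
    const key = JSON.stringify(Array.prototype.slice.call(arguments))
    if (!(key in cache)) cache[key] = fn.apply(this, arguments)
    return cache[key]
  }
  // Pick the wrapper once, based on the declared number of arguments
  return fn.length === 1 ? monadic : variadic
}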

Partial application

It seemed to me that most of the time was being wasted on variable lookup (I have no data to prove this), and I had no more ideas on how to improve it. Then I suddenly remembered that it is possible to inject variables into a function through partial application with the bind method.

function sum(a, b) {  
  return a + b
}
const sumBy2 = sum.bind(null, 2)  
sumBy2(3) // => 5  

The idea here is to create a function with some arguments fixed. I then fixed the original function, cache, and serializer through this method. Let’s give it a try!

[Chart: benchmark of the partial application strategy]

Wow. That’s a big win. I’m out of ideas again, but this time satisfied with the result. We are now running at 20 million operations per second.
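A sketch of the idea: instead of closing over the function, cache, and serializer, we fix them as leading parameters via bind, so the memoized function receives them directly as arguments (illustrative code):

function variadicStrategy (fn, cache, serialize) {
  const args = Array.prototype.slice.call(arguments, 3)
  const key = serialize(args)
  if (!cache.has(key)) cache.set(key, fn.apply(null, args))
  return cache.get(key)
}

function memoizePartial (fn) {
  // Fix fn, cache and serialize through partial application
  return variadicStrategy.bind(null, fn, new Map(), (args) => JSON.stringify(args))
}

// Usage:
// const sum = memoizePartial((a, b) => a + b)
// sum(2, 3) // => 5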

The Fastest JavaScript Memoization Combination

We broke down the memoization problem into 3 parts.

For each part, we kept the other two parts fixed and ran a benchmark alternating only one. By alternating only one variable, we can be more confident that the result was an effect of this change, since no JS code is deterministic performance-wise, due to unpredictable Stop-The-World pauses in the VM.

V8 does a lot of optimizations at runtime, based on how frequently a function is called, its shape, …

To check that we are not missing a massive performance optimization opportunity in any possible combination of the 3 parts, let’s run each part against the other, in all possible ways.

4 strategies x 2 serializers x 4 caches = 32 different combinations. To run locally, do npm run benchmark:combination. Below are the top 5 combinations:

[Chart: top 5 combination benchmarks]

Legend:

  1. strategy: Partial application, cache: Object, serializer: json-stringify
  2. strategy: Partial application, cache: Object without prototype, serializer: json-stringify
  3. strategy: Partial application, cache: Object without prototype, serializer: json-stringify-binded
  4. strategy: Partial application, cache: Object, serializer: json-stringify-binded
  5. strategy: Partial application, cache: Map, serializer: json-stringify

It seems that we were right. The fastest algorithm is a combination of:

  • strategy: Partial application
  • cache: Object
  • serializer: JSON.stringify

Benchmarking against popular libraries

With all the pieces of the algorithm in place, it’s time to benchmark it against the most popular memoization libraries. To run locally, do npm run benchmark. Below are the results:

[Chart: benchmark against popular memoization libraries]

fast-memoize.js is almost 3 times faster than the second fastest library, running at 27 million operations per second.

Future proof

V8 has a new optimizing compiler called TurboFan that is yet to be officially released.

We should try it today to see how our code will behave tomorrow, since TurboFan will (very likely) be added to V8 shortly. To enable it, pass the flag --turbo-fan to the Node.js binary. To run locally, do npm run benchmark:turbo-fan. Below is the benchmark with TurboFan enabled:

[Chart: benchmark with TurboFan enabled]

Almost a double gain in performance. We are now running at almost 50 million operations per second.

It seems the new fast-memoize.js version can be highly optimized by the soon-to-be-released compiler.

Conclusion

That was my take on creating a faster library in an already crowded market: creating many solutions for each part, combining them, and letting the computer tell me which one was the fastest based on statistically significant data. (I used benchmark.js for that.)

Hope the process I used can be useful for someone else too.

The library should also stay fast over time: not because I’m the smartest programmer in the world, but because I will keep the algorithm up to date with findings from others. Pull requests are always welcome.

Benchmarking algorithms that run on virtual machines can be very tricky, as explained by Vyacheslav Egorov, a former V8 engineer. If you see something wrong with how the tests were set up, please create an issue on GitHub.

The same goes for the library itself. Create an issue if you spot anything wrong (issues with a failing test are appreciated).

Pull requests with improvements are super appreciated!

If you liked the library, please give it a star. That’s one of the few feedbacks we open source programmers have.


Let me know in the comments if you have any questions!

Source:: risingstack.com