Monthly Archives: February 2017

Node.js Weekly Update - 24 Feb, 2017

By Ferenc Hamori

Read the most important Node.js weekly news & updates:

1. Node.js – Quality with Speed

One of the key tenets of the Node.js community is to allow change at a rapid pace in order to foster innovation and to allow Node.js to be used in a growing number of use cases.

Our community looks for the path that allows us to maintain our rate of change while ensuring the required level of quality. Many of the activities undertaken by the community over the last year are in support of this goal. This is our take on how these activities fit together.

2. 10 Best Practices for Writing Node.js REST APIs

This article will definitely help developers who face issues with REST APIs get things right.

In this post we cover best practices for writing Node.js RESTful APIs, including route naming, authentication, API testing, and using proper cache headers.

3. Graphs, GraphDBs and JavaScript + Exploring Trumpworld

Before we start using Neo4j, we’ll consider the importance of graphs and the underlying data structure that allows GraphDBs to exist.

In this article, we’re going to take an in-depth look at Graph Databases and we’re going to use the world’s most popular graph database for a fun, data-driven investigation of Donald Trump’s various business interests.

4. Node 6 at Wikimedia: Stability and substantial memory savings

The greatest Node success story of this week was about how Wikimedia built significant Node.js services to complement the venerable wiki platform implemented in PHP.

Node 6 has delivered on stability and performance, setting a new benchmark for future releases. When combined with our shared library infrastructure and deployment processes, we are in a good spot with our Node platform: it lets our engineers focus on delivering reliable features for users and minimizes time spent on unexpected issues.

5. How China Does Node

In this episode of The Future of Node series Shiya Luo (Developer Evangelist @Autodesk) talks about

  • how China does Node,
  • translations of documentation and books from English to Chinese,
  • and the Great Firewall of China which makes it very difficult for the people of China to interact with the rest of the web.

Latest Node.js Releases

○ Node v4.8.0 (LTS)

  • child_process: add shell option to spawn() (see the sketch after this list)
  • deps:
    • v8: expose statistics about heap spaces
  • crypto:
    • add ALPN Support
    • allow adding extra certs to well-known CAs
  • fs: add the fs.mkdtemp() function.
  • process:
    • add externalMemory to process
    • add process.cpuUsage()
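
Of the v4.8.0 additions above, the shell option for spawn() is the easiest to try out. A minimal sketch (the command string itself is just an illustration):

const spawn = require('child_process').spawn;

// with shell: true the command string is interpreted by the system shell,
// so shell features such as && now work with spawn()
const child = spawn('echo one && echo two', { shell: true });
child.stdout.on('data', chunk => process.stdout.write(chunk));
child.on('close', code => console.log('exited with code ' + code));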

○ Node v6.10.0 (LTS)

The SEMVER-MINOR changes include:

  • crypto: allow adding extra certs to well-known CA
  • deps: Upgrade INTL ICU to version 58
  • process: add process.memoryUsage.external
  • src: add wrapper for process.emitWarning()

Notable SEMVER-PATCH changes include:

  • fs: cache non-symlinks in realpathSync.
  • repl: allow autocompletion for scoped packages

○ Node v7.6.0 (Current) with Async/Await

  • deps:
    • update V8 to 5.5
    • upgrade libuv to 1.11.0
    • add node-inspect 1.10.4
    • upgrade zlib to 1.2.11
  • lib: build node inspect into node
  • crypto: Remove expired certs from CNNIC whitelist
  • inspector: add --inspect-brk
  • fs: allow WHATWG URL objects as paths
  • src: support UTF-8 in compiled-in JS source files
  • url: extend url.format to support WHATWG URL
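
Since async/await now works without flags, a minimal sketch of what v7.6.0 enables (the file name here is just an illustration):

const fs = require('fs');

// wrap a callback API in a promise so it can be awaited
const readFile = (file) => new Promise((resolve, reject) => {
    fs.readFile(file, 'utf8', (err, data) => err ? reject(err) : resolve(data));
});

async function main() {
    try {
        const pkg = await readFile('package.json');
        console.log(JSON.parse(pkg).name);
    } catch (err) {
        console.error('could not read file:', err.message);
    }
}

main();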

Previously in the Node.js Weekly Update

In the previous Node.js Weekly Update we read fantastic articles about Node.js testing & TDD, Heroku Production-ready checklist, Hacking Node Serialize, Native shared objects, and more.

Stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Source:: risingstack.com

Flex Those JavaScript Array Muscles

By SaraVieira

Sometimes we get so wrapped up in new frameworks, coding styles, and tools that we forget to stop and revisit the basics. This article is about exactly that: we are going to look at some of the available array methods and how we can use them to sort, filter, and reduce an array into the outcome we want.

To get more into the basics of JavaScript, we also have our Getting Started with JS for Web Development course you can check out!

Introduction

Before we start looking at the methods, we need a deep understanding of what arrays are in JavaScript.

Well, arrays are list-like objects that come with methods for performing mutations on them. These methods live on the prototype of every array: if you console.log any array and open its prototype, you will see a huge list like this:

This list contains all the methods you can use on any array to transform it or get values from it.
You also need to know that nothing in an array is fixed: both its contents and its length can change at any time, so arrays are by definition not immutable. You could of course choose to treat them as immutable, but out of the box every array can change.
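
If you prefer the terminal over dev tools, you can dump that list yourself (output abbreviated here):

console.log(Object.getOwnPropertyNames(Array.prototype));
// [ 'length', 'constructor', 'concat', 'find', 'pop', 'push', 'reverse',
//   'sort', 'slice', 'splice', 'filter', 'map', 'reduce', ... ]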

Sort and Reverse

I'm going to start with these two because, in my opinion, they are the easiest ones to get to know and use in daily life. I can't tell you how many times I have used these two functions on an array.

For these methods we are going to use a simple number array like so:

const array = [1, 8, 3, 5, 9, 7];

Imagine you were handed this array, but you wanted to show it starting from the lowest number to the highest one.
By default, sort converts the elements to strings and compares them in Unicode order. For single-digit numbers like ours that happens to match ascending numeric order, so in this case, if you did this:

let sortedArray = array.sort();

It would sort the array in place and hand back a reference to that same, now sorted, array:

[1, 3, 5, 7, 8, 9]

Because the default comparison is string-based, sort also works with names and string characters, so if you would like to alphabetically order an array of names or anything involving strings, it will do that without any parameters. Watch out with multi-digit numbers, though: [1, 10, 2].sort() gives [1, 10, 2], because '10' comes before '2' as a string.

You can also pass one parameter to it, and that parameter is a comparator function. The comparator receives two elements at a time and must return a negative number if the first should come before the second, a positive number if it should come after, and zero to leave them as they are. For an ascending numeric sort, that looks like this:

array.sort((a, b) => {
    // a = one item being compared
    // b = the other item being compared
    return a - b; // negative: a first, positive: b first
});

What this comparator does in concrete terms is tell sort, for each pair of elements, which one should come first, based on the sign of the returned number. So if you wanted this backward, all you have to do is flip the subtraction so larger numbers come first, like so:

array.sort((a, b) => {
    return b - a;
});

// [9, 8, 7, 5, 3, 1]

There is also another way to reverse an array, and this is where the reverse function comes in. All it does is reverse the order of an array in place, so instead we can do this:

array.sort().reverse();

// [9, 8, 7, 5, 3, 1]

And it gives the same result as the comparator we passed to sort above. This function doesn't take any parameters; what you see is what you get.

.filter() the Array

Now that we have seen sort and reverse, it's time to learn how to filter an array using filter. For this exercise and the ones after it, we are going to use the following array of people:

const people = [
  {
    name: 'Sara',
    age: 25,
    gender: 'f',
    us: true
  },
  {
    name: 'Mike',
    age: 18,
    gender: 'm',
    us: false
  },
  {
    name: 'Peter',
    age: 17,
    gender: 'm',
    us: false
  },
  {
    name: 'Ricky',
    age: 27,
    gender: 'm',
    us: false
  },
  {
    name: 'Martha',
    age: 20,
    gender: 'f',
    us: true
  }
];

Given this array, imagine you only wanted to get the women in it. Basically, what you want to do is filter this array and only keep the people whose gender key equals f.

As you can see, I used the word filter a lot, because that is exactly the function we are going to use to get this new array of elements. The filter function takes one argument, and that argument is the function we want to run for each element in the array.

The function we pass into filter also takes one argument: the current element of the array that filter is visiting.

Get the Women

To use this to get the women we would do something like this:

// Filter every element of this array
let women = people.filter(function(person) {
    // only return the objects that have the key gender equal to f
    return person.gender === 'f';
});

We can actually improve this a lot using ES6 and arrow functions:

let women = people.filter((person) => person.gender === 'f');

The return keyword is gone because when an arrow function body is a single expression, JavaScript implicitly returns whatever that expression evaluates to.

Cool tip: In cases like this you can use console.table instead of console.log to get a table as the output instead of getting the object references.

So if you head over to the console and console.table this we will get the following:

Getting People Old Enough to Drink

Now let's get into something a little more tricky using filter: imagine you only want to return the people who are old enough to drink, and that differs depending on whether or not you are from the US. For simplicity's sake, let's assume everyone outside the US has a legal drinking age of 18. In this case we first need to check if the person is in the US, and then either return people who are 21 or older or, if they are not from the US, anyone 18 or older.

For this we need a small conditional that checks whether the person is in the US and returns accordingly, like so:

let legal = people.filter((person) => {
    // If the person is in the US, only keep them if they are 21 or older
    // If not, keep them if they are 18 or older
    return person.us ? person.age >= 21 : person.age >= 18;
});

If you now console.table the legal variable you will see this:

As you can see from these two small examples, the filter function is really handy for narrowing arrays down to only the results you actually want the user to see; this is very useful when building website filters.

.map() the Array

We already created an array that only has the people who are of legal age to drink, but wouldn't it be better to add legal as a key on every object and set it to true or false?

This type of transformation is what the map function does: it runs a function for every element in the array we give it and returns a new array of the results.

In this case, half our work is already done. All this function needs to do is run the same conditional statement we just used inside filter; the only change is that it takes the person object as a parameter, so we need a function like this:

let legalFunction = (person) => person.us ? person.age >= 21 : person.age >= 18;

This function returns either true or false depending on the location and age of the person.
Let's move on to the map function and run this legalFunction every time map loops over the array we give it:

let legalFunction = (person) => person.us ? person.age >= 21 : person.age >= 18;

let legalIncluded = people.map((person) => {
    // set the new legal key equal to the return of the function we just set
    person.legal = legalFunction(person);

    // return this person but with the legal key added
    return person;
});

Again if we console.table() this we get:

If you check, all the values are correct and our new array has all the information we need. (One thing to keep in mind: because each person is an object reference, this also mutates the objects in the original people array; copy each object first if you want to avoid that.)
This is of course a simple example of what map actually does; since you can run a function for every element in an array, you can do almost anything. So if you wanted to add 10 years to each person, you could simply do:

let increaseAge = people.map((person) => {
    // set the age key equal to itself plus 10
    person.age = person.age + 10;
    return person;
});

In my opinion, map is the most flexible transformation function you have, because of how easily it lets you change every element and add or remove keys.

.reduce() the Array

In all these cases we started and ended with the same array structure; we may have added to or changed the elements inside it, but the structure stayed the same, and sometimes that doesn't cut it.

Sometimes we need to take the information in an array and transform it into a completely different array, an object, or even a number. Say we want the combined ages of all the people in the array: there is no way to express that with the current structure, because the result needs to be a number, not an array.

This is where reduce stands out: it allows you to return something of an entirely different shape. The reduce function takes two arguments: the first is the function we want to run for each element, and the second is the value we want to start with, basically our starting structure.

The function inside the reduce also takes two parameters: the accumulator (seeded with our starting value, in this case the number 0) and the current element we are looping over. A skeleton reduce call looks something like this:

// the two parameters are the accumulator (seeded with 0) and the current person
let age = people.reduce((starter, person) => {
    // return the new value of the accumulator
}, 0);

With this in mind, what we really want to do is add the age of the current person to the accumulator every time we loop through the array, so what we need is this:

let age = people.reduce((starter, person) => {
    // add the person's age to the running total;
    // as it loops, starter carries the sum so far
    return starter + person.age;
}, 0);

// This will return 107

As simple as this example may be, it shows the power of reduce: we took a sizeable array and boiled it down to the single number 107. And since you can start with anything, let's use reduce to create an object like this:

{men: 3, women: 2}

To build this we need to start with an object that has two keys, men and women, both set to 0. From there we check each person's gender and increment the matching key.
That would look something like:

let obj = people.reduce((starter, person) => {
    // check if the person is male; if so, increment the men key on the starter object
    // if not, do the same for the women key
    person.gender === 'm' ? starter.men++ : starter.women++;

    // return the modified starter
    return starter;
}, {men: 0, women: 0}); // our starter structure and values

If we console.log this we will get {men: 3, women: 2}, as we would expect.
In cases like this, where you receive some data and want to change its structure to return something completely different from what you started with, the reduce function is the way to go.

Other Important Array Methods

While .filter(), .map(), and .reduce() are some of the more commonly used, there are many more (a quick tour follows this list):

  • .forEach(): Do something for each item in the array
  • .find(): Like filter, but returns only the first matching item
  • .push(): Add new elements to the end of the array
  • .pop(): Removes the last element of an array
  • .join(): Join the elements of an array into a string
  • .concat(): Join two or more arrays (returns a copy)
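
As a quick illustrative tour of those methods (the trio array here is made up for the example):

const trio = ['Sara', 'Mike', 'Peter'];

trio.forEach((name) => console.log(name)); // logs each name in turn
const mike = trio.find((name) => name === 'Mike'); // 'Mike', only the first match
trio.push('Martha'); // trio is now ['Sara', 'Mike', 'Peter', 'Martha']
const last = trio.pop(); // 'Martha', and trio is back to three items
const csv = trio.join(', '); // 'Sara, Mike, Peter'
const both = trio.concat(['Ricky']); // a new array; trio itself is untouched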

To find all the methods, check out the reference on w3schools.

Conclusion

That's all I have for you today when it comes to array manipulation and transformation. There are a lot more methods we could cover when it comes to arrays and changing them to fit our needs, so if you would like a second part, let me know in the comments.
In the meantime, I hope this article got you back to basics with the language, or, if you are just learning it now, that it gave you a good grasp of how to modify arrays to fit your everyday needs.

Source:: scotch.io

Build a REST API with Django – A Test Driven Approach: Part 2

By jee

The precondition to freedom is security – Rand Beers

Authentication is a pivotal part of the security of an API.

But first, some recap.

In part 1 of this series, we learnt how to create a bucketlist API using the TDD approach. We covered writing tests in Django and also learnt a lot about the Django Rest Framework.

We’ll be covering complementary topics in part 2 of our series. For the most part, we’ll delve deeper into authenticating and authorizing users in the Django-driven bucketlist API.
If you haven’t checked part 1 yet, now is the chance to do so before we start crushing it.

Ok… back to business!

Authentication vs Authorization

Authentication is usually confused with authorization. They are not the same thing.

You can think of authentication as a way to verify someone's identity (username, password, tokens, keys, et cetera) and authorization as a method of determining the level of access a verified user should be granted.

When we look at our bucketlist API, it works for the most part. However, it lacks capabilities such as knowing who created a bucketlist, whether a given user is authenticated in the first place, or whether they have the right to make changes to a bucketlist.

We need to fix that.

We’ll implement authentication first and later drop in some authorization features.

Implementing it

Implementing authentication in a DRF API is very doable, and the starting point is easy: you start by keeping track of the user.

So how do we achieve this?
Django provides a default User model that we can play around with.

Ok. Let’s get it done.

We’re going to create an owner field on the Bucketlist model. Here’s why: A user can create a bucketlist – which means that a bucketlist has an owner. Therefore, we’ll simply add a field definition of a user inside our bucketlist model.

# rest_api/models.py

from django.db import models

class Bucketlist(models.Model):
    """This class represents the bucketlist model."""
    name = models.CharField(max_length=255, blank=False, unique=True)
    owner = models.ForeignKey(  # ADD THIS FIELD
        'auth.User',
        related_name='bucketlists',
        on_delete=models.CASCADE)
    date_created = models.DateTimeField(auto_now_add=True)
    date_modified = models.DateTimeField(auto_now=True)

    def __str__(self):
        """Return a human readable representation of the model instance."""
        return "{}".format(self.name)

The owner field uses a ForeignKey class that accepts a number of arguments. The first one, auth.User, simply points to the model class we wish to create a relationship with.

The foreign key will come from the model class auth.User to enable the relationship between the User and the Bucketlist models.

After this is done, we’ll have to run our migrations to reflect the model changes in our database.

We’ll run

python3 manage.py makemigrations rest_api

A point to note: When writing new fields on existing tables, you might encounter this:

The database complains that we are trying to add a non-nullable field without providing a value for it. It needs a value because there is pre-existing data in the database.
A simple hack in a development environment is to delete the migrations folder inside your app along with the db.sqlite3 file. This gets rid of the bucketlist we created last time; we can always create a new one.
However, you should never do this in a production environment because you'll lose all your DB data. A cleaner fix is to provide a one-off default value. But if you have no records in your db, feel free to go with the deletion fix.

After doing this, we’ll commit the changes to our DB using the migrate command:

python3 manage.py migrate

Refactoring Our Tests

So far, we haven’t written any tests that work with the new user authentication. We’ll therefore have to refactor the existing test cases.

But first, we’ve got to know what to write.

Let’s do some analysis. The changes we need to factor in are:

  • Bucketlist ownership by users – which points to integrating the default Django User model
  • Ensure requests made are made by authenticated users – which means we’ll enforce authentication before sending HTTP requests
  • Restrict bucketlist(s) creation to only authenticated users
  • Restrict existing bucketlist(s) to be accessed only by their owner

These points will go a long way in guiding us to refactor our tests.

Refactoring the ModelTestCase

We'll import the default User model (django.contrib.auth.models.User) into our test module to create a user.

# rest_api/tests.py
from django.contrib.auth.models import User

The user will help us test for the owner of the bucketlist.
We’ll create the User in our setUp method so that we don’t have to create it every time we want to use it.

class ModelTestCase(TestCase):
    """This class defines the test suite for the bucketlist model."""

    def setUp(self):
        """Define the test client and other test variables."""
        user = User.objects.create(username="nerd") # ADD THIS LINE
        self.name = "Write world class code"
        # specify the owner of a bucketlist
        self.bucketlist = Bucketlist(name=self.name, owner=user) # EDIT THIS TOO

Inside the setUp method, we've defined a test user by creating a user with a username. Then we've passed the user instance into the bucketlist class, making the user the owner of that bucketlist.

Refactoring the ViewsTestCase

Since the views mainly deal with making requests, we'll ensure only authenticated and authorized users have access to the bucketlist API.

Let’s write some code for it

# rest_api/tests.py
# imports fall here

# Model Test Case is here

class ViewTestCase(TestCase):
    """Test suite for the api views."""

    def setUp(self):
        """Define the test client and other test variables."""
        user = User.objects.create(username="nerd")

        # Initialize client and force it to use authentication
        self.client = APIClient()
        self.client.force_authenticate(user=user)

        # Since user model instance is not serializable, use its Id/PK
        self.bucketlist_data = {'name': 'Go to Ibiza', 'owner': user.id}
        self.response = self.client.post(
            reverse('create'),
            self.bucketlist_data,
            format="json")

    def test_api_can_create_a_bucketlist(self):
        """Test the api has bucket creation capability."""
        self.assertEqual(self.response.status_code, status.HTTP_201_CREATED)

    def test_authorization_is_enforced(self):
        """Test that the api has user authorization."""
        new_client = APIClient()
        res = new_client.get('/bucketlists/', kwargs={'pk': 3}, format="json")
        self.assertEqual(res.status_code, status.HTTP_401_UNAUTHORIZED)

    def test_api_can_get_a_bucketlist(self):
        """Test the api can get a given bucketlist."""
        bucketlist = Bucketlist.objects.get(id=1)
        response = self.client.get(
            '/bucketlists/',
            kwargs={'pk': bucketlist.id}, format="json")

        self.assertEqual(response.status_code, status.HTTP_200_OK)
        self.assertContains(response, bucketlist)

    def test_api_can_update_bucketlist(self):
        """Test the api can update a given bucketlist."""
        bucketlist = Bucketlist.objects.get()
        change_bucketlist = {'name': 'Something new'}
        res = self.client.put(
            reverse('details', kwargs={'pk': bucketlist.id}),
            change_bucketlist, format='json'
        )
        self.assertEqual(res.status_code, status.HTTP_200_OK)

    def test_api_can_delete_bucketlist(self):
        """Test the api can delete a bucketlist."""
        bucketlist = Bucketlist.objects.get()
        response = self.client.delete(
            reverse('details', kwargs={'pk': bucketlist.id}),
            format='json',
            follow=True)
        self.assertEqual(response.status_code, status.HTTP_204_NO_CONTENT)

We initialized the ApiClient and forced it to use authentication. This enforces the API’s security. The bucketlist ownership has been factored in as well.
Also, notice how we consistently use self.client in each test method instead of creating new ones? This is to ensure that we reuse the authenticated client. Reusability is good practice. 🙂
Great!

Run the tests. They should fail for now.

python3 manage.py test rest_api

The next step is to refactor our code to make these failing tests pass.

How To Pass Those Tests!

First, Integrate the User

For the most part, any changes you make in your model should be reflected in your serializers too. This is because serializers interface directly with the model, helping turn weird-looking querysets into JSON and vice versa.

Let’s edit our bucketlist serializer. We’ll simply jump into the serializers.py file and write a custom field that we’ll preferably call owner. This is the owner of a bucketlist.

# rest_api/serializers.py

class BucketlistSerializer(serializers.ModelSerializer):
    """Serializer to map the model instance into json format."""

    owner = serializers.ReadOnlyField(source='owner.username') # ADD THIS LINE

    class Meta:
        """Map this serializer to a model and their fields."""
        model = Bucketlist
        fields = ('id', 'name', 'owner', 'date_created', 'date_modified') # ADD 'owner'
        read_only_fields = ('date_created', 'date_modified')

The owner field is read-only so that a user of our API cannot alter the owner of a bucketlist. Don't forget to add owner to the fields as directed above.

Let’s run this and see if it works:
Start the server with python3 manage.py runserver

When we access it from localhost, we should see something like this:

Now, we need to make a way to save the owner when a new bucketlist is created.
Saving a bucketlist is done in a class called CreateView that we defined in views.py.
We’ll edit our CreateView class by adding a perform_create(self, serializer) method.
This method gives us control over how to save our serializer.

# rest_api/views.py

# We are inside the CreateView class
...

    def perform_create(self, serializer):
        """Save the post data when creating a new bucketlist."""
        serializer.save(owner=self.request.user) # Add owner=self.request.user

The serializer.save() method accepts field arguments. Here, we specified the owner argument. Why? Because our serializer has it as a field, which means we can pass the owner into the serializer's save method, and it will then save the bucketlist with that user as its owner.

We should now get an error that looks like this when we try to create a bucketlist

The DB complains. Why a Value Error? Good question – It’s simply because we are trying to save a bucketlist from the browser without specifying the owner!

Our new non-nullable owner field needs a value before the serializer can validate and save a bucketlist.

Let’s fix that right away.

In our urls.py, we'll add a route to help the user log in to our API before creating a bucketlist. We do this so that a bucketlist can have an owner if the logged-in user decides to create one.

# rest_api/urls.py
# imports fall here

urlpatterns = {
    url(r'^auth/', include('rest_framework.urls', # ADD THIS URL
                               namespace='rest_framework')), 
    url(r'^bucketlists/$', CreateView.as_view(), name="create"),
    url(r'^bucketlists/(?P<pk>[0-9]+)/$',
        DetailsView.as_view(), name="details"),
}

urlpatterns = format_suffix_patterns(urlpatterns)

This new line includes the DRF routes that provide a default login template to authenticate a user. You can call the route anything you want; it doesn't have to be auth.

Save the file. It will automatically refresh the running server instance.

You should now see a login button at the top right of the screen when you access

http://localhost:8000/bucketlists/

Clicking the button will redirect to a login template.

Let's create a superuser to log in with.

python3 manage.py createsuperuser

Logging in should be a breeze with the username and password we just specified.

Authorization: Adding permissions

Right now, any user can view and edit any bucketlist. We want to tie each bucketlist to its user so that only the owner can make changes to it, like editing or deleting it.

A default permission check

We can use the default permission package to restrict bucketlist access to authenticated users only.

In views.py we’ll import the permission classes

from rest_framework import permissions

Then inside our CreateView class we’ll add the permission class IsAuthenticated.

# rest_api/views.py

class CreateView(generics.ListCreateAPIView):
    """This class handles the GET and POSt requests of our rest api."""
    queryset = Bucketlist.objects.all()
    serializer_class = BucketlistSerializer
    permission_classes = (permissions.IsAuthenticated,) # ADD THIS LINE

The permission class IsAuthenticated will deny permission to any unauthenticated user, and allow permission otherwise. We could have used IsAuthenticatedOrReadOnly which permits unauthenticated users if the request is one of the “safe” methods (GET, HEAD and OPTIONS). But we want full security – we’ll stick to IsAuthenticated.

Custom Permission

Right now, any authenticated user can see any other user's bucketlists. To implement the full concept of ownership, we'll have to create a custom permission.

Let’s create a file called permissions.py inside the rest_api directory. Inside this file, we write the following code:

from rest_framework.permissions import BasePermission
from .models import Bucketlist

class IsOwner(BasePermission):
    """Custom permission class to allow only bucketlist owners to edit them."""

    def has_object_permission(self, request, view, obj):
        """Return True if permission is granted to the bucketlist owner."""
        # grant object-level access only to the owner of the bucketlist
        return obj.owner == request.user

The class above implements a permission that holds to one truth: the user has to be the owner of the object to be granted permission on it. If they are indeed the owner of that bucketlist, it returns True, else False.

We just have to add it to our permission_classes tuple and we are set. For clarity, the updated views.py should now look like this:

# rest_api/views.py

from rest_framework import generics, permissions
from .permissions import IsOwner
from .serializers import BucketlistSerializer
from .models import Bucketlist

class CreateView(generics.ListCreateAPIView):
    """This class handles the GET and POSt requests of our rest api."""
    queryset = Bucketlist.objects.all()
    serializer_class = BucketlistSerializer
    permission_classes = (
        permissions.IsAuthenticated, IsOwner)

    def perform_create(self, serializer):
        """Save the post data when creating a new bucketlist."""
        serializer.save(owner=self.request.user)

class DetailsView(generics.RetrieveUpdateDestroyAPIView):
    """This class handles GET, PUT, PATCH and DELETE requests."""

    queryset = Bucketlist.objects.all()
    serializer_class = BucketlistSerializer
    permission_classes = (
        permissions.IsAuthenticated,
        IsOwner)

If we log out and try to get the bucketlists, we'll be hit by an HTTP 403 Forbidden response. This means that our authentication and authorization are actually working!

Awesome!

Finally, we run our tests and see whether they’ll pass:

python3 manage.py test

Moving on swiftly.

What about Token-based Authentication?

Token authentication is appropriate for client–server setups especially when the consumption clients are native desktop or native mobile.

This is how it works: a user requests a security token from the server. The server generates the token and associates it with that user. After sending the token back, the server waits for the user to request resources using that specific token. The user can then use the token to authenticate and prove to the server that they are indeed a valid user.

For us to use token authentication in our API, we'll have to set up some configuration in the settings.py file.

Let's add rest_framework.authtoken to our list of installed apps, like this:

# project/settings.py

INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'rest_api', # note the comma (if you lack it, errors! the horror!)
    'rest_framework.authtoken' # ADD THIS LINE
)

Every time we create a user, we’d like to also create a security token for them.
But how do we ensure that a user creation will also trigger a token creation?

Enter signals.

Django comes packed with a signal dispatcher. A dispatcher is like a messenger sent forth to notify others about an event that just happened. When a user is created, a post_save signal will be emitted by the User model. A receiver (which is simply a function) will then help us catch this post_save signal and immediately create the token.

Our receiver will live in our models.py file.
A couple of imports to add: the post_save signal, the default User model, the Token model and the receiver:

from django.db.models.signals import post_save
from django.contrib.auth.models import User
from rest_framework.authtoken.models import Token
from django.dispatch import receiver

Then write the receiver at the bottom of the file like this:

# rest_api/models.py
from django.db.models.signals import post_save
from django.contrib.auth.models import User
from rest_framework.authtoken.models import Token
from django.dispatch import receiver

class Bucketlist(models.Model):
    """This class represents the bucketlist model."""
    name = models.CharField(max_length=255, blank=False, unique=True)
    owner = models.ForeignKey(
        'auth.User',
        related_name='bucketlists',
        on_delete=models.CASCADE)
    date_created = models.DateTimeField(auto_now_add=True)
    date_modified = models.DateTimeField(auto_now=True)

    def __str__(self):
        """Return a human readable representation of the model instance."""
        return "{}".format(self.name)

# This receiver handles token creation immediately after a new user is created.
@receiver(post_save, sender=User)
def create_auth_token(sender, instance=None, created=False, **kwargs):
    if created:
        Token.objects.create(user=instance)

Note that the receiver is NOT indented inside the Bucketlist model class. It’s a common mistake to indent it inside the class.

We also need to provide a way for the user to obtain the token. A URL will serve the purpose.
Write the following lines of code in urls.py:

# rest_api/urls.py
from rest_framework.authtoken.views import obtain_auth_token # add this import

urlpatterns = {
    url(r'^bucketlists/$', CreateView.as_view(), name="create"),
    url(r'^bucketlists/(?P<pk>[0-9]+)/$',
        DetailsView.as_view(), name="details"),
    url(r'^auth/', include('rest_framework.urls',
                           namespace='rest_framework')),
    url(r'^users/$', UserView.as_view(), name="users"),
    url(r'^users/(?P<pk>[0-9]+)/$',
        UserDetailsView.as_view(), name="user_details"),
    url(r'^get-token/', obtain_auth_token), # Add this line
}

urlpatterns = format_suffix_patterns(urlpatterns)

The rest framework is so powerful that it provides a built-in view that handles obtaining the token when a user posts their username and password.

We'll go ahead and make migrations, then migrate the changes to the database so that our app can tap into the power of this built-in view.

python3 manage.py makemigrations && python3 manage.py migrate

Finally, we add some configs to the settings so that our app can authenticate with both BasicAuthentication and TokenAuthentication.

# project/settings.py

REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': (
        'rest_framework.permissions.IsAuthenticated',
    ),
    'DEFAULT_AUTHENTICATION_CLASSES': (
        'rest_framework.authentication.BasicAuthentication',
        'rest_framework.authentication.TokenAuthentication',
    )
}

The DEFAULT_AUTHENTICATION_CLASSES config tells the app that we wish to configure more than one way of authenticating the user. We specify these ways by referencing the built-in authentication classes inside this tuple.

Run it

Once saved, the server will automagically restart with the added changes if it’s already running. However, it’s good to just rerun the server with python3 manage.py runserver

To visually test whether our API still stands, we'll make the HTTP requests in Postman.

Postman Step 1: Obtain that token

For clients to authenticate, the obtained token should be included in the Authorization HTTP header, prepended with the word Token followed by a space character. The header should look like this:

Authorization: Token 2777b09199c62bcf9418ad846dd0e4bbdfc6ee4b

Don’t forget to put the space in between.

We'll make a POST request to http://localhost:8000/get-token/, specifying the username and password in the process.
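
Outside Postman, the same request can be made from a short script. A sketch using the requests library (the credentials here are hypothetical; use the ones you created with createsuperuser):

import requests

response = requests.post(
    'http://localhost:8000/get-token/',
    data={'username': 'admin', 'password': 'supersecret'}  # hypothetical credentials
)
print(response.json())  # e.g. {'token': '2777b09199c62bcf9418ad846dd0e4bbdfc6ee4b'}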

Postman Step 2: Use obtained token in Authorization header

For the subsequent requests, we’ll have to include the Authorization header if we ever want to access the API resources.

A common mistake that might cause errors here is using an incorrect format for the Authorization header. Here's a common error message from the server:

{
  "detail": "Authentication credentials were not provided."
}

Ensure you use the format Token <your-token> instead. If you want to have a different keyword in the header, such as Bearer, simply subclass TokenAuthentication and set the keyword class variable.
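
That subclass is tiny. A hypothetical sketch (you would then reference it in DEFAULT_AUTHENTICATION_CLASSES in place of TokenAuthentication):

# rest_api/authentication.py (hypothetical module)
from rest_framework.authentication import TokenAuthentication

class BearerAuthentication(TokenAuthentication):
    """Token authentication that expects 'Bearer <token>' in the header."""
    keyword = 'Bearer'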

Let’s try sending a GET request. It should yield something like this:

Feel free to play around with your now well secured API.

Conclusion

If you’ve read this to the end, you are awesome!

We’ve covered quite a lot!
From implementing user authentication to creating custom permissions for implementing authorization, we’ve covered most of securing a Django API.

We also conveniently added a token-based authentication layer so that mobile and desktop clients can securely consume our API. But the most important thing is that we refactored our tests to accommodate the changes. This matters more than anything else and remains the heart of Test Driven Development.

If you get to that point where you ask yourself, “what is going on here?“, I highly recommend you take a look at Part 1 of this series which aptly provides a detailed tutorial on building a bucketlist API the TDD way.

Happy coding!

Source:: scotch.io

Handling File Uploads with Hapi.js

By jecelyn

File uploading is a common feature that almost every website needs. We will go step by step through handling single and multiple file uploads with Hapi, saving the files to a database (LokiJS), and retrieving them for viewing.

The complete sourcecode is available here: https://github.com/chybie/file-upload-hapi.

We will be using Typescript throughout this tutorial.

Install Required Dependencies

I am using Yarn for package management. However, you can use npm if you like.

Dependencies

Run this command to install required dependencies

// run this for yarn
yarn add hapi boom lokijs uuid del

// or using npm
npm install hapi boom lokijs uuid del --save

Notes:-

  • hapi: We will develop our API using HapiJs
  • boom: A plugin for Hapi, HTTP-friendly error objects
  • loki: LokiJs, a fast, in-memory document-oriented datastore for node.js, browser and cordova
  • uuid: Generate unique id
  • del: Delete files and folders

Development Dependencies

Since we are using Typescript, we need to install typings files in order to have auto-completion (IntelliSense) during development.

// run this for yarn
yarn add typescript @types/hapi @types/boom @types/lokijs @types/uuid @types/del --dev

// or using npm
npm install typescript @types/hapi @types/boom @types/lokijs @types/uuid @types/del --save-dev

Setup

A couple of setup steps to go before we start.

Typescript Configuration

Add a typescript configuration file. To know more about Typescript configuration, visit https://www.typescriptlang.org/docs/handbook/tsconfig-json.html.

// tsconfig.json

{
    "compilerOptions": {
        "module": "commonjs",
        "moduleResolution": "node",
        "target": "es6",
        "noImplicitAny": false,
        "sourceMap": true,
        "outDir": "dist"
    }
}

Notes:-

  1. The compiled JavaScript code will be output to the dist folder.
  2. Since Node.js 7.5+ supports ES6 / ES2015, we will set the target to es6.

Start Script

Add the following scripts.

// package.json

{
    ...
    "scripts": {
        "prestart": "tsc",
        "start": "node dist/index.js"
    }
    ...
}

Later on we can run yarn start or npm start to start our application.

Notes:-

  1. When we run yarn start, it triggers the prestart script first. The tsc command reads the tsconfig.json file and compiles all Typescript files to JavaScript in the dist folder.
  2. Then, we run the compiled index file, dist/index.js.

Starting Hapi Server

Let’s start creating our Hapi server.

// index.ts

import * as Hapi from 'hapi';
import * as Boom from 'boom';
import * as path from 'path'
import * as fs from 'fs';
import * as Loki from 'lokijs';

// setup
const DB_NAME = 'db.json';
const COLLECTION_NAME = 'images';
const UPLOAD_PATH = 'uploads';
const fileOptions = { dest: `${UPLOAD_PATH}/` };
const db = new Loki(`${UPLOAD_PATH}/${DB_NAME}`, { persistenceMethod: 'fs' });

// create folder for upload if not exist
if (!fs.existsSync(UPLOAD_PATH)) fs.mkdirSync(UPLOAD_PATH);

// app
const app = new Hapi.Server();
app.connection({
    port: 3001, host: 'localhost',
    routes: { cors: true }
});

// start our app
app.start((err) => {

    if (err) {
        throw err;
    }
    console.log(`Server running at: ${app.info.uri}`);
});

The code is pretty self-explanatory. We set the connection port to 3001, allow Cross-Origin Resource Sharing (CORS), and start the server.

Upload a Single File

Let’s create our first route. We will create a route to allow users to upload their profile avatar.

Route

// index.ts
...
import {
    loadCollection, uploader
} from './utils';
...

app.route({
    method: 'POST',
    path: '/profile',
    config: {
        payload: {
            output: 'stream',
            allow: 'multipart/form-data' // important
        }
    },
    handler: async function (request, reply) {
        try {
            const data = request.payload;
            const file = data['avatar']; // accept a field call avatar

            // save the file
            const fileDetails = await uploader(file, fileOptions);

            // save data to database
            const col = await loadCollection(COLLECTION_NAME, db);
            const result = col.insert(fileDetails);
            db.saveDatabase();

            // return result
            reply({ id: result.$loki, fileName: result.filename, originalName: result.originalname });

        } catch (err) {
            // error handling
            reply(Boom.badRequest(err.message, err));
        }
    }
});

Notes:

  1. This is an HTTP POST route.
  2. We configure the payload to allow multipart/form-data and receive the data as stream.
  3. We will read the field avatar for file upload.
  4. We will call uploader function (we will create soon) to save the input file.
  5. Then, we will load the LokiJs images table / collection (we will create loadCollection next) and create a new record.
  6. Save the database.
  7. Return result.

Load LokiJs Collection

A generic function to retrieve a LokiJs collection if it exists, or create a new one if it doesn't.

// utils.ts

import * as del from 'del';
import * as Loki from 'lokijs';
import * as fs from 'fs';
import * as uuid from 'uuid';

const loadCollection = function (colName, db: Loki): Promise<LokiCollection<any>> {
    return new Promise(resolve => {
        db.loadDatabase({}, () => {
            const _collection = db.getCollection(colName) || db.addCollection(colName);
            resolve(_collection);
        })
    });
}

export { loadCollection }
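
The FileUploaderOption and FileDetails types used below are not shown in this post; a minimal sketch, inferred from how they are used, might look like this:

// utils.ts (assumed shapes, inferred from usage)
interface FileUploaderOption {
    dest: string; // upload folder, e.g. 'uploads/'
    fileFilter?: (fileName: string) => boolean; // optional file-type filter
}

interface FileDetails {
    fieldname: string;
    originalname: string;
    filename: string;
    mimetype: string;
    destination: string;
    path: string;
    size: number;
}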

Uploader Function

Our uploader will handle both single and multiple file uploads (we will add multiple-file support later).

// utils.ts
...

const uploader = function (file: any, options: FileUploaderOption) {
    if (!file) throw new Error('no file(s)');

    return _fileHandler(file, options);
}

const _fileHandler = function (file: any, options: FileUploaderOption) {
    if (!file) throw new Error('no file');

    const filename = uuid.v1(); // random name to avoid collisions
    const path = `${options.dest}${filename}`;
    const fileStream = fs.createWriteStream(path);

    return new Promise((resolve, reject) => {
        file.on('error', function (err) {
            reject(err);
        });

        file.pipe(fileStream);

        file.on('end', function (err) {
            const fileDetails: FileDetails = {
                fieldname: file.hapi.name,
                originalname: file.hapi.filename,
                filename,
                mimetype: file.hapi.headers['content-type'],
                destination: `${options.dest}`,
                path,
                size: fs.statSync(path).size,
            }

            resolve(fileDetails);
        })
    })
}

...

export { loadCollection, uploader }

Notes:

  1. We will read the uploaded file's name.
  2. We will generate a random UUID as the new file name to avoid name conflicts.
  3. We will then stream and write the file to the defined folder; in our case that's the uploads folder.

Run Our Application

You may run the application with yarn start. I'll call the localhost:3001/profile API with Postman (https://www.getpostman.com/apps), a GUI application for API testing.

When I upload a file, you can see that a new file is created in the uploads folder and the database file db.json is created as well.

When I issue a call without passing in avatar, an error is returned.

Upload single file

Filter File Type

We can handle file uploads successfully now. Next, we need to limit the file type to images only. To do this, let's create a filter function that tests the file extension, then modify our _fileHandler to accept an optional filter option.

// utils.ts

...

const imageFilter = function (fileName: string) {
    // accept image files only (note the escaped dot)
    if (!fileName.match(/\.(jpg|jpeg|png|gif)$/)) {
        return false;
    }

    return true;
};

const _fileHandler = function (file: any, options: FileUploaderOption) {
    if (!file) throw new Error('no file');

    // apply filter if exists
    if (options.fileFilter && !options.fileFilter(file.hapi.filename)) {
        throw new Error('type not allowed');
    }

    ...
}

...

export { imageFilter, loadCollection, uploader }

Apply the Image Filter

We need to tell the uploader to apply our image filter function. Add it to the fileOptions variable.

// index.ts
import {
    imageFilter, loadCollection, uploader
} from './utils';

..
// setup
...

const fileOptions: FileUploaderOption = { dest: `${UPLOAD_PATH}/`, fileFilter: imageFilter };

...

Restart the application, try to upload a non-image file, and you should get an error.

Upload Multiple Files

Let's proceed to handle multiple file uploads now. We will create a new route to allow users to upload their photos.

Route

...

app.route({
    method: 'POST',
    path: '/photos/upload',
    config: {
        payload: {
            output: 'stream',
            allow: 'multipart/form-data'
        }
    },
    handler: async function (request, reply) {
        try {
            const data = request.payload;
            const files = data['photos'];

            const filesDetails = await uploader(files, fileOptions);
            const col = await loadCollection(COLLECTION_NAME, db);
            const result = [].concat(col.insert(filesDetails));

            db.saveDatabase();
            reply(result.map(x => ({ id: x.$loki, fileName: x.filename, originalName: x.originalname })));
        } catch (err) {
            reply(Boom.badRequest(err.message, err));
        }
    }
});

...

The code is similar to the single file upload, except that we accept a field called photos instead of avatar, take an array of files as input, and reply with the results as an array.

Modify Uploader Function

We need to modify our uploader function to handle multiple file uploads.

// utils.ts
...

const uploader = function (file: any, options: FileUploaderOption) {
    if (!file) throw new Error('no file(s)');

    // update this line to accept single or multiple files
    return Array.isArray(file) ? _filesHandler(file, options) : _fileHandler(file, options);
}

const _filesHandler = function (files: any[], options: FileUploaderOption) {
    if (!files || !Array.isArray(files)) throw new Error('no files');

    const promises = files.map(x => _fileHandler(x, options));
    return Promise.all(promises);
}

...

Retrieve List of Images

Next, create a route to retrieve all images.

// index.ts
...

app.route({
    method: 'GET',
    path: '/images',
    handler: async function (request, reply) {
        try {
            const col = await loadCollection(COLLECTION_NAME, db)
            reply(col.data);
        } catch (err) {
            reply(Boom.badRequest(err.message, err));
        }
    }
});

...

The code is super easy to understand.

Retrieve Image by Id

Next, create a route to retrieve an image by id.

// index.ts
...

app.route({
    method: 'GET',
    path: '/images/{id}',
    handler: async function (request, reply) {
        try {
            const col = await loadCollection(COLLECTION_NAME, db)
            const result = col.get(request.params['id']);

            if (!result) {
                reply(Boom.notFound());
                return;
            };

            reply(fs.createReadStream(path.join(UPLOAD_PATH, result.filename)))
                .header('Content-Type', result.mimetype); // important
        } catch (err) {
            reply(Boom.badRequest(err.message, err));
        }
    }
});

...

Notes:-

  1. We will return 404 if the image does not exist in the database.
  2. We will stream the file as output, setting the content-type correctly so our client or browser knows how to handle it.

Run the Application

Now restart the application, upload a couple of images, and retrieve one by id. You should see the image returned as an actual image instead of a JSON object.

Get image by id

Clear All Data When Restart

Sometimes, you might want to clear all the images and database collection during development. Here’s a helper function to do so.

// utils.ts

....

const cleanFolder = function (folderPath) {
    // delete files inside folder but not the folder itself
    del.sync([`${folderPath}/**`, `!${folderPath}`]);
};

...

export { imageFilter, loadCollection, cleanFolder, uploader }

// index.ts

// setup
...

// optional: clean all data before start
cleanFolder(UPLOAD_PATH);
if (!fs.existsSync(UPLOAD_PATH)) fs.mkdirSync(UPLOAD_PATH);

...

Summary

Handling file uploads with Hapi is not as hard as you (I) thought.

The complete sourcecode is available here: https://github.com/chybie/file-upload-hapi.

That’s it. Happy coding.

Source:: scotch.io

Graphs, GraphDBs and JavaScript + Exploring Trumpworld

By Carlos Justiniano

In this article, we’re going to take an in-depth look at Graph Databases and we’re going to use the world’s most popular graph database for a fun, data-driven investigation of Donald Trump’s various business interests.

Before we start using Neo4j, we’ll consider the importance of graphs and the underlying data structure that allows GraphDBs to exist.

Let’s get started!


Undoubtedly you're familiar with graphs – those charts showing colored bars, pie slices and points along a line. They're great data visualization tools designed to quickly convey information. However, those are not the types of graphs we'll consider. The graphs we're interested in consist of circles and lines and are commonly known as network graphs.

This is the same graph defined in scientific terms, i.e. mathematics and computer science.

A “thing” is represented by a vertex and a “link” is referred to as an edge. We can think of the vertices as representing nodes and the edges as the relationships between them. From here on out we’ll simply refer to them as nodes and links.

Graphs can take on real world meaning, such as revealing the relationships between people. For example, in this graph, Tom knows Alex but doesn’t directly know Bill or even his neighbors, Susan and Jane. If Tom wanted to meet Susan, he could ask Alex to introduce them.

When lots of nodes and links exist, graphs can become quite complex, such as in the web of social and business relationships found on Facebook and LinkedIn.

Graphs revealed

Graph diagrams made their debut in a paper written by Leonhard Euler, a Swiss-born mathematician who is regarded as the most prolific mathematician of all time.

In 1735, from his home in Saint Petersburg, Euler turned his attention to a problem debated by the people of the nearby town of Königsberg – which is now the Russian city of Kaliningrad. During a time of prosperity, the people of Königsberg constructed seven bridges across the Pregel River to connect two islands to the surrounding landscape. The town's people later pondered whether it was possible to cross all seven bridges without crossing any of them twice.

In his short paper entitled “The solution of a problem relating to the geometry of position”, Euler offered a proof that such a path could not exist. We won’t get into the proof here because it isn’t the proof that we’re interested in, but rather the way that Euler approached the problem.

Euler represented land masses as nodes and used links to represent bridges. He then assigned each node a letter from A to D. With this, Euler unknowingly founded an extensive branch of mathematics called graph theory.
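
To make that concrete, here is a small JavaScript sketch of Euler's model (the A to D labels follow his lettering; each pair encodes one of the seven bridges):

// Königsberg as a multigraph: each pair is one bridge between two land masses
const bridges = [
  ['A', 'B'], ['A', 'B'], ['A', 'C'], ['A', 'C'],
  ['A', 'D'], ['B', 'D'], ['C', 'D']
];

// a node's degree = how many bridges touch it
const degree = {};
bridges.forEach((pair) => {
  degree[pair[0]] = (degree[pair[0]] || 0) + 1;
  degree[pair[1]] = (degree[pair[1]] || 0) + 1;
});

console.log(degree); // { A: 5, B: 3, C: 3, D: 3 }
// Euler showed that a walk crossing every bridge exactly once requires zero or
// two nodes of odd degree; here all four are odd, so no such walk exists.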

Graphs are everywhere

Hundreds of years later, researchers are using graphs to explore topics such as biodiversity, terrorist networks, and the global spread of epidemics.

Here is a graph that links 40 of the earliest known AIDS patients by sexual contact.

On a lighter note, you may have recently taken a train ride. Did you enjoy riding a graph?

If you consider a map of the New York City subway system – or any subway in the world for that matter – and if you label the train stations as nodes and the routes connecting stations as links – you’ll quickly see a graph emerge.

Nodes are sometimes referred to as hubs when more than one path (or link) converges.

The New York City subway system has hubs at 34th and 42nd Street, which allow one to switch trains and travel other parts of the subway’s network graph. In the map below, at 42nd Street and Times Square, we can switch to the N, Q, R, S, W, 1, 2, 3, or 7 trains.

A look at cities throughout the world reveals airports, and in larger cities, airport hubs, which connect flights to other flights and destinations around the globe. Yes, the paths of air and ocean travel also form a network graph.

If you look closely, you can see where lots of lines converge indicating airport hubs.

Consider 3D games: characters and terrains are built from wire-frame models called meshes, which are essentially graphs.

In fact, the process of applying a texture to a wire frame model involves mapping an image onto the surface area within vertices and edges – a process known as texture mapping.

Ever wonder how computer game characters find their way within a game world? Dijkstra’s algorithm, employed in computer game AI, uses a weighted graph to find routes.
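
As a rough illustration, here is a minimal JavaScript sketch of Dijkstra's algorithm over a small weighted graph (the graph and weights are made up; production game AI would use a priority queue, often with the A* variant):

// edges: for each node, a map of neighbour -> distance (the edge weight)
function dijkstra(edges, start) {
  const dist = {};
  dist[start] = 0;
  const visited = new Set();
  const queue = [start];

  while (queue.length) {
    // pick the unvisited node with the smallest known distance
    queue.sort((a, b) => dist[a] - dist[b]);
    const node = queue.shift();
    if (visited.has(node)) continue;
    visited.add(node);

    for (const next of Object.keys(edges[node] || {})) {
      const candidate = dist[node] + edges[node][next];
      if (dist[next] === undefined || candidate < dist[next]) {
        dist[next] = candidate; // found a shorter route to next
        queue.push(next);
      }
    }
  }
  return dist; // shortest distance from start to every reachable node
}

console.log(dijkstra({ a: { b: 1, c: 4 }, b: { c: 2 }, c: {} }, 'a'));
// { a: 0, b: 1, c: 3 }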

Turning our attention to nature, trees and plants also exhibit graphs. In a tree, the points where branches split into two or more branches can be considered nodes, and the branches themselves – links between nodes.

The roots of a tree are almost identical to the branches as shown here in this plant.

Upon even closer examination, the leaves of a tree reveal a network of passages which deliver water and nutrients to vibrant leafy greens.

If you recall your high school biology class, then this image might seem similar to textbook diagrams illustrating our nervous system and arteries!

In truth, we need reflect no further than our own thoughts to realize that the neurons in our brains form a network graph.

Indeed, graphs are everywhere.

Wet-ware

Not only do our own bodies consist of graphs, it turns out that graphs are fundamental to how we actually think!

Since infancy, we catalog objects and assign properties to them, then we map objects to one another based on their relationships. This process continues in our minds throughout our lives.

Think about any complex topic you’ve had to learn. Perhaps you began by reading introductory material that provided you with a high-level overview. During that process, you were exposed to new terms. And as you learned more about them you associated characteristics or properties to those terms.

Our minds organize information by creating the mental graphs we call memories. In fact, one way of improving memory is to build more mental graphs by creating new links (or associations) to existing memories.

It turns out that our brains are a sort of graph database.

Graph databases

This all brings us to Graph Databases – software tools for building and working with graphs.

Rather than organize data as collections of tables, rows, and columns – or even as collections of documents – graph databases allow us to model data and relationships in ways that closely mirror how we naturally think about them.

Let’s take a closer look. In this graph, we have nodes and links that have associated properties. This type of graph is often referred to as a property graph. We have age and interest properties associated with each person, and we could have easily added other personal characteristics. In the relationship links, we’ve stored information about when a relationship began.

Such a graph could become the basis for an intelligent contact management application.
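
As a rough sketch (not how Neo4j stores data internally), such a property graph could be modeled in plain JavaScript like this. Alex’s properties mirror the query output shown later in this article; Susan’s interest is invented for illustration:

// A rough sketch of a property graph in plain JavaScript.
// Nodes and relationship links both carry arbitrary properties.
const propertyGraph = {
  nodes: [
    { id: 1, label: 'Person', properties: { name: 'Alex', age: 34, interest: 'parties' } },
    { id: 2, label: 'Person', properties: { name: 'Susan', interest: 'dance' } } // interest invented
  ],
  links: [
    // Relationships store data too: here, when the relationship began.
    { from: 1, to: 2, label: 'Knows', properties: { since: '20120225' } }
  ]
};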

Enter Neo4j

There are many graph databases to choose from. Additionally, some products, such as OrientDB and ArangoDB, offer graph database functionality combined with document and key/value stores. During the past decade, we’ve seen an increase in interest in the graph database space. One such project is Microsoft Research’s Trinity project, which is now named Graph Engine.

In this article, we’re going to use the world’s most popular graph database, Neo4j – affectionately referred to by fans as Neo.

Getting started with Neo is easier than with most database products. You can try Neo without installing it by simply provisioning a free instance using the Neo4j Sandbox. It comes complete with user guides and sample datasets. This would have been an invaluable resource when I first started with Neo several years ago. Back then, setting up Neo4j involved working with the correct version of the JVM and tweaking operating system file handles.

If you’d rather have a local instance of Neo4j running on your laptop you can download and install a free copy. However, being a big fan of Docker, I prefer to download and run Neo4j from a Docker container.

$ docker pull neo4j:3.1.0
$ docker run -d -p 7474:7474 -p 7687:7687 -v ~/data:/data --name neo4j neo4j:3.1.0

Neo4j dashboard

Neo4j comes with a web-based dashboard that allows you to interact with Neo. It’s a great way to learn about Neo and later create and test your data models. The dashboard is an indispensable tool and a real pleasure to use.

Here we see a dashboard view which allows us to enter queries and graphically see the results. Looking closely at the screenshot below, you can see many of the concepts we’ve encountered earlier in this article.

Connecting to the dashboard is as simple as pointing your browser to http://localhost:7474.

Neo4j queries

Neo4j has a declarative query language called Cypher. Cypher queries consist of statements that use patterns to specify paths within a graph.

In Cypher syntax, a node is represented inside of parentheses, and links are represented using dashes and square brackets. Node and link properties are specified using curly braces.

For example:

 (NODE)        [RELATIONSHIP]          (NODE)
(Person)-[:KNOWS {since: "20120225"}]-(Person)

So in addition to queries being declarative, they’re also visually descriptive.

Let’s take a closer look.

We can locate the graph node representing Alex with this query:

MATCH (p:Person {name: "Alex"})  
RETURN p;  

There are a few important characteristics in the query shown. On the first line, we see that we’re trying to match a node, represented by a pattern enclosed in parentheses. The p:Person fragment says “bind a variable called p to nodes with the label Person”. So here we learn that nodes can have labels (Person) and that we can assign them to variables (p). On line two we simply return the contents of p.

We can enhance our queries by specifying the use of properties and values and listing them within curly braces. So, {name: "Alex"} says we’re interested in only matching nodes which have a name property containing the value of “Alex”.
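
For example, a hypothetical query matching on the interest property instead of the name would look like this:

MATCH (p:Person {interest: "parties"})  
RETURN p;  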

If we wanted to return all the people in our graph, our query would be even simpler:

MATCH (p:Person)  
RETURN p;  

Alex is connected to Susan by a relationship link with a label of Knows. That link also has a property called since. We could write a query that includes the Knows relationship by using square brackets:

MATCH (p1:Person {name: "Alex"})-[r:Knows]-(p2:Person {name: "Susan"})  
RETURN p1, r, p2;  

Notice that we assign the variable r to the relationship link. We also use the label Knows to specify the type of link we’re interested in. The label could have been something else, such as WorkedWith or HiredBy.

Let’s say that Alex is planning a party and would like to invite his closest acquaintances. Here we omit the name property fragment on the second Person node, so we match anyone that Alex directly knows.

MATCH (p1:Person {name: "Alex"})-[r:Knows]-(p2:Person)  
RETURN p1, r, p2;  

Now let’s say that Alex is at a bar and is feeling pretty good. Perhaps better than usual. He yells out to the bartender “The next round is on me!”.

Here we omit the Knows relationship label because it’s unlikely that Alex knows everyone in the bar.

MATCH (p1:Person)-[]-(p2:Person)  
RETURN p1, p2;  

Let’s consider another example. Susan is planning to open her first dance studio and needs business advice. She doesn’t immediately know a person with an interest in business, but her dad Bill does.

Here’s one way to write the query:

MATCH (p1:Person {name: "Susan"})-[r:Knows*2]-(p2:Person {interest: "business"})  
RETURN p1, r, p2;  

The new bit is the syntax -[r:Knows*2]-. This is referred to as a variable length relationship. Here we’re saying: “Match a Person node with the name ‘Susan’, connected by one or two Knows relationships to a person with an interest in ‘business’”. Specifying the length is important to limit the depth (or number of hops) the query traverses to find a match. In a large graph, a long traversal might take longer than we’d like.

Referring back to our graph, if Jane were looking for a chess player we’d have to specify -[r:Knows*3]- or three hops to get to Tom – following the green path shown below.

You may also notice that there is a red path from Jane leading to Tom, which involves four hops. Neo4j returns the shorter of the two paths.

The ability to traverse a network of relationships is one of the great strengths of graph databases. You can ask questions such as: find a friend of a friend (or more) who matches particular criteria.

This is also where relational database systems and their use of joins become far less than ideal at scale. Such queries are also how recommendation engines promote new products – for example, when Amazon lists products frequently purchased together with a product you happen to be considering.

Accessing Neo4j from JavaScript

Neo4j has a RESTful HTTP API that makes it possible for remote clients to connect to it. You can find a number of libraries on npm which essentially act as wrappers for Neo’s RESTful endpoints.

In fact, I wrote a limited and opinionated Node library that facilitates working with Neo4j and optionally caches results using Redis. You can find it on NPM under the name of Neo4j-redis.

Neo Technologies, the company behind Neo4j, has created the now official Neo4j Driver for JavaScript. That’s the library we’ll use in this article.

Installing

Installing the Neo4j driver for JavaScript involves a single command. In this example, we create a test project folder called neo-test, use npm to initialize a test project, and finally install the neo4j-driver package.

$ mkdir neo-test; cd neo-test
$ npm init -y
$ npm install neo4j-driver

Our project’s GitHub repo was initialized in this way.

Connecting to Neo

Here is the alex.js example from the GitHub repo associated with this article. We begin by defining the location of our Neo4j database instance. I’m running mine on my laptop, so I specify localhost. The bolt:// portion tells Neo that we’d like to use the faster binary connection protocol, instead of the HTTP version.

You can find out more about bolt here.

We then require the neo4j-driver and prepare an auth object to pass to the neo4j.driver setup. With the driver created, we define an error handler.

const database = 'bolt://localhost';  
const neo4j = require('neo4j-driver').v1;  
const auth = neo4j.auth.basic('neo4j', 'omega16');  
const driver = neo4j.driver(database, auth);

driver.onError = (error) => {  
  console.log('Driver instantiation failed', error);
};

Next, we create a driver session and run (execute) a Cypher query. Note that the run function accepts two parameters and returns a JavaScript promise. The first parameter to the run function is the query template and the second is an object with the query parameters. This allows Neo to cache query plans (template) for added efficiency. We then use the .then and .catch functions to handle the promise resolve or reject cases.

let session = driver.session();  
session  
  .run(
    'MATCH (p:Person {name: {nameParam}}) RETURN p.name, p.age, p.interest',
    {nameParam: 'Alex'}
  )
  .then((result) => {
    result.records.forEach((record) => {
      console.log(`Name: ${record.get('p.name')}`);
      console.log(`Age: ${record.get('p.age')}`);
      console.log(`Interest: ${record.get('p.interest')}`);
    });
  })
  .catch((err) => {
    console.log('err', err);
  })
  .then(() => {
    session.close();
    driver.close();
  });

Here is the output from the previous code. We see the information returned from the Cypher query.

$ node alex.js
Name: Alex  
Age: 34  
Interest: parties  

To learn more about the neo4j-driver check out the project documentation.

In this next example, we run the query where Susan is checking her network for a person who has an interest in business. She knows Bill who is her dad and a retired Harvard professor, but she doesn’t directly know Jane who took Bill’s game theory course at Harvard.

Our query attempts to find a path from Susan to a person with an interest in business. That person turns out to be Jane.

const database = 'bolt://localhost';  
const neo4j = require('neo4j-driver').v1;  
const auth = neo4j.auth.basic('neo4j', 'omega16');  
const driver = neo4j.driver(database, auth);

driver.onError = (error) => {  
  console.log('Driver instantiation failed', error);
};

let session = driver.session();  
session  
  .run(`
    MATCH (p1:Person {name: {seeker}})-[r:Knows*2]-(p2:Person {interest: {interest}})
    RETURN (p1.name + " discovered " + p2.name) AS output`,
    {seeker: 'Susan', interest: 'business'}
  )
  .then((result) => {
    result.records.forEach((record) => {
      console.log(record._fields[0]);
    });
  })
  .catch((err) => {
    console.log('err', err);
  })
  .then(() => {
    session.close();
    driver.close();
  });

And the output is:

$ node business.js
Susan discovered Jane  

Using the code patterns we’ve seen, you’d be able to perform insert, update and delete operations to build more complex applications – see the sketch below. Neo4j is really quite approachable.
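
As a hedged sketch of what that might look like, reusing the driver session pattern from the earlier examples (the person “Pat” and the property values are invented for illustration):

// A sketch of write operations with the neo4j-driver session used above.
session
  .run(
    'CREATE (p:Person {name: {nameParam}, interest: {interestParam}})',
    {nameParam: 'Pat', interestParam: 'chess'}
  )
  .then(() => session.run(
    // SET updates properties on an existing node
    'MATCH (p:Person {name: {nameParam}}) SET p.interest = {interestParam}',
    {nameParam: 'Pat', interestParam: 'business'}
  ))
  .then(() => session.run(
    // DETACH DELETE removes a node together with its relationships
    'MATCH (p:Person {name: {nameParam}}) DETACH DELETE p',
    {nameParam: 'Pat'}
  ))
  .then(() => session.close())
  .catch((err) => console.log('err', err));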

Exploring TrumpWorld

As we close out our exploration into Graphs and GraphDBs, I’d like to share a practical example of how graphs are being used in the context of our current political climate here in the United States.

No, I’m not referring to the intelligence community – but rather about the power of data in the hands of journalists and citizens armed with technology.

On January 15th 2017, as many New Yorkers were resting on a cold and lazy Sunday morning, social news and entertainment media company BuzzFeed posted an article entitled Help Us Map TrumpWorld, which compiled a listing of 1,500 people and organizations connected, in one way or another, to Donald Trump’s varied business interests. In the article, the authors asked the public to help validate and contribute to the existing and quickly growing list.

The data was compiled into a Google spreadsheet, making it difficult to clearly see the rat’s nest of underlying interconnections.

Later that day, Sanchez Castro posted a tweet asking @Neo4j to help make sense of the compiled data.

The team at Neo Technologies was happy to oblige and proceeded to load the data into a Neo4j graph.

Mark Needham, at Neo Technologies, later created a Docker container packaging both Neo and the TrumpWorld dataset, making it easy for anyone to explore the rabbit hole that is TrumpWorld. This dataset is also available online via the Neo4j Sandbox I mentioned earlier.

20,000-foot view

Let’s imagine that we’re investigative journalists following leads. We begin by accessing the Neo4j dashboard and looking at the 20,000-foot view of TrumpWorld.

MATCH (n1)-[r]->(n2) RETURN r, n1, n2  

Here we see only 300 of the 2,620 available nodes. The dashboard limits the size of graph visualizations in order to keep them manageable.

Follow the money

We can query the graph for banks and their connections to organizations and individuals. The orange node at the center is, you guessed it, Mr. Trump.

MATCH (bank:Organization)--(other)  
WHERE bank.name contains "BANK"  
RETURN *  

Most connected organizations

Here we see which organizations are the most connected. Neo4j returns a table view because the following query focuses on the aggregation of the relationship type (r). This is how we’re able to see the varied types of relationships without knowing their labels.

MATCH (o:Organization)-[r]-()  
RETURN o.name, count(*), collect(distinct type(r)) AS types  
ORDER BY count(*) DESC  
LIMIT 5  

Trump and Putin

We can investigate potential social ties between Trump and Putin using the following query.

MATCH (vp:Person {name:"VLADIMIR PUTIN"}),(dt:Person {name:"DONALD J. TRUMP"})  
MATCH path = allShortestPaths( (vp)-[*]-(dt) )  
RETURN path  

By clicking on the links we discover the following:

  • In 2014, Donald Trump and Sergei Millian appeared together in a Facebook photo
  • Putin awarded Sergei Millian a prize in Jan. 2015 for developing ties between Russia and American businesspeople
  • In 2012, Putin awarded the Order of Friendship to Rex Tillerson
  • Donald Trump tapped Rex Tillerson as Nominee for Secretary of State

Insights like these help journalists focus their resources and energies.

Recap

We began our journey by learning about network graphs. Along the way, we discovered that graphs are literally everywhere we look. In fact, network graphs could not be closer to our hearts – if you consider the network of arteries within our own bodies.

We also learned that we actually think in terms of graphs and that a graph database is a natural tool for representing our data models and their relationships.

Finally, we saw the power of using graph databases to better understand current events.

Give graph databases a try. You may just discover that they’re an ideal tool to tackle the modern challenges in our highly connected world.

Next steps

Books

There are many books on Graphs and Graph Databases. Here are the ones I’ve read.

  • Graph Databases by Ian Robinson, Jim Webber & Emil Eifrem
  • Learning Neo4j by Rik Bruggen
  • Linked: The New Science of Networks by Albert-Laszlo Barabasi
  • The Tipping Point: How Little Things Can Make a Big Difference by Malcolm Gladwell
  • Six Degrees: The Science of a Connected Age by Duncan J. Watts

Source:: risingstack.com

babel-preset-env: a preset that configures Babel for you

By Axel Rauschmayer

babel-preset-env is a new preset that lets you specify an environment and automatically enables the necessary plugins.

The problem

At the moment, several presets let you determine what features Babel should support:

  • babel-preset-es2015, babel-preset-es2016, etc.: incrementally support various versions of ECMAScript. es2015 transpiles what’s new in ES6 to ES5, es2016 transpiles what’s new in ES2016 to ES6, etc.
  • babel-preset-latest: supports all features that are either part of an ECMAScript version or at stage 4 (which basically means the same thing).

The problem with these presets is that they often do too much. For example, most modern browsers support ES6 generators. Yet, if you use babel-preset-es2015, generator functions will always be transpiled to complex ES5 code.

The solution

babel-preset-env works like babel-preset-latest, but it lets you specify an environment and only transpiles features that are missing in that environment.

Note that this means you need to install and enable plugins and/or presets for experimental features (those that are not part of babel-preset-latest) yourself.

On the plus side, you don’t need the es20xx presets anymore.

Browsers

For browsers you have the option to specify either:

  • Browsers via browserslist query syntax. For example:

    • Support the last two versions of browsers and IE 7+.

          "babel": {
            "presets": [
              [
                "env",
                {
                  "targets": {
                    "browsers": ["last 2 versions", "ie >= 7"]
                  }
                }
              ]
            ]
          },
      
    • Support browsers that have more than 5% market share.

          "targets": {
            "browsers": "> 5%"
          }
      
  • Fixed versions of browsers:

        "targets": {
          "chrome": 56
        }
    

Node.js

If you compile your code for Node.js on the fly via Babel, babel-preset-env is especially useful, because it reacts to the currently running version of Node.js if you set the target "node" to "current":

    "babel": {
      "presets": [
        [
          "env",
          {
            "targets": {
              "node": "current"
            }
          }
        ]
      ]
    },

If you want to see this target in action, take a look at my GitHub repository async-iter-demo.

Additional options for babel-preset-env

This section gives a brief overview of additional options for babel-preset-env. For details, consult the preset’s readme file.

modules (string, default: "commonjs")

This option lets you configure to which module format ES6 modules are transpiled:

  • Transpile to popular module formats: "amd", "commonjs", "systemjs", "umd"
  • Don’t transpile: false

include, exclude (Array of strings, default: [])

  • include always enables certain plugins (e.g. to override a faulty native feature). It has the same effect as enabling plugins separately.
  • exclude prevents certain plugins from being enabled.
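
As a hedged illustration, a configuration that force-enables one transform and opts out of another might look like this (the plugin names are just examples from the es2015 plugin family; consult the preset’s readme for the full list of valid names):

    "babel": {
      "presets": [
        [
          "env",
          {
            "targets": {
              "browsers": ["last 2 versions"]
            },
            "include": ["transform-es2015-arrow-functions"],
            "exclude": ["transform-regenerator"]
          }
        ]
      ]
    },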

useBuiltIns (boolean, default: false)

Babel comes with a polyfill for new functionality in the standard library. babel-preset-env can optionally import only those parts of the polyfill that are needed on the specified platform(s).

There are two ways of using the polyfill:

  • core-js polyfills ES5, ES6+ as needed.
    • Install polyfill: npm install core-js --save
    • Activate polyfill: import "core-js";
  • babel-polyfill polyfills core-js and the regenerator runtime (to emulate generators on ES5).
    • Install polyfill: npm install babel-polyfill --save
    • Activate polyfill: import "babel-polyfill";

Either of the two import statements is transpiled to an environment-specific sequence of more fine-grained imports. For example:

    import "core-js/modules/es7.string.pad-start";
    import "core-js/modules/es7.string.pad-end";
    import "core-js/modules/web.timers";
    import "core-js/modules/web.immediate";
    import "core-js/modules/web.dom.iterable";

Things to note:

  • You should activate the polyfill exactly once in your program, e.g. in a “main” module.
  • useBuiltIns means that less code is downloaded to the browser (bundle sizes become smaller). However, it does not save RAM, because the polyfill only installs what is missing.

For more on polyfilling the standard library, consult chapter “Babel: configuring standard library and helpers” in “Setting up ES6”.

debug (boolean, default: false)

Logs the following information via console.log():

  • Targeted environments
  • Enabled transforms
  • Enabled plugins
  • Enabled polyfills

Check the next section for sample output.

Example

The following example is taken from the preset’s readme file:

    {
      "presets": [
        [ "env", {
          "targets": {
            "safari": 10
          },
          "modules": false,
          "useBuiltIns": true,
          "debug": true
        }]
      ]
    }

Modules are not transpiled. We can, e.g., rely on webpack to handle imports and exports for us.

The debug output is as follows:

    Using targets:
    {
      "safari": 10
    }
    
    Modules transform: false
    
    Using plugins:
      transform-exponentiation-operator {}
      transform-async-to-generator {}
    
    Using polyfills:
      es7.object.values {}
      es7.object.entries {}
      es7.object.get-own-property-descriptors {}
      web.timers {}
      web.immediate {}
      web.dom.iterable {}

Where does babel-preset-env get its information?

  • The features supported by a given JavaScript engine are determined via kangax’s compat-table.
  • Features are mapped to plugins via the file plugin-features.js.
  • browserslist enables queries such as "> 1%" and "last 2 versions".

What’s next?

Giving plugins access to their “environment”

Plans for the future include giving plugins the ability to examine what is possible in the current “environment”. That would have two benefits:

  • Some plugins (such as the one for the object spread operator) currently have options telling them whether to use native functionality or polyfills. If they were aware of their “environment”, the plugins wouldn’t need those options.

  • Babel-based minifiers can determine whether it’s OK to output, e.g., arrow functions.

Simplifying presets

  • Presets based on ECMAScript versions (es2015 etc.) are mostly made obsolete by env. The Babel team is considering eliminating them in future Babel releases (e.g. via a deprecation process).

  • Presets based on stages of the TC39 process (stage-3 etc.) are also candidates for removal, as things related to stages are in constant flux. You can’t really rely on anything in this space, because the stage of a proposal can change within 2 months. Therefore, directly referring to plugins of experimental features is the better approach.

Acknowledgements

  • Thanks to Henry Zhu for all the useful input for this blog post.

Source:: 2ality

Cordova plugins in Angular

Hey you Angular developers. Even though Monaca has yet to support Angular development in its Cloud IDE, there is some good news. Just about a month ago, Cordova added type definitions (from DefinitelyTyped) to their core plugin repositories. So although that’s not released yet, you can already add those plugins to your Angular app.

This post shows you how to do that by means of a very simple example app. It assumes you’ve been using Monaca either through the command line (CLI) or the Localkit.

CLI

The Monaca CLI allows you to add plugins to your app via the handy monaca plugin add [plugin] command, where plugin is either the name of the plugin (e.g. “org.apache.cordova.camera”), its id (e.g. “cordova-plugin-camera”) or the URL of its GitHub repository (e.g. “https://github.com/apache/cordova-plugin-camera.git”). Since we want to get code that is merged but not released yet, the trick here is to make use of the last option.

So just navigate in your browser to the GitHub repo of the Cordova plugin you want to add and copy the URL. Before you leave, notice the recently added types folder, which is what allows the plugin to work with Angular’s TypeScript.

To do so, just run monaca plugin add followed by the link you just copied, from your project’s main directory, as shown below. This should be all you need. Skip the next section to see an example with Cordova’s globalization plugin.
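
For example, using the camera plugin repository mentioned above:

$ monaca plugin add https://github.com/apache/cordova-plugin-camera.git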

Localkit

If you are using Monaca’s Localkit, the process is a bit trickier and requires the help of the Monaca Cloud. You first need to upload your project there, even if you can’t edit or see your code.

Next, go in your browser to the GitHub repository of the plugin you want to add, click “Clone or download” and then “Download ZIP”. Save it in a location of your choice. Before you leave, notice the recently added types folder, which is what allows the plugin to work with Angular’s TypeScript.

Now open your project on the Monaca Cloud. Go on the following link:

Plugin management

Then choose “Import Cordova Plugin”, “Choose File”, select the ZIP file you downloaded previously and press “OK”. You should be able to see the plugin in the “Enabled Plugins” list (it might take a refresh) with the latest version and a “-dev” tag. Although you can’t tell, if all went well the plugin files have at this point been added to your project.

The Cloud can be used to directly build or debug your app, but you most likely want to put the newly installed plugin to work. Thus, go back to the Localkit and download the updated project. You may get a message saying that the two versions are not synchronized, but choose to continue the download anyway. You should be ready to use the plugin now. Slide on to the next section to see a usage example.

Example App

In this example I will show you a basic usage of the globalization Cordova core plugin.

I started by simply creating a fresh app named local-test with monaca create local-test on the command line, opting for Onsen UI and Angular 2 and the Minimum template (I could have done the same in Localkit), which created a single-page app with a button that triggers an alert() function. Then I edited that function to use one of the methods of the globalization plugin:

alert() {
  navigator.globalization.getPreferredLanguage(
    function (language) {
      alert('Your preferred language is ' + language.value + '. Right?');
    },
    function () {
      alert('Error getting language');
    }
  );
}

Here I am just getting the preferred language of the device and displaying a message with it, but hopefully you can use the plugins for much more complex and useful stuff.

You can find more info on each plugin’s usage and methods on its respective GitHub page. What are you waiting for?


Onsen UI is an open source library used to create the user interface of hybrid apps. You can find more information on our GitHub page. If you like Onsen UI, please don’t forget to give us a star. ★★★★★

Source:: https://onsen.io/

Using Zones in Angular for better performance

Screenshot of a timeline profile

In our latest article, we talked about how to make our Angular apps fast by exploring Angular’s ChangeDetectionStrategy APIs as well as tricks on how to detach change detectors, and more. While we covered many different options to improve the demo application’s performance, we certainly haven’t talked about all of them.

That’s why Jordi Collell pointed out that another option would be to take advantage of Zone APIs, to execute our code outside the Angular zone, which will prevent Angular from running unnecessary change detection tasks. He even put time and energy into creating a demo plunk that shows how to do exactly that.

We want to say thank you for his contribution and think that the solution he came up with deserves its own article. So in this article we’re going to explore his plunk and explain how Jordi used Zones to make our demo application perform at almost 60 fps.

Seeing it in action

Before we jump right into the code, let’s first take a look at the demo plunk with the running application. As a quick recap: the idea was to render 10.000 draggable SVG boxes. Rendering 10.000 boxes is not a super sophisticated task; however, the challenge lies in making the dragging experience as smooth as possible. In other words, we aim for 60 fps (frames per second), which can indeed be challenging, considering that Angular re-renders all 10.000 boxes by default whenever an event that we’ve bound to has fired.

Here’s the demo with the unoptimized version:

And here’s Jordi’s optimized plunk, which uses Angular’s NgZone APIs:

Even though the difference is rather subtle, the optimized version performs much better in terms of JavaScript execution per frame. We’ll take a look at some numbers later, but let’s quickly recap Zones and then dive into the code and discuss how Jordi used Angular’s NgZone APIs to achieve this performance first.

The idea of Zones

Before we can use Zone APIs and specifically the ones from Angular’s NgZone, we need to get an understanding of what Zones actually are and how they are useful in the Angular world. We won’t go into too much detail here as we’ve already written two articles on this topic:

  • Understanding Zones – Discusses the concept of Zones in general and how they can be used to e.g. profile asynchronous code execution
  • Zones in Angular – Explores how the underlying Zone APIs are used in Angular to create a custom NgZone, which enables consumers and Angular itself to run code inside or outside Angular’s Zone

If you haven’t read these articles yet, we definitely recommend you to do so as they give a very solid understanding of what Zones are and what they do. The bottom line is, however, Zones wrap asynchronous browser APIs, and notify a consumer when an asynchronous task has started or ended. Angular takes advantage of these APIs to get notified when any asynchronous task is done. This includes things like XHR calls, setTimeout() and pretty much all user events like click, submit, mousedown, … etc.
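
To illustrate the core idea (this is a deliberately simplified sketch, not the actual zone.js API), imagine wrapping setTimeout() so that a consumer gets notified whenever a scheduled task finishes:

// Simplified illustration of the Zone idea (NOT the zone.js API):
// wrap an asynchronous browser API so a consumer is notified
// when the asynchronous task has finished.
const originalSetTimeout = window.setTimeout;

window.setTimeout = function (callback, delay) {
  return originalSetTimeout(() => {
    callback();
    // This is the kind of hook Angular relies on: an asynchronous task
    // just finished, so application state may have changed.
    onTaskDone();
  }, delay);
};

function onTaskDone() {
  console.log('async task done - a good moment to perform change detection');
}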

Once notified, Angular knows that it has to perform change detection because any of the asynchronous operations might have changed the application state. This, for instance, is always the case when we use Angular’s Http service to fetch data from a remote server. The following snippet shows how such a call can change application state:

@Component(...)
export class AppComponent {

  data: any; // initial application state

  constructor(private dataService: DataService) {}

  ngOnInit() {
    this.dataService.fetchDataFromRemoteService().subscribe(data => {
      this.data = data // application state has changed, change detection needs to run now
    });
  }
}

The nice thing about this is that we as developers don’t have to care about notifying Angular to perform change detection, because Zones will do it for us as Angular subscribes to them under the hood.

Okay, now that we touched on that, let’s take a look at how they can be used to make our demo app fast.

Running outside Angular’s Zone

We know that change detection is performed whenever an asynchronous event happens and an event handler is bound to that event. This is exactly the reason why our initial demo performs rather janky. Let’s look at AppComponent‘s template:

@Component({
  ...
  template: `
    <svg (mousedown)="mouseDown($event)"
         (mouseup)="mouseUp($event)"
         (mousemove)="mouseMove($event)">

      <svg:g box *ngFor="let box of boxes" [box]="box">
      </svg:g>

    </svg>
  `
})
class AppComponent {
  ...
}

Three event handlers are bound to the outer SVG element. When any of these events fire and their handlers have been executed, change detection is performed. In fact, this means that Angular will run change detection even when we just move the mouse over the boxes without actually dragging a single box!

This is where taking advantage of NgZone APIs comes in handy. NgZone enables us to explicitly run certain code outside Angular’s Zone, preventing Angular from running any change detection. So basically, handlers will still be executed, but since they won’t run inside Angular’s Zone, Angular won’t get notified that a task is done and therefore no change detection will be performed. We only want to run change detection once we release the box we are dragging.

Okay, how do we achieve this? In our article on Zones in Angular, we already discussed how to run code outside Angular’s Zone using NgZone.runOutsideAngular(). All we have to do is to make sure that the mouseMove() event handler is only attached and executed outside Angular’s zone. In addition to that, we know we want to attach that event handler only if a box is being selected for dragging. In other words, we need to change our mouseDown() event handler to imperatively add that event listener to the document.

Here’s what that looks like:

import { Component, NgZone } from '@angular/core';

@Component(...)
export class AppComponent {
  ...
  element: HTMLElement;

  constructor(private zone: NgZone) {}

  mouseDown(event) {
    ...
    this.element = event.target;

    this.zone.runOutsideAngular(() => {
      window.document.addEventListener('mousemove', this.mouseMove);
    });
  }

  mouseMove(event) {
    event.preventDefault();
    this.element.setAttribute('x', event.clientX + this.clientX + 'px');
    this.element.setAttribute('y', event.clientY + this.clientY + 'px');
  }
}

We inject NgZone and call runOutsideAngular() inside our mouseDown() event handler, in which we attach an event handler for the mousemove event. This ensures that the mousemove event handler is really only attached to the document when a box is being selected. In addition, we save a reference to the underlying DOM element of the clicked box so we can update its x and y attributes in the mouseMove() method. We’re working with the DOM element instead of a box object with bindings for x and y, because bindings won’t be change detected since we’re running the code outside Angular’s Zone. In other words, we do update the DOM, so we can see the box is moving, but we aren’t actually updating the box model (yet).

Also, notice that we removed the mouseMove() binding from our component’s template. We could remove the mouseUp() handler as well and attach it imperatively, just like we did with the mouseMove() handler. However, it won’t add any value performance-wise, so we decided to keep it in the template for simplicity’s sake:

<svg (mousedown)="mouseDown($event)"
      (mouseup)="mouseUp($event)">

  <svg:g box *ngFor="let box of boxes" [box]="box">
  </svg:g>

</svg>

In the next step, we want to make sure that, whenever we release a box (mouseUp), we update the box model, plus, we want to perform change detection so that the model is in sync with the view again. The cool thing about NgZone is not only that it allows us to run code outside Angular’s Zone, it also comes with APIs to run code inside the Angular Zone, which ultimately will cause Angular to perform change detection again. All we have to do is to call NgZone.run() and give it the code that should be executed.

Here’s our updated mouseUp() event handler:

@Component(...)
export class AppComponent {
  ...
  mouseUp(event) {
    // Run this code inside Angular's Zone and perform change detection
    this.zone.run(() => {
      this.updateBox(this.currentId, event.clientX + this.offsetX, event.clientY + this.offsetY);
      this.currentId = null;
    });

    window.document.removeEventListener('mousemove', this.mouseMove);
  }
}

Also notice that we’re removing the event listener for the mousemove event on every mouseUp. Otherwise, Angular would keep performing change detection on every mouse move, even though we’ve released the box already. In addition to that, we would pile up event handlers, which could not only cause weird side effects but also blow up our runtime memory.

Measuring the performance

Alright, now that we know how Jordi implemented this version of our demo application, let’s take a look at some numbers! The following numbers have been recorded using the exact same techniques on the exact same machine as in our previous article on performance.

  • 1st Profile, Event (mousemove): ~0.45ms, ~0.50ms (fastest, slowest)
  • 2nd Profile, Event (mousemove): ~0.39ms, ~0.52ms (fastest, slowest)
  • 3rd Profile, Event (mousemove): ~0.38ms, ~0.45ms (fastest, slowest)

Conclusion

Using Zones is a great way to escape Angular’s change detection, without detaching change detectors and making the application code too complex. In fact, it turns out that Zones APIs are super easy to use, especially NgZone‘s APIs to run code outside or inside Angular. Based on the numbers, we can even say that this version is about as fast as the fastest solution we came up with in our previous article. Considering that the developer experience is much better when using Zones APIs, since they are easier to use than manually detaching and re-attaching change detector references, it’s definitely the most “beautiful” performance improvement we have so far.

However, we shouldn’t forget that this solution also comes with a couple of (probably fixable) downsides. For example, we’re relying on DOM APIs and the global window object, which is something we should always try to avoid. If we wanted to use this code on the server side, then direct access to the window variable would be problematic. We will discuss these server-side specific issues in a future article. For the sake of this demo, this isn’t a big deal though.

Again, a huge shout-out goes to Jordi Collell, who not only suggested this option but also took the time to actually implement a first version of this demo!

Source:: Thoughtram

AngularJS 1.x Fundamentals (Part 2)

By thomasnyambati

In the previous article in this series, we looked into AngularJs concepts and features – what AngularJs is, and the features that make it stand out. In this article, we will take a deeper look at some of these features and how they can be used to build awesome and robust web applications.

Before we proceed, I would like to clarify that any reference to Angular in this article means Angular version 1.x. Let’s not confuse it with Angular 2 – even though some of the concepts are similar, the two differ in many ways.

The features we will be looking into in this article include:

  • Modules
  • Controllers and,
  • Data binding

Modules

Structuring code is key to building maintainable software. The good news for us is that with Angular we can easily divide front-end code into reusable components called modules. A module is basically a container that holds the different components of your application under one name.

Most applications have a main method that instantiates and wires all the different parts of the application. However, this is not the case with AngularJs. Instead, modules declaratively specify how the application will be bootstrapped and executed.

Using this approach comes with several benefits:

  • The declarative process is easier to understand.
  • The codebase can be packaged into reusable components.
  • The modules can be loaded in any order (or even in parallel) because modules delay the execution.
  • Unit tests only have to load relevant modules, which keeps them fast.
  • End-to-end tests can use modules to override configuration.

Declaring a Module

A module is declared using the angular.module() function. This function takes up to three arguments: the module name, the module dependencies, and a configuration function. In some cases a module may not have any dependencies – you then pass an empty array – and you can leave out the configuration function if it is not required. The code snippet below shows the syntax and an example of module declaration.

// Angular module syntax
angular.module(name, [requires], configFn);

// Declare a module
angular.module('myApp', []);

Once you have declared your module, the method will return a reference to the newly created module, which can be used to attach other components like controllers, directives, services and so forth. Look at angular.module() as a global API for creating and retrieving modules and registering components.

Retrieving a module

Ok, now that we have declared our module, how do we retrieve it? Retrieving modules can be tricky, and mistakes here account for some of the bugs you may encounter while working with AngularJs. To avoid them, it is important to always remember that angular.module('myModule', []) will create the module myModule and overwrite any existing module named myModule; we should therefore use angular.module('myModule') to retrieve an existing module. Note that it does not have the dependency array.

// Declaring a module
angular.module('sampleModule',[])

// Retrieving a module
function GreetController() {
    this.greetings = " Hey there i am a controller";
}

angular.module('sampleModule')
    .controller('GreetController', [GreetController]);

You can take a look at the jsfiddle implementation below.

Recommended Setup

In the above example, we managed to declare a simple module and attach a simple controller to it. However, this approach will not scale when it comes to big and complex applications. Instead, it is recommended to break your application into multiple modules. This might look something like:

  • Application level module to attach other modules and initialization code.
  • A module for each feature.
  • A module for reusable components.

For more details on how to structure modules for bigger applications, refer to the community style guide. It is important to note that what I have mentioned above is a mere suggestion, and you are free to modify it or come up with a workflow that works well for you and your team.

// Modules for controller
angular.module('myApp.controller', [])
    .controller('GreetController', function(ResponseService) {
        this.greetings = ResponseService.greetings
    });

// Module for services
angular.module('myApp.service', [])
    .service('ResponseService', function() {
        this.greetings = "Nice to meet you controller, I am service";
    });

// Application level module to attach services, directives,controllers or filters modules
angular.module('myApp', ['myApp.service', 'myApp.controller'])

Below is a jsfiddle on the same.


Module loading & Dependencies

A module is a collection of configuration and run blocks that get applied to the application during the bootstrap process. In its simplest form, a module consists of two kinds of blocks:

  • Configuration Blocks
  • Run Blocks

Configuration blocks

These are functions that get executed during the provider registration and configuration phase. It is important to note that only providers and constants can be injected into configuration blocks. The reasoning behind this is to prevent services from being accidentally instantiated before they are fully configured.

Configuration Blocks are denoted by the .config() function.

// Configuration Blocks syntax
angular.module('myModule', [])
    .config(function(injectables) {
        // provider-injector
        // You can only inject Providers (not instances) into config blocks.
    });

// Example Configuration Blocks
angular.module('myModule', [])
    .config(function($provide, $compileProvider, $filterProvider) {
        $compileProvider.directive('directiveName', ...);
        $filterProvider.register('filterName', ...);
    });

In the example above, we have injected $provide, $compileProvider and $filterProvider into the configuration block and used the providers to register the directive and the filter before they can be used in our application. Note that when bootstrapping, Angular applies all constant definitions first and then applies the configuration blocks in the same order they were registered.

Run Blocks

Run blocks are the closest thing to the main method in AngularJs. Run blocks are executed to kickstart the application. They are executed after all the services have been configured and the injector has been created.

The code wrapped by run blocks is typically hard to unit test and, for this reason, should be declared in isolated modules so that it can be ignored in unit tests.

As with configuration blocks, only instances and constants should be injected into run blocks; this prevents further configuration during application run time.


angular.module('myModule')
    .run(function(injectables) {
        // instance-injector
        // You can only inject instances (not Providers) into run blocks
    });

Dependencies

As we have seen in the previous examples, modules can list other modules as their dependencies. When a module specifies that it depends on another module, the required module needs to be loaded before the requiring module.

In other words, the configuration blocks of the required modules execute before the configuration blocks of the requiring module. The same is true for the run blocks. Each module can only be loaded once, even if multiple other modules require it.
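
A minimal sketch of that ordering, with module names invented for illustration:

// 'core' is required by 'app', so its config block runs before app's
// config block, and its run block before app's run block.
angular.module('core', [])
    .config(function() { console.log('core config'); })
    .run(function() { console.log('core run'); });

angular.module('app', ['core'])
    .config(function() { console.log('app config'); })
    .run(function() { console.log('app run'); });

// Logged order on bootstrap: core config, app config, core run, app run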


Controllers

Controllers are constructor functions used to augment the Angular scope. A controller is responsible for responding to user input and performing interactions on the data model objects. It receives input, validates it, and then performs the business operations that modify the state of the model.

When a controller is attached to the DOM, Angular will instantiate a new Controller object, using the specified controller’s constructor function. It then pulls together the model used by the view and sets up any corresponding dependencies needed to render the view or handle input from the consumer of the view. In other words, the controller can be used to set up initial state and add behaviours to the scope object.

Let’s look at an example below

...
function MyController(UserService) {
   // The controller business logic goes in here
   this.users = UserService.all();
}
...

In our first article, we said controllers act as the middleman between the model and the view (template). In the above example, when the controller is instantiated it will fetch all users from the UserService – our model – and make them available as the scoped variable users, which can be accessed by referencing the controller once attached to the view.

Declaring a Controller

In Angular, we declare controllers before we use them in our templates (views). This is done through the .controller() method, which keeps the controller’s constructor function out of the global scope.

This method registers the controller with a name that can be used to associate it with a view. The example below illustrates how to declare a controller.

// module declation syntax
angular.module('Modulename',[])
    .controller('ControllerName', [controllerFn])

Using the example controller above, we can declare it as shown below


// The controller
function MyController(UserService) {
   // The controller business logic goes in here
   this.users = UserService.all();
}

// Declaring a controller
angular.module('MyApp', [])
    .controller('MyController',[MyController]);

Once declared, MyController can be associated with the relevant view in the Angular application.

Attaching Properties and Functions to Scope

As we have mentioned above, controllers are JavaScript constructor functions that act as the middleman between the model and the view. However, for a controller to be used, it has to be associated with a module, directive or component.

Using the controller example above, let’s see how we can associate controllers with the view in Angular.

Association via ngController Directive

The ngController directive attaches a controller class to the view. When Angular compiles the view, it will associate the specified controller and use it to bind data and functions to the scope. It is recommended to use the controller as syntax. This approach binds the properties directly to the controller, making it easier to access data and methods, especially when multiple controllers are in play.

Let’s look at an example that demonstrates the controller as syntax.

...
// Greet controller
function GreetController() {
    this.name = 'Mister Awesome'
    this.greet = function() {
        alert(this.name)
    };
}
....

Associating the controller to the view

...
<!--attach the controller to the view-->
<div ng-controller="GreetController as $ctrl">
    <!--Display the name -->
    <p>{{ $ctrl.name }}</p>
</div>
...

In the example above, when ngController initializes the controller it assigns it an alias of $ctrl, which we use to access the controller’s properties and methods from our view.

Below is an implementation of the same in jsfiddle.

Setting up the Initial State

When controllers are initialized, they should have the initial state that will be displayed to the users. This might be default information, such as user details on a profile page. A controller’s initial state can be set by attaching properties to the controller scope; these properties will be available to the template when the controller is registered.

The initial state can consist of constant values or data fetched via HTTP requests to your server. The example below shows how to set the initial state of a controller.

....
// Controller with initial state
function ContactsController() {
    this.contacts = {
        name: "Thomas Awesome",
        address:"249 Union Avenue, Brooklyn"
    }
}
....

With the example above, when the controller is initialized the scoped variable contacts will be initialized to the value of the contacts object.

Adding Behaviour to the Scope Object

Web applications entail views that react to events and data passed from the user to the server, or vice versa. This may also include user-initiated events such as a form submit, login, logout, etc.

In order to react to these events or execute computations, we must add behaviour to the controller scope. This is achieved by adding methods to the controller, which are later made available to the view. If you look at the example above, we have only displayed the initialized values.

Let us add a function that changes the address to 28 W 245 New York.

...
// Controller with function that changes its properties value.
function ContactsController() {
    var self = this;
     self.contacts = {
         name: "Thomas Awesome",
         address:"249 Union Avenue, Brooklyn"
    }
    // Function that changes the address 
    self.changeAddress = function() {
        self.contacts.address = "28 W 245 New York";
    }
}
...
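
To trigger this behaviour from the view, we can bind the function to a click event. Here is a minimal sketch using the controller as syntax from earlier:

...
<div ng-controller="ContactsController as $ctrl">
    <p>{{ $ctrl.contacts.name }} lives at {{ $ctrl.contacts.address }}</p>
    <!-- Invokes the scoped function, which updates the address -->
    <button ng-click="$ctrl.changeAddress()">Change address</button>
</div>
...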

Using Controllers Correctly

In general, a controller shouldn’t try to do too much. It should contain only the business logic needed for a single view. The most common way to keep controllers slim is by encapsulating work that doesn’t belong in controllers into services, and then using these services in controllers via dependency injection. The controller should only be used to set up initial state and add behaviour to the view; anything beyond this should be delegated to services or directives.

Here are some of the scenarios that a controller shouldn’t be used for:

Manipulate DOM

Sometimes we get carried away with our awesomeness and want to do some serious DOM manipulation in the controller. It will work, no doubt, but introducing presentation logic into controllers significantly affects their testability. Controllers should contain only business logic and nothing else; Angular has data binding for most cases and directives to encapsulate manual DOM manipulation.

Format input

Thou shalt not format inputs in your controller – use Angular form controls instead. Angular comes shipped with awesome form controls which you can leverage to make your inputs look awesome and give your users an amazing user experience.

Filter output

Never ever filter your outputs in controllers – use Angular filters instead. Angular has built-in filters you can use to format outputs like dates, JSON, etc. It also provides a way to build your own, making it easier to give your app unique data filters. A minimal example follows below.
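
For instance, a date value can be formatted right in the view with the built-in date filter (the createdAt property is invented for illustration):

...
<!-- The view formats the raw value; the controller stays clean -->
<p>Member since: {{ $ctrl.user.createdAt | date:'longDate' }}</p>
...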

Share code or state across controllers

Controllers only act as a middleman between the model and the view. If you are looking to share functionality across controllers, use Angular services instead. With Angular services, you can share common functionality across your application.

Many of the don’ts above recommend other Angular features we have not yet covered. Worry not – we will cover them in detail in subsequent articles.

Data binding

Data binding is the automatic synchronisation of data between the model and the view. If changes happen in the view or the model, AngularJs will propagate those changes to the model and the view respectively.

The concept of data binding enables us to provide real-time changes to our view whenever data in the model changes. This gives users a desktop-app experience – they don’t have to reload the browser to see updated data.

Data binding in Angular is classified into three parts:

  1. One-time data binding,
  2. One-way data binding, and
  3. Two-way data binding.

We are going to look at them one by one to get a deeper understanding of how they work.

One-Time Data Binding

The Angular $digest cycle essentially loops through all the bindings, checks for any changes, and re-renders the values. This has performance implications for our apps, especially as an application scales. To solve this problem, Angular introduced the concept of one-time data binding.

The main purpose of a one-time binding expression is to provide a way to create a binding that gets deregistered and frees up resources once the binding has stabilised. Reducing the number of expressions being watched makes the digest loop faster and allows more information to be displayed at the same time.

One-time data binding is achieved by using the one-time expression prefix ::. One-time expressions stop recalculating once they are stable, which happens after the first digest if the expression result is a non-undefined value. What this means is that once we have our value, the binding is released from the $digest watchers – one less binding to worry about.

Let us look at an example

...
// Controller demonstrating One -Time Data binding.
function EventController() {
  var counter = 0;
  var self = this;
  var names = ['Igor', 'Misko', 'Chirayu', 'Lucas'];
  // exposing the click event to the scope
  self.clickMe = function(clickEvent) {
    self.name = names[counter % names.length];
    counter++;
  };
}
...

angular.module('oneTimeBinding', [])
  .controller('EventController', [EventController]);

Associate the controller to the view.

...
<div ng-controller="EventController as $ctrl">
  <button ng-click="$ctrl.clickMe($event)">Click Me</button>
  <p>One time binding: {{ ::$ctrl.name }}</p>
  <p>Normal binding: {{ $ctrl.name }}</p>
</div>
...

In the above example, we have bound the name variable using a one-time expression and also using a normal expression. When the button is clicked, the value of the normal binding will keep changing, while the one-time binding will retain the first value bound – which is Igor in this instance.

You can reference the example below to see how it works.

One-Way Data Binding

One-way data binding is the unidirectional propagation of data from the scope to the view, or from a parent component to a child component. In AngularJs there are instances where we only require data changes in the scope to be reflected in the view, and not the other way round.

A good example would be displaying student scores, assuming we are building a student management system. In this instance you only want to display the score, and if the score changes you need that change reflected in the view. Any changes that happen in the view should not affect the model or the scope.

This can be achieved by using ng-bind or the expression directive {{ }}. By now you should be familiar with this way of binding, having used it in several examples above. Let’s look at the code sample below.

// Controller demonstrating One-way
function StudentScoresController() {
  this.class = 'Class One';
  this.scores = [{
    name: 'Alex Magana',
    score: 40
  }, {
    name: 'Nduta Opksey',
    score: 70
  }, {
    name: 'Lawrence Mocha',
    score: 90
  }];
}
...

In the view.

...
<div ng-controller="StudentScores as $ctrl" class="container jumbotron">
  <table class="table table-striped table-bordered">
    <tr>
      <td>Name</td>
      <td>Score</td>
    </tr>
    <tr ng-repeat="student in $ctrl.scores">
      <td>{{ student.name }}</td>
      <td>{{ student.score }}</td>
    </tr>
  </table>
</div>
  ...

One-way data binding has also been introduced in the .component() and directive() methods, allowing us to pass data to child components without affecting the parent. More details on data binding will be covered in the directives section of this article series.

Two-Way Data Binding

Two-way data binding is what we love most about Angular. Changes in the view and the model are automatically synchronised and reflected in the model and the view respectively. This means we do not have to manage or manually update the view or model when changes happen on either side – Angular handles that for us.

The best scenario for two-way data binding is forms. In this instance, you might want to ensure that the data you are collecting from users is correct. With the use of ng-model, we can instantly display user input while they type. This also makes it easier to do instant validation as the input changes.

...
// Controller for the form
function FormController() {
  this.user = {
    name: "Moses Koena",
    age: 24
  };
}
...

Let’s associate the above controller to the view.

<div ng-controller="FormController as $ctrl">
  <p>Name: {{ $ctrl.user.name }}</p>
  <p>Age : {{ $ctrl.user.age }}</p>

  <input type="text" ng-model="$ctrl.user.name">
  <input type="number" ng-model="$ctrl.user.age">
</div>

When the user makes any changes in the form, the value of our user object will change automatically without us clicking any buttons. Isn’t this beautiful?

Here is a live example you can examine to see first hand how the two-way data binding magic works.

Conclusion

AngularJs was built to make building web applications easy and fun. We have seen how you can use modules to structure your codebase, and how controllers augment the view by attaching properties and functions to the scope. We have also looked into the classifications of data binding – one-time, one-way and two-way – and the scenarios under which to use each.

In most cases, you will use all of these features together to deliver a seamless and elegant desktop-like experience in your application. Therefore it is important to know how to integrate all of these awesome, powerful features. I will leave it to you to explore and find your footing.

What we have discussed here is just a starting point; I would recommend you research further to understand how you can use these features to your advantage.

Source:: scotch.io