Monthly Archives: March 2017

Node.js Weekly Update - 31 March, 2017

By Gergely Németh


Below you can find RisingStack's collection of the most important Node.js news, projects & updates from this week:

1. Deploying Node.js Microservices to AWS using Docker

This post focuses on building a simple microservice and packaging it in a docker container, then hosting the container on AWS.

Deploying microservices to the cloud is plagued with lots of complexity. To simplify the microservice portion, we’re going to use an NPM library called Hydra – which will greatly simplify the effort while offering considerable scalability benefits. Even if you choose not to use Hydra, the information in this post should help you get started with AWS and Docker.

2. The Definitive Guide for Monitoring Node.js Applications

This article is about running and monitoring Node.js applications in Production.

Let’s discuss these topics:

  • What is monitoring?
  • What should be monitored?
  • Open-source monitoring solutions
  • SaaS and On-premise monitoring offerings

3. Six Reasons Why JavaScript’s Async/Await Blows Promises Away

In case you missed it, Node has supported async/await out of the box since version 7.6.

Why is it better?

  • Concise and clean
  • Error handling
  • Conditionals
  • Intermediate values
  • Error stacks
  • Debugging
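To make the first two points concrete, here is a small, hypothetical comparison (getUser and getPosts are placeholder functions that return promises) showing the same flow written with promise chaining and with async/await:

function fetchUserPosts(userId) {
  return getUser(userId)
    .then(user => getPosts(user.id))
    .catch(err => console.error(err));
}

async function fetchUserPostsAsync(userId) {
  try {
    const user = await getUser(userId);  // intermediate values are just variables
    return await getPosts(user.id);
  } catch (err) {
    console.error(err);                  // one place to handle errors
  }
}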

4. Slaying Monoliths at Netflix with Node.js

The growing number of Netflix subscribers – nearing 85 million at the time of this Node.js Interactive talk – has generated a number of scaling challenges for the company.

In his talk, Yunong Xiao, Principal Software Engineer at Netflix, describes these challenges and explains how the company went from delivering content to a global audience on an ever-growing number of platforms, to supporting all modern browsers, gaming consoles, smart TVs, and beyond.

5. Continuous Deployment of a Dockerized Node.js Application to AWS ECS

Find out how to set up a continuous deployment pipeline with AWS ECS, AWS CloudFormation, Node.js, Docker, and Semaphore.

The end goal is to have a workflow that allows us to push code changes up to GitHub and have them seamlessly deployed on AWS ECS. To accomplish this, we’ll have Semaphore watch our GitHub repository, test it whenever changes are made, and deploy it if the branch being updated is master.

6. Case Study: Node.js at Capital One

Capital One CIO Robert Alexander has encouraged the company’s IT department to operate like a startup and embrace open source software development.

The result has been several different project teams using Node.js to rapidly prototype and build new applications inside the business, cutting the time of the development cycle significantly and fostering a climate of innovation.

7. Regressions in v4.8.1 and v6.10.1

So it would appear that there is a memory leak in v4.8.1 and v6.10.1.

Latest Node.js Releases

○ Node v7.8.0 (Current)

  • buffer:
    • do not segfault on out-of-range index
  • crypto:
    • Fix memory leak if certificate is revoked
  • deps:
    • upgrade npm to 4.2.0
    • fix async await desugaring in V8
  • readline:
    • add option to stop duplicates in history

Previously in the Node.js Weekly Update

In the previous Node.js Weekly Update we read about the new LTS and Current versions, e2e testing with Nightwatch, the free Orgs that npm announced, and more.

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Source:: risingstack.com

Configuring Azure Functions: Intellisense via JSON Schemas

By John Papa


I build a lot of Angular apps. They all need data, and often I find myself building a Node server to host my API calls and serve my data. There is a lot of ceremony in setting up a web API, which is one reason why I have been interested in serverless functions. Enter Azure Functions, which make it incredibly easy to get started creating some of your own APIs.
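To give a sense of how little ceremony is involved, here is a minimal sketch of an HTTP-triggered function using the Node.js programming model of the time – the route, the response body and the file layout are placeholders rather than anything specific to my apps:

// index.js – a hypothetical HTTP-triggered Azure Function
module.exports = function (context, req) {
    // req holds the incoming HTTP request; context.res becomes the response
    context.res = {
        status: 200,
        body: { message: 'Hello from an Azure Function!' }
    };
    // signal that the function has finished
    context.done();
};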

I’ll be posting more about Angular apps and Azure Functions and my experiences with them, including some quickstarts.

Configuring

Azure Functions can be configured a few ways. One of the most flexible is to use JSON files.

When I set up an Azure Function, there are two key configuration files I often use: function.json and host.json. The entire Azure Function app service has one host.json file to configure the app. Every end-point function has its own configuration, defined in a function.json file. Pretty straightforward overall.

Now, I love JSON. It makes configuring the functions so easy … except when I have to remember every setting. Ugh. This is where JSON schemas make a huge and positive difference in the development experience.


I use VS Code, which has an easy way to provide intellisense for JSON files. Fortunately, each of the Azure Functions JSON files has a defined schema in the popular GitHub repo https://github.com/SchemaStore/schemastore. This makes it easy to reference the schema from most modern code editors.

Identifying the JSON Schemas to VS Code

Add this json.schemas property to your settings.json file in VS Code.

"json.schemas": [
  {
    "fileMatch": [
      "/function.json"
    ],
    "url": "http://json.schemastore.org/function"
  },
  {
    "fileMatch": [
      "/host.json"
    ],
    "url": "http://json.schemastore.org/host"
  }
],

Once added, go back to a function.json file and you'll see that we get some help in the editor! This makes it easier to focus on writing the function code, and spend less time remembering every JSON property and its domain of values.


I’ll follow up with more posts on how to create and configure Azure Functions. In the meantime, check out this post on using the Azure CLI with Azure Functions, by Shayne Boyer.

Source:: johnpapa

Start your Vue Mobile App Now. Onsen UI for Vue 2 Beta Release

By Fran Dios

Onsen UI and Vue.js

We are excited to announce that the Vue Components for Onsen UI beta version is finally out. vue-onsenui@2.0.0-beta.0 is now available on NPM and you can start creating Vue mobile apps with the power of Onsen UI. Vue UI for mobile devices made easy.

In this article we also show how to create a kitchen sink app that includes most of the components with and without Vuex.

Some Background First

If you are a Vue developer who wants to create mobile apps and has not heard about Onsen UI yet, here is a brief introduction.

Onsen UI is an open source library used to create the user interface of hybrid apps with a focus on simplicity and ease of use. It is built on top of Custom Elements (Web Components standard) but provides wrappers to support multiple JS frameworks such as Vue. Even though we are releasing Vue support now, we have been here for a while supporting other frameworks and letting many developers bootstrap their mobile apps.

Onsen UI apps are made with the Cordova framework, which provides an API to use native features in hybrid apps via plugins. Onsen UI belongs to the Monaca Platform, a set of tools that eases everything related to Cordova development. Mobile debugger, CLI, templates, backend solutions, push notifications and direct publication in app stores are some of the available features. Onsen UI is the open source UI framework used to create apps in Monaca although, of course, it can be used separately for Cordova apps or mobile sites as well.

Beta News

During the alpha we changed the API a couple of times to make it feel more natural for Vue developers. During the beta, the API itself is not expected to be significantly modified. In general, only new features and fixes are likely to happen, so updating to new versions should be quick and easy. In fact, the API has not changed since alpha.1.

New Apps with Monaca CLI/Localkit

Vue hybrid apps are now possible with Onsen UI & the Monaca Toolkit. As mentioned in the last blog post, Monaca CLI & Localkit have been updated to include Vue templates. This gives you a Cordova app template prepared to work with Onsen UI and Vue. Furthermore, it will soon be updated to support Webpack v2 (stay tuned!).

Kitchen Sink App with Vue CLI

Since the previous templates use Webpack v1, this example has been created with Vue CLI, which already supports Webpack v2. Unlike the previous ones, the project will not have a Cordova-like structure, but that is enough for this example. It is based on the webpack-simple template and modified afterward to support Onsen UI.

Before starting with this app, we highly recommend having a look at the docs overview (or the tutorial app) in order to understand the basics of Onsen UI, its components and how everything is connected to Vue.

Prevue

First of all, let’s see a working version of this app so you can get an idea of what we are doing. Here is the main view.

These two frames share the same source code, but the optional Onsen UI autostyling feature changes the effects and styles according to the platform.

There are two versions of this app. The first one uses Vuex to manage the app state and can be found here. If you do not want Vuex in your app, check out this branch for an example where the state is simply passed down as props to other components.

Webpack 2 Configuration

The Webpack configuration file that comes with the webpack-simple template has been slightly modified to fit our needs. Apart from including a couple of plugins, the most important change is related to the CSS loader. Onsen UI bundles css/onsen-css-components.css, which contains the transpiled CSS components including browser prefixes and everything else needed to work right away. However, in case we want to modify the theme colors, the source css-components-src/src/onsen-css-components.css is also included and can be modified. It might be a bit confusing since both files have a .css extension, but the source uses some new CSS features that are not yet natively supported in some browsers (variables, apply, etc.), thus requiring some processing. Specifically, we need postcss-loader and postcss-cssnext. The full configuration is located here.
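As a rough sketch (loader options simplified, and not this project's exact files), the relevant part of a Webpack 2 rule for processing the source CSS components might look like this:

// webpack.config.js (excerpt) – illustrative only
module: {
  rules: [
    {
      test: /\.css$/,
      use: [
        'style-loader',
        'css-loader',
        {
          loader: 'postcss-loader',
          options: {
            // postcss-cssnext transpiles CSS variables, @apply, etc.
            plugins: () => [require('postcss-cssnext')()]
          }
        }
      ]
    }
  ]
}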

Choosing Components Order

There are 3 main navigation components for handling multiple pages in Onsen UI: VOnsNavigator, VOnsTabbar and VOnsSplitter. These components are meant to have children and manage them. Therefore, we get different behavior depending on the order that we choose for them (if we ever need to combine them). The rest of the components can be considered as, in general, mere “leaves” in the DOM tree.

In this app we want to have a VOnsTabbar that separates different sections (home, forms and animations). We also want to have a side menu common to all of these sections showing some links. In order to achieve this behavior, we set VOnsSplitter as the parent of VOnsTabbar. This way, we can change VOnsTabbar‘s inner pages while the VOnsSplitter is not affected. Note that, if we change VOnsSplitter‘s content in this scenario, the VOnsTabbar would disappear.

After that, we also want to show some extra pages featuring other components. In these new pages we don’t need to switch between sections with the VOnsTabbar or show the menu; we just want to see the current component. Therefore, we can set a VOnsNavigator as the parent of the previous two components for this purpose. It will push pages over the other components and hide them until we pop the page.
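Put together, the containment hierarchy described above looks roughly like this. Note that this is a conceptual sketch rather than literal markup – in practice the VOnsNavigator renders its pages from a page-stack array instead of slot children, and props and handlers are omitted here:

<v-ons-navigator>            <!-- pushes full-screen pages on top of everything -->
  <v-ons-splitter>           <!-- side menu shared by all sections -->
    <v-ons-splitter-side> ... menu items ... </v-ons-splitter-side>
    <v-ons-splitter-content>
      <v-ons-tabbar> ... home / forms / animations tabs ... </v-ons-tabbar>
    </v-ons-splitter-content>
  </v-ons-splitter>
</v-ons-navigator>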

Code Highlights

  • Custom Toolbar

Onsen UI provides customizable components that can be wrapped in other components depending on our needs. Here we will need a VOnsBackButton most of the time, with a custom label, title and some optional button on the right side. Therefore, we can create an extra CustomToolbar wrapping VOnsToolbar:

<template>
  <v-ons-toolbar>
    <div class="left">
      <slot name="left">
        <v-ons-back-button v-if="backLabel">
          {{ backLabel }}
        </v-ons-back-button>
      </slot>
    </div>
    <div class="center"><slot>{{ title }}</slot></div>
    <div class="right"><slot name="right"></slot></div>
  </v-ons-toolbar>
</template>

This allows us to add a toolbar in a single line:

<custom-toolbar back-label="Back">Title</custom-toolbar>
  • Navigator

By default, VOnsNavigator runs this.pageStack.pop() when VOnsBackButton is tapped or the device back button fires. As usual, we can prevent the default behavior with @click.prevent="..." and @deviceBackButton.prevent="...", respectively, but there is a handier way for situations where we want to change all of them. Use the new popPage prop to override the way VOnsNavigator pops a page:

<v-ons-navigator
  :page-stack="$store.state.navigator.stack"
  :options="$store.state.navigator.options"
  :pop-page="() => this.$store.commit('navigator/pop')"
></v-ons-navigator>

Additionally, we can pass the options prop to change the transition animations.

  • Tabbar

On iOS, tabs usually contain an icon, unlike on Android (Material Design), where most of the time they contain just a label. We can choose to display icons only on iOS for VOnsTab. In order to do this, simply pass a null value to the icon prop for Android:

{
  label: 'Home',
  icon: this.$ons.platform.isAndroid() ? null : 'ion-home',
  page: Home
}

Something similar applies to the titles. In Material Design, the title usually stays fixed when changing between tabs:

computed: {
  title() {
    return this.$ons.platform.isAndroid()
      ? 'Kitchen Sink'
      : this.tabs[this.index].label
    ;
  }
}
  • Pull Hook

Making a Twitter-like pull to refresh icon is quite simple:

<v-ons-icon
  :spin="state === 'action'"
  :icon="state === 'action' ? 'fa-spinner' : 'fa-arrow-down'"
  :rotate="state === 'preaction' && 180"
></v-ons-icon>

Do not forget to add a transition for the icon: transition: transform .2s ease-in-out.

  • Forms

In this app we are not sharing input values with other components, so they are not saved in the store. However, they can easily be connected by using the VOnsModel directive and computed props:

<v-ons-switch v-ons-model="switchOn" />
...
computed: {
  switchOn: {
    get() {
      return this.$store.state.switchOn;
    },
    set(newValue) {
      this.$store.commit('switch', newValue);
    }
  }
}

Vuex

Since multiple navigation components are combined in this app, Vuex becomes quite handy to manipulate a component from anywhere in the app. For example, changing a VOnsTabbar section directly from VOnsSplitter. It is not required, though, and the same functionality can be achieved without it.

We might add a template store object prepared for these components that can be imported from vue-onsenui package if developers consider it useful.
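As a purely hypothetical illustration (the names are made up and this is not the store shipped with the kitchen sink app), such a store could keep the current tabbar index and expose a mutation that any component – including an item in the splitter's side menu – can commit:

// store.js – illustrative sketch only
import Vue from 'vue';
import Vuex from 'vuex';

Vue.use(Vuex);

export default new Vuex.Store({
  state: {
    tabbarIndex: 0 // which section of the VOnsTabbar is visible
  },
  mutations: {
    setTabbarIndex (state, index) {
      state.tabbarIndex = index;
    }
  }
});

// Somewhere in the splitter's menu:
// this.$store.commit('setTabbarIndex', 2); // jump to another tab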

What about Vue Router?

Onsen UI already provides components for simple yet powerful routing based on push/pop. It is possible to have multiple page stacks and combine them, making an extra router largely unnecessary. Even though it is not a priority, we will try to support Vue Router for those developers who prefer it. We’ll have some updates soon regarding this topic.

Conclusion

We hope you are as excited as we are about this news and will give vue-onsenui a try for your Vue hybrid apps. vue-onsenui development is quite open and you can contribute with issue reports, pull requests or by simply commenting on what you like or don’t. We’d like to improve and tailor it for Vue developers. Help us make the best Vue UI framework for mobile apps!


Onsen UI is an open source library used to create the user interface of hybrid apps. You can find more information on our GitHub page. If you like Onsen UI, please don’t forget to give us a star! ★★★★★

Source:: https://onsen.io/

Monaca CLI support for Onsen UI 2.2.0 and Vue 2

By Adam Kozuch


This week we have two pieces of news to announce. First of all, following the recent release of Onsen UI 2.2.0, we now also provide support for it in Monaca CLI version 2.2.0.
Therefore, you can use the transpilable frameworks Angular 2+ and React with the new Onsen UI and cssnext.
Moreover, from now on, you can create a Vue.js 2 project in the CLI based on our standard templates: Minimum, Navigation, Splitter and Tabbar.
The new Vue 2 templates support the 2.0.0-alpha.1 version of the Vue binding and the newest Onsen UI.
Below, we have prepared a short tutorial that shows how to create a Vue 2 project in Monaca CLI.

Vue project in CLI

If you are a user of Monaca CLI, you can just use the standard monaca create projectName command. In the list of available frameworks, you should see Vue 2.

Now just choose the type of template that fits your needs and after a few moments the project will be accessible in your local environment.

CLI Vue Templates

Feel free to test it in your browser by using the monaca preview command.

CLI Vue Example

Changes in webpack

Following the changes in Onsen UI 2.2.0, all webpack configurations (Angular 2+, React and Vue 2) now support cssnext.
Also, we plan to update the currently used webpack 1 to webpack 2. You can expect to hear about it soon.

Localkit will support the mentioned changes in the following week. We would like to strongly encourage you to try not only Vue 2, but also the other transpilable project templates.
We hope our changes will make your work easier and more enjoyable. We are looking forward to your feedback.

Source:: https://onsen.io/

Deploying Node.js Microservices to AWS using Docker

By Carlos Justiniano


In this two-part series, we’ll look at building and deploying microservices to Amazon’s AWS using Docker.

In this first part, we’ll focus on building a simple microservice and packaging it in a docker container. We’ll also step through hosting the container on AWS. In part two, we’ll assemble a cluster of machines on AWS using Docker Swarm mode.

Make no mistake, this is fairly involved stuff, but I’m going to soften the blow in order to make this topic approachable to a wider audience.

If you’re a pro at docker and AWS then you can skim through this article and look forward to part two.

Getting Started with AWS & Docker

Deploying microservices to the cloud is plagued with lots of complexity. To simplify the microservice portion, we’re going to use an NPM library called Hydra – which will greatly simplify the effort while offering considerable scalability benefits. Even if you choose not to use Hydra, the information in this post should help you get started with AWS and Docker.

A quick recap if you’re wondering what this Hydra thing is. Hydra is a NodeJS package which facilitates building distributed applications such as Microservices. Hydra offers features such as service discovery, distributed messaging, message load balancing, logging, presence, and health monitoring. As you can imagine, those features would benefit any service living on cloud infrastructure.

If you’d like to learn more, see two of my earlier posts here on RisingStack. The first is Building ExpressJS-based microservices using Hydra, and the second is Building a Microservices Example Game with Distributed Messaging. A microservice game? Seriously? For the record, I do reject claims that I have too much time on my hands. 🙂

We’ll begin by reviewing docker containerization - just in case you’re new to this. Feel free to skim or skip over the next section, if you’re already familiar with Docker.

Containerization?

Virtual Machine software has ushered in the age of software containerization where applications can be packaged as containers making them easier to manage. Docker is a significant evolution of that trend.

Running microservices inside of containers makes them portable across environments. This greatly helps reduce the bugs found during development, as the environment your software runs in locally can match what you run in production.

Packaging a NodeJS microservice inside of a Docker container is straightforward. To begin with, you should download and install the Docker community edition from docker.com – if you haven’t already done so.

Here is an overview of the containerization steps:

  • Build a simple service
  • Create a Dockerfile
  • Build a container
  • Run a container

Let’s take a look at each of these steps.

Building a simple microservice

To build our simple microservice we’ll use a package called Hydra-express, which creates a microservice using Hydra and ExpressJS. Why not just ExpressJS? By itself, an ExpressJS app only allows you to build a Node server and add API routes. However, that basic server isn’t really a complete microservice. Granted, that point is somewhat debatable – shades of gray if you will. In comparison, a Hydra-express app includes functionality to discover other Hydra apps and load balance requests between them using presence and health information. Those capabilities will become important when we consider multiple services running and communicating with each other on AWS and in a Docker Swarm cluster. Building Hydra and Hydra-Express apps is covered in more detail in my earlier RisingStack articles.
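For contrast, here is roughly what a bare ExpressJS “service” amounts to – a server plus a route, with none of the discovery, presence or health features mentioned above (the route and payload are just placeholders):

const express = require('express');
const app = express();

// a single API route – no service discovery, load balancing or health checks
app.get('/v1/hello', (req, res) => {
  res.json({ greeting: 'hello' });
});

app.listen(8080);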

This approach does, however, require that you’re running a local instance of Redis or have access to a remote one. In the extremely unlikely event that you’re unfamiliar with Redis, check out this quick start page.

In the interests of time, and to avoid manually typing the code for a basic hydra-express app – we’ll install Yeoman and Eric Adum’s excellent hydra app generator. A Yeoman generator asks a series of questions and then generates an app for you. You can then customize it to suit your needs. This is similar to running the ExpressJS Generator.

$ sudo npm install -g yo generator-fwsp-hydra

Next, we’ll invoke Yeoman and the hydra generator. Name your microservice hello and make sure to specify a port address of 8080 – you can then choose defaults for the remaining options.

$ yo fwsp-hydra
fwsp-hydra generator v0.3.1   yeoman-generator v1.1.1   yo v1.8.5  
? Name of the service (`-service` will be appended automatically) hello
? Your full name? Carlos Justiniano
? Your email address? carlos.justiniano@gmail.com
? Your organization or username? (used to tag docker images) cjus
? Host the service runs on?
? Port the service runs on? 8080
? What does this service do? Says hello
? Does this service need auth? No
? Is this a hydra-express service? Yes
? Set up a view engine? No
? Set up logging? No
? Enable CORS on serverResponses? No
? Run npm install? No
   create hello-service/specs/test.js
   create hello-service/specs/helpers/chai.js
   create hello-service/.editorconfig
   create hello-service/.eslintrc
   create hello-service/.gitattributes
   create hello-service/.nvmrc
   create hello-service/.gitignore
   create hello-service/package.json
   create hello-service/README.md
   create hello-service/hello-service.js
   create hello-service/config/sample-config.json
   create hello-service/config/config.json
   create hello-service/scripts/docker.js
   create hello-service/routes/hello-v1-routes.js

Done!  
'cd hello-service' then 'npm install' and 'npm start'  

You’ll end up with a folder called hello-service.

$ tree hello-service/
hello-service/  
├── README.md
├── config
│   ├── config.json
│   └── sample-config.json
├── hello-service.js
├── package.json
├── routes
│   └── hello-v1-routes.js
├── scripts
│   └── docker.js
└── specs
    ├── helpers
    │   └── chai.js
    └── test.js

5 directories, 9 files  

In the folder structure above the config directory contains a config.json file. That file is used by Hydra-express to specify information about our microservice.

The config file will look something like this:

{
  "environment": "development",
  "hydra": {
    "serviceName": "hello-service",
    "serviceIP": "",
    "servicePort": 8080,
    "serviceType": "",
    "serviceDescription": "Says hello",
    "plugins": {
      "logger": {
        "logRequests": true,
        "elasticsearch": {
          "host": "localhost",
          "port": 9200,
          "index": "hydra"
        }
      }
    },
    "redis": {
      "url": "127.0.0.1",
      "port": 6379,
      "db": 15
    }
  }
}

If you’re using an instance of Redis which isn’t running locally you can specify its location under the hydra.redis config branch. You can also optionally specify a Redis url such as redis://:secrets@example.com:6379/15 and you can remove the port and db key values from the config.

After cd-ing into the folder you can install the dependencies using npm install, and after running npm start you should see:

$ npm start

> hello-service@0.0.1 start /Users/cjus/dev/hello-service
> node hello-service.js

INFO  
{ event: 'start',
  message: 'hello-service (v.0.0.1) server listening on port 8080' }
INFO  
{ event: 'info', message: 'Using environment: development' }
serviceInfo { serviceName: 'hello-service',  
  serviceIP: '192.168.1.151',
  servicePort: 8080 }

Take note of the serviceIP address 192.168.1.151 – yours will be different.

Using the IP address and Port above we can access our v1/hello route from a web browser:


Note, I’m using the excellent JSON Formatter chrome extension to view JSON output in all of its glory. Without a similar browser extension you’ll just see this:

{"statusCode":200,"statusMessage":"OK","statusDescription":"Request succeeded without error","result":{"greeting":"Welcome to Hydra Express!"}}

OK, let’s dockerize this thing!

Creating the Dockerfile

In order to containerize our microservice, we need to provide instructions to Docker. This is done using a text file called a Dockerfile. If you’re following along and used the hydra generator, you already have a way to easily create a Dockerfile. You simply type $ npm run docker build and the docker.js file we saw earlier will be invoked to create your Dockerfile and build your container. That’s a quick way to get the job done – but if you’ve never created a Dockerfile, following along in this section will be educational.

Here is a sample Dockerfile:

FROM node:6.9.4-alpine  
MAINTAINER Carlos Justiniano cjus34@gmail.com  
EXPOSE 8080  
RUN mkdir -p /usr/src/app  
WORKDIR /usr/src/app  
ADD . /usr/src/app  
RUN npm install --production  
CMD ["npm", "start"]  

The first line specifies the base image that will be used for your container. We specify the light-weight (Alpine) image containing a minimal Linux and NodeJS version 6.9.4 – however, you can specify the larger standard Linux image using: FROM node:6.9.4

The EXPOSE entry identifies the port that our microservice listens on. The remaining lines specify that the contents of the current directory should be copied to /usr/src/app inside of the container. We then instruct Docker to run the npm install command to pull package dependencies. The final line specifies that npm start will be invoked when the container is executed. You can learn more on the Dockerfiles documentation page.

Build the container

There is one thing we need to do before we build our container. We need to update our microservice’s config.json file. You may be pointing to a local instance of Redis like this:

    "redis": {
      "url": "127.0.0.1",
      "port": 6379,
      "db": 15
    }

You’ll need to change the IP address pointing to localhost at 127.0.0.1 – because when our service is running in a container its network is different! Yes friends, welcome to the world of docker networking. So in the container’s network – Redis isn’t located at 127.0.0.1 – in fact, Redis is running outside of our container.

There are lots of ways of dealing with this, but one way is simply to change the URL reference to a named DNS entry – like this:

    "redis": {
      "url": "redis",
      "port": 6379,
      "db": 15
    }

That basically says “when looking for the location of Redis, resolve the DNS entry named redis to an IP address”. We’ll see how this works shortly.

With the config change and a Dockerfile on hand we’re now ready to package our microservice inside of a container.

$ docker build -t cjus/hello-service:0.0.1 .

Note: Don’t forget the trailing period, which specifies the build context (the current directory).

The -t flag in the command above specifies your service name and version. It’s a good practice to prefix that entry with your username or company name. For example: cjus/hello-service:0.0.1. If you’re using Docker Hub to store your containers then you’ll definitely need to prefix your container name. We’ll touch on Docker Hub a bit later.

You should see a long stream of output as your project is being loaded into the container and npm install is being run to create a complete environment for your microservice.

Running our container

We can run our container using one command:

$ docker run -d -p 8080:8080 \
   --add-host redis:192.168.1.151 \
   --name hello-service \
   cjus/hello-service:0.0.1

We use the docker run command to invoke our container and service. The -d flag specifies that we want to run in daemon (background) mode and the -p flag publishes our service’s ports. The port syntax says: “on this machine use port 8080 (first portion) and map that to the container’s internal port (second portion)”, which is also 8080. The --add-host flag allows us to specify a DNS entry called redis to pass to our container – how cool is that? We also name the service using the --name flag – that’s useful, as otherwise docker will provide a random name for our running container. The last portion shown is the service name and version. Ideally, that should match the version in your package.json file.

Communicating with our container

At this point you should be able to open your web browser and point it to http://localhost:8080/v1/hello to access your service – the same way we did earlier when our service was running outside of the container. Using docker commands you can start, stop and remove containers, and a whole lot more. Check out this handy command cheat sheet.

Sharing your containers

Now that you’ve created a container you can share it with others by publishing it to a container registry such as Docker Hub. You can set up a free account, which will allow you to publish unlimited public containers, but you’ll only be able to publish one private container. As they say in the drug business: “The first one is free”. To maintain multiple private containers you’ll need a paid subscription. However, the plans start at a reasonably low price of $7 per month. You can forgo this expense by creating your own local container repository. However, this isn’t a useful option when we need to work in the cloud.

I have an account on docker hub under the username cjus. So to push the hello-service container to my docker account I simply use:

$ docker push cjus/hello-service:0.0.1

To pull (download) a container image from my docker hub repo I use this command:

$ docker pull cjus/hello-service:0.0.1

A look at configuration management

If you refer back to our sample microservice’s config.json file you’ll realize that it got packaged in our docker container. That happened because of this line in our Dockerfile which instructs docker to copy all the files in the current directory into the /usr/src/app folder inside of the docker container.

ADD . /usr/src/app  

So that included our ./config folder. Packaging a config file inside of the container isn’t the most flexible thing to do – after all, we might need a different config file for each environment our service runs in.

Fortunately, there is an easy way to specify an external config file.

$ docker run -d -p 8080:8080 \
   --add-host redis:192.168.1.151 \
   -v ~/configs/hello-service:/usr/src/app/config \
   --name hello-service \
   cjus/hello-service:0.0.1

The example above has a -v flag which specifies a data “volume”. The mapping consists of two directories separated by a colon character.

So: source-path:container-path

The volume points to a folder called configs in my home directory. Inside that folder I have a config.json file. That folder is then mapped to the /usr/src/app/config folder inside of the docker container.

When the command above is issued the result will be that the container’s /usr/src/app/config will effectively be mapped to my ~/configs folder. Our microservice still thinks it’s loading the config from its local directory and doesn’t know that we’ve mapped that folder to our host machine.

We’ll look at a much cleaner way of managing config files when we deploy our containers to a docker swarm in part two of this series. For now, we’ll just roll with this.

Moving to Amazon Web Services

I have to assume here that you’re familiar with using AWS and in particular creating EC2 instances and later ssh-ing into them. And that you’re comfortable creating security groups and opening ports. If not, you can still follow along to get a sense of what’s involved.

We’ll begin by signing into AWS and navigating to the EC2 Dashboard. Once there click on the “Launch Instance” button. On the page that loads select the AWS Marketplace tab. You should see a screen like this:


Search for ECS Optimized to locate the Amazon ECS-Optimized AMI. Amazon created this image for use with its EC2 Container Service. We won’t be using ECS and will opt instead to use Docker and later, Docker Swarm. This choice will allow you to use the skills you acquire here on other cloud providers such as Google Cloud and Microsoft’s Azure. The reason we’re using an ECS-Optimized AMI is that it has Docker pre-installed! In part two of this series, we’ll use Docker tools to launch AWS EC2 instances and install the docker engine onto them. However, let’s not get ahead of ourselves.

For now, select Amazon ECS-Optimized AMI and create an EC2 t2.micro instance. Go ahead and configure it using defaults and a security group which opens port 8080.

Once the EC2 instance is ready you can SSH into it to install our docker container.

$ ssh 54.186.15.17
Warning: Permanently added 'ec2-54-186-15-17.us-west-2.compute.amazonaws.com,54.186.15.17' (ECDSA) to the list of known hosts.  
Last login: Sat Mar 25 21:47:19 2017 from pool-xx-xxx-xxx-xxx.nwrknj.fios.verizon.net

   __|  __|  __|
   _|  (   __    Amazon ECS-Optimized Amazon Linux AMI 2016.09.g
 ____|___|____/

For documentation visit, http://aws.amazon.com/documentation/ecs  
2 package(s) needed for security, out of 9 available  
Run "sudo yum update" to apply all updates.  

You should run the security updates while you’re there.

You can check the version of docker that’s running using:

[ec2-user@ip-172-31-6-97 ~]$ docker --version
Docker version 1.12.6, build 7392c3b/1.12.6  

To ensure that you can pull (download) your private docker containers, you’ll need to sign into docker hub using:

$ docker login

To install our microservice we just need to pull it from docker hub.

$ docker pull cjus/hello-service:0.0.1

Note: replace cjus above with your docker user name.

Now we’re ready to run it. But we don’t just want to execute it on the command line as we did earlier, because we need to make sure that our container runs should our EC2 instance reboot. To do that we’ll add two entries to the machine’s /etc/rc.local file.

$ sudo vi /etc/rc.local

And add the following entries:

docker rm -f hello-service
docker run -d -p 8080:8080 \
   --restart always \
   --add-host redis:54.202.205.22 \
   -v /usr/local/etc/configs/hello-service:/usr/src/app/config \
   --name hello-service \
   cjus/hello-service:0.0.1

Note: make sure to use your own docker hub user name in the last line above.

Our -v volume flag above specifies the location of the hello-service config file. You’ll need to create that folder and copy a config file into it. That will give you the ability to later tweak or extend the settings.

$ sudo mkdir -p /usr/local/etc/configs/hello-service
$ cd /usr/local/etc/configs/hello-service

Referring back to our docker run command above, you’ll also notice that I specified a Redis location as 54.202.205.22. That’s a separate instance from our new EC2 instance. In my example, I’ve created another EC2 instance to host a Redis docker container. You also have the option of running a docker container on the current machine or on another in the same Amazon VPC. While that works, the recommended solution for production use is to point to an Amazon ElastiCache instance running a Redis cluster or a service such as RedisLabs.

For our basic tests here, you can add Redis as a docker container using:

$ docker pull redis:3.0.7

Then add this onto the /etc/rc.local file:

docker rm -f redis  
docker run -d -p 6379:6379 --restart always -v /data:/data --name redis redis:3.0.7  

Notice that we’re using -v /data:/data above. That will allow Redis to persist its data. You’ll need to actually create the /data folder using: sudo mkdir /data.

After making the changes above you can restart your EC2 instance(s) with sudo reboot.
Once the machine restarts you should be able to access our sample microservice through the hosted container.

Recap

In this article, we saw how to build a simple microservice, containerize it, and use the same container on an AWS EC2 instance. Granted, there are lots of different ways of doing this. The example here is intended to be just one simple approach to get you started. With small modifications, you would be able to create lots of different services running across many machines.

The examples in this article and the online docker documentation should give you the tools you need to get started with microservices in the cloud.

In the second part of this series, we’ll look at a more advanced approach using a cluster of machines and Docker Swarm mode. Stay tuned!

Source:: risingstack.com

Building a Tic-Tac-Toe Game with Vue 2: Part 1

By shammadahmed

This tutorial assumes that you have a little prior knowledge of JavaScript and the Vue framework. You also need to have Node and Git installed on your system.

Introduction

This tutorial focuses on building a game with the Vue framework. You will also learn about vue-cli, vue-loader and the workflow of making user interfaces for the web using Vue. At the end of the tutorial, you will have a playable Tic-Tac-Toe game.

What is Tic-Tac-Toe?

In this tutorial, we are going to build a simple tic-tac-toe game with the Vue framework. Tic-Tac-Toe is a two-player pencil-and-paper game. One player is Cross ‘X’ and the other one is Nought ‘O’. The game is based on a 3×3 grid, with each box in the grid having the space to be marked with either X or O. One move consists of one mark. The game is turn-based, meaning the players take turns making moves. Once a mark has been placed, it cannot be altered. So, who wins the game? The one who is able to place 3 consecutive Xs or Os in a line. The line can be vertical, horizontal or diagonal. To get acquainted with it, search Google for tic-tac-toe and play it until you have had enough.

Game Board

“Hey Hammad, I already knew all this stuff and I’ve played it a hundred times before!” I know, I know. Forgive me for this intro and let’s dive in.

Why Vue

Vue Website

This version of Tic-Tac-Toe is going to be browser-based and implemented using Vue. The reason we are building it with Vue is that there can’t be anything simpler than that. And by anything, I mean any other JavaScript framework. You know you can use something else. But in this particular tutorial, you have to stick by me.

Setup

For the setup, we are going to use vue-loader with vue-cli.

vue-cli

vue-cli is a simple CLI (Command Line Interface) tool for scaffolding Vue projects. It provides project boilerplate, allows you to write ES2015, compiles pre-processor styles into plain CSS, and handles all the rest. Install vue-cli on your machine using the following command:

npm install -g vue-cli

Now that you have vue-cli installed, run the following command in the terminal for vue-cli to scaffold the project structure for you:

vue init webpack-simple vue-tic-tac-toe

Here, vue-tic-tac-toe is the name of the project that vue-cli will initialize. webpack-simple is a template that includes both Webpack and vue-loader.

After that, cd into vue-tic-tac-toe and install the dependencies using npm:

cd vue-tic-tac-toe
npm install

vue-loader

vue-loader is a loader for Webpack that allows you to write the template, script and CSS for a component all in one file. The file needs to have a .vue extension. This is what an example .vue file looks like:

<template>
    <div class="message">
        {{ speaker }} says: {{ message }} to the <world></world>
    </div>
</template>

<script>
    import World from './World.vue'

    export default {
        components: { World },
        data () {
            return {
                speaker: 'Hammad',
                message: 'I will rule the world'
            }
        }
    }
</script>

<style>
    .message {
        padding: 10px;
        background-color: steelblue;
        color: #fff;
    }
</style>

Create a components folder in the src directory present in the root. It will contain all of our components. Now that everything is ready, run this command to view this app in the browser:

npm run dev

This command will load http://localhost:8080/ in the browser. Every change you make in your code will be reflected in the browser even without refreshing the page. Open App.vue in your code editor and delete all the unnecessary stuff present in the template and script tag. Now your App.vue file looks like this:

<template>
  <div id="app">

  </div>
</template>

<script>
export default {
  name: 'app',
  data () {
    return {

    }
  }
}
</script>

<style>
#app {
  font-family: 'Avenir', Helvetica, Arial, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: center;
  color: #2c3e50;
  margin-top: 60px;
}

h1, h2 {
  font-weight: normal;
}

ul {
  list-style-type: none;
  padding: 0;
}

li {
  display: inline-block;
  margin: 0 10px;
}

a {
  color: #42b983;
}
</style>

Game Elements

Thinking of this game, all the things that come to my mind are:

  • 2 Players (X and O)
  • The Board Grid
    • 3 Rows
    • 3 Columns
    • 9 Grid Cells
  • Scoreboard (the number of wins for each player)
  • The number of the match being played
  • Player turn
  • Restarting the game

We will extract some of them elements into their own components and others as properties of those components.

Vue Components

The benefit of using components is that we can reuse them. For instance, we can use the Cell component 9 times without needing to duplicate its code. This keeps our code DRY (Don’t Repeat Yourself). Our component structure will look like this:

-- App
---- Grid
------ Cell x9

Create these components: Grid.vue and Cell.vue in the components folder with the following boilerplate code.

<template>

</template>

<script>
    export default {
        data () {}
    }
</script>

<style>

</style>

Styling

To make sure that this game doesn’t hurt your eyes, you can grab all the styling and fonts from here and paste them in your code.

In the index.html file, I only added two fonts and changed the title. The Dosis font is for the whole body and the Gochi Hand font is for the X and O placed in the grid. Both are taken from Google Fonts. This is what our index.html file looks like:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Tic-Tac-Toe Game</title>
    <link href="https://fonts.googleapis.com/css?family=Dosis|Gochi+Hand" rel="stylesheet">
  </head>
  <body>
    <div id="app"></div>
    <script src="/dist/build.js"></script>
  </body>
</html>

Change the style tag of your App component to this:

<style>
body {
  background-color: #fff;
  color: #fff;
  font-family: 'Dosis', Helvetica, sans-serif;
  -webkit-font-smoothing: antialiased;
  -moz-osx-font-smoothing: grayscale;
  text-align: center;
  margin: 0px;
}

#app {
  margin: 0 auto;
  max-width: 270px;
  color: #34495e;
}

h1 {
  text-transform: uppercase;
  font-weight: bold;
  font-size: 3em;
}

.restart {
  background-color: #e74c3c;
  color: #fff;
  border: 0px;
  border-bottom-left-radius: 10px;
  border-bottom-right-radius: 10px;
  font-family: 'Dosis', Helvetica, sans-serif;
  font-size: 1.4em;
  font-weight: bold;
  margin: 0px;
  padding: 15px;
  width: 100%;
}

.restart:hover {
  background-color: #c0392b;
  cursor: pointer;
}

.scoreBoard {
  display: flex;
  flex-direction: row;
  justify-content: space-around;
  align-items: center;
  width: 100%;
  height: 15px;
  background-color: #16a085;
  box-shadow: 10px solid #fff;
  padding: 20px;
  overflow-x: none;
}

.scoreBoard h2 {
  margin: 0px;
}

.scoreBoard span {
  float: right;
  font-size: 1.5em;
  font-weight: bold;
  margin-left: 20px;
}
</style>

Add this to the style tag of the Grid component:

.grid {
  background-color: #34495e;
  color: #fff;
  width: 100%;
  border-collapse: collapse;
}

.gameStatus {
  margin: 0px;
  padding: 15px;
  border-top-left-radius: 20px;
  border-top-right-radius: 20px;
  background-color: #f1c40f;
  color: #fff;    
  font-size: 1.4em;
  font-weight: bold;
}

.statusTurn {
    background-color: #f1c40f;
}

.statusWin {
    background-color: #2ecc71;
}

.statusDraw {
    background-color: #9b59b6;
}

And add this to the style tag of the Cell component:

.cell {
  width: 33.333%;
  height: 90px;
  border: 6px solid #2c3e50;
  font-size: 3.5em;
  font-family: 'Gochi Hand', sans-serif;
}

.cell:hover {
    background-color: #7f8c8d;
}

.cell::after {
  content: '';
  display: block;
}

.cell:first-of-type {
  border-left-color: transparent;
  border-top-color: transparent;
}

.cell:nth-of-type(2) {
  border-top-color: transparent;
}

.cell:nth-of-type(3) {
  border-right-color: transparent;
  border-top-color: transparent;
}

tr:nth-of-type(3) .cell {
  border-bottom-color: transparent;
}

Component Templates

The template section of a component contains all the markup that makes up the component. Our App component will contain the Grid component, and the Grid component will contain 9 Cell components. The App component is very simple and only contains a heading and a grid for the game. We will add more functionality later.

<div id="app">
  <div id="details">
    <h1>Tic Tac Toe</h1>
  </div>
  <grid></grid>
</div>

The Grid component contains a table that has three rows and three cells in each row. The cell number is passed down as a prop to uniquely identify each cell. The template of the Grid component:

<table class="grid">
  <tr>
    <cell name="1"></cell>
    <cell name="2"></cell>
    <cell name="3"></cell>
  </tr>
  <tr>
    <cell name="4"></cell>
    <cell name="5"></cell>
    <cell name="6"></cell>
  </tr>
  <tr>
    <cell name="7"></cell>
    <cell name="8"></cell>
    <cell name="9"></cell>
  </tr>
</table>

The Cell component contains only a <td> tag to hold the mark X or O:

<td class="cell">{{ mark }}</td>

The Game Flow

To start adding functionality to our game, we need to determine the flow of events which will take place with each user interaction. The flow of the game is as follows:

  • The App is loaded.
  • All cells are empty.
  • O is the first player.
  • The player can place an O in any of the cells.
  • The player turn is then changed to X.
  • Each time a player places a mark, the turn is handed to the non-active player.
  • After each strike, we need to check if the game meets any winning condition.
  • We also need to check if the game is a draw.
  • After or anytime in between a game, a button called Restart can be clicked to restart the game.
  • The status of the game is displayed, that is, whether the game is in progress, won, or a draw.
  • While the game is in progress, the status displays the turn of the respective player.
  • The game also displays the number of matches and the number of wins for each player.

For all of this to be able to happen, our components need some data properties and methods.

Data Properties

We will divide the data among the components according to their relation or ease of access to that component.

The App component will hold the number of matches and the number of wins for each player.

data () {
    return {
      matches: 0,
      wins: {
        O: 0,
        X: 0
      }
    }
}

The Grid component holds the data for the active player (X or O), the game status, the status message, the status color (displayed in the top bar), the number of moves played by both players (to check for a draw), the mark placement for each cell and all 8 winning conditions. The winning conditions array contains 8 arrays, and each of them holds a combination of cell numbers that wins the game if all of those cells carry the same mark (all X or all O). These conditions can be compared with the cells object to check for a win – a quick sketch of such a check follows the data below.

data () {
  return {
      // can be O or X
      activePlayer: 'O',
      // maintains the status of the game: turn or win or draw
      gameStatus: 'turn',

      gameStatusMessage: `O's turn`,
      // status color is used as background color in the status bar
      // it can hold the name of either of the following CSS classes
      // statusTurn (default) is yellow for a turn
        // statusWin is green for a win
        // statusDraw is purple for a draw
      gameStatusColor: 'statusTurn',
      // no. of moves played by both players in a single game (max = 9)
      moves: 0,
      // stores the placement of X and O in cells by their cell number
        cells: {
            1: '', 2: '', 3: '',
            4: '', 5: '', 6: '',
            7: '', 8: '', 9: ''
        },
        // contains all (8) possible winning conditions
        winConditions: [
            [1, 2, 3], [4, 5, 6], [7, 8, 9], // rows
            [1, 4, 7], [2, 5, 8], [3, 6, 9], // columns
            [1, 5, 9], [3, 5, 7]             // diagonals
        ],
  }
}
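To make the win-condition comparison concrete, a check over these data properties could look roughly like the following. This is a hypothetical method and not yet part of the component – the real implementation is added in the follow-up lesson:

// Hypothetical sketch of a win check using the data above
checkForWin () {
  return this.winConditions.some(([a, b, c]) => {
    return this.cells[a] !== '' &&
           this.cells[a] === this.cells[b] &&
           this.cells[b] === this.cells[c];
  });
}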

The Cell component holds the mark that the player placed in it. By default that value is set to an empty string. The frozen property is used to ensure that the player is not able to change the mark, once it is placed.

props: ['name'],
data () {
    return {
     // when false, the player can still place a mark; set to true once a mark is placed
     frozen: false,

        // holds either X or O to be displayed in the td
        mark: ''
    }    
}

The Cell component also has a name prop for uniquely identifying each cell with a number.

Event Bus

Our components need to talk to each other to inform them about a change in their data property or an action performed by the user like placing a mark in a cell. To do so, we will use an event bus. To make an event bus, we will assign a new Vue instance to a property called Event on the window object. You can change this to anything you want depending on the context of your code, but Event, here, will do just fine.

window.Event = new Vue()

With it, you can use Event.$emit() and Event.$on() for firing and listening to events, respectively. The event name is passed as the first argument. You can also include any other data after the name argument; this extra data is called the payload. Consider this example:

Event.$emit('completed', this.task)

This fires an event called completed and passes this.task as payload. You can listen to this event fire with the Event.$on() method like this:

Event.$on('completed', (task) => {
    // do something
})

To start listening for an event as soon as a component is created, we can place the Event.$on call in the created hook of that Vue component.

created () {
    Event.$on('completed', (task) => {
        // do something
    })
}

After adding the Event bus to your main.js file, it will look like this:

import Vue from 'vue'
import App from './App.vue'

window.Event = new Vue()

new Vue({
  el: '#app',
  render: h => h(App)
})

Now, we are ready to fire and listen for events in our game.
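As a hypothetical preview of how the bus might be used in this game (the event name and payload are illustrative; the real wiring is added in part two), a Cell could announce a move and the Grid could react to it:

// In Cell.vue – fire an event when the player clicks an unfrozen cell
methods: {
  strike () {
    if (!this.frozen) {
      Event.$emit('strike', this.name) // payload: the cell number
    }
  }
}

// In Grid.vue – listen for the move and update the game data
created () {
  Event.$on('strike', (cellNumber) => {
    this.cells[cellNumber] = this.activePlayer
    this.moves += 1
    // next: check the win conditions and hand the turn to the other player
  })
}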

What’s Next

We will continue adding functionality to our game in the follow-up lesson of this two-part series.

Conclusion

In this tutorial, we wireframed almost all the parts of our game. We added styling, templates and data properties, and distributed them among components. At the end, we also created an event bus to handle all of the data flow and user action notifications between our components. If you did not understand something, experienced a bug or have some other question, feel free to ask in the comments below.

Source:: scotch.io

Test a Flask App with Selenium WebDriver – Part 2

By mbithenzomo

This is the second and final part of a tutorial on how to test a Python/Flask web app with Selenium webdriver. We are testing Project Dream Team, an existing CRUD web app. Part One introduced Selenium WebDriver as a web browser automation tool for browser-based tests. By the end of Part One, we had written tests for registration, login, performing CRUD operations on departments and roles, as well as assigning departments and roles to employees.

In Part Two, we will write tests to ensure that protected pages can only be accessed by authorised users. We will also integrate our app with CircleCI, a continuous integration and delivery platform. I have included a demo video showing all the tests running, so be sure to check it out!

Permissions Tests

Recall that in the Dream Team app, there are two kinds of users: regular users, who can only register and login as employees, and admin users, who can access departments and roles and assign them to employees. Non-admin users should not be able to access the departments, roles, and employees pages. We will therefore write tests to ensure that this is the case.

In your tests/test_front_end.py file, add the following code:

# tests/test_front_end.py

class TestPermissions(CreateObjects, TestBase):

    def test_permissions_admin_dashboard(self):
        """
        Test that non-admin users cannot access the admin dashboard
        """
        # Login as non-admin user
        self.login_test_user()

        # Navigate to admin dashboard
        target_url = self.get_server_url() + url_for('home.admin_dashboard')
        self.driver.get(target_url)

        # Assert 403 error page is shown
        error_title = self.driver.find_element_by_css_selector("h1").text
        self.assertEqual("403 Error", error_title)
        error_text = self.driver.find_element_by_css_selector("h3").text
        assert "You do not have sufficient permissions" in error_text

    def test_permissions_list_departments_page(self):
        """
        Test that non-admin users cannot access the list departments page
        """
        # Login as non-admin user
        self.login_test_user()

        # Navigate to admin dashboard
        target_url = self.get_server_url() + url_for('admin.list_departments')
        self.driver.get(target_url)

        # Assert 403 error page is shown
        error_title = self.driver.find_element_by_css_selector("h1").text
        self.assertEqual("403 Error", error_title)
        error_text = self.driver.find_element_by_css_selector("h3").text
        assert "You do not have sufficient permissions" in error_text

    def test_permissions_add_department_page(self):
        """
        Test that non-admin users cannot access the add department page
        """
        # Login as non-admin user
        self.login_test_user()

        # Navigate to admin dashboard
        target_url = self.get_server_url() + url_for('admin.add_department')
        self.driver.get(target_url)

        # Assert 403 error page is shown
        error_title = self.driver.find_element_by_css_selector("h1").text
        self.assertEqual("403 Error", error_title)
        error_text = self.driver.find_element_by_css_selector("h3").text
        assert "You do not have sufficient permissions" in error_text

    def test_permissions_list_roles_page(self):
        """
        Test that non-admin users cannot access the list roles page
        """
        # Login as non-admin user
        self.login_test_user()

        # Navigate to admin dashboard
        target_url = self.get_server_url() + url_for('admin.list_roles')
        self.driver.get(target_url)

        # Assert 403 error page is shown
        error_title = self.driver.find_element_by_css_selector("h1").text
        self.assertEqual("403 Error", error_title)
        error_text = self.driver.find_element_by_css_selector("h3").text
        assert "You do not have sufficient permissions" in error_text

    def test_permissions_add_role_page(self):
        """
        Test that non-admin users cannot access the add role page
        """
        # Login as non-admin user
        self.login_test_user()

        # Navigate to admin dashboard
        target_url = self.get_server_url() + url_for('admin.add_role')
        self.driver.get(target_url)

        # Assert 403 error page is shown
        error_title = self.driver.find_element_by_css_selector("h1").text
        self.assertEqual("403 Error", error_title)
        error_text = self.driver.find_element_by_css_selector("h3").text
        assert "You do not have sufficient permissions" in error_text

    def test_permissions_list_employees_page(self):
        """
        Test that non-admin users cannot access the list employees page
        """
        # Login as non-admin user
        self.login_test_user()

        # Navigate to the list employees page
        target_url = self.get_server_url() + url_for('admin.list_employees')
        self.driver.get(target_url)

        # Assert 403 error page is shown
        error_title = self.driver.find_element_by_css_selector("h1").text
        self.assertEqual("403 Error", error_title)
        error_text = self.driver.find_element_by_css_selector("h3").text
        assert "You do not have sufficient permissions" in error_text

    def test_permissions_assign_employee_page(self):
        """
        Test that non-admin users cannot access the assign employee page
        """
        # Login as non-admin user
        self.login_test_user()

        # Navigate to the assign employee page
        target_url = self.get_server_url() + url_for('admin.assign_employee', id=1)
        self.driver.get(target_url)

        # Assert 403 error page is shown
        error_title = self.driver.find_element_by_css_selector("h1").text
        self.assertEqual("403 Error", error_title)
        error_text = self.driver.find_element_by_css_selector("h3").text
        assert "You do not have sufficient permissions" in error_text

We begin by creating a TestPermissions class, which inherits from the CreateObjects and TestBase classes that we wrote in Part One. In each of the test methods inside the class, we log in as a non-admin user and then attempt to access a protected page. First, we test the admin dashboard, then the departments pages (list and add), the roles pages (list and add), and finally the employees pages (list and assign). In each method, we assert that the 403 error page is shown by checking that the appropriate page title (“403 Error”) and text (“You do not have sufficient permissions to access this page”) appear on the page.

Take note of the difference between the assertEqual method and the assert ... in statement. The former checks that two values are exactly equal, whereas the latter checks that the first value is contained in the second. In our tests, the error page title is exactly “403 Error”, so we can use assertEqual. For the second assertion, we only check that the phrase “You do not have sufficient permissions” appears somewhere in the error page text. The assert ... in statement is ideal when you don’t need an exact match, but simply want to confirm that an important word or phrase is present in the element in question.
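To make the distinction concrete, here is a minimal, standalone sketch of the two assertion styles. It is purely illustrative and not part of the Dream Team test suite:

# assertion_styles_example.py (illustrative only)

import unittest

class AssertionStylesExample(unittest.TestCase):

    def test_exact_match(self):
        page_title = "403 Error"
        # assertEqual passes only if the two values are identical
        self.assertEqual("403 Error", page_title)

    def test_substring_match(self):
        error_text = "You do not have sufficient permissions to access this page."
        # assert ... in passes as long as the phrase appears anywhere in the text
        assert "You do not have sufficient permissions" in error_text

if __name__ == '__main__':
    unittest.main()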

Let’s run our tests now:

$ nose2
......................................
----------------------------------------------------------------------
Ran 38 tests in 168.981s

OK

Continuous Integration and Continuous Delivery

You may have heard of continuous integration (CI), but you may not be entirely clear on what it is or how to implement it in your development workflow. CI refers to the software development practice of integrating project code into a shared repository frequently, typically multiple times a day. CI usually goes hand in hand with automated builds and automated testing, so that each time code is pushed to the shared repo, it is run and tested automatically to ensure it has no errors.

The idea is that small changes are integrated into the main repo frequently, which makes it easier to catch errors when they occur and to troubleshoot them. This is in contrast to a workflow where integration happens less often and in larger batches, making it harder to determine which change was responsible when an error occurs.

Martin Fowler, Chief Scientist at ThoughtWorks, put it well when he said:

Continuous Integration doesn’t get rid of bugs, but it does make them dramatically easier to find and remove.

Continuous delivery entails building and handling your code in such a way that it can be released into production at any time. Practising continuous delivery means not having any code in your main repo that you wouldn’t want to deploy. Sometimes, this even means that any code that is pushed to the main repo is automatically put in production if the build is successful and all tests pass. This is called continuous deployment.

Introducing CircleCI

Now that you’re up to speed with continuous integration and continuous delivery, let’s get familiar with one of the most popular continuous integration and delivery platforms today: CircleCI. CircleCI is quick and easy to set up. It automates software builds and testing, and also supports pushing code to many popular hosts such as Heroku and Google Cloud Platform.

To start using CircleCI, sign up by authenticating with your GitHub or Bitbucket account. Once you log in, navigate to the Projects page, where you can add your project repository. Select Build Project next to your repository name, and CircleCI will start the build.

Uh oh! The first build fails. You’ll notice the disconcerting red colour all over the page, the multiple error messages, and even the disheartening red favicon in your browser, all of which denote failure. First of all, congratulations on your first failed build! 🙂 Secondly, don’t worry; we haven’t configured CircleCI or our app yet, so it’s no wonder the build failed! Let’s get to work setting things up to turn the red to green.

Environment Variables

We’ll start by adding some important environment variables to CircleCI. Because we won’t be reading from the instance/config.py file, we’ll need to add those variables to CircleCI. On the top right of the build page on CircleCI, click the cog icon to access the Project Settings. In the menu on the left under Build Settings, click on Environment Variables. You can now go ahead and add the following variables:

  1. SECRET_KEY. You can copy this from your instance/config.py file.

  2. SQLALCHEMY_DATABASE_URI. We will use CircleCI’s default circle_test database and ubuntu user, so our SQLALCHEMY_DATABASE_URI will be mysql://ubuntu@localhost/circle_test.

You should now have both environment variables set up in your Project Settings.

The circle.yml File

Next, create a circle.yml file in your root folder and in it, add the following:

machine:
  python:
    version: 2.7.10
test:
  override:
    - nose2 

We begin by indicating the Python version for our project, 2.7.10. We then tell CircleCI to run our tests using the nose2 command. Note that we don’t need to explicitly tell CircleCI to install the software dependencies because it automatically detects the requirements.txt file in Python projects and installs the requirements.
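CircleCI installs those dependencies with pip, so make sure requirements.txt is committed to the repo. If you haven’t generated it yet, run pip freeze > requirements.txt from your virtual environment. As a rough sketch, the file for this project would contain entries along these lines, plus whichever MySQL driver you installed (the exact packages and versions depend on what you set up in the earlier parts of the tutorial, so generate your own file rather than copying this list):

# requirements.txt (illustrative sketch only)
Flask
Flask-Bootstrap
Flask-Login
Flask-Migrate
Flask-SQLAlchemy
Flask-Testing
Flask-WTF
nose2
selenium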

The create_app Method

Next, edit the create_app method in the app/__init__.py file as follows:

# app/__init__.py

def create_app(config_name):
    # modify the if statement to include the CIRCLECI environment variable
    if os.getenv('FLASK_CONFIG') == "production":
        app = Flask(__name__)
        app.config.update(
            SECRET_KEY=os.getenv('SECRET_KEY'),
            SQLALCHEMY_DATABASE_URI=os.getenv('SQLALCHEMY_DATABASE_URI')
        )
    elif os.getenv('CIRCLECI'):
        app = Flask(__name__)
        app.config.update(
            SECRET_KEY=os.getenv('SECRET_KEY')
        )
    else:
        app = Flask(__name__, instance_relative_config=True)
        app.config.from_object(app_config[config_name])
        app.config.from_pyfile('config.py')
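    # the rest of the create_app method (everything after this if/elif/else block) stays exactly as it was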

This checks for CircleCI’s built-in CIRCLECI environment variable, which is set on every CircleCI build. This way, when the tests run on CircleCI, Flask will not load from the instance/config.py file and will instead get the value of the SECRET_KEY configuration variable from the environment variable we set earlier.

The Test Files

Now edit the create_app method in the tests/test_front_end.py file as follows:

# tests/test_front_end.py

# update imports
import os

class TestBase(LiveServerTestCase):

    def create_app(self):
        config_name = 'testing'
        app = create_app(config_name)
        if os.getenv('CIRCLECI'):
            database_uri = os.getenv('SQLALCHEMY_DATABASE_URI')
        else:
            database_uri = 'mysql://dt_admin:dt2016@localhost/dreamteam_test'
        app.config.update(
            # Specify the test database
            SQLALCHEMY_DATABASE_URI=database_uri,
            # Change the port that the liveserver listens on
            LIVESERVER_PORT=8943
        )
        return app

This ensures that when the tests are running on CircleCI, Flask will get the SQLALCHEMY_DATABASE_URI from the environment variable we set earlier rather than using the test database we have locally.

Finally, do the same for the create_app method in the tests/test_back_end.py file:

# tests/test_back_end.py

# update imports
import os

class TestBase(TestCase):

    def create_app(self):
        config_name = 'testing'
        app = create_app(config_name)
        if os.getenv('CIRCLECI'):
            database_uri = os.getenv('SQLALCHEMY_DATABASE_URI')
        else:
            database_uri = 'mysql://dt_admin:dt2016@localhost/dreamteam_test'
        app.config.update(
            # Specify the test database
            SQLALCHEMY_DATABASE_URI=database_uri
        )
        return app
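If you’d like to exercise this CircleCI-specific code path locally before pushing, you can roughly simulate it by exporting the same variables and running the tests. The values below are placeholders (use your own secret key, and note that a real CircleCI build sets a number of additional variables of its own):

$ export CIRCLECI=true
$ export SECRET_KEY="your-secret-key"
$ export SQLALCHEMY_DATABASE_URI="mysql://dt_admin:dt2016@localhost/dreamteam_test"
$ nose2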

Push your changes to your repository. You’ll notice that as soon as you push your code, CircleCI will automatically rebuild the project. It’ll take a few minutes, but the build should be successful this time. Good job!

Status Badge

CircleCI provides a status badge that you can use on your project repository or website to display your build status. To get your badge, click on the Status Badges link in the menu on the left under Notifications. You can get the status badge in a variety of formats, including image and Markdown.
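For reference, the Markdown variant looks roughly like the snippet below once you substitute your own username and repository name; it’s best to copy the exact snippet from the Status Badges page rather than typing it by hand:

[![CircleCI](https://circleci.com/gh/your-username/your-repo.svg?style=svg)](https://circleci.com/gh/your-username/your-repo)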

Conclusion

You are now able to write a variety of front-end tests for a Flask application with Selenium WebDriver. You also have a good understanding of continuous integration and continuous delivery, and can set up a project on CircleCI. I hope you’ve enjoyed this tutorial! I look forward to hearing your feedback and experiences in the comment section below.

For more information on continuous integration in Python with CircleCI, you may refer to this Scotch tutorial by Elizabeth Mabishi.

Source:: scotch.io

Node.js Weekly Update - 24 March, 2017

By Ferenc Hamori

Node.js Weekly Update - 24 March, 2017

Below you can find RisingStack‘s collection of the most important Node.js news, projects & updates from this week:

1. End-to-End Testing with Nightwatch.js

Nightwatch.js enables you to “write end-to-end tests in Node.js quickly and effortlessly that run against a Selenium/WebDriver server”.

  • End-to-end testing is part of the black-box testing toolbox. This means that as a test writer, you are examining functionality without any knowledge of internal implementation.

  • End-to-end testing can also be used as user acceptance testing, or UAT. UAT is the process of verifying that the solution actually works for the user.

2. I’ve been a Web Developer for 17 Years, and this is what I learned

Daniel Khan has a 17-year web development career behind him, so he decided to share his insights during NodeConfBP (RisingStack’s Node.js Conference).

Node.js Weekly Update - 24 March, 2017

A few insights:

  • History repeats itself and the tech industry is most likely heading towards another crash – so developers must prepare themselves.
  • The Broken Windows theory fits well with coding, so remember to always write code for your future self!
  • You shouldn’t jump on the latest fashionable JS framework bandwagon every time.
  • The module system is kind of broken: there are hundreds of modules solving the same problem, and it’s hard to figure out which one to use.
  • Information on StackOverflow is often wrong, so don’t blindly copy-paste code from there!

And so on..

3. npm announced free Orgs for Open-Source projects

Today, we’re excited to announce that npm Orgs, our collaboration tool for helping teams manage permissions and share their code, is free for all developers of open source packages. You may invite an unlimited number of collaborators to manage an unlimited number of public packages for $0.

Node.js Weekly Update - 24 March, 2017

4. Build a “Serverless” Slack Bot in 9 Minutes with Node.js and StdLib

Slack bots — they’re fun, they’re useful, and people are building new businesses around bot interaction models on a daily basis.

Node.js Weekly Update - 24 March, 2017

StdLib is a Function as a Service software library. The easiest way to think of it is to imagine if AWS Lambda and GitHub had a child, then asked NPM and Twilio to be the godparents — scalable microservices with no server administration, easy command line management, version immutability, service discovery, and the ability to charge customers for premium services you’ve built on a per-request basis.

5. Requiring modules in Node.js: Everything you need to know

When Node invokes that require() function with a local file path as the function’s only argument, Node goes through the following sequence of steps:

  • Resolving: To find the absolute path of the file.
  • Loading: To determine the type of the file content.
  • Wrapping: To give the file its private scope. This is what makes both the require and module objects local to every file we require.
  • Evaluating: This is what the VM eventually does with the loaded code.
  • Caching: So that when we require this file again, we don’t go over all the steps another time.

In this article, I’ll attempt to explain with examples these different stages and how they affect the way we write modules in Node.

Latest Node.js Releases

○ Node v6.10.1 (LTS)

IMPORTANT: This Node.js Version became the default on Amazon Lambda!

  • performance: The performance of several APIs has been improved.
    • Buffer.compare() is up to 35% faster on average.
    • buffer.toJSON() is up to 2859% faster on average.
    • fs.*statSync() functions are now up to 9.3% faster on average.
    • os.loadavg is up to 151% faster.
    • process.memoryUsage() is up to 34% faster.
    • querystring.unescape() for Buffers is 15% faster on average.
    • querystring.stringify() is up to 7.8% faster on average.
    • querystring.parse() is up to 21% faster on average.
  • IPC: Batched writes have been enabled for process IPC on platforms that support Unix Domain Sockets.
    • Performance gains may be up to 40% for some workloads.
  • child_process: spawnSync now returns a null status when child is terminated by a signal.
    • This fixes the behavior to act like spawn() does.
  • http:
    • Control characters are now always rejected when using http.request().
    • Debug messages have been added for cases when headers contain invalid values.
  • node: Heap statistics now support values larger than 4GB.
  • timers: Timer callbacks now always maintain order when interacting with domain error handling.

○ Node v7.7.4 (Current)

  • deps: Add node-inspect 1.10.6
  • inspector: proper WS URLs when bound to 0.0.0.0
  • tls: fix segfault on destroy after partial read.

Previously in the Node.js Weekly Update

In the previous Node.js Weekly Update we read fantastic articles about Serverless Node, a new course on using npm scripts, client-side routing, and concurrency in Node. Node v7.7.3 was released as well!

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Source:: risingstack.com