Monthly Archives: May 2017

Create a custom Slack slash command with Node.js and Express

By Luciano Mammino

Node.js Design Patterns Mario Casciaro Luciano Mammino book cover

In this tutorial we are going to learn how to build and deploy a custom Slack slash command using Node.js and the Express web framework.

If you are interested in creating Slack integrations with Node.js, you might also be interested in a previous article that illustrates how to build a Slack bot with Node.js.

About the Author

Hi, I am Luciano (@loige on Twitter) and I am the co-author of Node.js Design Patterns Second Edition (Packt), a book that will take you on a journey across various ideas and components, and the challenges you would commonly encounter while designing and developing software using the Node.js platform. In this book you will discover the “Node.js way” of dealing with design and coding decisions. I am also the co-maintainer of FullStack Bulletin, a free weekly newsletter for the ambitious full stack developer.

Slack “slash commands”… Wait, what?

Slash commands are special messages that begin with a slash (/) and behave differently from regular chat messages. For example, you can use the /feed command to subscribe the current channel to an RSS feed and receive notifications directly in Slack every time a new article is published in that feed.

Slack feed slash command screenshot

There are many slash commands available by default, and you can create your custom ones to trigger special actions or to retrieve information from external sources without leaving Slack.

Building a URL shortener slash command

In this tutorial we are going to build a “URL shortener” slash command, which will allow us to generate personalised short URLs with a versatile syntax. For example, we want the following command to generate the short URL http://loige.link/rome17:

/urlshortener create a short url for the link http://loige.co/my-universal-javascript-web-applications-talk-at-codemotion-rome-2017/ using the domain @loige.link and the custom slashtag ~rome17

We are going to use Rebrandly as the short URL service. If you don’t know this service, I totally recommend it, essentially for 3 reasons:

  1. It offers a very extensive FREE plan.
  2. It has an easy-to-use and well-documented API for creating short URLs programmatically.
  3. It supports custom domains (I personally use it for my blog with loige.link and for FullStack Bulletin with fstack.link).

So, before starting the tutorial, be sure to have a Rebrandly account and an API Key, which you can generate from the API settings page once you are logged in.

Create a new Slack application

In order to create a new custom slash command for a given Slack organisation you have to create an app in the Slack developer platform.

Slack, create a new app

There are a number of easy steps you will need to follow to get started:

  1. Select the option to create a new slash command

Slack, create slash command

  2. Specify some simple options for the slash command

Slack, create slash command options

Notice that, for now, we are passing a sample request bin URL as the Request URL, so that we can inspect the payload that Slack sends before implementing our custom logic.

  3. Install the new app in your test organisation

Slack, install custom app into organisation

Slack, OAuth flow

Now the command is linked and can already be used in your Slack organisation, as you can see in the following image:

Slack, use custom Slash command

When you submit this command, Slack servers will send a POST request to the Request URL with all the details necessary to implement your custom logic and provide a response to the user invoking the command:

Slack, sample request from slack servers for implementing a slash command

The integration flow

Before moving on, let’s understand how the data flows between the different components that make the Slack slash command work. Let’s start with a picture:

Slack slash command integration flow

In brief:

  1. A user types a slash command followed by some text (the arguments of the command) into a Slack chat window.
  2. The Slack server receives the command and forwards it with an HTTP POST request to the Request URL associated with the command (hosted by the slash command developer on a separate server). The POST request contains many details about the command that has been invoked, so that the server receiving it can react accordingly. Some of the fields passed by Slack to the application server are listed below (an illustrative payload sketch follows this list):
    • token: a unique value generated by Slack for this integration. It should be kept secret and can be used to verify that the slash command request is really coming from Slack and not from another external source. Take note of your token because you will need it later on.
    • text: the full text passed as argument to the slash command
    • team_id: the Slack ID of the team where the slash command has been installed
    • channel_name: the name of the channel where the command was invoked
    • user_name: the user name of the user who invoked the command
    • response_url: a special URL that can be used by the server to provide an asynchronous response to Slack (useful for managing long-lived tasks that might take more than 3 seconds to complete).
  3. The application server responds to the HTTP request with 200 OK and a message containing the output of the command that should be displayed to the user.
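To make the flow above more concrete, here is an illustrative sketch of the urlencoded fields Slack might send for our command (all values below are invented for the example; only fields mentioned in this article are shown):

token=gIkuvaNzQIHg97ATvDxqgjtO
team_id=T0001
channel_name=general
user_name=loige
text=create a short url for the link http://example.com using the domain @loige.link and the custom slashtag ~rome17
response_url=https://hooks.slack.com/commands/T0001/1234567890/AbCdEfGhIj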

Application architecture

So it should be clear now that our goal is to implement a little web server app that receives URL shortening commands, calls the Rebrandly APIs to create the short links, and returns the shortened URLs back to the Slack server.

We can break down our app into some well-defined components:

  1. The web server: deals with all the HTTP nuances, receives and decodes requests from the Slack server and forwards them to the underlying components, then collects the result from them and returns it as an HTTP response.
  2. The command parser: parses the text (arguments) of the slash commands and extracts URLs, slashtags and domains.
  3. The url shortener: uses the result of the command parser to generate the short URLs by invoking the Rebrandly APIs.

Preparing the project

For this project we will need Node.js 6.0 or higher, so, before moving on, be sure you have this version on your machine.

Let’s get ready to write some code, but first create a new folder and run npm init in it. In this tutorial we are going to use some external dependencies that we need to fetch from npm:

npm install \
  body-parser@^1.17.2 \
  express@^4.15.3 \
  request-promise-native@^1.0.4 \
  string-tokenizer@^0.0.8 \
  url-regex@^4.0.0

Now let’s create a folder called src and inside of it we can create all the files that we will need to write:

mkdir src
touch \
  src/commandParser.js \
  src/createShortUrls.js \
  src/server.js \
  src/slashCommand.js \
  src/validateCommandInput.js

Quite a few files, huh? Let’s see what we need them for:

  • server.js: is our web server app. It spins up an HTTP server using Express that can be called by the Slack server. It serves as the entry point for the whole app, but the file itself will deal only with the HTTP nuances of the app (routing, request parsing, response formatting, etc.) while the actual business logic will be spread across the other files.
  • slashCommand.js: implements the high-level business logic needed for the slash command to work. It receives the content of the HTTP request coming from the Slack server and will use other submodules to process and validate it. It will also invoke the module that deals with the Rebrandly APIs and manage the response, properly formatting it into JSON objects that are recognized by Slack. It will delegate some of the business logic to other modules: commandParser, validateCommandInput and createShortUrls.
  • commandParser: this is probably the core module of our project. Its goal is to take an arbitrary string of text and extract information like URLs, domains and slashtags.
  • validateCommandInput: implements some simple validation rules to check if the result of the command parser is something that can be used with the Rebrandly APIs to create one or more short URLs.
  • createShortUrls: implements the business logic that invokes the Rebrandly APIs to create one or more custom short URLs.

This should give you a top-down view of the architecture of the app we are going to implement in a moment. If you are a visual person (like me), you might love to have a chart to visualize how those modules are interconnected, here you go, lady/sir:

Slack slash command components architecture graph

The command parser

We said that the command parser is the core of our application, so it makes sense to start coding it first. Let’s jump straight into the source code:

// src/commandParser.js
const tokenizer = require('string-tokenizer')
const createUrlRegex = require('url-regex')

const arrayOrUndefined = (data) => {
  if (typeof data === 'undefined' || Array.isArray(data)) {
    return data
  }

  return [data]
}

const commandParser = (commandText) => {
  const tokens = tokenizer()
    .input(commandText)
    .token('url', createUrlRegex())
    .token('domain', /(?:@)((?:(?:[a-z\u00a1-\uffff0-9]-*)*[a-z\u00a1-\uffff0-9]+)*\.[a-z\u00a1-\uffff]{2,})/, match => match[2])
    .token('slashtag', /(?:~)(\w{2,})/, match => match[2])
    .resolve()

  return {
    urls: arrayOrUndefined(tokens.url),
    domain: tokens.domain,
    slashtags: arrayOrUndefined(tokens.slashtag)
  }
}

module.exports = commandParser

This module exports the function commandParser. This function accepts a string called commandText as the only argument. This string will be the text coming from the slash command.

The goal of the function is to extract all the meaningful information for our task from a free-format string. In particular we want to extract URLs, domains and slashtags.

In order to do this we use the module string-tokenizer and some regular expressions:

  • The module url-regex is used to recognize all valid formats of URLs.
  • Then we define our own regex to extract domains, assuming that they will be prefixed by the @ character. We also specify an inline function to normalize all the matches and get rid of the @ prefix in the resulting output.
  • Similarly we define a regular expression to extract slashtags, which needs to have the ~ character as prefix. Here as well we cleanup the resulting matches to get rid of the ~ prefix.

With this configuration, the string-tokenizer module will return an object with all the matching components organised by key: all the URLs will be stored in an array under the key url and the same will happen with domain and slashtag for domains and slashtags respectively.

The caveat is that, for every given token, string-tokenizer returns undefined if no match is found, a simple string if only one match is found, and an array if there are several substrings matching the token regex.

Since we want to allow potentially many URLs and many associated slashtags but only one domain at a time, we want to return an object with a very specific format that satisfies those expectations:

  • urls: an array of urls (or undefined if none is found)
  • domain: the domain as a string (or undefined if none is specified)
  • slashtags: an array of slashtags (or undefined if none is found)

We process the output obtained with the string-tokenizer module (also using the simple helper function arrayOrUndefined) and return the resulting object.
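For example (a hedged sketch: the exact matches depend on the regexes above and the string-tokenizer version), feeding a command similar to the one from the beginning of the article into the parser should produce an object shaped like this:

// Hypothetical input and output shape
commandParser('create a short url for the link http://example.com using the domain @loige.link and the custom slashtag ~rome17')
// => {
//   urls: ['http://example.com'],
//   domain: 'loige.link',
//   slashtags: ['rome17']
// }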

That’s all for this module.

In case you want to learn more about regular expressions, there’s an amazing article about regular expressions available here at Scotch.io.

Validation module

The goal of the commandParser module was very clear: extract and normalize some information from a text in order to construct an object that describes all the short URLs that need to be created and their options.

The issue is that the resulting command object might be inconsistent with respect to some business rules that we need to enforce to interact with the Rebrandly APIs:

  • There must be at least one URL.
  • There must be at most one domain per command (if none is specified a default one will be used).
  • The number of slashtags cannot exceed the number of URLs (slashtags will be mapped to URLs in order, if there are more URLs than slashtags, the remaining URLs will get a randomly generated slashtag).
  • A command cannot contain more than 5 URLs (Rebrandly standard APIs are limited to 10 requests per second, so with this rule we should reasonably avoid reaching the limit).

The module validateCommandInput is here to help us ensure that all those rules are respected. Let’s see its code:

// src/validateCommandInput.js
const validateCommandInput = (urls, domain, slashtags) => {
  if (!urls) {
    return new Error('No url found in the message')
  }

  if (Array.isArray(domain)) {
    return new Error('Multiple domains found. You can specify at most one domain')
  }

  if (Array.isArray(slashtags) && slashtags.length > urls.length) {
    return new Error('Urls/Slashtags mismatch: you specified more slashtags than urls')
  }

  if (urls.length > 5) {
    return new Error('You cannot shorten more than 5 URLs at the time')
  }
}

module.exports = validateCommandInput

The code is very simple and pretty much self-descriptive. The only important thing to underline is that the validateCommandInput function will return undefined in case all the validation rules are respected, or an Error object as soon as one validation rule catches an issue with the input data. We will soon see how this design decision makes our validation logic very concise in the next modules.
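As a quick sketch of how this design plays out at a call site (the real usage appears later in slashCommand.js; the values here are invented):

const validateCommandInput = require('./validateCommandInput')

// undefined means "all rules respected", an Error describes the first violation
const error = validateCommandInput(['http://example.com'], undefined, ['rome17', 'extra'])
if (error) {
  console.error(error.message) // "Urls/Slashtags mismatch: ..."
}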

The Rebrandly API Client module

Ok, at this stage we start to see things coming together: we have a module to parse free-form text and generate a command object, and another module to validate this command, so now we need a module that uses the data in the command to actually interact with our short URL service of choice through its REST APIs. The createShortUrls module is here to address this need.

// src/createShortUrls.js
const request = require('request-promise-native')

const createErrorDescription = (code, err) => {
  switch (code) {
    case 400:
      return 'Bad Request'
    case 401:
      return 'Unauthorized: Be sure you configured the integration to use a valid API key'
    case 403:
      return `Invalid request: ${err.source} ${err.message}`
    case 404:
      return `Not found: ${err.source} ${err.message}`
    case 503:
      return `Short URL service currently under maintenance. Retry later`
    default:
      return `Unexpected error connecting to Rebrandly APIs`
  }
}

const createError = (sourceUrl, err) => {
  const errorDescription = createErrorDescription(err.statusCode, JSON.parse(err.body))
  return new Error(`Cannot create short URL for "${sourceUrl}": ${errorDescription}`)
}

const createShortUrlFactory = (apikey) => (options) => new Promise((resolve, reject) => {
  const body = {
    destination: options.url,
    domain: options.domain ? { fullName: options.domain } : undefined,
    slashtag: options.slashtag ? options.slashtag : undefined
  }

  const req = request({
    url: 'https://api.rebrandly.com/v1/links',
    method: 'POST',
    headers: {
      apikey,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(body, null, 2),
    resolveWithFullResponse: true
  })

  req
    .then((response) => {
      const result = JSON.parse(response.body)
      resolve(result)
    })
    .catch((err) => {
      resolve(createError(options.url, err.response))
    })
})

const createShortUrlsFactory = (apikey) => (urls, domain, slashtags) => {
  const structuredUrls = urls.map(url => ({url, domain, slashtag: undefined}))
  if (Array.isArray(slashtags)) {
    slashtags.forEach((slashtag, i) => (structuredUrls[i].slashtag = slashtag))
  }

  const requestsPromise = structuredUrls.map(createShortUrlFactory(apikey))
  return Promise.all(requestsPromise)
}

module.exports = createShortUrlsFactory

This module is probably the longest and most complex of our application, so let’s spend 5 minutes together to understand all its parts.

The Rebrandly API allows you to create one short URL at a time, but this module exposes an interface with which it is possible to create multiple short URLs with a single function call. For this reason, inside the module we have two abstractions:

  • createShortUrlFactory: allows creating a single short URL and remains private inside the module (it’s not exported).
  • createShortUrlsFactory: (notice Url vs Urls) uses the previous function multiple times. This is the publicly exported function of the module.

Another important detail is that both functions here implement the factory function design pattern. Both factories create new functions that contain the Rebrandly apikey in their scope; this way you don’t need to pass the API key around every time you want to create a short URL, and you can reuse and share the generated functions.

With all these details in mind, understanding the rest of the code should be fairly easy, because we are only building some levels of abstraction over a REST request to the Rebrandly API (using request-promise-native).
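For instance, here is a minimal usage sketch of the factory (the API key value is obviously made up, so running this as-is would resolve with Error objects rather than real links):

const createShortUrlsFactory = require('./createShortUrls')

// apply the factory once with the API key...
const createShortUrls = createShortUrlsFactory('my-rebrandly-api-key')

// ...then reuse the returned function for every command
createShortUrls(['http://example.com'], 'loige.link', ['rome17'])
  .then(results => console.log(results))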

The slash command module

Ok, now that we have the three main modules we can combine them together into our slashCommand module.

Before jumping into the code, remember that the goal of this module is to grab the request received from Slack, process it, and generate a valid response using Slack application message formatting rules and Slack message attachments:

// src/slashCommand.js
const commandParser = require('./commandParser')
const validateCommandInput = require('./validateCommandInput')

const createErrorAttachment = (error) => ({
  color: 'danger',
  text: `*Error*:\n${error.message}`,
  mrkdwn_in: ['text']
})

const createSuccessAttachment = (link) => ({
  color: 'good',
  text: `*<http://${link.shortUrl}|${link.shortUrl}>* (<https://www.rebrandly.com/links/${link.id}|edit>):\n${link.destination}`,
  mrkdwn_in: ['text']
})

const createAttachment = (result) => {
  if (result.constructor === Error) {
    return createErrorAttachment(result)
  }

  return createSuccessAttachment(result)
}

const slashCommandFactory = (createShortUrls, slackToken) => (body) => new Promise((resolve, reject) => {
  if (!body) {
    return resolve({
      text: '',
      attachments: [createErrorAttachment(new Error('Invalid body'))]
    })
  }

  if (slackToken !== body.token) {
    return resolve({
      text: '',
      attachments: [createErrorAttachment(new Error('Invalid token'))]
    })
  }

  const { urls, domain, slashtags } = commandParser(body.text)

  let error
  if ((error = validateCommandInput(urls, domain, slashtags))) {
    return resolve({
      text: '',
      attachments: [createErrorAttachment(error)]
    })
  }

  createShortUrls(urls, domain, slashtags)
    .then((result) => {
      return resolve({
        text: `${result.length} link(s) processed`,
        attachments: result.map(createAttachment)
      })
    })
})

module.exports = slashCommandFactory

So, the main function here is slashCommandFactory, which is the function exported by the module. Again we are using the factory pattern. At this stage you might have noticed how I tend to prefer this more functional approach as opposed to creating classes and constructors to keep track of initialization values.

In this module the factory generates a new function that has createShortUrls and slackToken in its scope. The createShortUrls argument is a function that needs to be created with the createShortUrlsFactory that we saw in the previous module. We are using another important design pattern here, the dependency injection pattern, which allows us to combine different modules in a very versatile way. This pattern offers many advantages, like:

  • Keep modules decoupled.
  • Allow switching the implementation of the dependency without changing the code of the dependent modules: for example, we could switch to another short URL service without changing a single line of code in this module.
  • Simplify testability (see the sketch after this list).
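For example, here is a hedged sketch of how dependency injection simplifies testing: we can inject a stub createShortUrls that never touches the network, together with a known token (all values are invented):

const slashCommandFactory = require('./slashCommand')

// a stub that resolves with a canned link instead of calling Rebrandly
const fakeCreateShortUrls = () => Promise.resolve([
  { id: 'abc123', shortUrl: 'loige.link/rome17', destination: 'http://example.com' }
])

const slashCommand = slashCommandFactory(fakeCreateShortUrls, 'fake-slack-token')

slashCommand({ token: 'fake-slack-token', text: 'http://example.com' })
  .then(response => console.log(response.text, response.attachments))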

Enough with design patterns and back to our slashCommandFactory function… The function it generates contains the real business logic of this module which, more or less, reads like this:

  1. Verify that the current message body is present (otherwise stop and return an error message).
  2. Verify that the request token is the one we were expecting from Slack (so the request is very unlikely to have been forged by a third party). If the token is not valid, stop and return an error message.
  3. Use the commandParser to extract information about the meaning of the received command.
  4. Validate the command details using validateCommandInput (if the validation fails, stop and return an error message).
  5. Use the injected createShortUrls function to generate all the requested short URLs.
  6. Create a response object for Slack containing details about every generated short URL (or errors that happened during the generation of one or more of them). For this last step we also use the internal utility function createAttachment.

Also notice that, since the operation performed by this module is asynchronous, we are returning a Promise, and that we resolve the promise even in case of errors. We didn’t use reject because we are handling those errors and we want to propagate them up as valid responses to the Slack server, so that the user can see a meaningful error message.

Web server with Express

We are almost there, the last bit missing is the web server. With Express on our side and all the other business logic modules already written this should be an easy task:

// src/server.js
const Express = require('express')
const bodyParser = require('body-parser')
const createShortUrlsFactory = require('./createShortUrls')
const slashCommandFactory = require('./slashCommand')

const app = new Express()
app.use(bodyParser.urlencoded({extended: true}))

const {SLACK_TOKEN: slackToken, REBRANDLY_APIKEY: apiKey, PORT} = process.env

if (!slackToken || !apiKey) {
  console.error('missing environment variables SLACK_TOKEN and/or REBRANDLY_APIKEY')
  process.exit(1)
}

const port = PORT || 80

const rebrandlyClient = createShortUrlsFactory(apiKey)
const slashCommand = slashCommandFactory(rebrandlyClient, slackToken)

app.post('/', (req, res) => {
  slashCommand(req.body)
    .then((result) => {
      return res.json(result)
    })
    .catch(console.error)
})

app.listen(port, () => {
  console.log(`Server started at localhost:${port}`)
})

I believe the code above is quite self descriptive, but let’s recap what’s going on in there:

  1. We initialize a new Express app and activate the body parser extension (which allows us to parse urlencoded messages from Slack).
  2. We verify that the app has been initialized with all the necessary environment variables (SLACK_TOKEN for the Slack slash command token and REBRANDLY_APIKEY for the Rebrandly API key), otherwise we shut down the application with an error. We can also optionally specify the environment variable PORT to use a different HTTP port for the server (by default 80).
  3. We use our factory functions to generate the rebrandlyClient and initialize the slashCommand.
  4. At this stage we are ready to register a POST route for the slash command, which will just parse the incoming HTTP requests and pass them to the slashCommand function we created before. When the slashCommand completes, we just return its response as JSON to the Slack server using res.json.
  5. Finally, we can start the app with app.listen.

That’s all, hooray! Let’s move on to running and testing this Slack integration!

Local testing with Ngrok

Our app is complete and you can start it by running:

export SLACK_TOKEN="your slack token"
export REBRANDLY_APIKEY="your rebrandly API key"
export PORT=8080 #optional
node src/server

At this stage our app will be listening at localhost on port 8080 (or whatever other port you specified during the initialization). In order for Slack to reach it you will need a publicly available URL.

For now we don’t need a permanent publicly available server, we just need a public URL to test the app. We can easily get a temporary one using ngrok.

After installing ngrok, we have to run:

ngrok http 8080

This command will print a public https URL.
You can copy this into your Slack slash command Request URL.
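For reference, the relevant part of the ngrok output looks roughly like this (the subdomain is randomly generated, so yours will differ):

Forwarding    http://92832de0.ngrok.io -> localhost:8080
Forwarding    https://92832de0.ngrok.io -> localhost:8080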

Finally we are ready to go into our Slack app and invoke our custom slash command:

Slack testing custom url shortener slash command input

If we did everything correctly, at this stage, you should see a response from our app directly in Slack:

Slack testing custom url shortener slash command output

Publishing the integration on Heroku

If you are happy with the current status of the app and you want to have it permanently available for your Slack team, it’s time to move it online. Generally, for these kinds of cases, Heroku can be a quick and easy option.

If you want to host this app on Heroku, be sure to have an account and the Heroku CLI already installed, then initialize a new Heroku app in the current project folder with:

heroku create awesome-slack-shorturl-integration

Beware that you might need to replace awesome-slack-shorturl-integration with a unique name for a Heroku app (somebody else reading this tutorial might have taken this one).

Let’s configure the app:

heroku config:set --app awesome-slack-shorturl-integration SLACK_TOKEN=<YOUR_SLACK_TOKEN> REBRANDLY_APIKEY=<YOUR_REBRANDLY_APIKEY>

Be sure to replace <YOUR_SLACK_TOKEN> and <YOUR_REBRANDLY_APIKEY> with your actual configuration values, and then you are ready to deploy the app with:

git push heroku master

This will produce a long output. At the end of it you should see the URL of the app on Heroku. Copy it and paste it as Request URL in the slash command config on your Slack app.

Now your server should be up and running on Heroku.

Enjoy it and keep shortening your URLs wisely!

Wrapping up

So, we are at the end of this tutorial. I really hope you had fun and that I inspired you to create some cool new Slack integrations! Well, if you are out of ideas I can give you a few:

  • /willIGoOutThisWeekend: to get the weather forecast in your area for the coming weekend.
  • /howManyHolidaysLeft: to tell you how many days of holiday you have left in this year.
  • /atlunch and /backfromlunch: to strategically change your availability status when you are going to lunch and when you are back.
  • /randomEmoji: in case you need help in finding new emojis to throw at your team members.

… OK ok, I am going to stop here, at this stage you will probably have better ideas 🙂

I hope you will share your creations with me in the comments here; I might want to add some new integrations to my Slack team!

Before closing off, I have to thank a few people for reviewing this article:

Also, here are a few related links that I hope you will enjoy:

Cheers 🙂

Source:: scotch.io

20 Excellent Resources for Learning Kotlin

By Danny Markov

20-kotlin-resources

Kotlin is a modern programming language that runs on the Java Virtual Machine. It has an elegant syntax and is interoperable with all existing Java libraries. At Google I/O 2017, the Android team announced that Kotlin will become an official programming language for the Android platform. This puts Kotlin in a position to become one of the top programming languages of the future.

To help you get started with your Kotlin journey, we’ve curated a list of some of the best Kotlin learning resources available right now. We haven’t included any paid courses or books; everything on the list is 100% free.


The Kotlin Website

The official website for the project is a very good place to start your Kotlin education. In the reference section you can find in-depth documentation that covers all the main concepts and features of the language. The tutorials section has a variety of practical step-by-step guides on setting up a working environment and working with the compiler.

There is also the Kotlin editor, a browser app that lets you try out the language. It is loaded with many examples, including the Koans course – by far the best way to get familiar with the syntax.

Keddit: Learn Kotlin while developing an Android App

An excellent 11-part series by Juan Ignacio Saravia in which he puts Kotlin into action and builds a Reddit clone app. The tutorials cover a vast number of topics ranging from setting up the workspace to using APIs and even unit testing. The code is available on GitHub.

Antonio Leiva’s Blog

Antonio Leiva’s blog is dedicated to all things Kotlin. It is updated weekly(ish) with high-quality tutorials and articles in which more advanced Kotlin developers can learn about new libraries and find all kinds of practical techniques.

Android Announces Support for Kotlin

The official Google blog post that explains the reasons behind the exciting announcement and why Kotlin deserves a place in the Android ecosystem. The article then goes on to give a brief preview of some of the awesome syntax improvements that Kotlin brings.

Design Patterns implemented in Kotlin

Dariusz Baciński has created a useful GitHub repo containing common design patterns implemented in Kotlin. There are similar projects written in several languages including Java, Swift, JavaScript, and PHP, so if you are coming from one of these programming backgrounds you can use them as a reference point.

Learn X in Y minutes

A quick cheatsheet with some of the most important features and syntax quirks that will help you write better Kotlin code. There are examples on working with classes, loops, and lists, as well as implementations of classic programming problems such as generating a Fibonacci sequence.

The Kotlin Blog

The official blog for Kotlin by its authors at JetBrains. Here you can find all Kotlin related news and updates, as well as all kinds of tutorials, tips, and other useful articles.

Get Started with Kotlin on Android

A helpful article from the Google Developers blog that explains how to set up Android Studio for Kotlin, how to convert .java files to .kt files, and how to incorporate the new language into an existing Android project. There are also some code comparisons of the same Android APIs used with both Kotlin and Java.

Android Testing With Kotlin

Great article that shows us how to write and run tests for Android apps using Kotlin. The author does a great job of explaining what different types of tests are available, when to use them, and how to make sure we are testing properly. Another good tutorial on this topic can be found here.


Introduction to Kotlin

A talk from Google I/O 2017 dedicated to introducing Kotlin to people for the first time and giving them an idea of how it can improve their workflow. It covers many of the basics and showcases some cool Kotlin tips.

Life is Great and Everything Will Be Ok, Kotlin is Here

The second Kotlin talk from Google I/O 2017. This one covers more advanced topics like design patterns, best practices, and other common principles. It also sheds some light on what it is like to use Kotlin in production and the challenges of adopting a young language in the workplace.

Peter Sommerhoff’s Kotlin Tutorials

Here is a free Kotlin course for complete beginners that includes all the basics from variables to conditionals to loops and functions. It then goes on to more advanced topics like object-orientation in Kotlin and functional programming like lambda expressions.

Better Android Development with Kotlin & Gradle

This talk from 2016 consists of a brief overview of the language’s features followed by a real world example where you’ll learn how Kotlin fits in with the existing tools in a typical Android workflow.

Better Android Development with Kotlin & Gradle

A very good 8-minute tutorial that quickly goes over the most important Kotlin features, such as the shortened variable declarations, lambdas, extension function, and more.

Android Development with Kotlin — Jake Wharton

Introduction to Kotlin that explains how the new language will improve the Android ecosystem and shows us a number of cool ways we can use the smart Kotlin syntax to our advantage.


From Java To Kotlin

Useful cheatsheet containing short snippets of code that will help you quickly look up the Kotlin alternatives to common Java operators, functions, and declarations.

Kotlin Educational Plugin

A plugin for IntelliJ IDEs that allows you to take the Koans course in a local offline environment.

Kotlin on GitHub

Kotlin has been open-source for more than 5 years and there is a GitHub repo containing the entire history of the project. If you want to support the language there are multiple ways you can contribute, be it directly or by working on the docs.

Kotlin Android Template

Template Android project that makes it super easy to set up a stable Kotlin workspace and quickly bootstrap your apps.

Awesome Kotlin

An extensive list of Kotlin resources containing all sorts of useful links, books, libraries, frameworks, and videos. The list is very well organized, with a stylized version also available at kotlin.link.

Source:: Tutorialzine.com

Using ES6 and modern language tools to program a MIDI controller

By David Szakallas

Using ES6 and modern language tools to program a MIDI controller

In this blogpost I summarize the challenges of creating a flexible and customizable MIDI controller mapping for the Mixxx DJ software. I will focus on the technical aspects of using the scripting facilities of the platform, and tackling the difficulties encountered on the journey.

I own two Novation Launchpads. The most iconic use-case of this cool grid controller is launching samples. Launchpad cover videos are very popular on YouTube. These are done by slicing up the songs and playing them back live, spiced up with some flashy visual effects.

You can also use launchpads for DJing. While it is fit for a handful of things (cueing samples, beatjumping, looping, etc.), the Launchpad has neither a jogwheel nor any rotary controls or faders, so it falls short on functions like scratching or crossfading. Thus, it’s best used as a companion to your other DJ gear.

If you are interested in Mixxx you can download it from its homepage.
If you want to know what MIDI is you can learn it here. You can learn about MIDI controllers on Wikipedia.

If you already use Mixxx for DJing, and you are only interested in the script itself, you can check it out on GitHub. You can find a manual and everything else needed to get started there.

Intro

Serato and Traktor are the two leading digital DJ software packages on the market. But I wonder if you have ever heard about Mixxx? It serves the same purpose as its commercial counterparts, but with a moral advantage: it’s free and open-source.

Creating a successful community-driven project in the professional audio software industry has a specific difficulty:

Not only do you have to write software that meets high standards regarding UX and stability, but you also have to support a range of hardware devices to convert the crowd.

See, there’s not much use for a live performance software without the ability to control it. Also, you can expect the target audience of DJs and electronic musicians to be fond of their expensive hardware, and to simply choose software that supports their arsenal – not the other way around.

Now imagine that you want to start a community-driven pro audio project, and you want it to support a lot of devices. What can you do?

One way is to go and appeal to the manufacturers to lend you a piece of each of their more popular models, accompanied by instructions on how to develop for them (programming manuals are often publicly available, fortunately).

Then, even if a particular manufacturer is kind enough to lend you hardware without any legal contract, it becomes your responsibility to distribute it among all your contributors, whom you must trust enough or bind by a contract.

This needs a well-organized community process, a lot of effort, and most likely a legal person.

But what if you have neither of these? You could go with a simpler, libre approach: get your users involved in the development process, so anyone who owns a device can program it and share with the community. Mixxx chose this path.

Well then, let the members of the community write their own controller mappings for Mixxx! But what would be a perfect platform for this job? How would you execute these mappings?

Mixxx, quite unsurprisingly, is written in C++.

You probably know that it is a complex systems programming language meant for creating performance-critical applications. I can also tell you that it’s damn hard, so it’s not ideal for non-programmers to start hacking on a DJ software as a hobby.

If only we could use a

  • simple (so it’s easy to learn),
  • interpreted (no complicated build process please!),
  • sandboxed (prevents bringing the whole application down),
  • dynamic (easy build process once more)

language such as JavaScript!

The smart people working on Mixxx, of course, realized this, so as you would expect from the title, JavaScript is what we’ll use to program MIDI controllers in Mixxx.

Feeding the FinickyMonkey

A further reason why JavaScript was chosen is that it’s simply the easiest solution.

Mixxx was written with Qt, a popular native application framework which is already bundled with a JavaScript interpreter for the purpose of extending its declarative GUI markup language called QML.

The current version of Mixxx is built on Qt 4.8 – shipping with god knows what type and version of JS interpreter, which I will call FinickyMonkey from now on.

FinickyMonkey is claimed to be ES5 compliant; however, that doesn’t hold for its parser, which throws errors on e.g. x.default or { final: 'x' }.

At first I didn’t understand, so I started digging and found out the following:

In ES3, keywords and future-reserved keywords can be neither member expression literals nor property literals, a restriction lifted in ES5, in addition to removing a lot of future-reserved keywords specified in ES3, like final, abstract or public. It seems that the parser remained in the ES3 era.
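To illustrate with a made-up snippet (not code from the project): both statements below are valid ES5, but an ES3-era parser rejects them, because default is a keyword and final is a future-reserved word in ES3:

// valid ES5, rejected by an ES3-era parser
var someModule = { default: function () {} } // stand-in object for the example
var settings = { final: 'x' }                // reserved word as a property literal
var exported = someModule.default            // reserved word in a member expression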

Wait a moment, the title suggests that you use modern JavaScript! How does using ES3 or ES5 justify that claim?

Well, of course it doesn’t, and I don’t do that.

Instead, I transpile my code with Babel to the target platform and use a module bundler, pretty much the same way a front-end developer would do for the browser!

Going back to ES3, as Babel generates noncompliant code from certain language features I’d rather use, e.g. default exports or for-of-loops, I had to work around it.

Fortunately, I could find transforms for the formerly mentioned property naming rules, greatly mitigating the issue. However, the removed future-reserved keywords used as identifiers remain an unresolved problem for now. (It has only turned up in one case so far.)

Use next current generation JavaScript, today.

Today, JavaScript (ECMAScript 6) is a pretty decent language.

Modularized, with statically resolved imports; an overwhelming amount of tools for code analysis and transformation; and nice language features overall. The community provides a wide range of packages under permissive licenses.

I decided in the very beginning that I want to make use of all this.

The first major concern is using modern JavaScript – ES6. I already mentioned Babel in the previous section. By using it, I am able to write code in the current generation of JavaScript.

Second in line is modularization, which enables me to split my project into separate files and to use packages from npm, like one of the downright necessary collection utility modules (lodash or underscore). My files and the external dependencies are bundled up with a module bundler into a single script file that FinickyMonkey can interpret.

Finally, I added a linter from the start to enforce consistent coding style and prevent simple mistakes. Later, I also decided to use a static type checking tool, Flow, which can prevent harder-to-detect mistakes.

There is nothing special about this so far; it is similar to a conventional front end JavaScript application setup! Sadly, however, the Mixxx community has not yet started using these language tools, as you can see if you visit the repo, making this project a pioneer in this respect.

Rolling up everything

I initially used Browserify in conjunction with its Babel plugin to bundle my ES6 modules into a nice fat standalone module which can be interpreted by the FinickyMonkey.

It was a perfectly working solution, and exactly as boring as you would expect, since everybody has already been using Browserify successfully for years to bring CommonJS code back to the stone age.

In case you don’t know how this stuff works, here’s a brief intro. Browserify knows nothing about ES2015, and just as little about ES6 modules, as it was created to bundle CommonJS modules.

So before letting Browserify ‘link’ our modules, we have to cheat and run a Babel transform on each of our files which (among other things) rewrites ES6 modules into the CommonJS format, so that they can be handled by the bundler.

Of course, we lose the benefits coming with ES6 modules that arise as a consequence of the fact that imports and exports are resolved ahead-of-time.

Whereas this is not possible with CommonJS (a tough job at least), an ES6-capable bundler could simply identify and eliminate certain chunks of dead code automatically – concretely those manifesting in the form of unused exports – by simply looking at the dependency graph.

This is commonly known as ‘tree-shaking’, which in addition to being an incorrect name for the problem*, sounds silly too. Fortunately there is a new module bundler on the block called Rollup that does this, so I gave it a go.

Rewriting the scripts to use Rollup was straightforward; however, I felt the whole process was somewhat harder to justify after I realized that there are only a handful of ES6 modules out on npm.

The source of this situation is rooted in platform support of course, as Node.js doesn’t support ES6 modules yet, and it appeared in browsers only recently.

This isn’t a game stopper for front end packages where dependents use a compilation toolchain anyways, so ES6 modules can be easily integrated. The problem is relevant for the server though, where common development practice disregards module bundling and generally any kind of ahead-of-time code manipulation. This ambivalence is clearly reflected in the landscape of npm packages**, as shown below.

Legend:

  • ✅ : ES6 by default
  • ⚠️ : ES6 not the default distribution, or some other quirk
  • ❌ : no ES6

Utility (these are used both server and client side):

HTTP, DB and messaging (mainly on the server):

  • ❌ express
  • ❌ redis
  • ❌ socket.io
  • ❌ request
  • ❌ mongoose

Front end frameworks:

  • ✅ Angular
  • ✅ Ember
  • ❌ React
  • ✅ Vue

At the end of the day, for my Launchpad script only my own hand-written, organic code and lodash could be handled OOTB by Rollup, while I had to use a CommonJS to ES6 transformer plugin for the rest.

*It originates from LISP, where it was used for figuring out dead code dynamically by evaluating all possible execution paths, so if Browserify had some kind of dead-code elimination for CommonJS, that usage would be a better fit for the term.

**Checked in May 2017

Static types with Flow

I started with plain ES6 and later on decided to add Flow definitions for the sake of experimentation.

Flow is a static type checker and a language extension for JavaScript that, unlike TypeScript, only requires transpilation to the extent of eradicating type annotations from the source code.

Type annotations are similar to comments in a sense that they have absolutely no impact on the runtime behavior of the code. Instead, they help the type checker in essence by serving as a marker with which you can label values as instances of intended types.

Type annotations can be incrementally added as you rediscover your code with your new torch. Here’s an example.
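The snippet below is a minimal invented sketch (not code from the mixxx-launchpad project) of what an annotated function looks like; stripping the annotations yields plain JavaScript with identical behavior:

// @flow
// Hypothetical helper: Flow checks the annotated types statically.
function midiNoteName(note: number): string {
  const names = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
  return names[note % 12] + String(Math.floor(note / 12) - 1)
}

midiNoteName(60)      // fine: returns 'C4'
// midiNoteName('60') // Flow error: string is incompatible with number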

Beware, as you will find many skeletons in the closet!

As I mentioned, type annotations don’t even make it into the code, and more interestingly, neither do they cause code to be generated by the transpiler.

They are just deleted, period.

Contrary to TypeScript, which has always had features requiring code generation, Flow has no intention of dynamically extending the language.

There is power in elegance: this property ensures that Flow code behaves the same way as the equivalent JavaScript without type annotations.

You can actually choose to add them in the form of comments, so it doesn’t even require an intermediate step. The fact that the transpilation remains optional also means that type checking remains a separate process, decoupled from transpilation. Imagine Flow as a linter on steroids.

Flow made me think a lot. Static types forced me to approach my source code differently.

As soon as I started adding type annotations, I began to realize that my application was badly structured. Why? A lot of previously hidden dependencies appeared between the source files in the form of type imports (if you have a type definition in another source file you have to import it, just as you import an object), and it was a mess, so I had to reorganize my code.

I also realized that I can generalize a lot by introducing superclasses. There is still much left to be desired, for example, the preset builder remains very dynamic despite all my efforts.

Taming the Mixxx APIs

The two main APIs that are exposed to you when you are working on Mixxx controller scripts are the MIDI and Engine APIs.

You use the MIDI API to talk to the MIDI device, while the Engine API lets you observe and modify Mixxx’s internals. I made some effort to create a wrapper for both APIs, spending more time on the Engine API wrapper, which is almost in a state where it can be separated from this project to be used by others, although that was not my original intention.

I think the biggest advantage of using both API wrappers over their native counterparts is the event notification system.

The native APIs are a mess, with undocumented and unconventional (the worst!) behavior, which you are very likely to misuse, leaking resources when e.g. reassigning event handlers.

The wrapper greatly simplifies correct usage with EventEmitters, which should be familiar from Node.js. There is stuff that is not implemented yet, like enforcing correct usage for all of Mixxx’s controls.

For example, we could prevent modifying read-only controls. Unlike the Engine API wrapper, the MIDI API wrapper can’t be externalized in its current form as it is specialized for Launchpad.

Mixxx’s ‘module loading’ interface also requires you to supply an XML file containing metadata about the controller and the script, and a list of your MIDI listener bindings. Instead of writing this file by hand, which is pretty long and hard to maintain, I generate it with the EJS templating tool, which was created for HTML but seems to handle XML just as well.

<?xml version='1.0' encoding='utf-8'?>  
<MixxxControllerPreset mixxxVersion="1.11+" schemaVersion="1">  
    <info>
        <name><%= manufacturer %> <%= device %></name>
        <author><%= author %></author>
        <description><%= description %></description>
        <forums><%= homepage %></forums>
    </info>
    <controller id="<%= manufacturer %> <%= device %>">
        <scriptfiles>
            <file functionprefix="<%= global %>" filename="<%= manufacturer %>-<%= device %>-scripts.js"/>
        </scriptfiles>
        <controls>
            <% buttons.forEach(function (button) { %><control>
                <group>[Master]</group>
                <key><%= global %>.__midi_<%= button.status %>_<%= button.midino %></key>
                <status><%= button.status %></status>
                <midino><%= button.midino %></midino>
                <options>
                    <script-binding/>
                </options>
            </control><% }) %>
        </controls>
        <outputs/>
    </controller>
</MixxxControllerPreset>  
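As an illustration, here is a hedged sketch of how a template like the one above might be rendered at build time (file names and values are invented; the actual build setup in the project may differ):

// build-preset.js (hypothetical build step)
const fs = require('fs')
const ejs = require('ejs')

const template = fs.readFileSync('controller.xml.ejs', 'utf8')
const xml = ejs.render(template, {
  manufacturer: 'Novation',
  device: 'Launchpad',
  author: 'szdavid92',
  description: 'Novation Launchpad mapping for Mixxx',
  homepage: 'https://github.com/szdavid92/mixxx-launchpad',
  global: 'NovationLaunchpad',
  buttons: [{ status: 144, midino: 11 }] // one note-on binding, values in decimal
})

fs.writeFileSync('Novation-Launchpad-scripts.xml', xml)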

Conclusion

If you are interested in the project itself, you can find it on GitHub by the name szdavid92/mixxx-launchpad.

There is a comprehensive user manual making it easy to start out.

I hope that all I’ve written down here might prove useful for someone who wants to create a new controller mapping for Mixxx, and I hope they follow in my footsteps in doing so.

Furthermore, I am inclined to put more work into the API wrappers, so if you would like to use them, I could make an effort to complete them so they can be separated into an external package you can use.

Thanks for reading, and happy coding!

Source:: risingstack.com

State Management in Vue: Getting Started with Vue

By Chris Nwamba

Vue is awesome. But, like every other component-based framework, it is difficult to keep track of state when your application starts growing. This difficulty is more pronounced when there is so much data moving around from one component to another.

React pioneered the abstraction of state management using the Flux pattern. The Flux pattern was made popular by Redux, which is not dependent on the React library itself. This abstraction led other component-based frameworks to implement Flux/Redux solutions that suit each framework. Vue introduced Vuex for this purpose, and in this article we will get our hands dirty with Vuex.

The Challenge of State Management

Using a state management solution is not always required. When your app is small or does not have much data flow, there is less need to employ Vuex. Otherwise, assume we have a comment app with the following component structure:
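The structure in question (shown as a diagram in the original post) is roughly a three-level tree, which the next paragraphs walk through:

App
└── CommentList
    └── CommentItem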

This is still fairly simple, yet if we need to move data from the App component down to the CommentItem component, we have to send it down to CommentList first, before CommentItem. It’s even more tedious when raising events from CommentItem to be handled by the App component.

It gets worse when you try to add another child to CommentItem, probably a CommentButton component. This means when the button is clicked, you have to tell CommentItem, which will tell CommentList, and finally App. Trying to illustrate this already gives me a headache; now consider having to implement it in a real project.

You might be tempted to keep local state for every component, and that could work. The problem is, you will easily lose track of what exists and why a particular event is happening at a given time. It is easy for the data to fall out of sync, which leaves you in a pool of confusion.

Vuex To The Rescue

You can lift the heavy duty of managing state from Vue to Vuex. Data flow and updates will be happening from one source and you end up wrapping your mind around one store to keep track of what is happening in your app.

States! States!! States!!!

In the Flux/Redux/Vuex world, the word “state(s)” gets thrown around a lot and can get annoying because it means different things in different contexts. State basically means the current working status of an app. It is determined by what data exists and where that data exists at a given time.

In Vuex, states are just plain objects:

{
    counter: 0,
    list: [],
    // etc
}

It might look simple, but what you see above can be so powerful that it controls what happens in your application and why it happens. Before we dig into more state magic, let’s set up Vuex in a Vue project. Vue is simple to set up and we can just play on CodePen.

Setup Vue and Vuex

Assuming you have an existing Vue starter app that looks like the example below:

const App = new Vue({
  template: `
    <div>Hello</div>
  `
});
App.$mount('#app');

When working with Vue and Vuex via script imports, Vuex automatically sets itself up. If using the module system, then you need to install Vuex:

import Vue from 'vue'
import Vuex from 'vuex'

Vue.use(Vuex)

Vuex Store

The Vuex store is a centralized means for managing state. Everything about the state, including how to retrieve and update state values, is defined in the store. You can create a store by creating an instance of Vuex’s Store and passing in our store details as an object:

const store = new Vuex.Store({
  state: {
    count: 0
  }
})

The state we discussed previously is passed in alongside other store members that we will discuss in following sections.

In order to have access to this store via the Vue instance, you need to tell Vue about it:

const App = new Vue({
  template: `
    <div>Hello</div>
  `,
  // Configure store
  store: store
});
App.$mount('#app');

You can log this to the console via the created lifecycle hook to see that $store is now available:
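Here is a minimal sketch of doing that (the original post showed the resulting console output as a screenshot):

const App = new Vue({
  template: `
    <div>Hello</div>
  `,
  store: store,
  created: function () {
    // the store configured above is exposed on every component instance
    console.log(this.$store)
  }
});
App.$mount('#app');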

Rendering Store State

Vuex state is reactive. Therefore, you can treat it like the usual reactive object returned from a data function. We can use the computed member property to retrieve values:

...
var App = new Vue({
  computed: {
    counter: function() {
      return this.$store.state.counter
    }
  },
  template: `
    <p class="counter">{{counter}}</p>
  `,
  store: store
});

The computed counter can then be bound to the template using interpolation.

See the Pen Vue Vuex by Chris Nwamba (@codebeast) on CodePen.

Store Getters

We would violate DRY if more than one component depended on a computed value, because we would have to repeat the computation in each component. Therefore, it’s best to handle the computation inside the store using the getters property:

const store = new Vuex.Store({
  state: {
    counter: 0
  },
  getters: {
    counter: state => state.counter * 2
  }
})

var App = new Vue({
  computed: {
    counter: function() {
      return this.$store.getters.counter
    }
  },
  template: `
    <p class="counter">{{counter}}</p>
  `,
  store: store
});

Store Mutations

Mutations are synchronous functions that are used to update state. State must not be updated directly, which means this is incorrect:

increment () {
    this.$store.state.counter++
}

The state changes must be done via mutation functions in the store:

const store = new Vuex.Store({
  state: {
    counter: 0
  },
  ...
  mutations: {
    // Mutations
    increment: state => state.counter++
  }
})

Mutations are like events. To call them, you have to use the commit method:

  methods: {
    increment: function () {
      this.$store.commit('increment')
    },
  },
  template: `
    <div>
      <p class="counter">{{counter}}</p>
      <div class="actions">
        <div class="actions-inner">
          <button @click="increment">+</button>
        </div>
      </div>
    </div>
  `,

See the Pen Vue Vuex by Chris Nwamba (@codebeast) on CodePen.

Todo With Vuex

The counter example is pretty basic and doesn’t make much use of Vuex. It’d be nice to take this to another level and try to build a todo app where we can create, complete, and remove todos.

Listing Todos

First thing, we need to make a list of todos. This list will live in our store’s state as array:

const store = new Vuex.Store({
  state: {
    todos: [
      {
        task: 'Code',
        completed: true
      },
      {
        task: 'Sleep',
        completed: false
      },
      {
        task: 'Eat',
        completed: false
      }
    ]
  },
})

We can render this list of todos in our component:

const store = new Vuex.Store({
  state: {
    todos: [
      ...
    ]
  },
  getters: {
    todos: state => state.todos
  }
})

const TodoList = {
  props: ['todos'],
  template: `
    <div>
      <ul>
        <li v-for="t in todos" :class="{completed: t.completed}">{{t.task}}</li>
      </ul>
    </div>
  `
}

var App = new Vue({
  computed: {
    todos: function() {
      return this.$store.getters.todos
    }
  },
  template: `
    <div>
      <todo-list :todos="todos"></todo-list>
    </div>
  `,
  store: store,
  components: {
    // Add child component to App
    'todo-list': TodoList
  }
});

Rather than write all our logic in one App component, we wrote the todo-list-related logic in a different component, TodoList. App needs to be aware of this new component; therefore, we register the component and render it:

template: `
    ...
      <todo-list :todos="todos"></todo-list>
    ...
  `,
components: {
    'todo-list': TodoList
  }

TodoList receives the list of todos via props, iterates over the todos, strikes out completed todos, and renders each of the todo items.

See the Pen Vue Vuex by Chris Nwamba (@codebeast) on CodePen.

Creating New Todos

To create new todos, we need a text box where the task can be entered. The text box will be wrapped in a form:

// Store
const store = new Vuex.Store({
  state: {
    todos: [
      ...
    ]
  },
  ...
  mutations: {
    // Add todo mutation
    addTodo: (state, payload) => {
      // Assemble data
      const task = {
        task: payload,
        completed: false
      }
      // Add to existing todos
      state.todos.unshift(task);
    }
  }
})

// App Component
var App = new Vue({
  data: function() {
    return {
      task: ''
    }
  },
  methods: {
    addTodo: function() {
      // Commit to mutation
      this.$store.commit('addTodo', this.task)
      // Empty text input
      this.task = ''
    }
  },
  template: `
    <div>
      <form @submit.prevent="addTodo">
        <input type="text" v-model="task" />
      </form>
      <todo-list :todos="todos"></todo-list>
    </div>
  `,
  ...
});

Completing and Removing Todos

To complete a todo, we need to find which todo should be completed and toggle its completed property. This can be triggered by clicking each item on the todo list:

const store = new Vuex.Store({
  state: {
    todos: [
      ...
    ]
  },
  mutations: {
    addTodo: (state, payload) => {
      const task = {
        task: payload,
        completed: false,
        id: uuid.v4()
      }
      console.log(state)
      state.todos.unshift(task);
    },
    // Toggle Todo
    toggleTodo: (state, payload) => {
      state.todos = state.todos.map(t => {
        if(t.id === payload) {
          // Update the todo 
          // that matches the clicked item
          return {task: t.task, completed: !t.completed, id: t.id}
        }
        return t;
      })
    }
  }
})

const TodoList = {
  props: ['todos'],
  methods: {
    toggleTodo: function(id) {
      this.$store.commit('toggleTodo', id)
    }
  },
  template: `
    <div>
      <ul>
        <li v-for="t in todos" :class="{completed: t.completed}" @click="toggleTodo(t.id)">{{t.task}}</li>
      </ul>
    </div>
  `,
}

When a todo item is clicked, we call the toggleTodo method. This handler receives an id parameter and commits a toggleTodo mutation with that id.

In the mutation function, we iterate over the items, find the one that needs to be updated, and update it.

We added a uuid library in order to uniquely identify each item when updating it:

<script src="http://wzrd.in/standalone/uuid@latest"></script>

See the Pen Vue Vuex by Chris Nwamba (@codebeast) on CodePen.

You could also choose to remove todo items entirely from the list. We can do this by listening to the dblclick event:

const store = new Vuex.Store({
  state: {
    todos: [
      ...
    ]
  },
  getters: {
    todos: state => state.todos
  },
  mutations: {
    deleteTodo: (state, payload) => {  
      const index = state.todos.findIndex(t => t.id === payload);
      state.todos.splice(index, 1) 
      console.log(index)
    }
  }
})

const TodoList = {
  props: ['todos'],
  methods: {
    toggleTodo: function(id) {
      this.$store.commit('toggleTodo', id)
    },
    deleteTodo: function(id) {
      this.$store.commit('deleteTodo', id)
    }
  },
  template: `
    <div>
      <ul>
        <li v-for="t in todos" :class="{completed: t.completed}" @click="toggleTodo(t.id)" @dblclick="deleteTodo(t.id)">{{t.task}}</li>
      </ul>
    </div>
  `,
}

This is similar to what we did for toggling todos: we just remove the item whose id matches the payload from the array.

See the Pen Vue Vuex by Chris Nwamba (@codebeast) on CodePen.

Final Notes

We built a basic counter and fairly practical todo app to showcase how you can use Vuex in common state manipulation tasks.

One thing is worth keeping in mind before we close off: mutations are synchronous, but the real world is not. Therefore, we need a way to handle asynchronous operations. Vuex provides another member called actions. Actions allow you to carry out async operations and then commit mutations once those operations complete.
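
As a minimal sketch of that pattern (the /api/todos endpoint and the setTodos mutation are made up for illustration and are not part of the todo app above), an action could fetch data asynchronously and commit a mutation once the response arrives:

const store = new Vuex.Store({
  state: {
    todos: []
  },
  mutations: {
    // Synchronous: simply replaces the list once the data is available
    setTodos: (state, payload) => {
      state.todos = payload
    }
  },
  actions: {
    // Asynchronous: fetch the data, then commit the mutation with the result
    loadTodos: context => {
      return fetch('/api/todos')            // hypothetical endpoint
        .then(response => response.json())
        .then(todos => context.commit('setTodos', todos))
    }
  }
})

A component would trigger this with this.$store.dispatch('loadTodos') rather than commit, since actions are dispatched while mutations are committed.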

Source:: scotch.io

Custom themes with Angular Material

Picture of md-toolbar component

When building bigger applications, we always strive for flexibility, extensibility and reusability. This applies not only to the actual application logic, but also to style sheets. Especially nowadays, where things like CSS variables and modules exist. These tools are great and they solve many different problems in a very elegant way. However, one thing that’s still super hard to do these days is theming. Being able to use existing, or create new components, but easily changing their look and feel without changing their code. This is often required when we build things that can be reused across different projects, or if the project we’re working on should simply enable the user to change the color scheme.

The Angular Material project comes with a built-in story for theming, including using any of Material Design’s own predefined themes, but also creating custom themes that will be used not only by components provided by Angular Material, but also our own custom ones.

In this article we’ll explore how theming is implemented, how pre-built themes can be used and how we can make our own custom components theme-able so they pick up the configured theme as well!

What is a theme?

“Theme” can mean many different things to different people, so it’s good to clarify what a theme in the context of Angular Material means. Let’s get right into it.

The official theming guide is pretty much to the point here:

A theme is a set of colors that will be applied to the Angular Material components.

To be more specific, a theme is a composition of color palettes. That’s right, not just a single color palette, but multiple color palettes. While this might sound unnecessary at first, it turns out to be a very powerful setup for defining themes in a flexible way.

Alright, but what color palettes are needed to compose a theme? As for Angular Material, it boils down to five different color palettes with each being used for different parts of the design:

  • Primary – Main colors most widely used across all screens and components.
  • Accent – Also known as the secondary color. Used for floating action buttons and interactive elements.
  • Warn – Colors to convey error state.
  • Foreground – Used for text and icons.
  • Background – Colors used for element backgrounds.

If you want to dive deeper into the whole color usability story in Material Design, we recommend checking out the Material Design Specification for colors, as it describes the topic in great detail.

Using pre-built themes

As mentioned earlier, Angular Material already comes with a set of pre-built themes that can be used right out of the box. Available pre-built themes are: deeppurple-amber, indigo-pink, pink-bluegrey and purple-green.

Using them is as easy as including or importing the dedicated CSS file that comes with all Angular Material builds. So assuming we’ve installed Angular Material in our Angular CLI project using:

$ yarn|npm install --save @angular/material

We can go ahead and add any of the pre-built CSS files to our global styles by configuring our .angular-cli.json accordingly:

{
  ...
  "styles": [
    "../node_modules/@angular/material/prebuilt-themes/indigo-pink.css",
    "styles.scss"
  ]
  ...
}

Or, if we don’t want to fiddle around in our .angular-cli.json file, we can also import any pre-built theme right into the project’s styles.scss file like this:

@import '../node_modules/@angular/material/prebuilt-themes/indigo-pink.css';

We can easily try it out by having our application use Angular Material components. So first we add MaterialModule to our AppModule‘s imports:

import { MaterialModule } from '@angular/material';

@NgModule({
  imports: [
    ...
    MaterialModule
  ],
  ...
})
export class AppModule {}

Then we go ahead and render, for example, Angular Material’s toolbar component:

@Component({
  selector: 'my-app',
  template: `
    <md-toolbar>Awesome toolbar</md-toolbar>
  `
})
export class AppComponent {}

Looks cool right? Another thing that’s worth mentioning is that some Material components offer properties to configure whether they use the current theme’s primary, accent or warn color:

<md-toolbar color="primary">Awesome toolbar</md-toolbar>

Picture of md-toolbar component with primary theme color

Custom theme using built-in color palettes

Alright, using pre-built themes is a pretty cool thing as we get good looking components right away without doing any serious work. Let’s talk about how to create a custom theme using Angular Material’s predefined color palettes.

In order to create a custom theme, we need to do a couple of things:

  • Generate core styles – These are theme independent styles, including styles for elevation levels, ripple effects, styles for accessibility and overlays
  • Primary color palette – Generate color palette for the theme’s primary color
  • Accent color palette – Generate color palette for the theme’s accent color
  • Warn color palette – Generate color palette for the theme’s warn color
  • Theme generation – Given the color palettes we generated, we create a theme, which can be used by Angular Material, or custom components

While this looks like a lot of work, it turns out Angular Material gives us many tools to make these tasks a breeze. Let’s start off by creating a new custom-theme.scss file and importing it in our root styles.scss instead of the pre-built theme. After that, we’ll go through the list step by step:

@import './custom-theme';

Generate core styles

This is a pretty easy one. Angular Material provides many very powerful SCSS mix-ins that do all the groundwork for us. The mix-in that generates Material’s core styles is called mat-core. All we have to do is import and call it.

Here’s what that looks like (custom-theme.scss):

@import '../node_modules/@angular/material/theming';

@include mat-core();

Generate color palettes

The next thing we need to do is generate the color palettes, which can then be composed into an actual theme. To generate a color palette, we can use Angular Material’s mat-palette mix-in. mat-palette takes a base palette (yes, that’s another palette, more on that in a second) and returns a new palette that comes with Material-specific hue color values for “light”, “dark” and “contrast” colors of the given base palette.

But what is this base palette? The base palette is just another color palette that comprises the lighter and darker hues of a single color. Wait, this sounds super confusing! Let’s take Material Design’s red color palette as an example:

Picture of a Material Design color palette

Here we see all the color codes for lighter and darker versions of the color red, as part of the Material Design specification. The values 50 to 900 represent the hue values, or the “strength” of the color, i.e. how light or dark it is. 500 is the recommended value for a theme’s primary color. There are many more predefined color palettes, and they are very nicely documented right here.

So now that we know what a base palette is, we need to figure out how to create such a thing. Do we have to define and write them ourselves? The answer is yes and no. If we want to use our own custom color palettes, we need to define them manually. However, if we want to use any of the Material Design colors, Angular Material comes with predefined palette definitions for all of them! If we take a quick look at the source code, we can see how the palette for the color red is implemented:

$mat-red: (
  50: #ffebee,
  100: #ffcdd2,
  200: #ef9a9a,
  300: #e57373,
  400: #ef5350,
  500: #f44336,
  600: #e53935,
  700: #d32f2f,
  800: #c62828,
  900: #b71c1c,
  A100: #ff8a80,
  A200: #ff5252,
  A400: #ff1744,
  A700: #d50000,
  contrast: (
    50: $black-87-opacity,
    100: $black-87-opacity,
    200: $black-87-opacity,
    300: $black-87-opacity,
    400: $black-87-opacity,
    500: white,
    600: white,
    700: white,
    800: $white-87-opacity,
    900: $white-87-opacity,
    A100: $black-87-opacity,
    A200: white,
    A400: white,
    A700: white,
  )
);

It’s basically just a map where each key (tone value) maps to a color code. So if we ever want to define our own custom color palette, this is what it could look like.

Okay, let’s create a palette for our primary, accent and warn colors. All we have to do is to call the mat-palette mix-in with a base color palette. Let’s use $mat-light-blue for primary, $mat-orange for accent and $mat-red for warn colors. We can simply reference these variables because we imported Angular Material’s theming capabilities in the previous step:

$custom-theme-primary: mat-palette($mat-light-blue);
$custom-theme-accent: mat-palette($mat-orange, A200, A100, A400);
$custom-theme-warn: mat-palette($mat-red);

Oh wait, what’s that? Why do we pass additional values to mat-palette when generating our accent color palette? Well… Let’s take a closer look at what mat-palette actually does.

Understanding mat-palette

We’ve already mentioned that mat-palette generates a Material Design color palette out of a base color palette. But what does that actually mean? In order to get a better picture of what’s going on in that mix-in, let’s take a look at its source code:

@function mat-palette($base-palette, $default: 500, $lighter: 100, $darker: 700) {
  $result: map_merge($base-palette, (
    default: map-get($base-palette, $default),
    lighter: map-get($base-palette, $lighter),
    darker: map-get($base-palette, $darker),

    default-contrast: mat-contrast($base-palette, $default),
    lighter-contrast: mat-contrast($base-palette, $lighter),
    darker-contrast: mat-contrast($base-palette, $darker)
  ));

  // For each hue in the palette, add a "-contrast" color to the map.
  @each $hue, $color in $base-palette {
    $result: map_merge($result, (
      '#{$hue}-contrast': mat-contrast($base-palette, $hue)
    ));
  }

  @return $result;
}

Despite its name, mat-palette is really just a function (note the @function keyword above): it takes arguments and returns something. mat-palette takes a base color palette (a map like $mat-red) and optional default values for the generated palette’s default, lighter and darker colors. Eventually it returns a new color palette that has some additional map values. Those additional values are the mentioned default, lighter and darker colors, as well as their corresponding default-contrast, lighter-contrast and darker-contrast colors. On top of that, it generates keys for contrast values for each base hue tone (50 to 900).

As we can see, we basically end up with a color palette that comes with everything the base palette provides, plus some additional keys for easy access. So coming back to the question of why we pass additional values to mat-palette for our accent color: all we are doing is configuring the default, lighter and darker color tones.

Generating themes

A theme lets us apply a consistent tone to our application. It specifies the darkness of the surfaces, level of shadow and appropriate opacity of ink elements. The Material Design specification describes two different variations of themes – dark and light.

Angular Material implements another set of mix-ins to generate either light or dark themes using mat-light-theme and mat-dark-theme respectively. Now that we have all of our color palettes in place, we can do exactly that. Let’s create a light theme object like this:

$custom-theme: mat-light-theme($custom-theme-primary, $custom-theme-accent, $custom-theme-warn);

If we take a quick look at mat-light-theme‘s source code, we can see that it really just prepares another map object that can later be easily consumed for theming:

@function mat-light-theme($primary, $accent, $warn: mat-palette($mat-red)) {
  @return (
    primary: $primary,
    accent: $accent,
    warn: $warn,
    is-dark: false,
    foreground: $mat-light-theme-foreground,
    background: $mat-light-theme-background,
  );
}

That’s it! We can now use that generated theme object and feed it to Angular Material’s angular-material-theme mix-in, which really just passes that theme object to other mix-ins for each component, so they can access the color values from there:

@include angular-material-theme($custom-theme);

Here’s the complete code for our custom theme, using $mat-light-blue and $mat-orange:

@import '../node_modules/@angular/material/theming';

@include mat-core();

$custom-theme-primary: mat-palette($mat-light-blue);
$custom-theme-accent: mat-palette($mat-orange, A200, A100, A400);
$custom-theme-warn: mat-palette($mat-red);

$custom-theme: mat-light-theme($custom-theme-primary, $custom-theme-accent, $custom-theme-warn);

@include angular-material-theme($custom-theme);

Theming custom components

There’s one thing we haven’t talked about yet: theming custom components. So far we’ve only changed the look and feel of Angular Material’s own components. That’s because we’re calling the angular-material-theme mix-in with our custom theme object. If we removed that call, we’d end up with all Material components in their base colors. This becomes clearer when we take a look at what angular-material-theme does:

@mixin angular-material-theme($theme) {
  @include mat-core-theme($theme);
  @include mat-autocomplete-theme($theme);
  @include mat-button-theme($theme);
  @include mat-button-toggle-theme($theme);
  @include mat-card-theme($theme);
  @include mat-checkbox-theme($theme);
  @include mat-chips-theme($theme);
  @include mat-datepicker-theme($theme);
  @include mat-dialog-theme($theme);
  @include mat-grid-list-theme($theme);
  @include mat-icon-theme($theme);
  @include mat-input-theme($theme);
  @include mat-list-theme($theme);
  @include mat-menu-theme($theme);
  @include mat-progress-bar-theme($theme);
  @include mat-progress-spinner-theme($theme);
  @include mat-radio-theme($theme);
  @include mat-select-theme($theme);
  @include mat-sidenav-theme($theme);
  @include mat-slide-toggle-theme($theme);
  @include mat-slider-theme($theme);
  @include mat-tabs-theme($theme);
  @include mat-toolbar-theme($theme);
  @include mat-tooltip-theme($theme);
}

Every component in Angular Material comes with a dedicated theme mix-in that takes a theme object and accesses its values for theme-specific styles. We can use exactly the same pattern to theme our own custom components. This turns out to be very powerful because it enables us to change the theme of our entire application just by changing the theme object!

Let’s say we have a custom component FileTreeComponent as we created it in MachineLabs (a project you might want to check out!). FileTreeComponent renders a list of files and we want that component to respond to the configured theme. Here’s what its template looks like (simplified):

<ul class="ml-file-list">
  <li class="ml-file-list-item" *ngFor="let file of files">
    <md-icon>description</md-icon> {{file.name}}
  </li>
</ul>

It also comes with a base CSS file that introduces just enough styles so that the component is usable and accessible. No colors applied though. We won’t go into much detail here because there’s nothing new to learn. However, just to give a better idea, here are some corresponding base styles for FileTreeComponent:

.ml-file-list {
  max-height: 250px;
  overflow: scroll;
  list-style: none;
  margin: 0;
  padding-left: 2em;
  padding-right: 2em;
  padding-top: 1.2em;
  padding-bottom: 1.2em;
}

.ml-file-list-item {
  font-size: 0.9em;
  font-weight: 400;
  padding: 0.4em 0.4em;
  line-height: 1.4;
  position: relative;

  md-icon {
    font-size: 1.2em;
    vertical-align: -22%;
    height: 18px;
    width: 18px;
  }

  &:hover { cursor: pointer; }
}

The component looks something like this:

MachineLabs file tree component

We want to add theming capabilities to the following elements inside FileTreeComponent when a theme is applied:

  • .ml-file-list needs a border in the “foreground” color of the configured theme
  • .ml-file-list-item needs the theme’s background hover color when hovering over it
  • When ml-file-list-item is selected, we need to give it a lighter version of the theme’s primary color

These rules can be easily implemented, simply by following the same pattern that Angular Material is using for its own components. We define a mix-in for FileTreeComponent that takes a theme object and uses that to access theme values using map-get and mat-color mix-ins.

Let’s start off by creating an ml-file-tree-theme mix-in and pulling out the color palettes we’re interested in from the given theme (file-tree-theme.scss):

@mixin ml-file-tree-theme($theme) {

  $primary: map-get($theme, primary);
  $warn: map-get($theme, warn);
  $background: map-get($theme, background);
  $foreground: map-get($theme, foreground);

}

Remember how mat-light-theme created additional values for foreground and background for our theme? With map-get we can access any value of a given map by its key. In other words, we’re pulling out the color palettes for the theme’s primary, warn, background and foreground colors.

Once that is done, we can start using the color values of these palettes in our style sheets with the mat-color mix-in. mat-color takes a color palette and a hue value (or one of the descriptive names like lighter) and returns the corresponding color. If we want .ml-file-list to have a border in the divider foreground color of the given theme, it’d look something like this:

.ml-file-list {
  border-bottom: 1px solid mat-color($foreground, divider);
}

We use exactly the same technique to theme the background color of .ml-file-list-item like this:

.ml-file-list-item {

  &:hover, &:active, &:focus {
    background-color: mat-color($background, hover);
  }

  &.selected {
    background-color: mat-color($primary, lighter, 0.5);
    color: mat-color($foreground, text);
  }
}

One thing to note here is that mat-color takes an optional third argument to configure the color’s opacity.

That’s it! FileTreeComponent is now fully theme-aware and its look and feel responds to the configured theme. Here’s the complete code:

@mixin ml-file-tree-theme($theme) {

  $primary: map-get($theme, primary);
  $warn: map-get($theme, warn);
  $background: map-get($theme, background);
  $foreground: map-get($theme, foreground);

  .ml-file-list {
    border-bottom: 1px solid mat-color($foreground, divider);
  }

  .ml-file-list-item {

    &:hover, &:active, &:focus {
      background-color: mat-color($background, hover);
    }

    &.selected {
      background-color: mat-color($primary, lighter, 0.5);
      color: mat-color($foreground, text);
    }
  }
}

And here’s what our component looks like now:

MachineLabs file tree component

Last but not least, we have to call the ml-file-tree-theme mix-in with our custom theme object. We do that by importing the mix-in in our custom-theme.scss file and executing it like this:

...
@import 'app/lab-editor/file-tree/file-tree-theme.scss';

...
@include ml-file-tree-theme($custom-theme);

In fact, we can take it one level further and create a meta theme mix-in that executes all theme mix-ins for our custom components, the same way Angular Material does it with angular-material-theme. To do that we create a new mix-in custom-theme, which would look like this:

@mixin custom-theme($theme) {
  @include ml-file-tree-theme($theme);
}

@include custom-theme($custom-theme);

Here again, the complete code of our custom-theme.scss file:

@import '../node_modules/@angular/material/theming';
@import 'app/lab-editor/file-tree/file-tree-theme.scss';

@include mat-core();

$custom-theme-primary: mat-palette($mat-light-blue);
$custom-theme-accent: mat-palette($mat-orange, A200, A100, A400);
$custom-theme-warn: mat-palette($mat-red);

$custom-theme: mat-light-theme($custom-theme-primary, $custom-theme-accent, $custom-theme-warn);

@mixin custom-theme($theme) {
  @include ml-file-tree-theme($theme);
}

@include angular-material-theme($custom-theme);
@include custom-theme($custom-theme);

Conclusion

Angular Material’s theming capabilities are very powerful and as of right now, it seems to be the only UI component library that gets it fairly right. Color palettes can be easily changed and reused and custom components can be enabled to consume a configured theme to match the look and feel of the entire application.

Source:: Thoughtram

Enhanced Cordova App Security with SSL Certificate Pinning

By Bryan Ellis

A simple static application has fewer security concerns than a dynamic application that requires an internet connection. One concern a developer might have with either a static or a dynamic application is how to prevent unauthorized users from copying or reading the source code. This can easily be addressed by encrypting the application content.

More security concerns appear when an internet connection is established to fetch data. This connection can put a user or an application at risk. The biggest concern, in either hybrid or native development, is data integrity. Did the data come from the original source? Has the data been tampered with before reaching the app? Such tampering is commonly known as a man-in-the-middle attack.

Requirements

  • A Monaca account with a plan that supports Custom Cordova plugin.
  • Existing knowledge of how to build a custom Monaca Debugger.

SSL Pinning Guide

The first step to address this concern is to make sure that all endpoint URLs are using HTTPS/SSL. This will ensure that the data being transported between the server and app is encrypted.

Cordova does not support true certificate pinning. The main barrier to this is a lack of native APIs in Android for intercepting SSL connections to perform the check of the server’s certificate.

The second step is to implement a way to check the validity of the certificate fingerprint. This step is known as certificate pinning. There are Cordova plugins available that help achieve this by checking the server’s public key fingerprint or certificate.

The plugin we will be looking at and setting up is the SSLCertificateChecker-PhoneGap-Plugin. It checks whether the fingerprint of the server’s certificate matches a known value.

First, we need to add this plugin to the project by navigating to the Manage Cordova Plugin screen and clicking Import Cordova Plugin.

Next, select Specify URL or Package Name, insert https://github.com/EddyVerbruggen/SSLCertificateChecker-PhoneGap-Plugin.git and press OK.

Alternatively, you can download the plugin’s ZIP package from GitHub and upload it by selecting Upload Compressed ZIP Package.

The plugin will eventually appear under the Enabled Plugin list.

After the plugin has been added, we have access in JavaScript to the APIs necessary to validate the certificate fingerprint before making requests.

In this example, we will create an app with a button that fetches the 50 most recent Monaca news and release titles.

The button will first check the API URL against the known fingerprint to determine whether the URL’s certificate is valid. If the certificate is valid, it will then fetch the content.

In a script tag, we will create two variables.

var url = 'https://monaca.mobi/en/api/news/list?type=news_and_release&limit=50';
var fingerprint = '67 C2 0A 07 6F D9 55 23 38 03 E4 78 2E 0C B5 CC 24 0C A8 B8';

The url variable contains the target URL that we will test and request data from.
The fingerprint variable contains the monaca.mobi public key fingerprint used for the check.

Next, we pass these variables to the plugin’s API, window.plugins.sslCertificateChecker, together with a success and an error callback.

Example Usage:

window.plugins.sslCertificateChecker.check(
  // Success Callback
  function success(message) { },

  // Error Callback
  function error(message) { },


  // URL to test
  url,

  // URL Public Key Certificate Fingerprint
  fingerprint
);

If the fingerprint is valid, the success callback is executed. In this example app, the success callback then performs a second request to fetch the actual data for display.
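
As a rough sketch of what those callbacks could do (the #news-list element and the use of the Fetch API are assumptions made for illustration, not the exact demo code), the check plus the second request might look like this:

window.plugins.sslCertificateChecker.check(
  // Success callback: the fingerprint matched, so it is safe to request the data
  function success(message) {
    fetch(url)
      .then(function (response) { return response.json(); })
      .then(function (news) {
        // '#news-list' is a placeholder element id used only for this sketch
        document.getElementById('news-list').textContent = JSON.stringify(news);
      });
  },
  // Error callback: do not fetch anything, warn the user instead
  function error(message) {
    alert('Certificate check failed: ' + message);
  },
  url,
  fingerprint
);

Older WebViews may not support fetch, in which case XMLHttpRequest or any HTTP library can take its place; the pattern of checking the fingerprint before issuing the real request stays the same.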

If the fingerprint is invalid, there are four likely causes:

  • The URL has a new public key with a new fingerprint.
  • The fingerprint in code was inserted incorrectly.
  • The connection to the server was lost or the response timed out.
  • A possible man-in-the-middle attack.

Since this plugin only performs a pre-check before making the official request for the response data, it mitigates the risk to an extent but has some drawbacks.

Drawbacks:

  • If you need to check on every request, every request effectively happens twice: once for the plugin’s check and once for the actual data.
  • If the check occurs only at a given point in the app’s lifecycle, for example at start-up, the connection may be secure at that moment but not later. For example, your Wi-Fi connection may change networks after your app has already started.

Some of these drawbacks could also be mitigated, but the ideal case would be to check on every request without duplicating the requests.

For the full source code of a working app, please see the GitHub repository.
SSL Certificate Checker Demo Repo

I hope you found this tutorial helpful on improving your mobile hybrid app’s security.

Source:: https://onsen.io/

Essential Angular VS Code Extensions

By John Papa

Essential Angular VS Code Extensions

When it comes to efficient and effective development experiences, excellent tooling makes all the difference. That’s why I love VS Code.

VS Code has a great extensibility model, which makes it easy to create awesome extensions that enhance the development experience. It’s no secret that I love shortcuts and I don’t love memorizing syntax … which is why I created my snippets for Angular. It’s been quite popular with over 500,000 downloads, but it’s certainly not the only extension I use.

I am often asked, “What are your favorite VS Code extensions for Angular?”. I decided it was time to share them via an extension pack.

Introducing my Angular Essentials extension pack for VS Code. By installing this extension pack you get a set of great extensions that are helpful with Angular development. You can check out the initial list below.

This is hot off the press; I literally just published it, so at the time of writing it has a single install.

As web tools evolve, the usefulness of extensions comes and goes. I reserve the right to update the extension pack’s contents at my own discretion.

Here is the list of extensions the pack includes:

Angular v4 Snippets – Angular snippets that follow the official style guide, for TypeScript, templates, and RxJS.

Angular Language Service – This extension provides a rich editing experience for Angular templates, both inline and external templates. This extension is brought to you by members of the Angular team. It is fantastic at helping write solid code in the html templates.

Editor Config – EditorConfig for VS Code. Great for maintaining consistent editor settings.

tslint – Integrates the tslint linter for the TypeScript language into VS Code. Extremely helpful at detecting and helping fix TypeScript issues.

Chrome Debugger – VS Code debugger for Chrome.

Bracket Pair Colorizer – This extension allows matching brackets to be identified with colors. This is super helpful when you have nested functions and objects.

JSON to TypeScript – Convert JSON object to typescript interfaces. I don’t use this a lot, but it can be helpful. I tend to use the VS Code multi cursor to refactor JSON, but this is easier for this infrequent task.

Path Intellisense – Visual Studio Code plugin that autocompletes filenames. Hopefully, VS Code will bake this in at some point. Until then, this is a keeper.

Angular Emmet – Support zen-coding syntax for Angular 2 typescript files. Again, hopefully, VS Code will bake this in at some point. Until then, this is a keeper.

Angular Inline – Visual Studio Code language extension for javascript/typescript files that use Angular2.

Source:: johnpapa