Monthly Archives: March 2018

How To Make Netflix-Like Swipers in Vue

By Chris Nwamba

If you have been building for the web for a little while, you have probably, like me, run into your share of issues when making swipers – for some reason, they always seem to have a mind of their own before they finally come around. Either the swiper isn’t as responsive as you would have liked, or it takes extensive styling before it looks even half as good as you expected.

Now, if you have ever used Netflix, you have also seen how elegant and fluid their movie swipers are. Thanks to Vue and this awesome swiper module, you don’t have to dread making your own. By the time you finish reading this article, you will be well equipped to make your own Netflix-like swipers using Vue.

What We Will Build

The demo below shows what we will use the swiper to build:

You can also play with the live version here.

Getting Started

Before getting started, you will need the following:

  • Node installed on your machine
  • Node Package Manager ( NPM ) installed on your machine

To confirm your installation, run the following commands in your terminal:

    node --version
    npm --version

If you get their version numbers as results then you’re good to go.

Tools We’ll Use

To help us solve this “Swiper Problem”, we are going to make use of some other tools. Let’s take a look at them now.

Installing VueJS
If you have never used Vue before, don’t fear. Vue is a progressive frontend framework that helps us build interactive and wonderful interfaces. You can learn more about Vue here.

To install the Vue CLI on your machine, you need to run the following command:

    npm install -g vue-cli

This installs the Vue CLI globally on your machine. To confirm the installation, run the following command in your terminal:

    vue --version

If you get a version number as the result, then the Vue CLI is installed on your machine.

Creating your Project

Now that we have the Vue CLI installed on our machines, we are ready to start building. To create our application, we use the CLI to get us started, running the following in our terminal:

    vue init webpack netflix-like-swipers

This shows a series of prompts for us to complete, and once we’re done with them, it creates a sample Vue project with webpack for us to tweak while building our application.

Movie Component

The purpose of components is to make parts of our UI reusable. In this case, we are going to have many movies, so why not create a movie component and then reuse it as we wish throughout the application?

Our Movie component will look like this:

Single Movie Component

To make the movie component, we create a Movie.vue file in our src/components/ directory:

    cd src/components/
    touch Movie.vue

In our Movie.vue, we build our component as follows:

    // Movie.vue
    <script>
    export default {
      name: 'Movie',
      props : [
        'image',
        'title',
        'description',
        'duration'
      ],
    }
    </script>

Here, we name our component and also specify the props that will be passed in each time the component is used.

The component has the following template:

    // Movie.vue
    <template>
        <div class="movie" v-bind:style="{ backgroundImage: 'url(' + image + ')' }">
        <div class="content">
          <h1 class="title">{{ title }}</h1>
          <p class="description">{{ description }}</p>
          <p class="duration">{{ duration }}</p>
        </div>
      </div>
    </template>

And it has some scoped styling that looks like this:

    <!-- Movie.vue -->
    <style scoped>
    h1, h2 {
      font-weight: normal;
    }
    .movie{
      display: flex;
      flex-direction: column;
      align-items: flex-start;
      justify-content: flex-end;
      text-align: left;
      padding: 10px;
      width : 350px;
      height : 500px;
      background-color: rgba(255,255,255,0.7);
      background-repeat: no-repeat;
      background-blend-mode: overlay;
      background-size: cover;
    }
    </style>

HomePage Component – Using the Vue Awesome Swiper

Now that we have successfully created our movie component, the next thing we want to do is to integrate the swipers into our application.

Installing the Vue Awesome Swiper Module
We first need to install the module. To do this, we run the following command:

    npm install vue-awesome-swiper --save

Once we do this, we have successfully installed the module for use in our application.
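
As a side note, if you would rather not import the swiper components in every file that needs them, the module can also be registered globally. Here is a minimal sketch, assuming an entry file like src/main.js:

    // main.js – optional global registration of vue-awesome-swiper (sketch)
    import Vue from 'vue'
    import VueAwesomeSwiper from 'vue-awesome-swiper'
    import 'swiper/dist/css/swiper.css'

    Vue.use(VueAwesomeSwiper)

In this article, though, we will stick to importing the components locally where they are used.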

Using the Installed Swiper Module

Now, we create a HomePage.vue component in the src/components directory in which we will use the swiper.

    cd src/components
    touch HomePage.vue

Now, in our HomePage.vue, we first create the component and import the other components it needs: Movie, along with swiper and swiperSlide, which come from the vue-awesome-swiper module we installed into the project.

We also configure the slider using the data properties for the component.

    // HomePage.vue
    <script>
    import Movie from './Movie'
    import 'swiper/dist/css/swiper.css'
    import { swiper, swiperSlide } from 'vue-awesome-swiper'

    export default {
      name: 'HomePage',
      components: {
        Movie,
        swiper,
        swiperSlide
      },
      data() {
        return {
          swiperOption: {
            direction : 'vertical',
            pagination: {
              el: '.swiper-pagination',
              type: 'bullets'
            },
          }
        }
      }
    }
    </script>

In this case, we specified that we want a vertical swiper and that the pagination style should be bullets.

Now, our HomePage.vue template will look like this:

    <!-- HomePage.vue -->
    <template>
        <swiper :options="swiperOption">
          <!-- slides -->
          <swiper-slide>
            <Movie image="http://res.cloudinary.com/og-tech/image/upload/s--4NgMf3RF--/v1521804358/avengers.jpg" title="Avengers : Infinity War" description="Thanos is around" duration="2hrs"/>
          </swiper-slide>
          <swiper-slide>
            <Movie image="http://res.cloudinary.com/og-tech/image/upload/s--BmgguRnX--/v1521804402/thor.jpg" title="Thor : Ragnarok" description="Thor lost his hair" duration="2hrs30mins"/>
          </swiper-slide>
          <!-- more movies --> 
          <swiper-slide>
            <Movie image="http://res.cloudinary.com/og-tech/image/upload/s--qXaW5V3E--/v1521804426/wakanda.jpg" title="Black Panther" description="Wakanda Forever" duration="2hrs15mins"/>
          </swiper-slide>

          <!-- Optional controls -->
          <div class="swiper-pagination"  slot="pagination"></div>
        </swiper>
    </template>

Pro Tip:
When building applications, Cloudinary is your friend! Notice how we made use of Cloudinary to serve our images. With Cloudinary, you can transform your images on the fly without doing any extra work; all you have to do is edit the image URL. For example, the URL below uses the c_scale,w_150 transformation to scale the image down to a width of 150 pixels:


http://res.cloudinary.com/og-tech/image/upload/s--nGy7I6oU--/c_scale,w_150/v1521804358/avengers.jpg

There’s so much you can do with Cloudinary and you can check them out here.


We use our swiper component and place many swiper-slide components inside it. We also added a div to house the pagination element.

The HomePage component has the following style:

    <!-- HomePage.vue -->
    <!-- Add "scoped" attribute to limit CSS to this component only -->
    <style scoped>
    .swiper-slide{
      display: flex;
      justify-content: center;
      flex-direction: column;
    }
    .swiper-container {
      height : 500px;
    }
    </style>

Rendering Our Components

Now, to render our components, we need to include and use them in src/App.vue as follows:

    // App.vue
    <script>
    import HomePage from './components/HomePage'
    export default {
      name: 'App',
      components: {
        HomePage
      },
      data() {
        return {
          swiperType : 'Easy Vertical Swiper'
        }
      }
    }
    </script>

Our src/App.vue has the following template:

    <!-- App.vue -->
    <template>
      <div id="app">
        <h1>{{ swiperType }}</h1>
        <HomePage/>
      </div>
    </template>

and some styling:

    <style>
    #app {
      font-family: 'Avenir', Helvetica, Arial, sans-serif;
      -webkit-font-smoothing: antialiased;
      -moz-osx-font-smoothing: grayscale;
      text-align: center;
      color: #2c3e50;
      margin-top: 60px;
      display: flex;
      align-items: center;
      flex-direction: column;
    }
    </style>

Testing Swipers

To see how things are going with our application, we run the following command from our terminal:

    npm run dev

When we do this, a development server is started and our app can be previewed, usually on localhost:8080.

Vertical Swiper Demo

More Swipers

Now that we’ve seen how the simple swipers work, let’s explore more options. We’ll take a look at horizontal 3D CoverFlow swiper effects and nested swipers. For more swiper examples, you can head over here.

3D CoverFlow Swiper Effects
To achieve this, we need to tweak the swiper options in our HomePage.vue to look like this:

    // HomePage.vue
    <script>
    [..]
    export default {
      name: 'HomePage',
      [...]
      data() {
        return {
          swiperOption: {
              effect: 'coverflow',
              grabCursor: true,
              centeredSlides: true,
              slidesPerView: 5,
              coverflowEffect: {
                rotate: 50,
                stretch: 0,
                depth: 100,
                modifier: 1,
                slideShadows : false
              },
              pagination: {
                el: '.swiper-pagination'
              }
            }
        }
      }
    }
    </script>

These options can be tweaked at any time to suit your specific needs via the swiperOption data property.

Horizontal Swipers with CoverFlow Effect

Nested Swipers
Now you may be asking yourself, “what if I wanted to have swipers inside my swipers?”. With vue-awesome-swiper you don’t need to go to great lengths anymore; all you need to do is tweak your HomePage.vue as follows:

    // HomePage.vue
    <script>
    [...]
    export default {
      [...]
      data() {
        return {
          swiperOptionh: {
            spaceBetween: 50,
            pagination: {
              el: '.swiper-pagination-h',
              clickable: true
            }
          },
          swiperOptionv: {
            direction: 'vertical',
            spaceBetween: 50,
            pagination: {
              el: '.swiper-pagination-v',
              clickable: true
            }
          }
        }
      }
    }
    </script>

We specify the configurations for the different swipers, and then in our template we have a structure like this:

    <template>
        <swiper :options="swiperOptionh">
            <swiper-slide>
              [...]
            </swiper-slide>
            [...]
            <swiper-slide>
              <swiper :options="swiperOptionv">
                <swiper-slide>
                  [...]
                </swiper-slide>
                 [...]
                <div class="swiper-pagination swiper-pagination-v" slot="pagination"></div>
              </swiper>
            </swiper-slide>
            [...]
            <div class="swiper-pagination swiper-pagination-h" slot="pagination"></div>
        </swiper>
    </template>

Notice how we use :options="swiperOptionh" to specify the configuration for the horizontal swiper and :options="swiperOptionv" for the vertical swiper.

Nested Swipers Example

Making Netflix-Like Swipers

Now that we have seen some basic swiper examples, we’re well underway to building Netflix-like swipers.

We will continue using the Vue Awesome Swiper module we have been playing with, so you don’t need to worry about any extra knowledge or setup. We have our Movie component, and we’ll apply the swiper to it to produce the Netflix-like swipe result.

We need to edit our HomePage.vue to look like this:

    // HomePage.vue
    <script>
    [...]
    export default {
      [...]
      data() {
        return {
          swiperOptions : {
            slidesPerView: 5,
            spaceBetween: 0,
            freeMode: true,
            loop: true,
            navigation: {
              nextEl: '.swiper-button-next',
              prevEl: '.swiper-button-prev'
            }
          }
        }
      }
    }
    </script>

We changed the options for our swiper: slidesPerView selects the number of movies we want in each view, and we set the spaceBetween them to 0. To give the ‘endless’ swipe feel, we set loop to true. We also specified the class names of our navigation buttons, which adds functionality to the buttons.

Now, our template has the following structure:

    <!-- HomePage.vue --> 
    <template>
        <swiper :options="swiperOptions">
            <swiper-slide>
              <Movie image="http://res.cloudinary.com/og-tech/image/upload/s--4NgMf3RF--/v1521804358/avengers.jpg" title="Avengers : Infinity War" description="Thanos is around" duration="2hrs"/>
            </swiper-slide>
            [...]
            <swiper-slide>
              <Movie image="http://res.cloudinary.com/og-tech/image/upload/s--qXaW5V3E--/v1521804426/wakanda.jpg" title="Black Panther" description="Wakanda Forever" duration="2hrs15mins"/>
            </swiper-slide>
            <div class="swiper-button-prev" slot="button-prev"></div>
            <div class="swiper-button-next" slot="button-next"></div>
        </swiper>
    </template>

When we go back to our development server, our app looks like this!

Netflix Like Swipers

Conclusion

In this article, we saw how to implement swipers in our Vue applications. There are so many uses for swipers, and you no longer have to dread implementing them since you now have the keys.

Here’s a link to the GitHub repository. Feel free to leave a comment below if you have any further questions.

Source:: scotch.io

Weekly Node.js Update - #11 - 03.23, 2018

By Tamas Kadlecsik

Below you can find RisingStack‘s collection of the most important Node.js news, updates, projects & tutorials from this week:

Node v9.9.0 (Current) is released

Notable Changes:

  • assert:
    • From now on all error messages produced by assert in strict mode will produce an error diff. (Ruben Bridgewater)
    • From now on it is possible to use a validation object in throws instead of the other possibilities. (Ruben Bridgewater)
  • crypto:
    • allow passing null as IV unless required (Tobias Nießen)
  • fs:
    • support as and as+ flags in stringToFlags() (Sarat Addepalli)
  • tls:
    • expose Finished messages in TLSSocket (Anton Salikhmetov)
  • tty:
    • Add getColorDepth function to determine if terminal supports colors. (Ruben Bridgewater)
  • util:
    • add util.inspect compact option (Ruben Bridgewater) – see the sketch below the list
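
Two of these additions are easy to try out from a small script. A quick sketch (assuming Node.js 9.9 or later):

    // util: the new "compact" inspect option prints one property per line
    const util = require('util');
    console.log(util.inspect({ a: 1, b: 2 }, { compact: false }));

    // tty: color depth of the attached terminal (e.g. 1, 4, 8 or 24 bit)
    if (process.stdout.isTTY) {
      console.log(process.stdout.getColorDepth());
    }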

March 2018 Security Releases

The Node.js project will be releasing new versions for each of its supported release lines on, or shortly after, the 27th of March, 2018 (UTC). These releases will incorporate a number of security fixes and will also likely include an upgraded version of OpenSSL.

Creating an API with Node.js using GraphQL

In this article we focus on putting the pieces together with GraphQL, by building a GraphQL API with Node.js.

How to write powerful schemas in JavaScript

Introducing schm, a functional and highly composable library for creating schemas in JavaScript and Node.js.

Deploy a Node.js App with Heroku

In this article you can read a step-by-step guide on how to set up a deployment of a Node.js app with Heroku.

History of Node.js on a Timeline

A recap on what exactly happened to Node.js so far, from the point where it was born. The history of Node.js on a timeline: 2009-2018.

Build a Referral System in Node-Express and Postgres

An increasing number of companies are using referral programs for targeted promotions and marketing. How are these referral programs built? In this article you’ll learn by building a simple but complete referral system of your own.

Sebastian Golasch: Crazy hacks suit IoT just as much as frontend 👾

An interview with Sebastian Golasch, one of AmsterdamJS’s speakers and Specialist Senior Manager Software Developer at Deutsche Telekom. Sebastian is a JS / IoT pro you shouldn’t miss out on!

Previously Node.js Updates:

In the previous Weekly Node.js Update, we collected great articles, like

  • Node v9.8.0 (Current) released;
  • Building a React App from Scratch, Live – Part 1.;
  • Playing With Node.js and the Runscope API on Glitch;
  • Node.js Quickstart;

& more…

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Source:: risingstack.com

Training Machine Learning Models with MongoDB

By Nicholas Png

Over the last four months, I attended an immersive data science program at Galvanize in San Francisco. As a graduation requirement, the last three weeks of the program are reserved for a student-selected project that puts to use the skills learned throughout the course. The project that I chose to tackle utilized natural language processing in tandem with sentiment analysis to parse and classify news articles. With the controversy surrounding our nation’s media and the concept of “fake news” floating around every corner, I decided to take a pragmatic approach to address bias in the media.

My resulting model identified three topics within an article and classified the sentiments towards each topic. Next, for each classified topic, the model returned a new article with the opposite sentiment, resulting in three articles provided to the user for each input article. With this model, I hoped to negate some of the inherent bias within an individual news article by providing counter arguments from other sources.

The algorithms used were the following (in training order): the TFIDF vectorizer, Latent Dirichlet Allocation (LDA), hierarchical clustering, and a sentiment analysis model.

Initially, I was hesitant to use any database, let alone a non-relational one. However, as I progressed through the experiment, managing the plethora of CSV tables became more and more difficult. I needed the flexibility to add additional features to my data as the model engineered them. This is a major drawback of relational databases. Using SQL, there are two options: generate a new table for each new feature and use a multitude of JOINs to retrieve all the necessary data, or use ALTER TABLE to add a new column for each new feature. However, due to the varied algorithms I used, some features were generated one data point at a time, while others were returned as a single Python list. Neither option was well suited to my needs. As a result, I turned to MongoDB to resolve my data storage, processing, and analysis issues.

To begin with, I used MongoDB to store the training data scraped from the web. I stored raw text data as individual documents on an AWS EC2 instance running a MongoDB database. Running a simple Python script on my EC2 instance, I generated a list of public news articles URLs to scrape and stored the scraped data (such as the article title and body) into my MongoDB database. I appreciated that, with MongoDB, I could employ indexes to ensure that duplicate URLs, and their associated text data, were not added to the database.
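
For example, a unique index on the article URL makes the database itself reject duplicates. A minimal sketch from the MongoDB shell, assuming a collection named articles:

    // assumed collection name "articles"; inserting a document whose url
    // already exists will now fail with a duplicate key error
    db.articles.createIndex({ url: 1 }, { unique: true })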

Next, the entire dataset needed to be parsed using NLP and passed in as training data for the TFIDF Vectorizer (in the scikit-learn toolkit) and the Latent Dirichlet Allocation (LDA) model. Since both TFIDF and LDA require training on the entire dataset (represented by a matrix of ~70k rows x ~250k columns), I needed to store a lot of information in memory. LDA requires training on non-reduced data in order to identify correlations between all features in their original space. Scikit Learn’s implementations of TFIDF and LDA are trained iteratively, from the first data point to the last. I was able to reduce the total load on memory and allocate more to actual training, by passing a Python generator function to the model that called my MongoDB database for each new data point. This also enabled me to use a smaller EC2 instance, thereby optimizing costs.

Once the vectorizer and LDA model were trained, I utilized the LDA model to extract 3 topics from each document, storing the top 50 words pertaining to each topic back in MongoDB. These top 50 words were used as the features to train my hierarchical clustering algorithm. The clustering algorithm functions much like a decision tree, and I generated pseudo-labels for each document by determining which leaf the document fell into. Since I could use dimensionally reduced data at this point, memory was not an issue, but all these labels needed to be referenced later in other parts of the pipeline. Rather than assigning several variables and allowing the labels to remain indefinitely in memory, I inserted new key-value pairs for the top words associated with each topic, topic labels according to the clustering algorithm, and sentiment labels into each corresponding document in the collection. As each article was analyzed, the resulting labels and topic information were stored in the article’s document in MongoDB. As a result, there was no chance of data loss, and any method could query the database for needed information regardless of whether other processes running in parallel were complete.

Sentiment analysis was the most difficult part of the project. There is currently no valuable labeled data related to politics and news so I initially tried to train the base models on a data set of Amazon product reviews. Unsurprisingly, this proved to be a poor choice of training data because the resulting models consistently graded sentences such as “The governor’s speech reeked of subtle racism and blatant lack of political savvy” as having positive sentiment with a ~90% probability, which is questionable at best. As a result I had to manually label ~100k data points, which was time-intensive, but resulted in a much more reliable training set. The model trained on manual labels significantly outperformed the base model, trained on the Amazon product reviews data. No changes were made to the sentiment analysis algorithm itself; the only difference was the training set. This highlights the importance of accurate and relevant data for training ML models – and the necessity, more often than not, of human intervention in machine learning. Finally, by code freeze, the model was successfully extracting topics from each article and clustering the topics based on similarity to the topics in other articles.

Conclusion

In conclusion, MongoDB provides several capabilities, such as a flexible data model, indexing, and high-speed querying, that make training and using machine learning algorithms much easier than with traditional, relational databases. Running MongoDB as the backend database to store and enrich ML training data allows for persistence and increased efficiency.

A final look at the MongoDB pipeline used for this project

If you are interested in this project, feel free to take a look at the code on GitHub, or feel free to contact me via LinkedIn or email.

This post originally appeared on the MongoDB blog. This post is sponsored by MongoDB via SyndicateAds.

About the author – Nicholas Png

Nicholas Png is a Data Scientist recently graduated from the Data Science Immersive Program at Galvanize in San Francisco. He is a passionate practitioner of Machine Learning and Artificial Intelligence, focused on Natural Language Processing, Image Recognition, and Unsupervised Learning. He is familiar with several open source databases including MongoDB, Redis, and HDFS. He has a Bachelor of Science in Mechanical Engineering as well as multiple years of experience in both software and business development.

Source:: scotch.io

Enterprise Integration Patterns: Designing, Building, And Deploying Messaging Solutions By Gregor Hohpe And Bobby Woolf

Ben Nadel reviews Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions by Gregor Hohpe and Bobby Woolf. This book may be 700 pages of technical writing; but, it’s clear, concise, and very consumable even for someone who has no messaging background. And, with the way software architectures are evolving, the patterns outlined in this book are only going to become more relevant….

Source:: bennadel.com

Build A Pomodoro Timer with Vue.js (Solution to Code Challenge #6)

By William Imoh

Tried the code challenge #6? Last week, we put out the challenge to build a Pomodoro timer using any tool or technology.

You can check out the amazing entries for the challenge using the hashtag #scotchchallenge on Twitter or in the comment section under the challenge’s post. In this post, we will solve the challenge using Vue.js.

Why Vue.js?

Vue is a progressive JavaScript framework for developing interactive user interfaces. Vue makes it easy to manipulate DOM elements and to attach methods to these elements, seamlessly.

The Base

The base code provided consists of HTML and CSS code to structure and style the timer in its default state.

HTML

The HTML structure of the timer is divided into several parts for clarity and better control. The parts are the body, the timer numbers, and the timer control buttons.

<!-- our template -->
<section id="app" class="hero is-info is-fullheight is-bold">
  <div class="hero-body">
    <div class="container has-text-centered">

      <!-- The bulk of the timer -->

    </div>
  </div>
</section>

First, we create the display text on the timer and the timer numbers with:

<h2 class="title is-6">Let the countdown begin!!</h2>

    <!--  THE TIMER NUMBERS  -->
  <div id="timer">
    <span id="minutes">25</span>
    <span id="middle">:</span>
    <span id="seconds">00</span>
  </div>

Next, we include the buttons for control with:

<!--  THE BUTTONS  -->
<div id="buttons">

<!--  START BUTTON    -->
<button 
  id="start" 
  class="button is-dark is-large"> 
    <i class="far fa-play-circle"></i>
</button>

<!--   PAUSE BUTTON   -->
<button 
  id="stop" 
  class="button is-dark is-large"> 
    <i class="far fa-pause-circle"></i>
</button>

<!--  RESET BUTTON   -->
<button 
  id="reset" 
  class="button is-dark is-large"> 
    <i class="fas fa-undo"></i>
</button>
</div>

CSS

Basic CSS was used to style the timer. Bulma and Font Awesome were imported as external stylesheets to the pen. The timer was styled with:

#message {
  color: #DDD;
  font-size: 50px;
  margin-bottom: 20px;
}

#timer {
  font-size: 200px;
  line-height: 1;
  margin-bottom: 40px;
}

The Technique

We are required to count down the time from 25 minutes 00 seconds. To implement this dynamic behavior, we use Vue to manipulate the timer values as well as to implement the pause, play, and restart controls on the timer.

Creating The Vue Application

First, we create a new Vue instance and mount it on the DOM element with the id of app, using the Vue property, el. In the Vue instance, we also define the required lifecycle methods and properties.

const app = new Vue({
  el: '#app',
  // ========================
  data: {

  },
  // ========================
  methods: {

  },
  // ========================
  computed: {

  }
})

The data property holds the state data as well as dynamic data we would like to pass into the DOM. The methods property contains all methods required by the timer; this is where the timer’s controls will live. Often we want to perform further manipulation on the items in the data property before using them elsewhere in the application; the computed property holds all such derived data.

The Data

In the data object, we created all the basic data required by the timer as well as the timer state.

timer: null,
totalTime: (25 * 60),
resetButton: false,
title: "Let the countdown begin!!"

totalTime, which is the total time to be run by the timer in seconds, will be further split into minutes and seconds. The timer property holds the state of the timer, and resetButton holds the state of the reset button, with which we can toggle the button. The title property is used to dynamically set the display text above the timer.

The Methods

We configure the timer controls in the methods object with:

startTimer: function() {
  this.timer = setInterval(() => this.countdown(), 1000);
  this.resetButton = true;
},
stopTimer: function() {
  clearInterval(this.timer);
  this.timer = null;
  this.resetButton = true;
},
resetTimer: function() {
  this.totalTime = (25 * 60);
  clearInterval(this.timer);
  this.timer = null;
  this.resetButton = false;
},
padTime: function(time) {
  return (time < 10 ? '0' : '') + time;
},
countdown: function() {
  this.totalTime--;
}

Each timer method makes use of the state values in data to create actions on the timer. A setInterval() function starts the timer and calls the countdown method, which in turn decrements the total time every 1000 milliseconds (1 second). The stopTimer method employs a clearInterval() function to stop the currently running timer.

The resetTimer method simply returns the total time on the timer to its initial value, stops the timer from running, and hides the reset button (which obviously isn’t needed anymore). To make the timer look better, we introduce a padTime function which adds a leading zero (0) to the second or minute value whenever it is less than 10.

In the computed property, we calculate the value of minutes and seconds from the totalTime. This value of minutes and seconds will be passed to the DOM.
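
The computed block itself isn’t shown above; a minimal version, derived from the data and methods we already have, could look like this:

computed: {
  minutes: function() {
    // whole minutes remaining, padded with a leading zero when needed
    return this.padTime(Math.floor(this.totalTime / 60));
  },
  seconds: function() {
    // seconds remaining within the current minute, also padded
    return this.padTime(this.totalTime % 60);
  }
}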

Passing Timer Variables and Methods to the DOM

So far we have created all the required variables, state values, and methods for controls. Vue provides a clear way to pass this data into the DOM using handlebars-like syntax. We pass in the minutes and seconds variable with:

<div id="timer">
    <span id="minutes">{{ minutes }}</span>
    <span id="middle">:</span>
    <span id="seconds">{{ seconds }}</span>
</div>

We currently have three buttons in place, but not all should display at once. We want to display the reset button and either the pause or the play button at any given time. To achieve this, we conditionally render the buttons with respect to the state of the timer, using the v-if Vue directive.

<div id="buttons">
<!--     Start TImer -->
    <button 
      id="start" 
      class="button is-dark is-large" 
      v-if="!timer"
      @click="startTimer">
        <i class="far fa-play-circle"></i>
    </button>
    <!--     Pause Timer -->
    <button 
      id="stop" 
      class="button is-dark is-large" 
      v-if="timer"
      @click="stopTimer">
        <i class="far fa-pause-circle"></i>
    </button>
    <!--     Restart Timer -->
    <button 
      id="reset" 
      class="button is-dark is-large" 
      v-if="resetButton"
      @click="resetTimer">
        <i class="fas fa-undo"></i>
    </button>
</div>

The conditional v-if directive renders the button element only when its assigned expression resolves to true. These values are set dynamically by Vue as the timer state changes. The @click directive (short for v-on:click) triggers the assigned method when the button is clicked.

Bonus

To make the timer more fun and, of course, inspiring, we dynamically change the text above the timer with each state of the timer. We already set this text using Vue in the title data property; we then change its value whenever a method is triggered. To further illustrate setting DOM values with Vue, in the methods property we have:

startTimer: function() {
    this.timer = setInterval(() => this.countdown(), 1000);
    this.resetButton = true;
    this.title = "Greatness is within sight!!"
},
stopTimer: function() {
    clearInterval(this.timer);
    this.timer = null;
    this.resetButton = true;
    this.title = "Never quit, keep going!!"
},
resetTimer: function() {
    this.totalTime = (25 * 60);
    clearInterval(this.timer);
    this.timer = null;
    this.resetButton = false;
    this.title = "Let the countdown begin!!"
},

Notice how this.title changes with each method.

You can see the final product here: https://codepen.io/Chuloo/pen/xWVpqq

Conclusion

Pomodoro timers are great for productivity, and in this challenge we built one using Vue. We also demonstrated several features that make Vue interesting, including Vue’s lifecycle methods and properties. Feel free to leave your comments under this post, and let’s look forward to the next challenge. Happy coding!

Source:: scotch.io

Integrating legacy and CQRS

By Golo Roden

Integrating legacy and CQRS

The architecture pattern CQRS suggests an application structure that differs significantly from the approach commonly used in legacy applications. How can the two worlds still be integrated with each other?

The full name of the design pattern CQRS is Command Query Responsibility Segregation. The name describes the core of the pattern: separating the actions and queries of an application already at the architectural level. While the actions, called commands, change the state of the application, queries are responsible for reading the state and transferring it to the caller.

As they complement each other well, CQRS is often combined with the concepts DDD (domain-driven design) and event-sourcing. Events play an important role in this context, as they inform about the facts that have happened within the application. To learn about these concepts as well as their interaction, there’s a free brochure on DDD, event-sourcing and CQRS written by the native web that you might be interested in.

The consistent separation of commands as actions and events as reactions leads to asynchronous user interfaces, which confront the developer with special challenges. In this context, for example, the question of how to deal with (asynchronous) errors is interesting, if you don’t want to make the user wait regularly in the user interface until the event matching the command sent has been received.

Legacy systems rarely work according to CQRS

On the other hand, there are countless legacy applications that are practically always based on architecture patterns other than CQRS. The classic three-layer architecture with CRUD as the method for accessing data is particularly common. However, this often leads to unnecessarily complex, monolithic applications in which CRUD keeps being cultivated, even though it stopped being sufficient after a short time.

Unfortunately, the integration possibilities with such applications are as expected: poor. Even web applications have often been developed without APIs, since no value has been attached to them and the technologies used have promoted the limited field of vision. From today’s point of view this seems irresponsible, but over the years and decades this has been an accepted procedure. The sad thing about it is that development towards networked applications and services has been going on for many years, but too many developers and companies have deliberately ignored them.

The price to pay for this is the legacy applications of today, which do not have any APIs and whose integration possibilities are practically non-existent. It can therefore be stated that a modern service-based architecture based on CQRS differs fundamentally from what has been implemented in most cases in the past. In addition, there is the lack of scalability of applications based on a three-tier architecture.

Developing in the greenfield

Unfortunately, legacy applications don’t just disappear into thin air, which is why in many cases you have to live with them and make arrangements. The only exception to this is greenfield development, in which an application is completely redeveloped from scratch, without having to take legacy code into account. However, this strategy is dangerous, as the well-known entrepreneur Joel Spolsky describes in his blog entry Things You Should Never Do, Part I, which is well worth reading.

In the actual case of a greenfield development, at best the question arises of whether CQRS is suitable or necessary. A guide to this can be found at When to use CQRS?!. It is also necessary to clarify whether CQRS can be usefully supplemented with domain-driven design and event sourcing. At this point, however, the simple part already ends, because the greenfield scenario is the only simple one – precisely because it has no dependencies on the past.

Even the simple case of completely replacing an existing system with a new development raises complicated questions when the new application is based on CQRS. In practice, the separation of commands and queries in CQRS often leads to a physical separation of the write and read sides, which corresponds to the use of two databases. While one contains normalized data and serves the purpose of ensuring consistency and integrity when writing, the other contains data that is optimized for reading, i.e. denormalized data.

If you want to replace an existing application, you have to think about how to migrate the legacy data. It is obvious that this is not easy when switching from a CRUD-based, classic, relational database to two databases, each fulfilling a specific task. It is therefore necessary to analyze the existing data in detail, structure it and then decide how it can be mapped to the new databases without having to compromise on CQRS.

The database as an integration point

However, it becomes really difficult when the old and the new application have to coexist in parallel and have to be integrated with each other because, for example, a replacement is only to take place gradually. Another reason for the scenario is the addition of another application to an existing application without the need to replace it at all. How can CQRS be integrated with legacy applications in these cases?

One obvious option is integration via the database. This can work for applications based on the classic CRUD model, but is inconvenient for CQRS, because the problem of different data storage is also relevant here. In this case, however, the comparison is even more difficult, since not only the existing semantics must be mapped to a new one, but the new one must also continue to work for the existing application.

In addition, there are general concerns that need to be mentioned independently of the architecture of the applications. This includes in particular side-effects regarding the referential integrity, which can quickly trigger a boomerang effect. In addition, the applications are actually only seemingly decoupled from each other, as the effects of future changes to the data schema are intensified. Another point that makes integration via the database more difficult is the lack of documentation of the extensive and complex schemata.

Moreover, since the database was rarely planned as an integration point, direct access to it usually feels wrong. After all, such access bypasses all the domain concepts, tests and procedures that are implemented in the application and exist in the database only as implicit knowledge. The procedure is therefore to be regarded as extremely fragile, particularly from a domain point of view.

Another point of criticism about an integration via the database is the lack of possibilities for applications to actively inform each other about domain events. This could only be solved with a pull procedure, but this can generally be regarded as a bad idea due to the poor performance and the high network load. In summary, it becomes clear that the integration of a CQRS application with a legacy application via the database is not a viable way.

APIs instead of databases

An alternative is integration via an API. As already explained, it can be assumed that very few legacy applications have a suitable interface. However, this does not apply to the new development. Here it is advisable to have an API from the beginning – anything else would be grossly negligent in the 21st century. Typically, such an API is provided as a REST interface based on HTTPS or HTTP/2. Pure, i.e. unencrypted HTTP, can be regarded as outdated for a new development.

If you add concerns such as OpenID Connect to such a Web API, authentication is also easy. This also provides an interface based on an open, standardized and platform-independent protocol. This simplifies the choice of technology, since the chosen technology only has to work for the respective context and no longer represents a systemic size.

With the help of such an API, commands can be easily sent to the CQRS application. Executing queries is also easy. The two operations correspond to HTTP requests based on the verbs POST and GET. The situation is much more difficult if, in addition to commands and queries, events also need to be supported. The HTTP API is then required to transmit push messages, but the HTTP protocol was never designed for this purpose. As a way out, there are several variants, but none of them works completely satisfactorily.

How to model an API for CQRS?

There are countless ways to model the API of a CQRS application. For this reason, some best practices that can be used as a guide are helpful. In the simplest case, an API with three endpoints that are responsible for commands, events and queries is sufficient.

The npm module tailwind provides a basic framework for applications based on CQRS. The approach used there can easily be applied to technologies other than Node.js, so that a cross-technology, compatible standard can be created.

For commands there is the POST route /command, which is only intended for receiving a command. Therefore, it acknowledges receipt with the HTTP status code 200, but this does not indicate whether the command could be processed successfully or not. It just arrived. The format of a command is described by the npm module commands-events.

A command has a name and always refers to an aggregate in a given context. For example, to perform a ping, the command could be called ping and refer to the aggregate node in the context network. In addition, each command has an ID and the actual user data stored in the data block. The user property is used to append a JWT token to enable authentication at command level. Metadata such as a timestamp, a correlation ID and a causation ID complete the format:

{
  "context": {
    "name": "network"
  },
  "aggregate": {
    "name": "node",
    "id": "85932442-bf87-472d-8b5a-b0eac3aa8be9"
  },
  "name": "ping",
  "id": "4784bce1-4b7b-45a0-87e4-3058303194e6",
  "data": {
    "ttl": 10000
  },
  "custom": {},
  "user": null,
  "metadata": {
    "timestamp": 1421260133331,
    "correlationId": "4784bce1-4b7b-45a0-87e4-3058303194e6",
    "causationId": "4784bce1-4b7b-45a0-87e4-3058303194e6"
  }
}

The route /read/:modelType/:modelName is used to execute queries, and it is also addressed via POST. The name of the resource to be queried and its type must be specified as parameters. For example, to get a list of all nodes from the previous example, the type would be list and the name would be nodes. The answer is obtained as a stream in ndjson format. This is a text format in which each line represents an independent JSON object, which is why it can be easily parsed even during streaming.
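
A response for the nodes list might then look like this, with one self-contained JSON object per line (the data shown is purely illustrative):

{"id":"85932442-bf87-472d-8b5a-b0eac3aa8be9","name":"node-1"}
{"id":"4784bce1-4b7b-45a0-87e4-3058303194e6","name":"node-2"}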

Finally, the route /events is available for events, which must also be called via POST. The call can be given a filter, so that the server does not send all events. The ndjson format is also used here – in contrast to executing queries, the connection remains permanently open so that the server can transfer new events to the client at any time. The format of the events is similar to that of the commands and is also described by the module commands-events.

All these routes are bundled under the endpoint /v1 to have some versioning for the API. If you want to use websockets instead of HTTPS, the procedure works in a very similar way. In this case, too, the module tailwind describes how the websocket messages should be structured.

Selecting a transport channel

To transfer push data, the most sustainable approach is still long polling, but it is admittedly quite dusty. The concept of server-sent events (SSE) introduced with HTML5 solves the problem elegantly at first glance, but unfortunately there is no possibility to transfer certain HTTP headers, which makes token-based authentication hard if not impossible. In turn, JSON streaming works fine in theory and solves the problems mentioned above, but fails because today’s browsers do not handle real streaming, which, depending on the number of events, gradually leads to a shortage of available memory. The streams API promised for this purpose has been under development for years, and there is no end in sight.

Often, websockets are mentioned as an alternative, but they are only supported by newer platforms. Since this case is explicitly about integration with legacy applications, it is questionable to what extent they support the technology. Provided that the retrieval is carried out exclusively on the server side and a platform with good streaming options is available, JSON streaming is probably the best choice at present.

Irrespective of the type of transport chosen, the basic problem remains that access to the CQRS-based application can only be granted from the legacy application, since no API is available for the other way around. But even if you ignore this disadvantage, there are other factors that make the approach questionable: fragile connections that can only be established and maintained temporarily may cause data to be lost during offline phases. To prevent this, applications need a concept for handling offline situations gracefully. This, in turn, is unlikely to be expected in legacy applications.

A message queue as a solution?

Another option is to use a message queue, which is a common procedure for integrating different services and applications. Usually, it is mentioned as a disadvantage that the message queue would increase the complexity of the infrastructure by adding an additional component. In the present context, however, this argument only applies in exceptional cases, since CQRS-based applications are usually developed as scalable distributed systems that use a message queue anyway.

There are different protocols for message queues. For the integration of applications, AMQP (Advanced Message Queueing Protocol) is probably the most common solution, supported by RabbitMQ and others. As this is an open standard, there is a high probability of finding an appropriate implementation for almost any desired platform.

A big advantage of message queues is that the exchange of messages works bidirectionally. If an application can establish a connection, it can use the message queue as a sender and receiver, so that not only the legacy application can send messages to the new application, but also vice versa. Another advantage is that message queues are usually designed for high availability and unstable connections. They therefore take care of the repetition of a failed delivery and guarantee it to a certain extent.

From a purely technical point of view, message queues can therefore be regarded as the optimal procedure that solves all problems. However, this does not apply from a domain point of view, because this is where the real problems begin, which are completely independent of the underlying transport mechanism. Since two applications are to be integrated with each other, it is also necessary to integrate different data formats and, above all, different domain languages. For example, the legacy application can work with numeric IDs, while the CQRS application can work with UUIDs, which requires bidirectional mapping at the border between the systems.

Mapping contexts between applications

In the linguistic field, this can be particularly difficult if domain concepts are not only given different names, but are even cut differently. Finding a common language is already difficult in a small interdisciplinary team – how much more difficult is it if the modeling of the two languages takes place independently in different teams, separated by several years or decades? The real challenge is to coordinate the semantics of the two applications and to develop semantically suitable adapters.

This is done using context mapping, i. e. mapping one language to another at the border between two systems. Since the two systems are separate applications in this case, it makes sense to implement context mapping in adapters as independent processes between the applications. The use of a message queue then plays out its advantages, since neither the two applications nor the adapter need to know each other. It is sufficient if each of the three components involved has access to the message queue to be able to send and receive messages.

In simple cases, an adapter is nothing more than a process that responds to incoming messages by translating the attached data into the target domain language and sending a new message, in accordance with the if-this-then-that concept. In the case of long-lasting, stateful workflows, however, this procedure is not enough, since the decision which message to send can no longer be made on the basis of the incoming message alone. In addition, the history is also required, for example, to be able to place the received message in a context.
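
In the simple, stateless case, such an adapter fits in a few lines. The following sketch uses the amqplib module for Node.js; the queue names and the mapToCommand helper are assumptions made purely for illustration:

const amqp = require('amqplib');

// Purely illustrative translation from the legacy domain language to the
// new one, e.g. mapping numeric legacy IDs into the new application's terms.
const mapToCommand = function (legacyEvent) {
  return {
    name: 'ping',
    data: { legacyId: legacyEvent.id }
  };
};

(async () => {
  const connection = await amqp.connect('amqp://localhost');
  const channel = await connection.createChannel();

  // Assumed queue names, one per direction.
  await channel.assertQueue('legacy.events');
  await channel.assertQueue('cqrs.commands');

  // If this (a legacy event arrives), then that (a translated command is sent).
  await channel.consume('legacy.events', message => {
    const legacyEvent = JSON.parse(message.content.toString());
    const command = mapToCommand(legacyEvent);

    channel.sendToQueue('cqrs.commands', Buffer.from(JSON.stringify(command)));
    channel.ack(message);
  });
})();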

For such stateful cases, it is advisable to implement the adapter as a state machine, whereby the incoming messages are the triggers for different state transitions. However, this means that the adapter also needs a persistence option and must be designed for high availability. When modeling states and transitions, complexity increases rapidly if all potential variants are considered.

In order to keep the complexity of the adapters manageable, it is advisable to initially consider only the regular case in which the workflow is processed successfully, and merely to recognize error states without having to process them automatically. In the simplest case, it may be sufficient to send a message to an expert who can then take care of the state of the workflow by hand. It is always helpful to keep in mind that context mapping is, after all, a domain problem and not a technical one, and should therefore be solved on the domain level.

Who knows the truth?

Finally, the question of who knows the ultimate truth and has the last word in case of doubt is a fundamental question. Do the data and processes of the existing application have priority, or is the CQRS application granted the sovereignty over the truth? If the CQRS application works with event-sourcing, it is advisable to give preference to it, since event-sourcing enables extremely flexible handling of the data, which is far superior to the existing CRUD approach.

However, it is not possible to answer the question in general terms, since this ultimately depends on the individual situation. In any case, however, it is important to consider the question of conflict resolution and to clarify how to deal with contradictions in data and processes. But that, too, is a domain problem and not a technical one.

In summary, message queues and APIs are the only way to integrate legacy and CQRS applications in a clean way. The major challenges are not so much technical as domain-related in nature, and they can hardly be solved sustainably without the advice of the respective domain experts. The long time that has passed since the development of the legacy application may make things worse. Hope can be drawn at this point from the fact that the domain tends to be less subject to change than the technology used, although this depends very much on the domain in question.

Source:: risingstack.com