Monthly Archives: February 2018

Infinite Scroll in React Using Intersection Observer

By Chris Nwamba

The most dreaded features to build in frontend apps are those that require you to take control of scroll events, behavior, and properties. Not only are they hard to implement because of the number crunching involved, they are also prone to hurting performance badly. A lot of us reach for libraries, but they still can't help with performance, because they manage the frequently fired scroll events on the main thread.

A simpler and more efficient alternative is the built-in Intersection Observer API, which is fairly new but can be polyfilled.

Intersection and Observation

The Intersection Observer does not require a scroll event. Rather, it waits for a rectangle you want to observe to come into view before running any code. You can call this rectangle the target, and the view it enters, the root.

From the image above, the root can be the entire page or a portion of the page (e.g. a div). By default, observation starts once the target enters the root.

Here is a code example that shows how to achieve the intersection in the above diagram:

const options = {
  root: document.querySelector('#divRoot'), /* or `null` for page as root */
};

const observer = new IntersectionObserver(callback, options);

After setting up the observer, you can start observing the target:

const target = document.querySelector('#divTarget');
observer.observe(target);

Each time an intersection happens, the callback function passed to the IntersectionObserver constructor is called:

const callback = (entities, observer) => {
  console.log(entities, observer);
};

The Threshold

The threshold refers to how much of an intersection has been observed. This illustration should help you understand better:

The threshold for page A is 25%, B's is 50%, and C's is 75%. These are not just figures: you can tell your browser to start observing at any threshold. By default, observation starts at 0.0, but you can, for example, ignore the first half of the target and start observing at the second half (0.5).

To set the threshold, all that needs to be done is to add the threshold to the options object:

const options = {
  root: document.querySelector('#divRoot'), /* or `null` for page as root */
  threshold: 1.0 // Only observe when the entire target is in view
};
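The threshold option also accepts an array of values, in which case the callback fires each time the target's visibility crosses any of them. A small illustrative variant of the options object:

```javascript
const options = {
  root: null,        // use the page viewport as root
  rootMargin: '0px',
  // Fire the callback as the target crosses 25%, 50% and 75% visibility
  threshold: [0.25, 0.5, 0.75]
};
```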

Now let’s see how to use this in a real example.

React State

In your React’s App component, add the following constructor:

constructor(props) {
  super(props);
  this.state = {
    users: [],
    page: 0,
    loading: false,
    prevY: 0
  };
}

The state object contains the following:

  • users: Stores a list of users on Github
  • page: The start page for the list of users from Github
  • loading: When true, shows a div with loading text. This is helpful while data is being fetched
  • prevY: This is where the last intersection y position will be stored for reference

Fetching Users from Github

Add axios as a dependency via npm, then add a componentDidMount method to the App class:

componentDidMount() {
  this.getUsers(this.state.page);
}

The method is calling getUsers which we will create next:

getUsers(page) {
  this.setState({ loading: true });
  axios
    .get(`https://api.github.com/users?since=${page}`)
    .then(res => {
      this.setState({ users: [...this.state.users, ...res.data] });
      this.setState({ loading: false });
    });
}

This method takes a page parameter and uses it to query the GitHub API for users, then appends the fetched users to the state. Notice how the method flips the loading state so the loading text shows while fetching.
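Since GitHub's users endpoint pages with a `since` cursor (the ID of the last user seen) rather than page numbers, the next cursor can be derived from the list we already have. A tiny helper to make that explicit (the name `nextSince` is ours, for illustration):

```javascript
// GitHub's /users endpoint returns users with IDs greater than `since`,
// so the next cursor is simply the ID of the last user fetched so far.
function nextSince(users) {
  return users.length ? users[users.length - 1].id : 0;
}
```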

Go ahead and render the list in the browser:

render() {
  const loadingCSS = {
    height: '100px',
    margin: '30px'
  };
  const loadingTextCSS = { display: this.state.loading ? 'block' : 'none' };
  return (
    <div className="container">
      <div style={{ minHeight: '800px' }}>
        <ul>
          {this.state.users.map(user => <li key={user.id}>{user.login}</li>)}
        </ul>
      </div>
      <div
        ref={loadingRef => (this.loadingRef = loadingRef)}
        style={loadingCSS}
      >
        <span style={loadingTextCSS}>Loading...</span>
      </div>
    </div>
  );
}

Notice the loadingRef div, which shows the loading text. We'll use it later as the target for our Intersection Observer.

Implementing Infinite Scroll with IO

We want the observer to start after a component is mounted, therefore we should set it up in componentDidMount:

componentDidMount() {
  this.getUsers(this.state.page);

  // Options
  var options = {
    root: null, // Page as root
    rootMargin: '0px',
    threshold: 1.0
  };
  // Create an observer
  this.observer = new IntersectionObserver(
    this.handleObserver.bind(this), // callback
    options
  );
  // Observe the `loadingRef`
  this.observer.observe(this.loadingRef);
}

Just as we saw earlier, this.observer is the instance of IntersectionObserver. We also use the instance to observe the loadingRef we created in the render method above. This makes loadingRef the target.

The callback is named handleObserver. Let’s create it like so:

handleObserver(entities, observer) {
  const y = entities[0].boundingClientRect.y;
  if (this.state.prevY > y) {
    const lastUser = this.state.users[this.state.users.length - 1];
    const curPage = lastUser.id;
    this.getUsers(curPage);
    this.setState({ page: curPage });
  }
  this.setState({ prevY: y });
}

The method does each of the following:

  1. The if statement ensures the method body runs only when scrolling down, not when scrolling up.
  2. Paging in the GitHub API uses IDs instead of page numbers, so we need the ID of the last user in the current list to request the next batch. The lastUser and curPage variables maintain this.
  3. Once curPage has been computed, we call getUsers with its value and update the state with setState.
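The direction check in step 1 works because the loading div's boundingClientRect.y gets smaller as you scroll down. Reduced to a pure function (our own naming, just for illustration):

```javascript
// As the page scrolls down, the observed element moves up the viewport,
// so its bounding rect `y` decreases; prevY > y therefore means "scrolled down".
function isScrollingDown(prevY, y) {
  return prevY > y;
}
```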

Have another look at the example:

Final Words

It's important to mention that, as exciting as this API is to use, it is not yet supported well enough across browsers. Keep this in mind and use a polyfill. You can consult the browser compatibility tables to learn more about support.
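A common pattern is to feature-detect before constructing an observer and only load the polyfill when it is missing. A sketch (`loadPolyfill` is a placeholder for however you ship the polyfill; the global object is passed in to keep the check testable):

```javascript
// Returns true when the given global object (e.g. `window`) lacks IntersectionObserver.
function needsIntersectionObserverPolyfill(globalObj) {
  return !('IntersectionObserver' in globalObj);
}

// In the browser, you would then do something like:
// if (needsIntersectionObserverPolyfill(window)) { loadPolyfill(); }
```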


Weekly Node.js Update - #8 - 02.23, 2018

By Tamas Kadlecsik

Below you can find RisingStack‘s collection of the most important Node.js news, updates, projects & tutorials from this week:

Node v9.6.1 (Current) is released

This is a special release to fix a potentially semver-major regression that shipped in v9.6.0.

  • events:
    • events.usingDomains was no longer being set to false by default in 9.6.0, a behavior change compared to 9.5.0. This change has been reverted, and the events object again has usingDomains preset to false, which was the behavior in 9.x releases prior to 9.6.0 (Myles Borins)

Show-stopping bug appears in npm Node.js package manager

A new release of npm fatally changed file permissions. This has already been fixed, but the messy process revealed more fundamental problems.

Running sudo npm as a non-root user (running as root does not have the same effect) heavily modifies filesystem permissions.

3 simple tricks for smaller Docker images

When you're building Docker containers, you should aim for images that are as small as possible: smaller images that share layers are quicker to transfer and deploy.

Using Promise.prototype.finally in Node.js

This article will show you how to use Promise.prototype.finally() and how to write your own simplified polyfill.
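The core idea of such a polyfill is small enough to sketch: run the callback on both the fulfilled and rejected paths, then pass the original outcome through unchanged. A simplified standalone version (not the article's exact code):

```javascript
// Simplified `finally` behavior as a standalone function: `onFinally` runs
// regardless of the outcome, and the original value or rejection is preserved.
function promiseFinally(promise, onFinally) {
  return promise.then(
    value => Promise.resolve(onFinally()).then(() => value),
    reason => Promise.resolve(onFinally()).then(() => { throw reason; })
  );
}
```

A real polyfill would attach this to Promise.prototype and also handle non-function arguments, as the spec requires.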

How to Build Twitter’s Real-time Likes Feature with Node.js and Pusher

In this article, you can take a glance at how to implement your own real-time post statistics (we'll limit ourselves to Likes) for Twitter in a simple Node.js app.

Protected routes and Authentication with React and Node.js

In this tutorial, we’ll quickly implement the basic authentication flow using JSON Web Tokens that a Strapi API provides.

Also, we’ll check how to use authentication providers (Facebook, GitHub, Google…) with Strapi to authenticate your users.

How Fintonic uses Node.js, MongoDB & Kubernetes to scale

The 2nd episode of the Top of the Stack is out!

We interviewed Angel Manuel Cereijo Martinez and Roberto Ansuini to get some great architectural insights about Fintonic and learn how they built their backend using #Nodejs & #Kubernetes, and #MongoDB.

Learn Node.js & Microservices

Would you like to know more about the Node.js Fundamentals, Microservices, Kubernetes, Angular, or React? We have good news for you!

8 React Interview Questions for 2018

How to prepare for a React Interview? Here’s our inspiration for both interviewers and interviewees.

These questions help us at RisingStack to find enthusiastic, knowledgeable React.js developers.

Previously Node.js Updates:

In the previous Weekly Node.js Update, we collected great articles, like

  • Node v6.13.0 (LTS) was released;
  • Top 1000 most depended-upon packages;
  • 5 Practical Ways To Share Code: From NPM To Lerna And Bit;
  • Reducing GraphQL response size by… a lot;

& more…

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!


Build an Elastic Range Input with SVG and anime.js

By Luis Manuel

Dribbble shot by Stan Yakusevich

Among all the fascinating web components we can find in any UI, forms are usually the most boring part. In the past, there were only a few text input elements, where the user had to enter data manually. Then, with HTML5, things improved a lot, since new types of input appeared, such as color, date and range, among many others.

Although these new input types work functionally, they often do not meet the aesthetic needs of web applications. Many proposals have therefore emerged to replace these elements with components that look better and render almost identically in all modern browsers.

In this tutorial we will see how we can simulate the behavior of a range input, with an elegant component like this:

The original animation, which we used as inspiration, can be found in this Dribbble shot by Stan Yakusevich.

To code it, we will mainly use SVG to draw the paths, and anime.js to perform the animations.

Above is the final product we’ll make. Let’s start!

Coding the Markup: HTML and SVG

Next we will see the main HTML structure that we will use. Please read the comments so you do not miss a single detail:

<!-- Wrapper for the range input slider -->
<div class="range__wrapper">
    <!-- The real input; it will be hidden, but updated properly with Javascript -->
    <!-- For production usage, you may want to add a label and also put it inside a form -->
    <input class="range__input" type="range" min="30" max="70" value="64"/>

    <!-- All the other elements will go here -->
</div>

As we can see, our component contains an actual input of type range, which we will update properly with Javascript. Having this input element and our component inside a common HTML form allows us to send the input's value (along with the other form data) to the server on submit.

Now let’s see the SVG elements that we need, commented for a better understanding:

<!-- SVG elements -->
<svg class="range__slider" width="320px" height="480px" viewBox="0 0 320 480">
    <defs>
        <!-- Range marks symbol, it will be reused below -->
        <symbol id="range__marks" shape-rendering="crispEdges">
            <path class="range__marks__path" d="M 257 30 l 33 0"></path>
            <path class="range__marks__path" d="M 268 60 l 22 0"></path>
            <path class="range__marks__path" d="M 278 90 l 12 0"></path>
            <path class="range__marks__path" d="M 278 120 l 12 0"></path>
            <path class="range__marks__path" d="M 278 150 l 12 0"></path>
            <path class="range__marks__path" d="M 278 180 l 12 0"></path>
            <path class="range__marks__path" d="M 278 210 l 12 0"></path>
            <path class="range__marks__path" d="M 278 240 l 12 0"></path>
            <path class="range__marks__path" d="M 278 270 l 12 0"></path>
            <path class="range__marks__path" d="M 278 300 l 12 0"></path>
            <path class="range__marks__path" d="M 278 330 l 12 0"></path>
            <path class="range__marks__path" d="M 278 360 l 12 0"></path>
            <path class="range__marks__path" d="M 278 390 l 12 0"></path>
            <path class="range__marks__path" d="M 268 420 l 22 0"></path>
            <path class="range__marks__path" d="M 257 450 l 33 0"></path>
        </symbol>
        <!-- This clipPath element will allow us to hide/show the white marks properly -->
        <!-- The `path` used here is an exact copy of the `path` used for the slider below -->
        <clipPath id="range__slider__clip-path">
            <path class="range__slider__path" d="M 0 480 l 320 0 l 0 480 l -320 0 Z"></path>
        </clipPath>
    </defs>
    <!-- Pink marks -->
    <use xlink:href="#range__marks" class="range__marks__pink"></use>
    <!-- Slider `path`, that will be morphed properly on user interaction -->
    <path class="range__slider__path" d="M 0 480 l 320 0 l 0 480 l -320 0 Z"></path>
    <!-- Clipped white marks -->
    <use xlink:href="#range__marks" class="range__marks__white" clip-path="url(#range__slider__clip-path)"></use>
</svg>

If this is the first time you are using the SVG path element, or you don't understand how paths work, you can learn more in this excellent tutorial on MDN.

And finally, we need another piece of code to show the values and text that appear in the original animation:

<!-- Range values -->
<div class="range__values">
    <div class="range__value range__value--top">
        <!-- This element will be updated in the way: `100 - inputValue` -->
        <span class="range__value__number range__value__number--top"></span>
        <!-- Some text for the `top` value -->
        <span class="range__value__text range__value__text--top">
            <span>You Need</span>
        </span>
    </div>
    <div class="range__value range__value--bottom">
        <!-- This element will be updated with the `inputValue` -->
        <span class="range__value__number range__value__number--bottom"></span>
        <!-- Some text for the `bottom` value -->
        <span class="range__value__text range__value__text--bottom">
            <span>You Have</span>
        </span>
    </div>
</div>

As you can see, the HTML code is quite easy to understand if we follow the comments. Now let’s look at the styles.

Adding styles

We will start styling the wrapper element:

.range__wrapper {
  user-select: none; // disable user selection, for better drag & drop

  // More code for basic styling and centering...
}

As you can see, apart from the basic styles for proper appearance and centering, we have disabled the user's ability to select anything within our component. This is very important: since we will implement a drag-and-drop type interaction, allowing text selection could produce unexpected behaviors.

Next we will hide the actual input element, and position the svg (.range__slider) element properly:

// Hide the `input`
.range__input {
  display: none;
}

// Position the SVG root element
.range__slider {
  position: absolute;
  left: 0;
  top: 0;
}

And to color the SVG elements we use the following code:

// Slider color
.range__slider__path {
  fill: #FF4B81;
}

// Styles for marks
.range__marks__path {
  fill: none;
  stroke: inherit;
  stroke-width: 1px;
}

// Stroke color for the `pink` marks
.range__marks__pink {
  stroke: #FF4B81;
}

// Stroke color for the `white` marks
.range__marks__white {
  stroke: white;
}

Now let’s see the main styles used for the values. Here the transform-origin property plays an essential role to keep the numbers aligned with the text in the desired way, as in the original animation.

// Positioning the container for values, it will be translated with Javascript
.range__values {
  position: absolute;
  left: 0;
  top: 0;
  width: 100%;
}

// These `transform-origin` values will keep the numbers in the desired position as they are scaled
.range__value__number--top {
  transform-origin: 100% 100%; // bottom-right corner
}
.range__value__number--bottom {
  transform-origin: 100% 0; // top-right corner
}

// More basic styles for the values...

Adding interactions with Javascript

Now it’s time to add the interactions, start animating things and having fun 🙂

First, let's look at the code needed to simulate the drag and drop functionality: listening for the corresponding events, doing the math, and performing animations. Please note we are not including the whole code, only the fundamental parts needed to understand the behavior.

// Handle `mousedown` and `touchstart` events, saving data about mouse position
function mouseDown(e) {
    mouseY = mouseInitialY = e.targetTouches ? e.targetTouches[0].pageY : e.pageY;
    rangeWrapperLeft = rangeWrapper.getBoundingClientRect().left;
}

// Handle `mousemove` and `touchmove` events, calculating values to morph the slider `path` and translate values properly
function mouseMove(e) {
    if (mouseY) {
        // ... Some code for maths ...
        // After doing maths, update the value
        updateValue();
    }
}

// Handle `mouseup`, `mouseleave` and `touchend` events
function mouseUp() {
    // Trigger elastic animation in case `y` value has changed
    if (mouseDy) {
        elasticRelease();
    }
    // Reset values
    mouseY = mouseDy = 0;
}

// Events listeners
rangeWrapper.addEventListener('mousedown', mouseDown);
rangeWrapper.addEventListener('touchstart', mouseDown);
rangeWrapper.addEventListener('mousemove', mouseMove);
rangeWrapper.addEventListener('touchmove', mouseMove);
rangeWrapper.addEventListener('mouseup', mouseUp);
rangeWrapper.addEventListener('mouseleave', mouseUp);
rangeWrapper.addEventListener('touchend', mouseUp);

Now we can take a look at the updateValue function. This function is responsible for updating the component values and moving the slider to follow the cursor position. We have commented every part of it exhaustively, for better understanding:

// Function to update the slider value
function updateValue() {
    // Clear animations if any are still running
    anime.remove([rangeValues, rangeSliderPaths[0], rangeSliderPaths[1]]);

    // Calc the `input` value using the current `y`
    rangeValue = parseInt(currentY * max / rangeHeight);
    // Calc `scale` value for numbers
    scale = (rangeValue - rangeMin) / (rangeMax - rangeMin) * scaleMax;
    // Update `input` value
    rangeInput.value = rangeValue;
    // Update numbers values
    rangeValueNumberTop.innerText = max - rangeValue;
    rangeValueNumberBottom.innerText = rangeValue;
    // Translate range values
    rangeValues.style.transform = 'translateY(' + (rangeHeight - currentY) + 'px)';
    // Apply corresponding `scale` to numbers
    rangeValueNumberTop.style.transform = 'scale(' + (1 - scale) + ')';
    rangeValueNumberBottom.style.transform = 'scale(' + (1 - (scaleMax - scale)) + ')';

    // Limit the mouse `y` delta value
    if (Math.abs(mouseDy) < mouseDyLimit) {
        lastMouseDy = mouseDy;
    } else {
        lastMouseDy = mouseDy < 0 ? -mouseDyLimit : mouseDyLimit;
    }

    // Calc the `newSliderY` value to build the slider `path`, clamped to the valid range
    newSliderY = currentY + lastMouseDy / mouseDyFactor;
    if (newSliderY < rangeMinY || newSliderY > rangeMaxY) {
        newSliderY = newSliderY < rangeMinY ? rangeMinY : rangeMaxY;
    }

    // Build `path` string and update `path` elements
    newPath = buildPath(lastMouseDy, rangeHeight - newSliderY);
    rangeSliderPaths[0].setAttribute('d', newPath);
    rangeSliderPaths[1].setAttribute('d', newPath);
}

As we have seen, within the previous function there is a call to the buildPath function, which is an essential piece in our component. This function will let us build the path for the slider, given the following parameters:

  • dy: distance in the y axis that mouse has been moved since the mousedown or touchstart event.
  • ty: distance in the y axis that the path must be translated.

It also uses the mouseX value to draw the curve toward the cursor position on the x axis, and returns the path as a string:

// Function to build the slider `path`, using the given `dy` and `ty` values
function buildPath(dy, ty) {
    return 'M 0 ' + ty + ' q ' + mouseX + ' ' + dy + ' 320 0 l 0 480 l -320 0 Z';
}
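To make the resulting string concrete, here is a self-contained variant with mouseX passed in explicitly (the function above reads it from module state), along with a sample output:

```javascript
// Standalone variant of buildPath: the quadratic Bézier control point (mouseX, dy)
// bends the slider's top edge toward the cursor; the rest closes the rectangle.
function buildPathAt(mouseX, dy, ty) {
  return 'M 0 ' + ty + ' q ' + mouseX + ' ' + dy + ' 320 0 l 0 480 l -320 0 Z';
}

// e.g. buildPathAt(160, 40, 200) → 'M 0 200 q 160 40 320 0 l 0 480 l -320 0 Z'
```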

Finally, let’s see how to achieve the interesting elastic effect:

// Function to simulate the elastic behavior
function elasticRelease() {
    // Morph the paths to the opposite direction, to simulate a strong elasticity
    anime({
        targets: rangeSliderPaths,
        d: buildPath(-lastMouseDy * 1.3, rangeHeight - (currentY - lastMouseDy / mouseDyFactor)),
        duration: 150,
        easing: 'linear',
        complete: function () {
            // Morph the paths to the normal state, using the `elasticOut` easing function (default)
            anime({
                targets: rangeSliderPaths,
                d: buildPath(0, rangeHeight - currentY),
                duration: 4000,
                elasticity: 880
            });
        }
    });

    // Here will go similar code to:
    // - Translate the values to the opposite direction, to simulate a strong elasticity
    // - Then, translate the values to the right position, using the `elasticOut` easing function (default)
}

As you can see, it was necessary to implement two consecutive animations to achieve an exaggerated elastic effect, similar to the original animation. This is because a single animation using the elasticOut easing function is not enough.

Summing up

And finally we are done!

We have developed a component to simulate the behavior of an input of type range, but with an impressive effect, similar to the original animation:

You can check the final result, play with the code on Codepen, or get the full code on Github.

Please note that, to make the tutorial a bit more fun and easy to follow, we have not explained every single line of code used here. However, you can find the complete code in the Github repository.

We sincerely hope that you liked the tutorial, and it has served as inspiration!


Extend the *ngIf Syntax to Create a Custom Permission Directive

By Juri Strumpflohner

Our use case is to create a directive which shows/hides elements on the page based on the currently authenticated user's permissions. In this article we will go over a very simple use case, which could easily be extended and used in a real production application. By creating such a directive, we'll also take a deeper look at the syntax that Angular's ngIf and ngFor directives use.


How Fintonic uses Node.js, MongoDB & Kubernetes to scale

By Tamas Kadlecsik

At RisingStack, we are highly interested in building scalable and resilient software architectures. We know that a lot of our readers share our enthusiasm, and that they want to learn more about the subject too.

To expand our blogging & training initiatives, we decided to launch a new series called Top of the Stack which focuses on architecture design, development trends & best practices for creating scalable applications.

In the first episode of Top of the Stack, we interviewed Patrick Kua, the CTO of N26, a successful banking startup from Germany.

In the second episode, we interviewed Angel Cereijo & Roberto Ansuini from Fintonic!

During our ~30 minute conversation we discussed a wide range of topics, including the reasons behind going with Node.js, the tests they run to ensure quality, the process of migrating to Kubernetes, and the way issues are handled in their architecture.

The conversation is available in a written format – no audio this time. For the transcript, move on!

Fintonic Interview Transcript

Welcome everybody to the second episode of the Top of the Stack Podcast, where we talk about the services and infrastructures that developers build. I am Csaba Balogh, your host, sitting with our co-host, Tamas Kadlecsik, CEO of RisingStack.

Today we are going to talk about the architecture of Fintonic – a successful Spanish startup. Fintonic is a personal finance management app which helps users by sending them overviews and alerts about their expenses.

Fintonic is currently available in Spain and Chile and has more than 450,000 users at this point. Our guests today are Angel Cereijo – Node.js Engineering Lead – and Roberto Ansuini – Chief Software Architect at Fintonic.

It’s a pleasure to have you here Angel and Roberto! Can you guys please tell us more about how you became a part of Fintonic and how you started out there?

Roberto: Yes, well, this is Roberto. I started at Fintonic in October 2011 as the Development Director during the early stages of the project. We developed the base architecture for the PFM (Personal Finance Management) system, which is the core of our platform. So we had our provider, and we wanted to test what we could do with the information we obtained using the framework of our provider.

The first stages of the project were mainly the development of the aggregation and classification of financial data. Given this, we presented summarized information on our user expenses and developed an alerting system around it. We started with a very small team, in the first few weeks, it was 2 people, me and my tech lead and then we had 2 more people, one back-end developer and one front-end developer. We started with only the web application, and later on, we added the iOS and the Android application.

RisingStack: What are the main languages you use for developing the project?

Roberto: When Fintonic was launched, we started mainly with Java and the Spring framework, and later on, we started adding more features and developing the loan service where we gave users the possibility to quote a loan, a consumer loan. To do so, we partnered with a fintech named Wanna (it’s a consumer loan fintech) to integrate their products into our platform. During this time we developed the first iteration of what we called the Fintonic Integration API (finia) developed in Node.js by my teammate Angel Cereijo.

RisingStack: What made you decide to use Node.js instead of Java?

Roberto: The reason to choose Node.js for this part of our platform was because of the nature of the Integration API. It proxied and added some business logic to our partners. The deadline was very tight and Node.js allowed us to have a running MVP in a very short timeframe.

RisingStack: So basically, right now you exclusively use Node.js on the backend, right?

Roberto: We are using Node.js mainly as the core technology for what we call our Marketplace of financial products (loans, insurances, etc.)

RisingStack: Then, any other logic or infrastructural parts like payments or so are implemented in Java right now, right?

Roberto: Yes, Java is totally for the PFM (Personal Finance Management) system, that is, the aggregation service, the alerting, and the core functionality of Fintonic. What we are building around the core application of Fintonic is the so-called marketplace of Fintonic. This marketplace is for every product, let's say, loans, insurances, credit cards, debit accounts, etc. Everything that we'll include here is probably going to be in Node.js.

RisingStack: I see. Do you have any shared infrastructural code between your services?

Roberto: We have some parts in Java, yes. We have main libraries for this. And we also have an automation infrastructure with Chef, and we’re doing some Ansible now where we automate the configuration of the infrastructure.

Angel: We have Sinopia as our private npm repository, and we have a lot of custom packages. Some of them are just a layer over another package, and the rest of them are code shared between the projects. We have around twenty-something custom modules.

RisingStack: About databases: What database do you operate with?

Angel: For Node.js we use MongoDB. Fintonic has been using MongoDB since it began. And for us in the Node.js part, it fits quite well. Sometimes we use Mongoose and other times we just make queries and something like that.

RisingStack: Do you use managed MongoDB or do you host it yourself?

Roberto: We have a self-hosted MongoDB cluster, but we are evaluating the enterprise edition, or maybe Atlas or some other cluster. So far we have maintained our own clusters on Amazon.

RisingStack: Have you had any difficulties when maintaining your cluster?

Roberto: Oh, we have learned a lot over the years, we had our pitfalls. MongoDB has improved a lot since we started using it. So far it’s been kind to us, except for little issues, but it’s okay.

RisingStack: Can you tell us what kind of communication protocols do you use between your services?

Roberto: It’s mainly HTTP REST. We tried Apache Thrift, but now we are mainly on HTTP REST.

RisingStack: I see and what were the problems with it (Thrift)?

Roberto: Ah, because on the Java side we wanted to start using some more of the features that Netflix OSS brings, which comes with Spring Cloud, and those are more suitable for HTTP REST protocols. We have a lot of services with big latencies, and these kinds of services with strong latencies are not a good fit for Thrift.

RisingStack: Do you use maybe messaging queues between your services, or only HTTP?

Roberto: We also have RabbitMQ with the AMQP protocol to communicate between services. It's mostly for load balancing, for having control of the throughput of services, and for scaling workers. We have a lot of use cases with RabbitMQ right now.

RisingStack: When we built Trace at RisingStack, we quite often saw problems with it when it had to handle a lot of messages or maybe even store a lot of messages. When workers couldn't run fast enough to process the messages and it had to write to disk, it quite often went down altogether. Have you met any problems like that, or any other?

Roberto: No, no.

RisingStack: At RisingStack, our team takes testing code very seriously and deploys only after running tests multiple times, so it would be great if you could share with us how you handle testing and what kinds of tests you have in place right now.

Angel: Okay. In the Node.js part we have, I think, 90% of our code covered. We unit test our code. We run tests on the local machine, and then we push to GitLab. There we run all the test code and also check the code style against some rules we have. So we take it very seriously. Right now we use Mocha and Chai for testing. In the front end we have very high coverage as well, around 90%, I'd say.

RisingStack: Do you have any other kind of tests, like integration tests in-between, or smoke tests?

Angel: We use some mocked servers to test contracts, but we also have Staging environments where we test all of the services in an end-to-end manner.

RisingStack: I am not sure I understand it correctly. When you say that you test everything together, we are talking about end-to-end tests here, right?

Roberto: Yes. We have several stages.

The first one is the unit test stage, where we have the coverage we were talking about before. Then we have some tests that perform some kind of integration with other APIs. They are automated in our GitLab environment. Then, on the front-end side – as most of our applications are used in the Android and iOS apps – the tests are covered there. So they have some interface tests, which they use to test against our pre-production development environments.

And for frameworks, well, we don’t use that much end-to-end testing. It’s mostly manual testing right now, and we want to start doing some mobile testing maybe with some tools like the device Swarm or something like that, but it’s not yet done.

RisingStack: Let’s assume you have, say, 2 services that depend on each other. So you want to test the integration between them – the service boundary. But the downstream service also depends on another one, and so forth and so forth. So, how do you handle these cases? Do you make sure that only the 2 services in question are tested, and you mock the downstream dependencies somehow? Or do you run integration tests on full dependency trees?

Angel: We are not very mature yet.

When we have to call another service, we mock the dependency, because it’s quite tricky to start several services and run tests on them. I think we have to study more and consider how we can implement other kinds of tests.

Roberto: On the Java side we are doing some WireMocks and some mock testing, but we have to mature in that.

Angel: On the Node.js side we have a library dependency, I think it’s called Nock. (This library is used to create mock calls to services to make sure we are respecting the contracts between services.) We call some endpoints with some parameters and headers, and we can specify the response we want to get (body, HTTP code, headers).

RisingStack: Do you use any specific CI tools?

Roberto: Yes, we started with Jenkins, but by the end of 2017 we migrated our pipelines to GitLab CI. It’s very cool, and we are happy with it. Right now we are doing CI and CD. We build and deploy our containers in the staging environment, and we release them to a container registry so we can use them locally or in any environment. That is working quite well; we are very happy with it.

RisingStack: Can you tell us where your application is deployed?

Roberto: Right now we are using AWS. We are in Spain and also we’re in Chile. We have 2 environments for this. The one in Spain is based on EC2 instances, where we have our applications deployed with Docker. So we have our autoscaling groups, and we have load balancers and stuff. In Chile, we are testing out what we want to be our new platform which is on Docker and Kubernetes. So we just finished that project by the end of 2017. And we’re monitoring it, so we can bring it to Spain, which is a much larger platform.

RisingStack: Can you tell us a little bit about how easy or difficult it was to set up Kubernetes on AWS?

Roberto: Actually, it was quite easy. We’re using Kops. It was quite straightforward. They did a great job developing this script. We had to figure things out, do some learning, figure out the network protocol, how to control the ingresses… It was tricky at the beginning, but once you’ve done it a couple of times, it’s easy.

RisingStack: I see. Maybe it would be interesting to our listeners – approximately how much time did it take to set up Kubernetes?

Roberto: We deployed the project in production by the end of August, then we started the GitLab CI migration in September. Then we did a lot of work adapting our projects so they fit in a Docker environment. Then, by the end of November and the start of December, we started doing the Kubernetes part. Within 1 month we had it all up and running in production.

RisingStack: Can you tell us how many developers were needed for that?

Roberto: We have 3 people on the DevOps team, and for the rest, we had the development team making some adaptations, like health checks, configurations, etc.

RisingStack: Did you face any scaling problems in your architecture? Or in the previous one?

Roberto: With the previous one, the problem was mainly versioning the deployments – how to control what version was deployed and what wasn’t. Right now we are trying to fix this problem with the container registry and by controlling the versioning of the deployments in Kubernetes. That’s how we are trying to solve those issues.

RisingStack: What do you base the versioning of your containers on?

Roberto: We are doing several kinds of versioning. We version by tagging the containers. We also do some versioning of infrastructure files with Ansible. And in Kubernetes you can do some tricks to version the deployments – if you have different names for a deployment, you can roll out a new version of the service. Each deployment has a config map and an image associated with it, so you can have a deployment and a new version at any specific time. That’s helping a lot as well.

RisingStack: I see – and what is the tag of your containers?

Roberto: We are just starting off with it. When we promote a container to production we give it a production tag, and then we do some versioning with timestamps. We are trying to do something based on the commits, so we can trace a commit back to its container and version the deployment that way.

RisingStack: What infrastructural elements or deployment strategies do you use to ensure the reliability of your product?

Roberto: Well, that’s mainly what we are doing right now. We are really trying to mature and have all the information possible about a specific version, so that we know exactly what is deployed and what configuration we had at deployment time. That has given us a lot of peace of mind: we can roll back, and we can see what is running.

RisingStack: Do you automate the rollbacks, or do you do it by hand if there is an error?

Roberto: It’s manual to a certain point, since it’s done on demand. But the process is very automated. All we have to do is configure our scripts to redeploy a given version of a container on our ASG (for the Spanish platform) or a deployment on our Kubernetes cluster (for the rest).

RisingStack: How do you prevent errors from cascading between your services in case the barriers fail? And what kind of tools do you use to locate the root cause – logs, metrics, and distributed tracing, for example?

Roberto: We use ELK to monitor application behavior and CloudWatch to monitor infrastructure. (Recently, after our conversation, we started using Datadog, and it looks promising.) What we basically do is monitor the latency of the services, and we have some processes that perform a global check of the system. The alerting system consists of automated scripts that simulate a typical workflow of data in our system. If a service fails at any point in the chain, the workflow doesn’t complete and an alarm is triggered, so we can fix it.

When a service goes down, our system handles the error and still shows you the information that is available. So when a service goes down, it doesn’t affect all of the system, only that part of it. If you take it to the app, maybe only one section of the app isn’t loading – it’s very isolated. Basically, the microservices approach is helping us here. Also, the use of RabbitMQ and asynchronous messages with queues helps us restore the system without losing any of the processing.
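The “global check” Roberto describes can be sketched as a script that walks a typical workflow step by step and raises an alarm as soon as any link in the chain fails. A minimal Node.js sketch, with hypothetical step names standing in for real service calls:

```javascript
// Each step returns true on success. A failure anywhere in the chain means
// the simulated workflow did not complete, so an alarm should fire.
function checkWorkflow(steps) {
  for (const [name, step] of steps) {
    if (!step()) {
      return { ok: false, failedAt: name };
    }
  }
  return { ok: true };
}

// Hypothetical workflow: ingest -> process -> publish.
const result = checkWorkflow([
  ['ingest', () => true],
  ['process', () => true],
  ['publish', () => false], // simulate the publishing service being down
]);

if (!result.ok) {
  // In production this would page someone or post to a channel;
  // here we just log the broken link in the chain.
  console.log(`ALARM: workflow failed at "${result.failedAt}"`);
}
```

The value of this kind of check is that it exercises the chain end to end, so a failure pinpoints which service broke the flow rather than just reporting that “something” is wrong.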

RisingStack: Did I understand correctly, that you said you have alerts for when a message goes into the system but doesn’t come out where you expect it?

Roberto: There are some automated checks like that. The example I mentioned before covers this.

RisingStack: How do you track these messages?

Roberto: We have some daemons that are connected to a Rabbit queue, and they are just checking if the messages are coming through. So if the messages are coming through, we assume that the system is performing right.
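The daemon idea can be sketched as a watchdog that records when the last message came through the queue and reports the system unhealthy after too long a silence. This is an in-memory sketch only; in production, `onMessage` would be called from a real RabbitMQ consumer (e.g. one set up with amqplib), and the health status would feed the alerting system.

```javascript
// Watchdog for a message queue: if no message has been seen within
// maxSilenceMs, we assume the pipeline is no longer flowing.
class QueueWatchdog {
  constructor(maxSilenceMs) {
    this.maxSilenceMs = maxSilenceMs;
    this.lastSeen = Date.now();
  }

  // Called by the queue consumer for every message that comes through.
  onMessage() {
    this.lastSeen = Date.now();
  }

  // Healthy as long as the silence has not exceeded the threshold.
  // `now` is injectable to make the check easy to test.
  healthy(now = Date.now()) {
    return now - this.lastSeen <= this.maxSilenceMs;
  }
}

const watchdog = new QueueWatchdog(5000);
watchdog.onMessage();            // a message just came through
console.log(watchdog.healthy()); // → true
```

Making `now` a parameter of `healthy` keeps the check deterministic in tests, which matters for exactly the kind of automated monitoring scripts discussed here.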

RisingStack: And how do you monitor your infrastructure? What are the main metrics to monitor on your platform right now?

Roberto: It’s mainly CPU, memory, and network – nothing much beyond that. Also, the message rates and the number of queued messages in RabbitMQ help us monitor the health of the system. We are looking to integrate Datadog, but that’s a project we want to do this quarter.

RisingStack: Have you considered using a distributed tracing platform before?

Roberto: Yes, we have seen a couple of frameworks, but we haven’t done anything on that yet.

RisingStack: We talked a lot about your past and current architecture, so we would like to know if there are any new technologies you are excited about and you are looking forward to trying out in 2018?

Roberto: The Kubernetes project is exciting for us because we started using it at the end of 2017. We want to mature it a lot; we want to do much more automation, maybe some operations that we can integrate with Slack, with some bots, so we can automate deployment. In particular, what we want to do is create environments on demand, so we can have several testing environments on demand to simplify the developers’ work. That’s probably going to be one of the technologically interesting projects to come. Also alerting and delivery – doing some more automation so we can track much better which commits go to which deployment.

RisingStack: Thank you very much, guys, for being here and telling us all these exciting things. Thank you, Roberto, and thank you, Angel, for being here.


New Codingmarks Calendar Week 7 2018


New codingmarks added in the 7th week of 2018. Hot topics include: cloud, encryption, firewall, phishing, security, tools and user experience.
