Monthly Archives: May 2018

Build Multiple Stacking Sticky Sidebars with Pure CSS and Bootstrap 4

By Nicholas Cerminara

Multiple Sticky Sidebars

Making highly performant, pure CSS sticky sidebars that stack, with Bootstrap 4.

This will be a quick and pretty cool tutorial on a neat trick: how to have multiple sticky sidebars that stack, without using any JavaScript!

I figured this out the other day while brainstorming ideas with @chrisoncode for the new Scotch website sidebar. As fun and cool as JavaScript is, it’s just not as snappy and is way more bloated than a pure CSS implementation of something like this (and performance is one of our main goals for our redesign).

In this tutorial I will discuss:

  • What the heck I mean by the mouthful “Multiple Stacking Sticky Sidebars.”
  • Reasons you would want to do this.
  • General beefs devs have with doing it via JavaScript or plugins.
  • The technique with CSS3 (position: sticky).
  • The same technique with a simple trick I figured out to make it stackable with pure CSS.
  • A bunch of Bootstrap 4 demos with dead-simple sticky sidebars.

I’ll also have many code samples and demos throughout this post. Then some simple demos / templates to show you how easy it is with Bootstrap 4.

It’s so easy that, in a lot of ways, I’m scared to publish this post because of how easily websites could abuse the design pattern. You’ll see more as you scroll and read below, but let’s just roll with the punches and begin.

What the heck does “Multiple Stacking Sticky Sidebars” even mean?

I’m talking specifically about this design pattern where a single sidebar holds multiple pieces of sticky content:

The word “multiple” refers to having more than one of these on a page. “Stacking” refers to a single sidebar having multiple stacked pieces. I know this is kind of confusing and weird, but I couldn’t find or think up a better term for the concept.

Don’t worry though, we’ll get to that craziness soon enough of “Multiple Stacking Sticky Sidebars”…

So why do this?

There are a couple of reasons why you would probably want to do this. Just to name a few:

  1. It’s cool and different.
  2. Maybe you want to make a subnav easily accessible.
  3. You want to highlight the content in your sidebar more as a user scrolls.
  4. You want (or have to) increase your ad impressions, clicks, and page views.

The Old Way (with JavaScript)

There are a ton of plugins for doing this.

You’ve probably even done this before, maybe with Bootstrap Affix. I find plugins like this are ALWAYS a pain to get working, make scrolling feel slower and laggier, or do something weird on resize.

It’s also pretty easy to write a small script for it yourself, but I know most people likely use plugins.

How the JS Basically Works

The general idea is your script or plugin will do something like this with JavaScript:

  1. Calculate the offset of a position: relative; element from the top of the window on scroll.
  2. When that element’s offset is at or near 0, calculate the position (left, top, or right) of that element against the window.
  3. Switch it to position: fixed; with the calculated top, left, or right.
  4. Optionally, redo these calculations when the element reaches the bottom of its parent.
  5. Switch it to position: absolute; and position it at the bottom of the container.
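The five steps above can be sketched as a single pure decision function, the kind a scroll handler would call on every scroll event. The function name and the simplified geometry model here are illustrative, not from any particular plugin:

```javascript
// Decide which positioning mode a "sticky" element should be in,
// given the current scroll position (all values in pixels from the
// top of the document).
function stickyMode(scrollY, elementTop, elementHeight, parentBottom) {
  // Element hasn't reached the top of the viewport yet: leave it alone.
  if (scrollY < elementTop) return "relative";
  // Element would overflow its parent: pin it to the parent's bottom.
  if (scrollY + elementHeight > parentBottom) return "absolute";
  // Otherwise fix it to the viewport.
  return "fixed";
}

console.log(stickyMode(500, 100, 50, 1000)); // "fixed"
```

A real plugin would also recompute `elementTop` and the element's width on resize, which is exactly the juggling this article is trying to avoid.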

It’s not too much work, but it’s definitely a lot to juggle, even with a plugin trying to figure it all out for you.

That Usual Snag #1: Content Jumping

I’m sure you’ve dealt with this before! There’s always a couple bugs you hit.

When you switch from position: relative; to position: fixed;, the element’s height is removed from its parent. This means the content below may “jump” up.

Check out this quick and dirty demo of what I mean:

That Usual Snag #2: Width 100% vs Width Auto

If the element you want to be “sticky” is currently position: relative; and responsive to its container (width: 100%), then when you swap to position: fixed; you’ll get an element that is 100% of the window’s width or width: auto, depending on your CSS.

Here’s our demo again. This time watch the width of the “sticky” element.

Just another thing you’d have to calculate.

That Usual Snag #3: Performance Hell

Finally, if you have a busy website with multiple sticky sidebars on a single page, that’s a lot of JavaScript calculation happening on every scroll event. You are definitely going to nuke your users’ CPUs.

“I love it when I visit a webpage and my computer fan starts howling, Chrome eats all my computer RAM, and my CPU goes nuclear. Lag is cool!” – Said no one ever

CSS-Tricks has a great tutorial walking through this issue but, to be frank, in my personal opinion dealing with this just sucks in general. There had better be a seriously good reason to be in this territory for most simple projects.

Here’s a CodePen showing that off, forked from Chris Coyier’s CodePen in that article:

Easy Sticky Sidebars without JavaScript using “Position Sticky”

This is no big secret if you’ve been paying attention to CSS3. You can now make any element essentially behave this way with pure CSS. It’s easy.

You just need to apply the CSS with at least one direction property, usually top:

.make-me-sticky {
    position: sticky;
    top: 0;
}

Now, all elements with the class will be “sticky,” from the top of the container all the way to the bottom.

Below are some cool demos with Bootstrap 4:

Basic Sticky Sidebar with Bootstrap 4

Browser Support

Like anything fun with CSS3, browser support is all over the place, but honestly this one is not that bad. I’m surprised it’s not used more.

If you can afford to have IE users not get sticky elements, it’s perfect. You won’t have to worry about any complex fallback: the element will simply not be sticky.

/* Cross-Browser Support */
.make-me-sticky {
    position: relative; /* I am the fallback */

    /* Give it everything you got! (use an auto prefixer...) */
    position: -webkit-sticky;
    position: -moz-sticky;
    position: -ms-sticky;
    position: -o-sticky;
    position: sticky;

    top: 0; /* Required */
}

Stacking Sticky Sidebars without JavaScript

Now onto the thing that prompted me to write this entire article in the first place! Let’s use this cool new CSS3 thing with a clever technique to have a dynamic “stacking sticky sidebar” design pattern as I discussed at the top of the article.

To reiterate, our goal is:

  • Have a sticky sidebar.
  • Space out a bunch of them to show multiple items sticky separately.
  • Don’t use JavaScript.
  • Keep it simple.

Multiple Stacking Sticky Sidebar Demo with Pure CSS

First, here’s a demo of it working in action. This is 100% CSS:

How it Works

It’s actually pretty simple. We know how to use CSS sticky: it affixes an element between the top and the bottom of its container based on scroll position.

So all we need to do is add a bunch of evenly split containers and put sticky content inside each of them. See this blueprint of what we want to create, with the containers marked container 1, container 2, container 3, and container n:

An easy trick to simulate the equal heights is using position: absolute;. Here’s some sample code:

/* "TITLE HERE" from above */
.title-section {
    width: 100%;
    height: auto;
}

/* "CONTENT" from above */
.content-section {
    /* size of my container minus sidebar width */
    width: calc(100% - 300px);

    /* Estimated height of largest sidebar in case of short content */
    min-height: 800px;
}

.sidebar-section {
    position: absolute;
    right: 0;
    top: 0;
    width: 300px;
    height: 100%; /* Super important! */
}

/* "SIDEBAR CONTAINER" in the blueprint */
.sidebar-item {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
    height: 25%;

    /* Position the items (SCSS nesting) */
    &:nth-child(2) { top: 25%; }
    &:nth-child(3) { top: 50%; }
    &:nth-child(4) { top: 75%; }
}

Then, the HTML:

    <div class="container-fluid">
        <div class="col-md-12">

            <div class="title-section">
                <h1>TITLE HERE</h1>
            </div>

            <div class="inner-guts">

                <div class="content-section">
                    <h2>Content Section</h2>
                </div>

                <div class="sidebar-section">
                    <div class="sidebar-item">
                        <h3>Item 1</h3>
                    </div>
                    <div class="sidebar-item">
                        <h3>Item 2</h3>
                    </div>
                    <div class="sidebar-item">
                        <h3>Item 3</h3>
                    </div>
                    <div class="sidebar-item">
                        <h3>Item 4</h3>
                    </div>
                </div>

            </div>
        </div>
    </div>



And finally a demo:

Next, Let’s Make it Sticky

Now, to make it sticky we just need that CSS property on an element inside each sidebar-item. So, add this code:

.make-me-sticky {
    position: sticky;
    top: 0;
}

<!-- Repeat this for each item -->
<div class="sidebar-item">
    <div class="make-me-sticky">
        <!-- Sticky content here -->
    </div>
</div>



That is it! Pretty simple. We’re just using position: absolute; to build the layout we want, and then position: sticky; does all the work. Here is the blueprint live:

Issues you’ll face

Real quick, I just want to flag some issues you’ll face with this approach:

  • Maybe the content container is shorter than the sidebar.
  • Maybe the sidebar items are taller than their individual containers.
  • Maybe you have a quantity other than four sidebar items.

Nothing is perfect, but this is a pretty great technique. You’ll have to adjust the CSS for your individual use case, but it’s not a bad solution at all given that we have no JavaScript.

Make it a Helper Library / Mixin for Different Quantities

For dynamically changing content, I would do something like this in CSS. You could probably make a smarter SASS / LESS mixin, though:

.sidebar-item {
    position: absolute;
    top: 0;
    left: 0;
    width: 100%;
}

.has-4-items .sidebar-item {
    height: 25%;
    &:nth-child(2) { top: 25%; }
    &:nth-child(3) { top: 50%; }
    &:nth-child(4) { top: 75%; }
}

.has-3-items .sidebar-item {
    height: 33.333333%;
    &:nth-child(2) { top: 33.333333%; }
    &:nth-child(3) { top: 66.666666%; }
}

.has-2-items .sidebar-item {
    height: 50%;
    &:nth-child(2) { top: 50%; }
}
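A SASS mixin along those lines might look like this. The mixin name is my own, and note that newer versions of Sass prefer math.div over the bare `/` for division:

```scss
// Hypothetical mixin: generate the .has-N-items rules for any count.
@mixin stacked-sidebar-items($count) {
  .sidebar-item {
    height: percentage(1 / $count);

    @for $i from 2 through $count {
      &:nth-child(#{$i}) { top: percentage(($i - 1) / $count); }
    }
  }
}

// Usage: compiles to the same rules as the hand-written CSS above
.has-4-items { @include stacked-sidebar-items(4); }
.has-3-items { @include stacked-sidebar-items(3); }
.has-2-items { @include stacked-sidebar-items(2); }
```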

A Better Way? Bootstrap and Flexbox?

Okay, so in the above example you’ll notice I used calc() with exactly 300px for the sidebar width. I did this because common ad units are that size, so it has basically become the standard sidebar width on the web.

With Bootstrap 4, the grid system is flexbox by default. This means a bunch of awesome things! The biggest for most users: columns are equal height by default with Bootstrap 4.

Check out this demo proving that:

Pretty amazing right? Now, let’s leverage that with sticky sidebars.

Instead of doing the calc trick, we’ll just use the Bootstrap 4 grid system. The only thing we need to update in our code is to make sure .make-me-sticky has left and right padding that matches your column gutters.

So the default setup will be:

.make-me-sticky {
    position: sticky;
    top: 0;

    /* For Bootstrap 4 */
    padding: 0 15px;
}

Check out the demo below:

Multiple Stacking Sticky Sidebars

Okay, let’s just get crazy with this Bootstrap example. Let’s do this in multiple columns at different quantities with just CSS.

This is super performant, snappy, and super easy! It does scare me a bit for the future of a lot of website designs, though, if this type of pattern catches on and is abused. Ads galore…


I didn’t handle responsiveness here or in most of the demos, but you can probably figure that one out on your own. You simply need to disable sticky whenever your columns stack.
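For example, assuming Bootstrap 4’s default md breakpoint (columns stack below 768px), a media query like this could turn sticky off on small screens:

```css
/* Disable sticky once the columns stack (Bootstrap 4 md breakpoint) */
@media (max-width: 767.98px) {
    .make-me-sticky {
        position: static;
    }
}
```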

“Wait… Couldn’t I just use Flexbox to split the sidebar sections automatically instead of calculating them?” – You

So the most annoying thing about this is having to configure has-4-items, has-3-items, etc. What if you could use Flexbox to space the .sidebar-item elements evenly and automatically, regardless of count?

It would be something like this:

.sidebar-section {
    display: flex;
    flex-direction: column;
    justify-content: space-between;
}

This would have been a game changer, but despite this great idea, sticky just doesn’t work well with Flexbox. Give it a try if you’d like. If anyone can figure it out, please let me know!


This was a fun, quick tutorial on how to use position: sticky; with Bootstrap 4 to create sticky sidebars or multiple stacking sticky sidebars.

It’s really easy to do. I have no doubt that most, if not all, ad-based websites will start doing this more and more. I just hope it’s not abused.

Either way, thank you for following this tut. I hope you enjoyed it as much as I did!


Node.js Cron Jobs By Examples

By Chris Nwamba

Cron Job Running a task every minute

Ever wanted to do specific things on your application server at certain times without having to physically run them yourself? You’d rather spend your time on productive tasks than remember to move data from one part of the server to another every month. This is where Cron jobs come in.

In your Node applications, the uses for these are endless, as they save you time and manual effort. In this article, we’ll look at how to create and use Cron jobs in Node applications. To do this, we’ll make a simple application that automatically deletes auto-generated error.log files from the server. Another advantage of Cron jobs is that you can schedule the execution of different scripts at different intervals from your application.


To follow this tutorial, you’ll need the following:

  • Node installed on your machine
  • NPM installed on your machine
  • Basic knowledge of JavaScript

Getting Started

To get started, create a new Node application by opening your terminal and creating a new folder for your project. Then initialize it by running the commands:

    mkdir cron-jobs-node && cd cron-jobs-node
    npm init -y

Install Node Modules

To make this application work, we are going to need a couple of dependencies. You can install them by running the following command:

    npm install express node-cron

express – powers the web server

node-cron – task scheduler in pure JavaScript for Node.js

fs – Node’s built-in file system module (ships with Node, no install needed)

Building the backend server

Create an index.js file and then import the necessary node modules:

    touch index.js

Edit the index.js file to look like this:

    // index.js
    const cron = require("node-cron");
    const express = require("express");
    const fs = require("fs");

    const app = express();

    app.listen(3000); // start the web server


Now here’s where node-cron comes in. We want to delete the error log files at intervals without having to do it manually. We will use node-cron for this. Let’s take a look at a simple task first. Add the following to your index.js file:

    // index.js
    // schedule tasks to be run on the server
    cron.schedule("* * * * *", function() {
      console.log("running a task every minute");
    });


Now, when we run the server, we get the following result:

    > node index.js

    running a task every minute
    running a task every minute

Different intervals for scheduling tasks

With node-cron, we can schedule tasks at different intervals. Let’s see how to schedule a task using a different interval. In the example above, we created a simple Cron job; the parameter passed to the .schedule() function was * * * * *. Each field of that expression has a different meaning:

     * * * * * *
     | | | | | |
     | | | | | day of week
     | | | | month
     | | | day of month
     | | hour
     | minute
     second ( optional )
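To sanity-check what an expression means, you can label the fields yourself. This tiny helper is purely illustrative and is not part of node-cron’s API:

```javascript
// Label each field of a standard cron expression (5 fields, or 6 when
// the optional leading seconds field is used).
function describeCron(expression) {
  const names = ["minute", "hour", "day of month", "month", "day of week"];
  const fields = expression.trim().split(/\s+/);
  if (fields.length === 6) names.unshift("second"); // optional seconds field
  return fields.map((value, i) => `${names[i]}: ${value}`);
}

console.log(describeCron("* * 21 * *"));
// [ 'minute: *', 'hour: *', 'day of month: 21', 'month: *', 'day of week: *' ]
```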

Using this example, if we want to delete the log file from the server on the 21st of every month, we update the index.js to look like this:

    // index.js
    const cron = require("node-cron");
    const express = require("express");
    const fs = require("fs");

    const app = express();

    // schedule tasks to be run on the server
    cron.schedule("* * 21 * *", function() {
      console.log("Running Cron Job");
      fs.unlink("./error.log", err => {
        if (err) throw err;
        console.log("Error file successfully deleted");
      });
    });


Now, when the server is run, you get the following output:

Cron Job automatically deleting error file

NB: To simulate the tasks, the intervals were set to a shorter period by adjusting the minutes field in the scheduler parameter.

You can run any action inside the scheduler: creating a file, sending emails, running scripts, and more. Let’s take a look at more use cases.

Use Case 2 – Backing Up Database

Ensuring the accessibility of user data is key to any business. If an unforeseen event happens and your database becomes corrupt, all hell will break loose if you don’t have an existing backup. To save yourself the stress in such an occurrence, you can use Cron jobs to periodically back up the existing data in your database. Let’s take a look at how to do this.

For ease of explanation, we are going to use an SQLite database.

First, we need to install a node module that allows us to run shell scripts:

    npm install shelljs

And also install SQLite if you haven’t:

    npm install sqlite3

Now create a sample database by running the command:

    sqlite3 database.sqlite

To backup your database at 11:59pm every day, update your index.js file to look like this:

    // index.js
    const cron = require("node-cron");
    const fs = require("fs");
    const shell = require("shelljs");
    const express = require("express");

    const app = express();

    // To back up the database at 23:59 every day
    cron.schedule("59 23 * * *", function() {
      console.log("Running Cron Job");
      if (shell.exec("sqlite3 database.sqlite .dump > data_dump.sql").code !== 0) {
        shell.echo("Database backup failed");
      } else {
        shell.echo("Database backup complete");
      }
    });

Now, when you run the server using the command:

    node index.js

You get the following result:

Server Running a Backup of Database
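One thing to note: the dump command above overwrites data_dump.sql on every run. If you want to keep a history of backups, a dated filename helps. Here is a sketch; the backup-YYYY-MM-DD.sql naming is my own convention, not something shelljs or sqlite3 provides:

```javascript
// Build a dated dump command so nightly backups don't overwrite each other.
function backupCommand(dbFile, date = new Date()) {
  const stamp = date.toISOString().slice(0, 10); // YYYY-MM-DD
  return `sqlite3 ${dbFile} .dump > backup-${stamp}.sql`;
}

console.log(backupCommand("database.sqlite", new Date("2018-05-21")));
// sqlite3 database.sqlite .dump > backup-2018-05-21.sql
```

You would then pass the result to shell.exec() inside the scheduled function instead of the fixed string.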

Use Case 3 – Sending emails every n-time interval

You can also use Cron jobs to keep your users up to date as to what is going on with your business by sending them emails at different intervals. For example, you can curate a list of interesting links and then send them to users every Sunday. To do something like this, you’ll need to do the following.

Install nodemailer by running the command:

    npm install nodemailer

Once that is done, update the index.js file to look like this:

    // index.js
    const cron = require("node-cron");
    const express = require("express");
    const nodemailer = require("nodemailer");

    const app = express();

    // create mail transporter
    const transporter = nodemailer.createTransport({
      service: "gmail",
      auth: {
        user: "",
        pass: "userpass"
      }
    });

    // sending emails at periodic intervals
    cron.schedule("* * * * Wednesday", function() {
      console.log("Running Cron Job");
      const mailOptions = {
        from: "",
        to: "",
        subject: `Not a GDPR update ;)`,
        text: `Hi there, this email was automatically sent by us`
      };
      transporter.sendMail(mailOptions, function(error, info) {
        if (error) {
          throw error;
        }
        console.log("Email successfully sent!");
      });
    });

NOTE: You will need to temporarily allow non-secure sign-in for your Gmail account for testing purposes here

Now, when you run the server using the command node index.js, you get the following result:

Server Running Cron Job

Email Automatically sent by Cron Job


In this article, we have seen an introduction to Cron jobs and how to use them in your Node.js applications. Here’s a link to the GitHub repository. Feel free to add a suggestion or leave a comment below.


Modern Distributed Application Deployment with Kubernetes and MongoDB Atlas

By Jay Gordon

Figure 1: MongoDB Atlas runs in most GCP regions

Storytelling is one of the parts of being a Developer Advocate that I enjoy. Sometimes the stories are about the special moments when the team comes together to keep a system running or build it faster. But there are less than glorious tales to be told about the software deployments I’ve been involved in. And for situations where we needed to deploy several times a day, now we are talking nightmares.

For some time, I worked at a company that believed that deploying to production several times a day was ideal for project velocity. Our team was working to ensure that advertising software across our media platform was always being updated and released. One of the issues was a lack of real automation in the process of applying new code to our application servers.

What both ops and development teams had in common was a desire for improved ease and agility around application and configuration deployments. In this article, I’ll present some of my experiences and cover how MongoDB Atlas and Kubernetes can be leveraged together to simplify the process of deploying and managing applications and their underlying dependencies.

Let’s talk about how a typical software deployment unfolded:

  1. The developer would send in a ticket asking for the deployment
  2. The developer and I would agree upon a time to deploy the latest software revision
  3. We would modify an existing bash script with the appropriate git repository version info
  4. We’d need to manually back up the old deployment
  5. We’d need to manually create a backup of our current database
  6. We’d watch the bash script perform this “Deploy” on about six servers in parallel
  7. Wave a dead chicken over my keyboard

Some of these deployments would fail, requiring a return to the previous version of the application code. This process to “rollback” to a prior version would involve me manually copying the repository to the older version, performing manual database restores, and finally confirming with the team that used this system that all was working properly. It was a real mess and I really wasn’t in a position to change it.

I eventually moved into a position which gave me greater visibility into what other teams of developers, specifically those in the open source space, were doing for software deployments. I noticed that — surprise! — people were no longer interested in doing the same work over and over again.

Developers and their supporting ops teams have been given keys to a whole new world in the last few years by utilizing containers and automation platforms. Rather than doing manual work required to produce the environment that your app will live in, you can deploy applications quickly thanks to tools like Kubernetes.

What’s Kubernetes?

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Kubernetes can help reduce the amount of work your team will have to do when deploying your application. Along with MongoDB Atlas, you can build scalable and resilient applications that stand up to high traffic or can easily be scaled down to reduce costs. Kubernetes runs just about anywhere and can use almost any infrastructure. If you’re using a public cloud, a hybrid cloud or even a bare metal solution, you can leverage Kubernetes to quickly deploy and scale your applications.

The Google Kubernetes Engine is built into the Google Cloud Platform and helps you quickly deploy your containerized applications.

For the purposes of this tutorial, I will upload our image to GCP and then deploy to a Kubernetes cluster so I can quickly scale up or down our application as needed. When I create new versions of our app or make incremental changes, I can simply create a new image and deploy again with Kubernetes.

Why Atlas with Kubernetes?

By using these tools together for your MongoDB Application, you can quickly produce and deploy applications without worrying much about infrastructure management. Atlas provides you with a persistent data-store for your application data without the need to manage the actual database software, replication, upgrades, or monitoring. All of these features are delivered out of the box, allowing you to build and then deploy quickly.

In this tutorial, I will build a MongoDB Atlas cluster where our data will live for a simple Node.js application. I will then turn the app and configuration data for Atlas into a container-ready image with Docker.

MongoDB Atlas is available across most regions on GCP so no matter where your application lives, you can keep your data close by (or distributed) across the cloud.


To follow along with this tutorial, you’ll need a few things to get started: a Google Cloud Platform account, a MongoDB Atlas account, and Docker installed on your workstation.

First, I will download the repository for the code I will use. In this case, it’s a basic record keeping app using MongoDB, Express, React, and Node (MERN).

bash-3.2$ git clone
Cloning into 'mern-crud'...
remote: Counting objects: 326, done.
remote: Total 326 (delta 0), reused 0 (delta 0), pack-reused 326
Receiving objects: 100% (326/326), 3.26 MiB | 2.40 MiB/s, done.
Resolving deltas: 100% (137/137), done.

cd mern-crud

Next, I will run npm install to get all the required npm packages installed for working with our app:

> uws@9.14.0 install /Users/jaygordon/work/mern-crud/node_modules/uws
> node-gyp rebuild > build_log.txt 2>&1 || exit 0

Selecting your GCP Region for Atlas

Each GCP region includes a set number of independent zones. Each zone has power, cooling, networking, and control planes that are isolated from other zones. For regions that have at least three zones (3Z), Atlas deploys clusters across three zones. For regions that only have two zones (2Z), Atlas deploys clusters across two zones.

The Atlas Add New Cluster form marks regions that support 3Z clusters as Recommended, as they provide higher availability. If your preferred region only has two zones, consider enabling cross-region replication and placing a replica set member in another region to increase the likelihood that your cluster will be available during partial region outages.

The number of zones in a region has no effect on the number of MongoDB nodes Atlas can deploy. MongoDB Atlas clusters are always made of replica sets with a minimum of three MongoDB nodes.

For general information on GCP regions and zones, see the Google documentation on regions and zones.

Create Cluster and Add a User

In the provided image below you can see I have selected the Cloud Provider “Google Cloud Platform.” Next, I selected an instance size, in this case an M10. Deployments using M10 instances are ideal for development. If I were to take this application to production immediately, I may want to consider using an M30 deployment. Since this is a demo, an M10 is sufficient for our application. For a full view of all of the cluster sizes, check out the Atlas pricing page. Once I’ve completed these steps, I can click the “Confirm & Deploy” button. Atlas will spin up my deployment automatically in a few minutes.

Let’s create a username and password for our database that our Kubernetes deployed application will use to access MongoDB.

  • Click “Security” at the top of the page.
  • Click “MongoDB Users”
  • Click “Add New User”
  • Click “Show Advanced Options”
  • We’ll then add a user “mernuser” for our mern-crud app that only has access to a database named “mern-crud” and give it a complex password. We’ll specify readWrite privileges for this user:

Click “Add User”

Your database is now created and your user is added. You still need your connection string, and you need to whitelist access via the network.

Connection String

Get your connection string by clicking “Clusters” and then clicking “CONNECT” next to your cluster details in your Atlas admin panel. After selecting connect, you are provided several options to use to connect to your cluster. Click “connect your application.”

Options for the 3.6 or the 3.4 versions of the MongoDB driver are given. I built mine using the 3.4 driver, so I will just select the connection string for this version.

I will typically paste this into an editor and then modify the info to match my application credentials and my database name:

I will now add this to the app’s database configuration file and save it.
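For reference, a 3.4-driver Atlas connection string generally follows this shape; every value below is a placeholder, not a real host or credential:

```
mongodb://mernuser:<PASSWORD>@cluster0-shard-00-00-xxxxx.gcp.mongodb.net:27017,cluster0-shard-00-01-xxxxx.gcp.mongodb.net:27017,cluster0-shard-00-02-xxxxx.gcp.mongodb.net:27017/mern-crud?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin
```

It lists each member of the three-node replica set, the database name (mern-crud here), and the SSL, replica set, and auth-source options.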

Next, I will package this up into an image with Docker and ship it to Google Kubernetes Engine!

Docker and Google Kubernetes Engine

Get started by creating an account at Google Cloud, then follow the quickstart to create a Google Kubernetes Project.

Once your project is created, you can find it within the Google Cloud Platform control panel:

It’s time to create a container on your local workstation:

Set the PROJECT_ID environment variable in your shell by retrieving the pre-configured project ID on gcloud with the commands below:

export PROJECT_ID="jaygordon-mongodb"
gcloud config set project $PROJECT_ID
gcloud config set compute/zone us-central1-b

Next, place a Dockerfile in the root of your repository with the following:

FROM node:boron

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

COPY . /usr/src/app

# Install the app's dependencies inside the image
RUN npm install

EXPOSE 3000

CMD ["npm", "start"]

To build the container image of this application and tag it for uploading, run the following command:

bash-3.2$ docker build -t${PROJECT_ID}/mern-crud:v1 .
Sending build context to Docker daemon  40.66MB
Successfully built b8c5be5def8f
Successfully tagged${PROJECT_ID}/mern-crud:v1

Upload the container image to the Container Registry so we can deploy to it:

bash-3.2$ gcloud docker -- push${PROJECT_ID}/mern-crud:v1
The push refers to repository [${PROJECT_ID}/mern-crud]

Next, I will test it locally on my workstation to make sure the app loads:

docker run --rm -p 3000:3000${PROJECT_ID}/mern-crud:v1
> mern-crud@0.1.0 start /usr/src/app
> node server
Listening on port 3000

Great: pointing my browser to http://localhost:3000 brings me to the site. Now it’s time to create a Kubernetes cluster and deploy our application to it.

Build Your Cluster With Google Kubernetes Engine

I will be using the Google Cloud Shell within the Google Cloud control panel to manage my deployment. The Cloud Shell comes with all required applications and tools installed, allowing me to deploy the Docker image I uploaded to the image registry without installing any additional software on my local workstation.

Now I will create the Kubernetes cluster where the image will be deployed, which will help bring our application to production. I will include three nodes to ensure uptime of our app.

Set up our environment first:

export PROJECT_ID="jaygordon-mongodb"
gcloud config set project $PROJECT_ID
gcloud config set compute/zone us-central1-b

Launch the cluster

gcloud container clusters create mern-crud --num-nodes=3

When completed, you will have a three node kubernetes cluster visible in your control panel. After a few minutes, the console will respond with the following output:

Creating cluster mern-crud...done.
Created [].
To inspect the contents of your cluster, go to:
kubeconfig entry generated for mern-crud.
mern-crud  us-central1-b  1.8.7-gke.1  n1-standard-1  1.8.7-gke.1   3          RUNNING

Just a few more steps left. Now we’ll deploy our app with kubectl to our cluster from the Google Cloud Shell:

kubectl run mern-crud --image=${PROJECT_ID}/mern-crud:v1 --port 3000

The output when completed should be:

jay_gordon@jaygordon-mongodb:~$ kubectl run mern-crud --image=${PROJECT_ID}/mern-crud:v1 --port 3000
deployment "mern-crud" created
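For what it’s worth, kubectl run generates a Deployment behind the scenes. The equivalent manifest, with names assumed to match the command above, would look roughly like this (apply it with kubectl apply -f deployment.yaml):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mern-crud
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mern-crud
  template:
    metadata:
      labels:
        app: mern-crud
    spec:
      containers:
        - name: mern-crud
          image:$PROJECT_ID/mern-crud:v1   # substitute your project ID
          ports:
            - containerPort: 3000
```

A checked-in manifest like this makes the deployment repeatable, which is exactly the kind of automation this article argues for.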

Now review the application deployment status:

jay_gordon@jaygordon-mongodb:~$ kubectl get pods
NAME                         READY     STATUS    RESTARTS   AGE
mern-crud-6b96b59dfd-4kqrr   1/1       Running   0          1m

We’ll create a load balancer in front of the three nodes in the cluster so our application can be served properly on the web:

jay_gordon@jaygordon-mongodb:~$ kubectl expose deployment mern-crud --type=LoadBalancer --port 80 --target-port 3000 
service "mern-crud" exposed

Now get the IP of the load balancer so that, if needed, it can be bound to a DNS name and you can go live!

jay_gordon@jaygordon-mongodb:~$ kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
kubernetes   ClusterIP              443/TCP        11m
mern-crud    LoadBalancer   80:30684/TCP   2m

A quick curl test shows me that my app is online!

bash-3.2$ curl -v
* Rebuilt URL to:
*   Trying
* Connected to ( port 80 (#0)
> GET / HTTP/1.1
> Host:
> User-Agent: curl/7.54.0
> Accept: */*
< HTTP/1.1 200 OK
< X-Powered-By: Express

I have added some test data and as we can see, it’s part of my deployed application via Kubernetes to GCP and storing my persistent data in MongoDB Atlas.

When I am done working with the Kubernetes cluster, I can destroy it easily:

gcloud container clusters delete mern-crud

What’s Next?

You’ve now got all the tools in front of you to build something HUGE with MongoDB Atlas and Kubernetes.

Check out the rest of the Google Kubernetes Engine’s tutorials for more information on how to build applications with Kubernetes. For more information on MongoDB Atlas, click here.

Have more questions? Join the MongoDB Community Slack!

Continue to learn via high quality, technical talks, workshops, and hands-on tutorials. Join us at MongoDB World.


Getting Started with Live Coding in Visual Studio Code w/ Live Share

By James Quick

Live Share for Visual Studio Code is HOT OFF THE PRESS and publicly available as of May 7th, 2018! What? You’ve been living under a rock and haven’t heard of it? Don’t worry, let me fill you in.

Live Share is an extension for VS Code that enables real-time collaboration between developers.

Live Share is an extension for VS Code that enables real-time collaboration between developers. As you’ll see in a second, you’ll have the ability to share a “session” with someone else, allowing them to edit code as well as share a server and debugging session. I’ve seen real-time collaboration in action with Cloud 9 before, but to have this now be a part of my favorite text editor is extremely exciting! So, let’s go ahead and take a look at how it works.

TLDR: How do I set up Live Share in 4 steps?

  1. Install the Live Share extension
  2. Open the command palette
  3. Start Live Share
  4. Share Link

Keep in mind that as you progress through this article you will see screenshots from two different computers to demonstrate a working example of how Live Share works. For clarity, I’ll refer to the person who sends the session invite and the person who accepts it as the inviter and the invitee, respectively.

Downloading the Extension

The very first step to taking advantage of Live Share is to install it just like any other extension. In VS Code, you can open up the extensions tab, search for Live Share, click install, and then reload when the install is finished.

After that, you’ll need to sign in. As of now, you can choose to log in with a Microsoft account or GitHub. Because I needed two computers, I logged in with Microsoft on one and GitHub on the other.

To sign in, use the “Sign In” button in the bottom status bar with the person icon.

Sharing and Joining a Session

After you’re all signed in, you’re ready to create a session to share with others. One thing to keep in mind when doing so is to only share live sessions with people you trust. As you’ll see, you will be granting users certain access that can be detrimental if used incorrectly.

Only share live sessions with people you trust!

Start by clicking your username in the bottom status bar and choose “Start Collaboration Session” from the available options. Alternatively, you can open the Command Palette ( CMD + SHIFT + P on Mac, CTRL + SHIFT + P on Windows) and type “Start Collaboration Session”

You’ll be notified that your invite link has been copied to the clipboard.

To invite someone to your session, all you need to do is share this link with them. You can text, email, or read it verbatim over the phone… whatever works for you.

From the invitee’s point of view, to accept an invite, click your username in the bottom status bar and choose “Join Collaborative Session”. Alternatively, as above, you can open the Command Palette ( CMD + SHIFT + P on Mac, CTRL + SHIFT + P on Windows) and type “Join Collaborative Session”.

When prompted, enter the collaborative session link sent to you by the inviter.

The inviter will be notified when someone joins the session.

By default, on joining a session the invitee will automatically follow the inviter as he/she navigates code. This will happen until the invitee makes a move themselves. From there, both sides are free to navigate and edit as they see fit. Additionally, both sides will see a marker showing where the other editor is as shown here.

One neat trick is to select a piece of code for it to be highlighted on the other’s computer as well. You can use this to draw their attention to a section of code for example.

Limiting Collaborators

By default, when sharing a session with someone, they will have access to edit all of the files within the workspace. Obviously, this may not be ideal. It’s one thing to trust someone to edit a few files, but opening up your entire workspace to them might be a little nerve-racking. Thankfully, Live Share gives you the ability to limit what files collaborators can view and edit.

Thankfully, Live Share gives you the ability to limit what files collaborators can view and edit.

To limit collaborators, create a .vsls.json file. The basic configuration will look something like this.

{
    "$schema": "",
    "gitignore": "none",
    "excludeFiles": [],
    "hideFiles": []
}

The two keys we care most about are “excludeFiles” and “hideFiles”. excludeFiles is an array of file names that you don’t want users to ever have access to; they will never be able to view those files. hideFiles is very similar, except collaborators will be able to see “hidden” files under certain circumstances. Click here for more details about security.
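For illustration, a filled-in .vsls.json might look something like the sketch below. The $schema URL and the specific file names here are assumptions for the example, not values from this article; substitute your own.

```json
{
    "$schema": "http://json.schemastore.org/vsls",
    "gitignore": "none",
    "excludeFiles": [".env", "secrets.json"],
    "hideFiles": ["node_modules", "*.log"]
}
```

With this configuration, collaborators could never open .env or secrets.json, while node_modules and log files would simply stay hidden from their file explorer.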

Share a Server

If you’ve ever tried, you know it’s challenging to share with others when working on an application locally. Sure, you could check the code into Github and have the other person clone, but then they still have to install dependencies and start the server themselves. Wouldn’t it be amazing if you could start the server locally and magically the other person gets access to the same running application?! Well, now you can.

As the inviter, start your server as normal (ex. nodemon server.js). Then, click the username in the bottom status bar and choose ‘Share Server’. Alternatively, open the Command Palette and type ‘Share Server’.

As the invitee, you can then navigate to the correct localhost URL. In my case, I’m running my sample application, VSC Snippets (a GUI for creating code snippets for VS Code), at port 3000.

How incredible is that? Don’t ask me how it works because, honestly, I have no idea, but it’s super cool!

Share a Terminal

Imagine a scenario where you’re trying to teach someone commands in the terminal. You know, navigating the file system, working with npm, starting your dev server, etc. As with the features above, this becomes much more complicated when done remotely. Good thing Live Share decided to throw in terminal sharing for good measure.

Sharing a terminal is similar to sharing your server. We’ve been through this before, so just find the option “Share Terminal” and click it. Then, you’ll need to choose between read only and read/write permissions for collaborators.

You can share a terminal granting either read only or read/write access to collaborators

After the terminal has been shared, collaborators will be able to view (and edit, if applicable) the terminal. The screenshot below shows the invitee’s view of the terminal after the inviter echoed a message to the screen.

From here, as mentioned above, you could show the invitee how to start a development server, build system, or anything else that might be relevant.

Shared Debugging

One last thing you can do with this awesome extension is share a debugging session, providing yet another amazing way to teach someone. Honestly, debugging is not necessarily something that can be memorized; you just get better with experience. So walking through a debugging session with someone more junior is a great opportunity. Just talking through your thought process as you go could be invaluable to someone looking to learn.

To share a debug session, you’ll first need to start a debug session. Makes sense, right? I’ll cover the basic idea but refer you to the VS Code docs for further reference.

I am going to be debugging a Node App using the “Attach to Process Id” configuration. This means that after running my server, I then run the debug configuration that will connect to the existing running application.

After starting the application and then starting the debug configuration, below you can see I hit a breakpoint in my code.

The cool thing about sharing debugging is that it happens automatically. After the debugging session is started on the inviter side, the invitee will then automatically be a part of the same session.


Man, that was a lot. There are a lot of great features included in this extension that you should be really excited about. With Visual Studio Code quickly becoming the de facto editor for web developers, this extension could potentially change the way we approach teaching, mentoring, collaboration, debugging, and more. I’m super excited to take advantage of all of these features, and I hope you are too!

As you get started and give it a try, let me know how you like it, cool things you are doing, etc. You can find me on Twitter, @jamesqquick.


Lazy Load Animal Memes with Intersection Observer

By William Imoh

Last week, in code challenge #7, we delved into lazy loading images for increased performance of web pages. Yet to take the challenge? You can do so here before looking through the spoiler below.

Once completed, you can post your entry in the comments section of the post, post it on Twitter with the hashtag #ScotchChallenge so we can see it, or post it in the #codechallenge channel of the Scotch Slack.

The Challenge

When developing websites and web pages, every byte counts, and in a bid to reduce page size and load time, several techniques have been developed. Lazy loading is one such technique. Here, a much lower-quality version of an image is rendered on page load, reducing the overall size of the page and increasing load speed. Only when the page scrolls to the lower-quality image is the main (high-quality) image loaded, hence ‘lazy loading’.
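Stripped of the observer machinery, the swap at the heart of this technique is tiny: copy the full-size URL out of a data attribute into src. Here is a minimal sketch of just that step; the element below is a plain object standing in for an <img> node, so the snippet runs anywhere.

```javascript
// Hedged sketch: the core placeholder-to-full-image swap.
// `el` stands in for an <img> DOM node carrying a data-src attribute.
function swapToFullImage(el) {
  if (el.dataset && el.dataset.src) {
    el.src = el.dataset.src; // in a browser, this assignment starts the download
  }
  return el;
}

// A plain object standing in for an <img> element:
const img = { src: 'photo_low_q.webp', dataset: { src: 'photo_full.webp' } };
swapToFullImage(img);
console.log(img.src); // "photo_full.webp"
```

Everything else in the challenge is about deciding *when* to perform this swap, which is where the Intersection Observer comes in.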

In this challenge, we will be lazy loading images of animal memes on a page using Intersection Observer. These images are sourced from Buzzfeed. We are provided with a base code used to quickly complete the challenge.

The Base

The base codepen consists of HTML and CSS code providing structure and style for the page. No JavaScript code was provided.


The HTML structure consists of a navbar, a div for the images and a scroll to top button. Bulma classes were specifically used to style these individual elements. Here is the parent div of each image:

<div class="box image is-5by4">
  <img src=",w_10/v1526593811/15_eegbn0.webp" data-src=",w_533/v1526593811/15_eegbn0.webp" alt="" id="top">
</div>

Each image is stored on the Cloudinary CDN and the URL dynamic transformation feature is used to serve a much-reduced version of the image initially. The data-src attribute is assigned the main image to be loaded when in the viewport.


As earlier stated, Bulma classes were used to style the HTML page, however, minimal styling was introduced to design each image card as well as the scroll to top button.

.container {
  width: 25%;
}

.image {
  margin: 30px auto;
}

.scroll-top {
  position: fixed;
  bottom: 10%;
  right: 5%;
}

The Technique

Techniques such as getBoundingClientRect() or in-view.js would be cool to use; however, a modern API shipped with the browser, the Intersection Observer API, allows us to trigger an action when an element enters the viewport (or a parent element).

The Intersection Observer API provides a way to asynchronously observe changes in the intersection of a target element with an ancestor element or with a top-level document’s viewport. – MDN

In this challenge, an Intersection Observer will be used to trigger a callback function once an image placeholder enters the viewport. This callback function fetches the higher quality image and replaces the placeholder.

Define Intersection Observer Options

First, we assign each element to be watched to a variable using the querySelectorAll() method. Following that, we create the config options for the observer:

const images = document.querySelectorAll('img');

const options = {
  root: null,
  rootMargin: '0px',
  threshold: 1.0
};

The root property holds the element to be observed, and setting it to null implies observing the viewport. As the name suggests, rootMargin is a margin set around the root element; this way you can create a smaller area within the root’s dimensions to be observed.

threshold varies from 0 to 1 and is the visibility level of the image in the viewport before the callback function is triggered. A 0 means not visible, 0.25 means 25% visible, while 1 represents 100% visibility in the viewport.
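To build intuition for what threshold is compared against, here is a deliberately simplified sketch of the ratio the browser computes. This is not the real API; it only considers the vertical axis, whereas the actual Intersection Observer works on full rectangles.

```javascript
// Simplified illustration (vertical axis only) of the intersection ratio
// that `threshold` is compared against. Not the real API, just the idea.
function intersectionRatio(target, root) {
  const top = Math.max(target.top, root.top);
  const bottom = Math.min(target.bottom, root.bottom);
  const visibleHeight = Math.max(0, bottom - top);
  return visibleHeight / (target.bottom - target.top);
}

// An image 200px tall whose lower half is still below a 600px viewport:
const ratio = intersectionRatio({ top: 500, bottom: 700 }, { top: 0, bottom: 600 });
console.log(ratio); // 0.5 - a threshold of 1.0 would not fire yet; 0.25 would have
```

A threshold of 1.0, as in our options, means the callback fires only once the whole image is inside the viewport.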

Define Image Fetch and Update Function

To fetch the high-res image, we create a function which takes the URL of the image to be loaded as a parameter. Using a Promise, we create an image instance and assign its src property to the url parameter, while the onload and onerror properties are assigned the resolve and reject callbacks respectively.

const fetchImage = url => {
  return new Promise((resolve, reject) => {
    const newImage = new Image();
    newImage.src = url;
    newImage.onload = resolve;
    newImage.onerror = reject;
  });
};

Next, we create a function to update the image once loaded. This is done by simply calling the fetchImage function with dataset.src as its parameter. Once the high-res image has been fetched, it is assigned to the src property of the image passed to updateImage.

const updateImage = image => {
  let src = image.dataset.src;
  fetchImage(src).then(() => {
    image.src = src;
  });
};

Now we have our higher quality images ready to be loaded.

Create The Callback Function

While we have the function to fetch the higher quality image, it is useless if it isn’t called on the observed image elements (otherwise known as entries). We will proceed to create the callback function, which receives the entries and the observer itself as parameters.

const callbackFunction = (entries, observer) => {
  entries.forEach(entry => {
    if (entry.intersectionRatio > 0) {
      updateImage(entry.target);
      observer.unobserve(entry.target); // stop watching once swapped
    }
  });
};

The forEach loop goes through each observed entry, verifies that the element is currently visible in the viewport using the intersectionRatio property, and calls the updateImage function on the entry’s target.

Create The Intersection Observer

We create the Observer instance and pass it the callback function and the config options as arguments.

const observer = new IntersectionObserver(callbackFunction, options)

Lastly, we apply the observer on the actual image elements using a forEach method:

images.forEach(img => {
  observer.observe(img);
});

Running the script, the final product looks like:

I bet the fourth meme is pretty hilarious, lol.


So far, we have implemented lazy loading on the web page using the Intersection Observer API, drastically reducing the load time of the page. Feel free to try this with more use cases, including pop-up modals and infinite scroll. Also, leave your feedback in the comments section while we wait for the next challenge. Happy coding!


Capture and Report JavaScript Errors with window.onerror

By SergiuHKR

onerror is a special browser event that fires whenever an uncaught JavaScript error has been thrown. It’s one of the easiest ways to log client-side errors and report them to your servers. It’s also one of the major mechanisms by which Sentry’s client JavaScript integration (raven-js) works.

You listen to the onerror event by assigning a function to window.onerror:

window.onerror = function(msg, url, lineNo, columnNo, error) {
  // ... handle error ...
  return false;
};

When an error is thrown, the following arguments are passed to the function:

  • msg – The message associated with the error, e.g. “Uncaught ReferenceError: foo is not defined”
  • url – The URL of the script or document associated with the error, e.g. “/dist/app.js”
  • lineNo – The line number (if available)
  • columnNo – The column number (if available)
  • error – The Error object associated with this error (if available)

The first four arguments tell you in which script, at what line, and at what column the error occurred. The final argument, the Error object, is perhaps the most valuable. Let’s learn why.
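To see how those five arguments fit together, here is a hedged sketch of a handler you might assign to window.onerror. Nothing about the signature is browser-specific, so the sketch exercises it directly as a plain function; the argument values are made up for the example.

```javascript
// Hedged sketch of a window.onerror-style handler. In a browser you would
// assign this to window.onerror; here we just call it directly.
const reports = [];

function onErrorHandler(msg, url, lineNo, columnNo, error) {
  reports.push({
    msg: msg,
    where: url + ':' + lineNo + ':' + columnNo,
    // The Error object (and its stack) may be missing in older browsers:
    stack: error && error.stack ? error.stack : '(no stack available)'
  });
  return false; // returning false lets the browser's default handling run too
}

onErrorHandler(
  'Uncaught ReferenceError: foo is not defined',
  '/dist/app.js', 42, 13,
  new ReferenceError('foo is not defined')
);

console.log(reports[0].where); // "/dist/app.js:42:13"
```

Note the defensive check on the last argument: as discussed later, not every browser passes an Error object at all.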

The Error object and error.stack

At first glance the Error object isn’t very special. It contains three standardized properties: message, fileName, and lineNumber. These are redundant values that are already provided to you via window.onerror.

The valuable part is a non-standard property: Error.prototype.stack. This stack property tells you at what source location each frame of the program was when the error occurred. The error stack trace can be a critical part of debugging. And despite being non-standard, this property is available in every modern browser.
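You can verify this in any modern engine, Node included: force an error, catch it, and inspect its stack.

```javascript
// Quick demonstration that a caught Error carries a stack string.
let captured;
try {
  JSON.parse('{not valid json');   // force a SyntaxError
} catch (e) {
  captured = e;
}

console.log(captured.name);         // "SyntaxError"
console.log(typeof captured.stack); // "string"
```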

Here’s an example of the Error object’s stack property in Chrome 46:

"Error: foobar\n    at new bar (<anonymous>:241:11)\n    at foo (<anonymous>:245:5)\n    at <anonymous>:250:5\n    at <anonymous>:251:3\n    at <anonymous>:267:4\n    at callFunction (<anonymous>:229:33)\n    at <anonymous>:239:23\n    at <anonymous>:240:3\n    at Object.InjectedScript._evaluateOn (<anonymous>:875:140)\n    at Object.InjectedScript._evaluateAndWrap (<anonymous>:808:34)"

Hard to read, right? The stack property is actually just an unformatted string.

Here’s what it looks like formatted:

Error: foobar
  at new bar (<anonymous>:241:11)
  at foo (<anonymous>:245:5)
  at callFunction (<anonymous>:229:33)
  at Object.InjectedScript._evaluateOn (<anonymous>:875:140)
  at Object.InjectedScript._evaluateAndWrap (<anonymous>:808:34)

Once it’s been formatted, it’s easy to see how the stack property can be critical in helping to debug an error.

There’s just one snag: the stack property is non-standard, and its implementation differs among browsers. For example, here’s the same stack trace from Internet Explorer 11:

Error: foobar
  at bar (Unknown script code:2:5)
  at foo (Unknown script code:6:5)
  at Anonymous function (Unknown script code:11:5)
  at Anonymous function (Unknown script code:10:2)
  at Anonymous function (Unknown script code:1:73)

Not only is the format of each frame different, the frames also have less detail. For example, Chrome identifies that the new keyword has been used and has greater insight into eval invocations. And this is just IE 11 vs. Chrome; other browsers similarly have varying formats and levels of detail.

Luckily, there are tools out there that normalize the stack property so that it is consistent across browsers. For example, raven-js uses TraceKit to normalize error strings. There’s also stacktrace.js and a few other projects.
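The gist of what those libraries do can be sketched in a few lines: recognize each browser's frame format with a regular expression and map it onto one normalized shape. Here is a toy version handling just the two formats shown above; real tools like TraceKit cover many more cases and browsers.

```javascript
// Toy stack-frame normalizer for the two formats shown above.
// Chrome:  "    at foo (<anonymous>:245:5)"
// IE 11:   "   at foo (Unknown script code:6:5)"
function parseFrame(line) {
  const m = /at\s+(?:new\s+)?(.+?)\s+\((.*):(\d+):(\d+)\)/.exec(line);
  if (!m) return null; // unrecognized frame format
  return { fn: m[1], source: m[2], line: Number(m[3]), column: Number(m[4]) };
}

const chromeFrame = parseFrame('    at foo (<anonymous>:245:5)');
const ieFrame = parseFrame('   at foo (Unknown script code:6:5)');
console.log(chromeFrame); // { fn: 'foo', source: '<anonymous>', line: 245, column: 5 }
console.log(ieFrame);     // { fn: 'foo', source: 'Unknown script code', line: 6, column: 5 }
```

Once every frame is in the same shape, your reporting backend no longer needs to care which browser produced the trace.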

Browser compatibility

window.onerror has been available in browsers for some time — you’ll find it in browsers as old as IE6 and Firefox 2.

The problem is that every browser implements window.onerror differently, particularly in how many arguments are passed to the onerror listener and the structure of those arguments.

Here’s a table of which arguments are passed to onerror in most browsers:

Browser               Message   URL   lineNo   colNo   errorObj
IE 11                 ✓         ✓     ✓        ✓       ✓
IE 10                 ✓         ✓     ✓        ✓
IE 9, 8               ✓         ✓     ✓
Safari 10 and up      ✓         ✓     ✓        ✓       ✓
Safari 9              ✓         ✓     ✓        ✓
Android Browser 4.4   ✓         ✓     ✓

It’s probably not a surprise that Internet Explorer 8, 9, and 10 have limited support for onerror. But you might be surprised that Safari only added support for the error object in Safari 10 (released in 2016). Additionally, older mobile handsets that still use the stock Android browser (since replaced by Chrome Mobile) are still out there and do not pass the error object.

Without the error object, there is no stack trace property. This means that these browsers cannot retrieve valuable stack information from errors caught by onerror.

Polyfilling window.onerror with try/catch

But there is a workaround — you can wrap code in your application inside a try/catch and catch the error yourself. This error object will contain our coveted stack property in every modern browser.

Consider the following helper method, invoke, which calls a function on an object with an array of arguments:

function invoke(obj, method, args) {
  return obj[method].apply(this, args);
}

invoke(Math, 'max', [1, 2]); // returns 2

Here’s invoke again, this time wrapped in try/catch, in order to capture any thrown error:

function invoke(obj, method, args) {
  try {
    return obj[method].apply(this, args);
  } catch (e) {
    captureError(e); // report the error
    throw e;         // re-throw the error
  }
}

invoke(Math, 'highest', [1, 2]); // throws error, no method Math.highest

Of course, doing this manually everywhere is pretty cumbersome. You can make it easier by creating a generic wrapper utility function:

function wrapErrors(fn) {
  // don't wrap function more than once
  if (!fn.__wrapped__) {
    fn.__wrapped__ = function() {
      try {
        return fn.apply(this, arguments);
      } catch (e) {
        captureError(e); // report the error
        throw e;         // re-throw the error
      }
    };
  }
  return fn.__wrapped__;
}

var invoke = wrapErrors(function(obj, method, args) {
  return obj[method].apply(this, args);
});

invoke(Math, 'highest', [1, 2]); // no method Math.highest

Because JavaScript is single threaded, you don’t need to wrap every function, just the beginning of every new call stack.

That means you’ll need to wrap function declarations:

  • At the start of your application (e.g., in $(document).ready if you use jQuery)
  • In event handlers (e.g., addEventListener or jQuery’s $.fn.click)
  • In timer-based callbacks (e.g., setTimeout or requestAnimationFrame)

For example:

$(wrapErrors(function () { // application start

  doSynchronousStuff1(); // doesn't need to be wrapped

  setTimeout(wrapErrors(function () {
    doSynchronousStuff2(); // doesn't need to be wrapped
  }), 1000);

  $('.foo').click(wrapErrors(function () {
    doSynchronousStuff3(); // doesn't need to be wrapped
  }));

}));

If that seems like a heck of a lot of work, don’t worry! Most error reporting libraries have mechanisms for augmenting built-in functions like addEventListener and setTimeout so that you don’t have to call a wrapping utility every time yourself. And, yes, raven-js does this too.

Transmitting the error to your servers

Okay, so you’ve done your job — you’ve plugged into window.onerror, and you’re additionally wrapping functions in try/catch in order to catch as much error information as possible.

There’s just one last step: transmitting the error information to your servers. In order for this to work, you’ll need to set up some kind of reporting web service that will accept your error data over HTTP, log it to a file and/or store it in a database.

If this web service is on the same domain as your web application, just use XMLHttpRequest. In the example below, we use jQuery’s AJAX function to transmit the data to our servers:

function captureError(ex) {
  var errorData = {
    name: ex.name,       // e.g. ReferenceError
    message: ex.message, // e.g. x is undefined
    stack: ex.stack      // stacktrace string; remember, different per-browser!
  };

  $.ajax({
    type: 'POST',
    url: '/errors', // your reporting endpoint
    data: JSON.stringify(errorData),
    contentType: 'application/json'
  });
}

Note that, if you have to transmit your error across different origins, your reporting endpoint will need to support Cross Origin Resource Sharing (CORS).


If you’ve made it this far, you now have all the tools you need to roll your own basic error reporting library and integrate it with your application:

  • How onerror works, and what browsers it supports
  • How to use try/catch to capture stack traces where onerror is lacking
  • Transmitting error data to your servers

Of course, if you don’t want to bother with all of this, there are plenty of commercial and open-source tools that do all the heavy lifting of client-side error reporting for you. (Psst: you might want to try Sentry to debug JavaScript.)

That’s it! Happy error monitoring.