Monthly Archives: January 2018

Aligning Your Team on Microservices from the Start

By Jake Lumetta

When I first mentioned the concept of microservices to my engineering team, I was surprised by how quickly everyone researched the idea and then jumped to the conclusion that our monolith should be split up into tiny APIs that were each responsible for a model in our existing Rails application.

Leaping before looking

From the research I’d done, I knew that it was dangerous to build a bunch of microservices without careful consideration about size, boundaries, and other tradeoffs. But no one on my team seemed concerned about it.

What I found was that people on my team were jumping to conclusions based on shallow but dangerously firm notions of what a microservice was. They knew that microservices were small APIs pieced together to create whole systems. But they weren’t aware of the intricate tradeoffs and design considerations that can mean the difference between success and failure. They were pitching architectures with little ability to justify or reason about them.

Why does this happen? And what is a microservice, anyway?

What’s in a name?

For a profession that stresses the importance of naming things well, we’ve done ourselves a disservice with microservices. The problem is that there is nothing inherently “micro” about microservices.

Microservices do not have to be small.

Some are, but size is relative, and there’s no standard unit of measure across organizations. A “small” service at one company might be a million lines of code, while at another it might be far less.

The misconceptions don’t just affect people who want to use microservices; they also stoke the fires of those who are dismayed by the industry hopping on the microservices bandwagon without a deep understanding of the concepts.

What is a Microservice?

There’s a lot of ambiguity around what microservices are in part because no precise definition exists. Like Agile, microservices are a collection of broad concepts rather than concrete practices.

The term “microservice” was discussed at a workshop of software architects near Venice in May 2011 to describe what the participants saw as a common architectural style that many of them had recently been exploring. In May 2012, the same group decided on “microservices” as the most appropriate name.

Today’s leading definitions are fairly well aligned:

  • Microservices are small, autonomous services that work together. – Sam Newman (ThoughtWorks)
  • Fine grained SOA architecture done the UNIX way. – James Lewis (ThoughtWorks)
  • Loosely coupled service oriented architecture with bounded contexts. – Adrian Cockcroft (Netflix)

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.

These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.

The problem with these definitions is that while they are helpful for introducing the idea of microservices, they are not very helpful when it comes time to put microservices into practice.

Using these definitions, how would you determine whether it makes more sense to have 10 tiny services versus five medium-sized ones?

Ambiguous descriptions of microservices aren’t useful other than as an introduction. When it comes time to put microservices into practice, you need to find other ways to align your team.

Achieving alignment

The most important thing when talking about microservices on a team is to ensure that you are grounded in a common starting point.

How do you align your team when no precise definitions of microservices exist?

On one team I worked on, we tried not to get hung up on definitions; instead, we first focused on defining the benefits we were trying to achieve by adopting microservices:

Shipping software faster

Our main application was a large codebase with several small teams of developers trying to build features for different purposes. This meant that every change had to try to satisfy the different groups. For example, a database change that was only serving one group would have to be reviewed and accepted by others that didn’t have as much context. This was tedious and slowed us down.

Having different groups of developers share the same codebase also meant that the code continually grew more complex in unintended ways. As the codebase grew larger, no one on the team could own it and make sure all the parts were organized and fit together optimally.

With a microservices architecture, we hoped to divide our code up so that different teams of developers could fully own their parts. This would enable teams to innovate much more quickly without tedious design, review, and deployment processes.

Flexibility with technology choices

Our main large application was built with Ruby on Rails and a custom JavaScript framework with complex build processes.

Several parts of our application were hitting major performance issues that were difficult to fix and were bringing down the rest of the application with them. We saw an opportunity to rewrite these parts of our application using a better approach, but our codebase was so entangled with the affected areas that this felt too big and costly to do. As time went on, our teams grew frustrated with the feeling of being trapped in a codebase that was too big and expensive to fix or replace.

By adopting microservices architecture, we hoped that keeping individual services smaller would mean that the cost to replace them with a better implementation would be much easier to manage. We also hoped to be able to pick the right tool for each job rather than having to go with a one-size-fits-all approach.

We’d have the flexibility to use multiple technologies across our different applications as we saw fit.

If a team wanted to use something other than Ruby for better performance, or switch from our custom JavaScript framework to React, they could do so.

Honestly answering tough questions

In addition to outlining the benefits we hoped to achieve, we also made sure we were being realistic about the costs and challenges associated with building and managing microservices.

Microservices involve distributed systems which introduce a whole host of concerns such as network latency, fault tolerance, transactions, unreliable networks, and asynchronicity.

Once we defined the benefits and costs of microservices, we focused on the core problems we were trying to solve.

  • How would having more services help us ship software faster in the next 6-12 months?
  • Were there strong technical advantages to using a specific tool for a portion of our system?
  • Did we foresee wanting to replace one of the systems with a more appropriate one down the line?
  • How did we want to structure our teams around services as we hired more people?
  • Was the productivity gain from having more services worth the foreseeable costs?

Decision playbook

Our experiences taught us a great deal about achieving alignment on the seemingly unwieldy topic of microservices. In summary, here are the recommended steps for aligning a team that is jumping into microservices:

  1. Learn about microservices while agreeing that there is no “right” definition.
  2. Discuss and memorialize your anticipated benefits and costs of adopting microservices.
  3. Avoid too eagerly hopping on the microservices bandwagon; be open to creative ideas and spirited debate about how best to architect your systems. Stay rooted in the benefits and costs you have previously identified.
  4. Focus on making sure the team has a concretely defined set of common goals to work off of. It’s more valuable to discuss and define what you’d like to achieve with microservices than it is to try and pin down what a microservice actually is.

Definitions matter, but when it comes to microservices, trying to settle on a precise definition isn’t necessarily a good use of your team’s valuable time. Instead, focus on functionality, costs, and benefits. Keeping your discussion about microservices grounded in what you’re actually trying to achieve will allow you to determine whether the microservice architectural style is right for your team.

For more content like this, check out our free eBook, Microservices for Startups.

Source:: scotch.io

Why Wix Code Uses JavaScript

By Chris Sevilleja

Wix has many tools and features available to its site owners. Wix Code gives you the ability to add your own code to a Wix site.

We chose JavaScript as the language we support in both the front-end and the backend, and as the language of our APIs.

JavaScript in the Front-end

“Any application that can be written in JavaScript, will eventually be written in JavaScript.”

— Atwood’s Law, Jeff Atwood

Wix Code lets you add your own custom JavaScript to control how the front-end of your site behaves. We chose JavaScript because it is arguably the standard language for front-end web development.

JavaScript lets you add behaviors to your site that respond to user actions, without calling the server or loading a new page. This lets you implement element animations that give your site a more engaging UX.

You can also add logic to your site like client-side validation, which gives your users a more immediate response to their input.

JavaScript in the Backend with Node.js

“Node.js enables JavaScript to be used for server-side scripting… Consequently, Node.js has become one of the foundational elements of the ‘JavaScript everywhere’ paradigm, allowing web application development to unify around a single programming language, rather than rely on a different language for writing server-side scripts.”
—Wikipedia

Node.js is a strong contender as the standard for using JavaScript in the backend. We decided to use Node.js in our backend. This means that you only have one language to learn to completely customize your site’s functionality both front-end and backend.

Earlier we mentioned using JavaScript to create client-side validation in your site. While you might add client-side validation to make your site respond more quickly to user actions, you may still want to run a validation check in the backend before actually submitting data to your database. Since you use JavaScript in your backend, you can now reuse your front-end code in the backend wherever you need to.
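
For illustration, here is a minimal sketch of that kind of reuse. The file names, import paths, and the isValidEmail helper are hypothetical examples, not part of the Wix APIs:

// public/validation.js -- a hypothetical shared module
export function isValidEmail(email) {
  // naive pattern, for illustration only
  return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email);
}

// In front-end code, import the same helper for immediate feedback:
// import { isValidEmail } from 'public/validation';

// backend/contact.jsw -- validate again before touching the database
import { isValidEmail } from 'public/validation';

export function saveContact(email) {
  if (!isValidEmail(email)) {
    return Promise.reject('invalid email');
  }
  // ... insert the record into the database here
}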

Working with JavaScript in Wix Code

Wix Code gives you everything you need to get up and coding in your site, including a built-in IDE, a hassle-free backend, and an easy way to let your front-end and backend communicate.

Wix Code includes a built-in online IDE that makes it easy to add code to your site with zero setup. This IDE works for your front-end files and lets you add page-specific code or code that you want to run on your entire site. You can also use the IDE to add code to backend files like data hooks, custom routers, web modules, and HTTP functions, or any other files you need.

The IDE makes coding easy because all your code is automatically integrated with your site. The IDE also includes professional tools to make coding easier, like code completion for elements and their properties or methods (type Ctrl + space after the $w selector or after a selected element). Along the way, the IDE also provides JSLint feedback to help you code using best practices.

To make debugging easier, any messages you log to your console in the front-end are displayed in the Developer Console when you preview your site in Wix’s Preview mode. Logs in backend code can be seen in your browser’s developer tools console as well.

Built-in Backend

Wix Code gives you a built-in backend. That means that you don’t have to worry about creating, managing, and monitoring a backend infrastructure. We take care of setting up and provisioning the servers and monitoring their performance. We also give you built-in database functionality that you can use with or without our wix-data API.

Most importantly, we give you web modules: an easy way to let your front-end code call your backend code.

Web Modules

Web modules enable you to write functions that run server-side in the backend, which you can then easily call in your front-end code. Wix Code handles all the client-server communication required to enable this access.

To help with debugging, you can log messages to the console in web module code. These logs are displayed in the browser’s console.

Web modules also have permissions settings, so you can be sure that no one can access or use your backend code in ways that you didn’t intend, either through your site’s functionality or using a browser’s developer tools.

In case you’re curious, here’s what’s going on behind the scenes. When you import a web module on the client-side, you get a proxy function to the web module function. This proxy function uses an XMLHttpRequest to invoke the function in the backend. The runtime listens to those invocations and calls the appropriate function. The arguments and return value are serialized using JSON.
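
As a rough sketch (a simplified illustration of the mechanism just described, not Wix’s actual implementation), such a proxy function behaves something like this:

// Simplified idea of a generated client-side proxy -- illustrative only
function proxyCall(functionName, args) {
  return new Promise((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open('POST', '/_webmodules/' + functionName); // hypothetical endpoint
    xhr.setRequestHeader('Content-Type', 'application/json');
    xhr.onload = () => resolve(JSON.parse(xhr.responseText)); // deserialize the return value
    xhr.onerror = () => reject(new Error('request failed'));
    xhr.send(JSON.stringify(args)); // serialize the arguments
  });
}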

Unlike regular modules that allow you to export functions, objects, and other items, you can only export functions from web modules. Web modules also always return a promise. This is true even if, in the implementation of the function, it returns a value. For example, if your web module function returns a value, like this:

// Filename: aModule.jsw (web modules need to have a .jsw extension)
export function multiply(factor1, factor2) {
     return factor1 * factor2;
}

When you call the function, it still returns a promise that resolves to the value. So you need to use it like this:

import {multiply} from 'backend/aModule';
multiply(4,5).then(function(product) {
     console.log(product);
});
// Logs: 20

JavaScript Standards Support

Wix Code supports writing code using the latest JavaScript standards. You can write code using ES2015 syntax both in the backend and the front-end, including:

  • Promises
  • async/await for working with Promises
  • Support for the new module syntax (import and export)
  • Arrow functions
  • Destructuring assignments
  • let and const declarations
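
As a quick illustration, here is a small sketch (plain JavaScript, not tied to any Wix API) that combines several of these features:

// const/let declarations, an arrow function, destructuring, and async/await
const delays = [100, 200];
const double = n => n * 2;

async function run() {
  const [first, second] = delays; // destructuring assignment
  let total = double(first) + double(second);
  return total; // an async function always returns a Promise
}

run().then(total => console.log(total)); // Logs: 600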

Browsers are gradually adopting the new JavaScript standards. But you don’t have to worry about which browsers will understand your code. Until the new JavaScript standards are fully implemented, Wix Code transpiles your code into ES5, so it can run in current browsers.

Wix Code also supports source maps, so even though the browser runs transpiled ES5 code, you can debug your ES2015 source code in your browser’s developer tools.

Optimization

Wix Code also takes care of efficiently delivering your code to the browser. Your code is minified and source files are combined (bundled) without you having to configure anything.

Code Examples

Wix Code offers a number of APIs that let you control your site’s functionality, including APIs for our database and custom routers, and for exposing your site’s functionality as a service, among others.

The $w API is used to work with the elements on your site’s pages. For example, here we create an onClick event handler for a button. The handler shows or hides an image based on the image’s current state. It also changes the label of the button accordingly.

$w.onReady( () => {
  $w("#showHideButton").onClick( (event, $w) => {
    if( $w("#myImage").hidden ) {
      $w("#myImage").show();
      event.target.label = "Hide";
    }
    else {
      $w("#myImage").hide();
      event.target.label = "Show";
    }
  } );
} );

The wix-fetch API is used for making HTTP requests to 3rd party services. For example, here we have a backend function that receives the name of a city. It sends a request to a weather API to get the weather in that city. When a response is received, the function returns the current temperature to the calling function.

import {fetch} from 'wix-fetch';  

export function getCurrentTemp(city) {
  const url = 'https://api.openweathermap.org/data/2.5/weather?q=';
  const key = '<api key placeholder>';

  let fullUrl = url + city + '&appid=' + key + '&units=imperial'; 

  return fetch(fullUrl, {method: 'get'})
    .then(response => response.json())
    .then(json => json.main.temp);
}

Learn More

The ability to add JavaScript to either your front-end or backend means that you can build a Wix site in entirely new ways.

For more information about the advanced features Wix Code offers, see our Resource Center.

Source:: scotch.io

Zero to Deploy: A Practical Guide to Static Sites with Gatsby.js

By William Imoh

Since the advent of the modern web, performance has been a key consideration when designing a website or a web app. When a website requires no server interaction whatsoever, what is hosted on the web is served to a user as is; this is referred to as a static site.

In this post, we will explain the basics of Gatsby.js and build out a simple static blog in the process. The blog will be deployed to the web using Netlify. Blog posts will be written in Markdown, and GraphQL will be used to query Markdown data into the blog. The final product will look like this:

What is a Static Site?

A static site is a site which contains fixed content. It is often preferred to dynamic websites (which require client-server interaction) because of challenges ranging from speed to security; the absence of a server removes the issues arising from those sources. Static site generators are tools used to develop static sites effectively and efficiently. Recently, the use of static sites has been on the rise, and tools and technologies such as Nuxt, Metalsmith, Jekyll, and Gatsby are taking center stage.

Introducing Gatsby.js

Gatsby is simply a robust and fast static site generator which uses React.js to render static content on the web. Content is written as React components and is rendered at build time to the DOM as static HTML, CSS, and JavaScript files. By default, Gatsby builds a PWA (Progressive Web App). Like most static site generators, Gatsby requires plugins to either extend its functionality or modify existing functionality.

Gatsby is robust in that the static content rendered can be sourced from a large number of sources and formats, including Markdown, CSV, and CMSs like WordPress and Drupal. All that is required are plugins to handle the data transformation. Plugins in Gatsby fall into three categories:

  • Functional Plugins: These plugins simply extend the abilities of Gatsby. An example is gatsby-plugin-react-helmet, which allows the manipulation of the head of our document.
  • Source Plugins: These plugins ‘find’ files in a Gatsby project and create File Nodes for each of these files; the File Nodes can then be manipulated by transformer plugins. An example is gatsby-source-filesystem, which ‘sources’ files in the filesystem of a Gatsby project and creates File Nodes containing information about each file.
  • Transformer Plugins: As we saw earlier, data in Gatsby can come from multiple sources, and transformer plugins are responsible for converting these files into formats recognizable by Gatsby. An example is the gatsby-transformer-remark plugin, which converts Markdown File Nodes from the filesystem into MarkdownRemark nodes that can be utilized by Gatsby. Other plugins exist for various data sources and you can find them here.

Prerequisites

To build out this blog, knowledge of HTML, CSS, and JavaScript is required, with a focus on ES6 syntax and JavaScript classes. Basic knowledge of React and GraphQL is also an advantage.

Installation

Since this is a Node.js project, Node and its package manager npm are required. Verify that they are installed on your machine by checking the current version of both tools with:

node -v && npm -v

Otherwise, install Node from here.

Gatsby offers a powerful CLI tool for a faster build of static sites. The Gatsby CLI installs packages known as ‘starters’. These starters come as pre-packaged projects with essential files to speed up the development process of the static site. Install the Gatsby CLI with:

npm install -g gatsby-cli

This installs the CLI tool. Then proceed to create a new project with the Gatsby default starter:

gatsby new scotch-blog

This should take a while as the tool downloads the starter and runs npm install to install all dependencies.

Once the installation is complete, change into the project folder and start the development server with:

cd scotch-blog && gatsby develop

This starts a local server on port 8000.

The web page looks like:

Gatsby’s default starter comes with all the essential files we require. You can find other starters here, and you can even create or contribute to a starter.

For a simple blog, all we need to do is:

  1. Have a blog homepage.
  2. Write blog posts in markdown.
  3. Display blog post titles on the homepage.
  4. View each blog post on a separate page.

For these, we will require the three plugins stated earlier, which will manipulate the head element of our blog, source Markdown files, and transform Markdown files respectively. All styling will be done via external CSS files and in-line component styling in React; however, several other methods of styling Gatsby documents exist, such as CSS Modules, typography.js, and CSS-in-JS. You can read more about them here.

Install required plugins with:

npm install --save gatsby-transformer-remark gatsby-source-filesystem

Note: The gatsby-plugin-react-helmet plugin comes preinstalled with the default Gatsby starter.

Configure Plugins

Before we go ahead and create pages, let’s configure the installed plugins. Navigate to gatsby-config.js in the root directory of your project and edit it to:

module.exports = {
  siteMetadata: {
    title: 'The Gray Web Blog',
  },
  plugins: [
    'gatsby-plugin-react-helmet',
    'gatsby-transformer-remark',
    {
      resolve: `gatsby-source-filesystem`,
      options:{
        name: `src`,
        path: `${__dirname}/src/`
      }
    },
  ],
};

Gatsby runs the gatsby-config.js file during build and implements all installed plugins. One great thing about Gatsby is that it comes with a hot reload feature, so changes made to the source files are immediately visible on the rendered website.

Note the siteMetadata object in the gatsby-config module; it can be used to set the value of any element dynamically using GraphQL, for instance the document title and page title.
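
For example, a page could read that title back with a query along these lines (a sketch; the query name is arbitrary):

export const query = graphql`
  query SiteTitleQuery {
    site {
      siteMetadata {
        title
      }
    }
  }
`
// The value is then available in the page component as data.site.siteMetadata.title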

Layout

One key design feature considered during development is the layout of pages. This consists of any elements we would like to keep consistent across all pages, including headers, footers, navbars, etc. For our blog, the default Gatsby starter provides a default layout, found in src/layouts. To make some changes to the header, edit the index.js file in layouts. First, import all required dependencies with:

import React from 'react'
import PropTypes from 'prop-types'
import Helmet from 'react-helmet'
import Header from '../components/Header'
import './index.css'

Note the imported CSS file. Gatsby supports the use of external stylesheets to style React components.

Edit the React component to:

const TemplateWrapper = ({ children }) => (
  <div>
    <Helmet
      title="The Gray Web Blog"
      meta={[
        { name: 'description', content: 'Sample' },
        { name: 'keywords', content: 'sample, something' },
      ]}
    />
    <Header />
    <div
      style={{
        margin: '0 auto',
        maxWidth: 960,
        padding: '0px 1.0875rem 1.45rem',
        paddingTop: 0,
      }}
    >
      {children()}
    </div>
  </div>
)
TemplateWrapper.propTypes = {
  children: PropTypes.func,
}
export default TemplateWrapper

Helmet is a component provided by the react-helmet plugin, shipped originally with Gatsby’s default starter. A Header component is imported, and the div that contains all page elements is styled in-line. Gatsby offers the flexibility of creating custom components in React, and these components can be either stateful or stateless. We will stick to using stateless components in this tutorial, like the Header component.

Navigate to the header component in src/components/Header/index.js and edit it to:

import React from 'react'
import Link from 'gatsby-link'

const Header = () => (
  <div
    style={{
      background: 'black',
      marginBottom: '1.45rem',
      marginTop:'0px',
      display:'block',
      boxShadow:'0px 0px 7px black',
    }}
  >
    <div
      style={{
        margin: '0 auto',
        maxWidth: 960,
        padding: '1.45rem 1.0875rem',
      }}
    >
      <h1 style={{ margin: 0, textAlign:'center' }}>
        <Link
          to="/"
          style={{
            color: 'white',
            textDecoration: 'none',
          }}
        >
          The Gray Web Blog
        </Link>
      </h1>
    </div>
  </div>
)
export default Header 

We simply made some changes to the styling by changing the background color of the header and aligning the header text to the center.

So far we have created the layout of the blog. Now, how about we do some cool stuff by creating blog posts and displaying them on the home page?

Create Blog Posts

Blog posts are created in Markdown, as stated earlier. In src, create a folder titled blog-posts. This will house all the blog posts to be served. Create three sample Markdown files. We have:

basic-web-development.md

---
title: "Basic Web Development"
date: "2018-01-01"
author: "Chris Ashî"
---
Web development is a broad term for the work involved in developing a web site for the Internet (World Wide Web) or an intranet (a private network). Web development can range from developing the simplest static single page of plain text to the most complex web-based internet applications (or just 'web apps') electronic businesses, and social network services. A more comprehensive list of tasks to which web development commonly refers, may include web engineering, web design, web content development, client liaison, client-side/server-side scripting, web server and network security configuration, and e-commerce development. 

in-the-beginning.md

---
title: "The Beginning of The Web"
date: "2018-01-10"
author: "Chuloo Will"
---
The World Wide Web ("WWW" or simply the "Web") is a global information medium which users can read and write via computers connected to the Internet. The term is often mistakenly used as a synonym for the Internet itself, but the Web is a service that operates over the Internet, just as e-mail also does. The history of the Internet dates back significantly further than that of the World Wide Web. - Wikipedia

and

vue-centered.md

---
title: "Common Vue.js"
date: "2017-12-05"
author: "Alex Chî"
---
Vue.js (commonly referred to as Vue; pronounced /vjuː/, like view) is an open-source progressive JavaScript framework for building user interfaces.[4] Integration into projects that use other JavaScript libraries is made easy with Vue because it is designed to be incrementally adoptable. Vue can also function as a web application framework capable of powering advanced single-page applications. - Wikipedia

The text between the triple dashes is known as frontmatter and provides basic information about the Markdown post. Now that we have our Markdown posts, we will employ GraphQL to render this data.

Querying Posts with GraphQL

GraphQL is a powerful yet simple query language. Since its introduction, it has been fast gaining popularity and has become a widely used means of consuming data in React. Gatsby ships with GraphQL by default. Since we previously installed the gatsby-source-filesystem plugin, all files can be queried with GraphQL and are visible as File Nodes.

GraphQL also comes with an important tool called GraphiQL, an IDE with which we can visualize and manipulate our data before passing it to React components. GraphiQL is available at http://localhost:8000/___graphql while the Gatsby server is running. Open up GraphiQL at that address to visualize data.

Run this query to see all files in src/:

{
  allFile {
    edges {
      node {
        id
      }
    }
  }
}

This returns a list of all files in our src directory, as we specified when configuring gatsby-source-filesystem in the gatsby-config.js file. We have all the files in our source folder, but we need only the Markdown files and their accompanying data, like frontmatter and size. The gatsby-transformer-remark plugin installed earlier comes in handy now.
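
For reference, the returned JSON has roughly this shape (the node ids here are made up):

{
  "data": {
    "allFile": {
      "edges": [
        { "node": { "id": "src/blog-posts/basic-web-development.md" } },
        { "node": { "id": "src/pages/index.js" } }
      ]
    }
  }
}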

The plugin transforms all markdown file nodes into MarkdownRemark nodes which can be queried for their content.

Run this query in GraphiQL to fetch all MarkdownRemark nodes and usable data in them.

{
  allMarkdownRemark {
    totalCount
    edges {
        node {
          frontmatter {
            title
            date
            author
          }
          excerpt
          timeToRead
        }
    }
  }
}

Running this query in GraphiQL will return a list of all Markdown files and their corresponding data in a JSON object, as requested. To pass this data to our page component, navigate to index.js in src/pages, which holds the homepage. First, import all required dependencies as well as external stylesheets with:

import React from 'react'
import Link from 'gatsby-link'
import './index.css'
...

Create and export an IndexPage stateless component and pass the data object to it as an argument:

const IndexPage = ({data}) => {
  console.log(data)
  return(
  <div>
    {data.allMarkdownRemark.edges.map(({node}) => (
      <div key={node.id} className="article-box">
        <h3 className="title">{node.frontmatter.title}</h3>
        <p className="author">{node.frontmatter.author}</p>
        <p className="date">{node.frontmatter.date} {node.timeToRead}min read</p>
        <p className="excerpt">{node.excerpt}</p>
      </div>
    ))}
  </div>
  )
}
export default IndexPage

The .map() method is used to traverse the data object and pass its data to the component’s elements. We passed the title, author, date, time to read, and excerpt to various JSX elements. We still haven’t queried this data; after the export statement, create a GraphQL query with:

export const query = graphql`
query HomePageQuery{
  allMarkdownRemark(sort: {fields: [frontmatter___date], order: DESC}) {
    totalCount
    edges {
      node {
        id
        frontmatter {
          title
          date
          author
        }
        excerpt
        timeToRead
      }
    }
  }
}
`

The sort argument is used to sort articles by date in descending order, so the most recent article appears on top.

In src/pages/, create the CSS file imported earlier with:

.article-box{
    margin-bottom: 1.5em;
    padding: 2em;
    box-shadow: 0px 0px 6px grey;
    font-family: 'Helvetica';
}
.title{
    font-size: 2em;
    color: grey;
    margin-bottom: 0px;
}
.author, .date{
    margin:0px;
}
.date{
    color: rgb(165, 164, 164);
}
.excerpt{
    margin-top: 0.6em;   
}

Restart the development server and we have:

At last, we have an awesome blog page with details and an excerpt from each post’s content. We still need to view each blog post on a separate page; let’s do that next.

Creating Blog Pages

This is just about the best part of building out this blog, but also a bit complex. We could create individual pages in src/pages, pass the Markdown content to the document body, and link the pages to the blog titles, but that would be grossly inefficient. Instead, we will create these pages automatically from any Markdown post in src/blog-posts.

To accomplish this, we will require two important APIs which ship with Gatsby and they are:

  • onCreateNode
  • createPages

We will simply be creating a ‘path’, otherwise known as a ‘slug’, for each page, and then creating the page itself from its slug. APIs in Gatsby are utilized by exporting a function from the gatsby-node.js file in our root directory.

In gatsby-node.js, export the onCreateNode function and create the file path for each File node with:

const { createFilePath } = require(`gatsby-source-filesystem`)

exports.onCreateNode = ({ node, getNode, boundActionCreators }) => {
  const { createNodeField } = boundActionCreators
  if (node.internal.type === `MarkdownRemark`) {
    const slug = createFilePath({ node, getNode, basePath: `pages` })
    createNodeField({
      node,
      name: `slug`,
      value: slug,
    })
  }
}
...

The createFilePath function ships with gatsby-source-filesystem and enables us to create a file path from the File nodes in our project. First, a conditional statement is used to filter only the Markdown file nodes, then createFilePath creates the slug for each node; for example, src/blog-posts/in-the-beginning.md yields the slug /blog-posts/in-the-beginning/. The createNodeField function adds the slug as a field to each Markdown file node. This new field (slug) can then be queried with GraphQL.

While we have a path to our page, we don’t have the page yet. To create the pages, export the createPages API, which returns a Promise on execution.

const path = require(`path`)
...
exports.createPages = ({ graphql, boundActionCreators }) => {
  const { createPage } = boundActionCreators
  return new Promise((resolve, reject) => {
    graphql(`
      {
        allMarkdownRemark {
          edges {
            node {
              fields {
                slug
              }
            }
          }
        }
      }
    `).then(result => {
      result.data.allMarkdownRemark.edges.forEach(({ node }) => {
        createPage({
          path: node.fields.slug,
          component: path.resolve(`./src/templates/posts.js`),
          context: {
            slug: node.fields.slug,
          },
        })
      })
      resolve()
    })
  })
}

In the createPages API, a promise is returned which fetches the slugs we created, using a GraphQL query, and then resolves to create a page for each slug. The createPage method creates a page with the specified path, component, and context. The path is the slug created, the component is the React component to be rendered, and the context holds variables which will be available on the page if queried in GraphQL.

Creating The Blog Template

To create the blog template, navigate to src/ and create a folder called templates with a file named posts.js in it. In posts.js, import all dependencies and export a functional component with:

import React from "react";
export default ({ data }) => {
  const post = data.markdownRemark;
  return (
    <div>
      <h1>{post.frontmatter.title}</h1>
      <h4 style={{color: 'rgb(165, 164, 164)'}}>{post.frontmatter.author} <span style={{fontSize: '0.8em'}}> -{post.frontmatter.date}</span></h4>
      <div dangerouslySetInnerHTML = {{ __html: post.html }}/>
    </div>
  );
};

You can see GraphQL data already being consumed. Query the data using GraphQL with:

export const query = graphql`
  query PostQuery($slug: String!) {
    markdownRemark(fields: { slug: { eq: $slug } }) {
      html
      frontmatter {
        title
        author
        date
      }
    }
  }
`;

We have our blog pages and their content. Lastly, link the post titles on the homepage to their respective pages. In src/pages/index.js, edit the post title header to include the link to the post content:

...
<Link to={node.fields.slug} style={{textDecoration: 'none', color: 'inherit'}}><h3 className="title">{node.frontmatter.title}</h3></Link>
...

Since we require data on the slug, edit the GraphQL query to include the slug:

export const query = graphql`
query HomePageQuery{
  allMarkdownRemark(sort: {fields: [frontmatter___date], order: DESC}) {
    totalCount
    edges {
      node {
        id
        fields {
          slug
        }
        frontmatter {
          title
          date
          author
        }
        excerpt
        timeToRead
      }
    }
  }
}
`

Voilà, our static blog is ready. Restart the development server and we have:

Running gatsby build will create a production build of your site in the public directory, with static HTML files and JavaScript bundles.
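
To generate and preview the production build locally:

# build static files into public/
gatsby build
# serve the production build on a local server
gatsby serve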

Deploy Blog to Netlify

So far we have built out a simple static blog with blog content and pages. It will be deployed using Netlify and GitHub for continuous deployment. Netlify offers a free tier which allows you to deploy static sites on the web.

Note: Pushing code to GitHub and deploying to Netlify from GitHub ensures that once a change is made to the repository on GitHub, the updated code is served by Netlify on the next build.

Create accounts with GitHub and Netlify. In GitHub, create an empty repository and push all files from your project folder to the repository.

Netlify offers a login option with GitHub. Log into Netlify with your GitHub account or create a new account with Netlify. Click the ‘New site from Git’ button and select your Git provider; GitHub is the chosen Git provider in this case.

Select GitHub and choose the repository you wish to deploy, in this case the repository for the static blog. Next, specify the branch to deploy from, the build command, and the publish directory.
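
For a Gatsby site like this one, these settings would typically be (assuming your default branch is master):

Branch to deploy: master
Build command: gatsby build
Publish directory: public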

Click the ‘Deploy site’ button to deploy the static site. This may take a few minutes, after which the static site is deployed to a Netlify sub-domain. Here is the demo of the static blog built earlier: https://vigilant-bhaskara-66ed6e.netlify.com/.

Conclusion

In this post, you have been introduced to building a static site with Gatsby, which utilizes React components to generate static content at build time. Gatsby offers a robust approach to static site generation, with the ability to parse data from various sources with the help of plugins. The static site was also deployed to the web using Netlify. Feel free to try out other amazing features of Gatsby, including the various styling techniques. Comments and suggestions are welcome, and you can make contributions to the source code here.

Source:: scotch.io

How to make parallel calls in Java with CompletableFuture example

By Adrian Matei

Some time ago I wrote about how elegant and rapid it is to make parallel calls in NodeJS with async-await and Promise.all capabilities. Well, it turns out Java is just as elegant and succinct with the help of CompletableFuture, which was introduced in Java 8. To demonstrate that, let’s imagine we need to retrieve ToDos from a REST service, given their Ids. Of course we could iterate through the list of Ids and call the web service sequentially, but it’s much more performant to do it in parallel with asynchronous calls.
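
For context, the NodeJS pattern the post refers to looks roughly like this (a sketch; fetchTodoById, the endpoint URL, and the node-fetch dependency are placeholders, not code from the original article):

// Fetch several ToDos in parallel with async-await and Promise.all
const fetch = require('node-fetch'); // assumed HTTP client

async function fetchTodoById(id) {
  const response = await fetch(`https://example.com/api/todos/${id}`); // placeholder URL
  return response.json();
}

async function fetchTodosInParallel(ids) {
  // start all requests at once, then wait for every one of them
  return Promise.all(ids.map(fetchTodoById));
}

fetchTodosInParallel([1, 2, 3]).then(todos => console.log(todos));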

Continue reading How to make parallel calls in Java with CompletableFuture example

Node and npm without sudo

By John Papa

When running npm and node, you may find yourself getting permission errors that ultimately lead you to using `sudo` in your commands. While this helps get around the issue in the short-term, it also places stricter permissions on those installs and it becomes a slippery slope where soon you may need sudo for more than you bargained for. Also, do you really want to be using `sudo` to install npm packages?

After discussing this with Tierney Cyren, we found a much easier way to get Node and npm running on my Mac than what I used to do a few years back.

First, we want to use the official Node.js install. I mean, it’s official, right? And if we don’t trust the Node folks with installing Node, then, well, what are we doing here anyway?

Next we create a folder for the global npm packages. I made mine ~/.npm-packages.

Then we use the npm config command to tell npm where we want them.

That’s it!

Detailed steps

Here are the steps, in detail …

  1. Install Node.js from https://nodejs.org/en/download/

  2. Update to the latest version of npm: npm install npm -g

  3. Make a new folder for the npm global packages: mkdir ~/.npm-packages

  4. Tell npm where to find/store them: npm config set prefix ~/.npm-packages

  5. Verify the install:

# this should show the versions
node -v  
npm -v  
# this should show npm and ng with no errors
npm list -g --depth=0  
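
One caveat worth noting (an addition of mine, not from the original steps): npm installs the executables for global packages into the bin folder under that prefix, so depending on your shell configuration you may need to add it to your PATH, for example in ~/.bashrc or ~/.zshrc:

export PATH="$HOME/.npm-packages/bin:$PATH"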

Handling Multiple Node Versions

What happens when a new version of node is released? What if you need version 4.4.2 for one app and 8.9.1 for another? Did you know version 9 is out now too? Yikes! It would be great if we could manage multiple versions of node on the same computer.

Check out this post for more on how to tackle multiple versions of node.

Source:: johnpapa

Weekly Node.js Update - #4 - 01.26, 2018

By Tamas Kadlecsik

Below you can find RisingStack‘s collection of the most important Node.js news, updates, projects & tutorials from this week:

Debugging Node without restarting processes

One of the most significant drawbacks when switching contexts to Node is the lack of the Chrome Developer Tools. Luckily, there are options for enabling them, and they’ve gotten much more stable and usable in recent times.

A Node.js process started without --inspect can also be instructed to start listening for debugging messages by signaling it with SIGUSR1 (on Linux and OS X).
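
For example (a sketch; the process id will differ, and on recent Node versions the inspector then listens on port 9229 by default):

# start an app without the inspector
node server.js &
# later, tell the running process to start listening for a debugger
kill -USR1 <pid-of-node-process>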

Introducing Node clinic – a performance toolkit for Node.js developers

One of the most significant issues for developers is to figure out why their apps are slow. So far, there weren’t too many tools that deal with performance issues. Luckily, NearForm came up with a solution: Node Clinic – a performance toolkit for Node.js.

How We Simplified our Tooling Setup for Node.js Projects

Simplifying tooling setup can save a lot of time and energy when creating apps. Wrapping tools like Babel, ESLint, or Prettier provides a great solution for this issue; go with tools that come pre-configured and don’t spend unnecessary time on your setup.

Learn Node.js & Microservices from the Experts of RisingStack

Would you like to know more about the Node.js Fundamentals, Microservices, Kubernetes, Angular, or React? We have good news for you!

We organize training sessions throughout Europe in the upcoming months. Our 2 day long, in-person trainings allow you to significantly improve your JavaScript & Microservices knowledge & get expert feedback during live coding sessions.

Testing your npm package before releasing it using Verdaccio + ngrok

Making sure that your npm package works as expected after publishing on npm can be a nightmare.

In this post, we’re going to explain how to create a public npm registry dedicated for testing your npm package on both your machines and servers before releasing it officially.

Make your web app use HTTPS in 30 minutes with Let’s Encrypt and NGINX

Switching to HTTPS is crucial to ensure safety. Good news is that you can do it for free with ‘Let’s Encrypt and NGINX’. It should only take about 30 minutes… This article guides you through the process step-by-step.

I used to hate JavaScript. Now I like it.

A story of a developer who first disliked JavaScript but soon discovered its usefulness.

If you see a developer that hates JavaScript, it’s probably because they perceive you as a cyclist taking over a space they believe they’re entitled to.

Previously Node.js Updates:

In the previous Weekly Node.js Update, we collected great articles, like

  • Learn Node.js — Directory of top articles from last year (v.2018);
  • A crash course on TypeScript with Node.js;
  • How does Node load built-in modules;
  • Building Secure JavaScript Applications;
  • Migrating your Node.js REST API to Serverless;

& more.

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Source:: risingstack.com