Node.js Weekly Update - December 15

By Tamas Kadlecsik

Below you can find RisingStack's collection of the most important Node.js updates, projects & tutorials from this week:

Node v9.3.0 (Current) released

Notable changes:

  • async_hooks:
    • add trace events to async_hooks (Andreas Madsen)
    • add provider types for net server (Andreas Madsen)
  • console:
    • console.debug can now be used outside of the inspector (Benjamin Zaslavsky)
  • deps:
    • upgrade libuv to 1.18.0 (cjihrig)
    • patch V8 to 6.2.414.46 (Myles Borins)

read more!

Node.js and the Web Tooling Benchmark

The Web Tooling Benchmark is a collaborative effort to ensure that the VM implementations and application platforms that support the JavaScript ecosystem are capable of supporting the workloads of the most popular and most depended upon developer tooling frameworks.

Thanks to some recent work by the Node.js Benchmarking Working Group, the Web Tooling Benchmark is now run nightly against Node.js master.

Consumer Driven Contract Testing with Pact

At RisingStack we love working with Microservices, as this kind of architecture gives us flexibility and speed.

In our latest blog post we'll show you why it is beneficial to do contract testing, and how you can implement it in your own Node.js microservices architecture with the Pact framework.

VS Code: Showcasing The Power of Real Time Collaboration

In this demo Chris Dias and Amanda Silver demonstrate how you can easily acquire developer tools, collaborate on Visual Studio Live Share to fix a bug, and work with new extensions in Visual Studio Code to work with Azure App Service, Docker, Azure Functions, and Cosmos DB.

Securing real time applications with JSON Web Tokens

This post focuses on a great challenge for real-time tools: how to really make communication secure, how to control user identity, how to manage permissions and access control.

Why To Choose Node.js For Server Side Programming

This article explains why a lot of companies have already started using Node.js. It elaborates on why to use Node.js for development and in what cases it might not be the best idea.

How I built a replacement for Heroku and cut my platform costs by 4X

Read the story of a self-taught developer, former photographer, who had enough of paying a fortune for server providers. After a few months of planning, designing, building, deleting, he built his own server, CaptainDuckDuck, that was launched in October, 2017.

Previously in the Node.js Weekly Update

In the previous Node.js Weekly Update, we brought you the finest articles of the past week. This included the release of Node v8.9.2 and v6.12.1 (LTS), OpenCV's deep neural networks module, health checks and graceful shutdown for Node.js applications, and improving speed with native addons. Click if you missed it!

We help you to stay up-to-date with Node.js on a daily basis too. Check out our Node.js news page and its Twitter feed!

Source:: risingstack.com

End of Year Giveaway 2017

By Chris Sevilleja

Hey everyone! We’re kicking off our end of year giveaway. For those that have followed us for a few years, this is Scotchmas!

Thanks for all your support throughout the years. Our entire team here appreciates the amazing community we have and are thankful every day. Cheers to you!

We'll be giving away a boatload of Scotch swag this year, as we got a lot of great feedback after sharing some of the new Scotch gear on Twitter:

Thoughts on new swag? Anyone interested in hoodies and hats?

The #scotchmas 2017 giveaway this year is going to be MASSIVE! pic.twitter.com/SfkZIyvZ7Y

— Scotch.io (@scotch_io) November 22, 2017

What You Can Win

  • 5 Winners: Hat!
  • 5 Winners: Shirt!
  • 5 Winners: Zip Hoodie
  • 3 Winners: Mug
  • 10 Winners: Stickers (Set of 3)
  • 10 Winners: 3-Month Scotch School Membership
  • 3 Winners: 1-Year Scotch School Membership

The Giveaway

a Rafflecopter giveaway

Source:: scotch.io

Building a Slack Bot with Modern Node.js Workflows

By Calvin Karundu

reporterbot demo

I recently started getting more requests from my workmates to refresh a "particular report" through one of the many custom reporting scripts we have. At first I didn't mind, but eventually I felt it would be better if people could get these reports themselves. There was no need for this back and forth before getting a report, plus I'm certain I'm not the only developer who frowns upon the idea of being pulled out of my coding thoughts.

I decided to build a Slack app using Node.js that anyone at work could use to get reports on demand. This would save me from doing mundane reporting tasks and make for a better experience for everyone involved… a.k.a laziness for the win :p

In this tutorial, we’ll be building a Slack app called reporterbot that provides functionality to get reports on demand. Our app will need to allow a user to state what report they want, generate the actual report and finally send it back to the user. Aside from building the actual Slack app, this tutorial will also cover some modern practices when working on Node.js applications.

Prerequisites

Before getting started you will need to have node and npm installed on your machine. This is ridiculously easy when using a tool called nvm – node version manager. If you have node and npm installed but have never heard of nvm, please go check it out as it makes managing node installations quite simple.

You’ll also need to have basic knowledge of using the Express web framework as well as ngrok or any other secure tunneling service to make your localhost publicly reachable.

Create Slack App

Our bot will be packaged as a Slack app. Slack apps allow you to add more functionality into your Slack workspace. In our case, we’ll be adding our custom reporting functionality.

To create your new Slack app, go to this page and set your app name as well as the workspace you’ll be developing your app in. If you don’t have a workspace, you can create one here.

Set App Name and Select Development Workspace

Once you finish this step, you’ll have a new Slack app to manage.

Enable Slack App Features

Next we'll need to configure our new Slack app with the features we'll be using to achieve our needed functionality. For our reporterbot to work, we'll need to enable the following features:

  • Bot Users – This will enable our app to interact with users in a more conversational manner
  • Slash Commands – This will allow users to invoke app methods that we expose. We’ll use this to allow the user to initiate the get report flow (on demand)
  • Interactive Components – This will help us make the experience more interactive using components like drop down menus to help the user select a report instead of having them type it in

Slack App Features

To enable bot users, click on the "Bots" feature button as shown in the image above. Set your bot's Display Name and Default Username to reporterbot or whatever name you find fitting.

For slash commands and interactive components to work, our app will need to provide webhooks that Slack can post to. These features involve actions that are initiated by the user inside Slack. Every time an action happens through a slash command or interactive component, Slack will make a POST request to our registered webhook, and our application will need to respond accordingly.

To enable slash commands, click on the “Slash Commands” feature button as shown in the image above. We’ll create a new slash command called /report and set the webhook to our server side endpoint that will handle this action:

https://**your-ngrok-domain-here**/slack/command/report

Whenever a user sends a message beginning with /report, Slack will make a POST request to the configured webhook and our app will initiate the get report flow.

Once the get report flow is initiated, we’ll make use of “Interactive Components” to allow the user to select the report they want.

To enable interactive components, click on the “Interactive Components” feature button as shown in the image above and set the webhook to our server side endpoint that will handle this action:

https://**your-ngrok-domain-here**/slack/actions

Enable Bot Users

Enable Slash Commands

Enable Interactive Components

Install Slack App to workspace

With our features configured, we'll now need to install the app into our Slack workspace. This will make the app available to the users in the workspace and generate all the necessary tokens our application will need to make authenticated requests to Slack's API.

Install App to workspace

After installing the app to your workspace, click the "OAuth & Permissions" menu item available to the right of the screen to get the authentication tokens we'll be using. Our application will only make use of the "Bot User OAuth Access Token"; copy it and save it privately.

Get Bot Access Token

Set Up Node.js Application

Great! Now that our Slack app is successfully configured and installed, we can begin working on the Node.js application. Let’s begin by recapping what the scope of our Node.js application will be. It will need to:

  1. Process POST requests from Slack as a result of our users sending the /report slash command or selecting a report through an interactive component
  2. Generate the selected report
  3. Send the report back to the user

Enough mumbo jumbo! Let's look at some code 😀

Open your terminal and create a new folder called reporterbot or whatever name you find fitting. Navigate into the folder and initialize the project by running npm init. This will give you a few prompts; there's no need to answer all of them, just keep pressing enter to use the default values.

mkdir reporterbot
cd reporterbot
npm init

Babel & ESlint Installation and Configuration

We’ll be building our Node.js application using some sweet ES6 features. If you’re anything like I used to be, you’re probably cringing at the idea of using ES6, please don’t. ES6 has been out for quite some time and learning the new syntax and features will make you a more productive Node.js developer.

Some of the ES6 features we’ll be using are not supported in older Node.js environments. To solve this, we’ll use Babel – a JavaScript transpiler – to convert our ES6 code into plain old ES5 which is more widely supported. Install Babel by running:

npm i babel-cli babel-preset-env babel-polyfill

We need all of the above dependencies to work with Babel. babel-preset-env is a handy tool that makes configuring Babel a breeze. babel-polyfill, as the name suggests, provides polyfills for features that are not supported in older environments.

Create a new file, .babelrc in the project root folder and copy the following into it:

{
  "presets": [
    ["env", {
      "targets": {
        "node": "current"
      }
    }]
  ]
}

The Babel configuration file is quite straightforward thanks to babel-preset-env. We're instructing Babel to convert only what's necessary for the current Node.js version installed on our machine. Check out their docs for more information.
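With Babel configured, it helps to wire up npm scripts so you never call the transpiler by hand. A possible set of scripts for package.json (these names are my suggestion, not from the original project; it assumes the entry point we create later at src/index.js and a dist output folder):

```json
{
  "scripts": {
    "dev": "babel-node src/index.js",
    "build": "babel src --out-dir dist",
    "start": "npm run build && node dist/index.js"
  }
}
```

The babel and babel-node binaries ship with babel-cli. babel-node transpiles on the fly, which is handy in development but not recommended for production; there you'd build once and run the plain ES5 output.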

Up next, we’re going to set up ESlint. From their documentation site:

“JavaScript code is typically executed in order to find syntax or other errors. Linting tools like ESLint allow developers to discover problems with their JavaScript code without executing it.”

ESlint improves your development experience by helping you catch bugs early. ESlint also helps with enforcing a style guide. What is a style guide?… I’m glad you asked 😀

A style guide is a set of rules on how to write code for a particular project. Style guides are important to make sure you’re writing code that is visually consistent and readable. This makes it easier for other developers to understand your code. IMHO, writing clean, easy to read code makes you that much more of a professional and more importantly, it makes you a better human being. Install ESlint by running:

npm i eslint eslint-config-airbnb-base eslint-plugin-import

There are a number of JavaScript style guides we can use. These style guides are mostly made by teams that have massive JavaScript code bases: Airbnb, jQuery, Google, etc.

eslint-config-airbnb-base will configure ESlint to run with the style guide provided by Airbnb; I find their style guide effective for my workflow, but you're free to use whatever style guide you want. eslint-plugin-import will set rules on how to use the new ES6 module import syntax.

Create a new file, .eslintrc in the project root folder and copy the following into it:

{
  "extends": "airbnb-base",
  "plugins": [
    "import"
  ],
  "env": {
    "browser":false,
    "node": true
  },
  "rules": {
    "indent": [2, 2],
    "import/no-extraneous-dependencies": [2, {
      "devDependencies": true
    }]
  }
}

The ESlint configuration file is also quite straightforward. We're instructing ESlint to use the Airbnb style guide and the import plugin, setting the environment our project will run in, and setting a few other rules. Check out their docs for more information.

We’ve set up a pretty good base to build our application logic, before we jump into more code, our project file structure should look like this:

.
|-- node_modules
|-- .babelrc
|-- .eslintrc
|-- package-lock.json
|-- package.json

Node.js Application Logic

Let’s jump into our application logic. Create two folders config and src in the project root folder. config will have our configuration files and src will have our application logic. Inside config we’ll add a default.json file and a development.json file. Inside src we’ll add an index.js file.

mkdir config src
touch config/default.json config/development.json src/index.js

Our project file structure should now look like this:

.
|-- config
    |-- default.json
    |-- development.json
|-- node_modules
|-- src
    |-- index.js
|-- .babelrc
|-- .eslintrc
|-- package-lock.json
|-- package.json

Open config/default.json and copy the following into it:

{
  "baseUrl": "http://127.0.0.1:8000",
  "host": "127.0.0.1",
  "port": 8000,
  "reportFilesDir": "reportFiles"
}

Open config/development.json and copy the following into it:

{
  "slack": {
    "fileUploadUrl": "https://slack.com/api/files.upload",
    "reporterBot": {
        "fileUploadChannel": "#reporterbot_files",
        "botToken": "YOUR-BOT-TOKEN-HERE"
    }
  }
}

Make sure to add the development.json file to your .gitignore, as your Slack tokens should be private. The file upload channel, as you might have guessed, is where we'll eventually upload the report to. We'll come back to this later.

We need to install a couple of modules:

npm i express config morgan tracer csv-write-stream mkdirp

express, as mentioned earlier, is the web framework we'll be using. The config module will help us manage our environment variables by reading the files in the config folder. morgan and tracer will provide us with logging functionality. csv-write-stream will help us write data into csv files, and mkdirp will help us create folders dynamically.
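The config module layers environment-specific files (development.json, production.json, and so on, selected via NODE_ENV) on top of config/default.json. Conceptually this is a deep merge; here's a minimal standalone sketch of that layering (an illustration of the idea only, not the library's actual code):

```javascript
// Mimics how `config` layers files: default.json first, then the
// NODE_ENV-specific file merged on top. Data mirrors our two config files.
const defaults = {
  baseUrl: 'http://127.0.0.1:8000',
  port: 8000,
  reportFilesDir: 'reportFiles',
};

const development = {
  slack: {
    fileUploadUrl: 'https://slack.com/api/files.upload',
    reporterBot: { fileUploadChannel: '#reporterbot_files' },
  },
};

// Recursively merge `override` into `base` without mutating either.
function deepMerge(base, override) {
  const out = { ...base };
  for (const [key, value] of Object.entries(override)) {
    out[key] = (value && typeof value === 'object' && !Array.isArray(value))
      ? deepMerge(out[key] || {}, value)
      : value;
  }
  return out;
}

const merged = deepMerge(defaults, development);
// Both layers are now reachable, which is why the app can call
// config.get('port') and config.get('slack.fileUploadUrl') alike.
console.log(merged.port, merged.slack.fileUploadUrl);
```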

Open src/index.js and copy the following into it:

import 'babel-polyfill';

import config from 'config';
import express from 'express';
import http from 'http';

import bootstrap from './bootstrap';
import { log, normalizePort } from './utils';

const app = express();
app.start = async () => {
  log.info('Starting Server...');
  const port = normalizePort(config.get('port'));
  app.set('port', port);
  bootstrap(app);
  const server = http.createServer(app);

  server.on('error', (error) => {
    if (error.syscall !== 'listen') throw error;
    log.error(`Failed to start server: ${error}`);
    process.exit(1);
  });

  server.on('listening', () => {
    const address = server.address();
    log.info(`Server listening ${address.address}:${address.port}`);
  });

  server.listen(port);
};

app.start().catch((err) => {
  log.error(err);
});

export default app;

We've created our express server but it won't run yet, as there are some missing files. There are bootstrap.js and utils.js files that we're importing objects from but haven't created yet. Let's create those inside the src folder:

touch src/utils.js src/bootstrap.js

utils.js, as the name suggests, has utility methods like logging that we'll be using across the project. bootstrap.js is where we'll set up our application routes.

Open src/utils.js and copy the following into it:

import fs from 'fs';
import path from 'path';
import config from 'config';
import csvWriter from 'csv-write-stream';
import morgan from 'morgan';
import mkdirp from 'mkdirp';
import tracer from 'tracer';

export const log = (() => {
  const logger = tracer.colorConsole();
  logger.requestLogger = morgan('dev');
  return logger;
})();

export const normalizePort = (val) => {
  const port = parseInt(val, 10);
  if (Number.isNaN(port)) return val;
  if (port >= 0) return port;
  return false;
};

export const delay = time => new Promise((resolve) => {
  setTimeout(() => { resolve(); }, time);
});

export const fileExists = async (filePath) => {
  let exists = true;
  try {
    fs.accessSync(filePath);
  } catch (err) {
    if (err.code === 'ENOENT') {
      exists = false;
    } else {
      throw err;
    }
  }
  return exists;
};

export const writeToCsv = ({ headers, records, filePath }) => {
  const writer = csvWriter({ headers });
  writer.pipe(fs.createWriteStream(filePath));
  records.forEach(r => writer.write(r));
  writer.end();
};

export const getReportFilesDir = () => {
  let reportFilesDir;
  try {
    reportFilesDir = path.join(__dirname, `../${config.get('reportFilesDir')}`);
    mkdirp.sync(reportFilesDir);
    return reportFilesDir;
  } catch (err) {
    throw err;
  }
};

log and normalizePort are the two objects we imported from the index.js file earlier. We’ll use delay, fileExists, writeToCsv and getReportFilesDir later but from their names you can guess what they’ll be used for.
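To see what normalizePort actually does with different inputs, here's a quick standalone check (the two functions below are copied from utils.js above):

```javascript
// Copied from utils.js for a self-contained demonstration.
const normalizePort = (val) => {
  const port = parseInt(val, 10);
  if (Number.isNaN(port)) return val; // non-numeric values (e.g. a named pipe) pass through
  if (port >= 0) return port;
  return false;
};

const delay = time => new Promise((resolve) => {
  setTimeout(() => { resolve(); }, time);
});

console.log(normalizePort('8000'));      // 8000 (a number, not a string)
console.log(normalizePort('pipe-name')); // 'pipe-name' (passed through unchanged)
console.log(normalizePort(-1));          // false (negative ports are invalid)

// delay resolves after roughly the given number of milliseconds.
delay(100).then(() => console.log('ran after ~100ms'));
```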

Next we need to configure our routes in the bootstrap.js file. Before doing this we’ll have to install a module called body-parser to help parse incoming request bodies.

npm i body-parser

Now open src/bootstrap.js and copy the following into it:

import bodyParser from 'body-parser';

import { log } from './utils';
import routes from './routes';

export default function (app) {
  app.use(bodyParser.json());
  app.use(bodyParser.urlencoded({ extended: true }));

  // Routes
  app.use(routes);

  // 404
  app.use((req, res) => {
    res.status(404).send({
      status: 404,
      message: 'The requested resource was not found',
    });
  });

  // 5xx
  // Express recognizes error-handling middleware by its four-argument signature
  app.use((err, req, res, next) => { // eslint-disable-line no-unused-vars
    log.error(err.stack);
    const message = process.env.NODE_ENV === 'production'
      ? "Something went wrong, we're looking into it..."
      : err.stack;
    res.status(500).send({
      status: 500,
      message,
    });
  });
}

There’s a routes.js file we’re importing the actual routes from but haven’t created yet. Let’s create it inside the src folder:

touch src/routes.js

Open src/routes.js and copy the following into it:

import express from 'express';

import { log } from './utils';
import { reportsList } from './modules/reports';

const router = new express.Router();

router.post('/slack/command/report', async (req, res) => {
  try {
    const slackReqObj = req.body;
    const response = {
      response_type: 'in_channel',
      channel: slackReqObj.channel_id,
      text: 'Hello :slightly_smiling_face:',
      attachments: [{
        text: 'What report would you like to get?',
        fallback: 'What report would you like to get?',
        color: '#2c963f',
        attachment_type: 'default',
        callback_id: 'report_selection',
        actions: [{
          name: 'reports_select_menu',
          text: 'Choose a report...',
          type: 'select',
          options: reportsList,
        }],
      }],
    };
    return res.json(response);
  } catch (err) {
    log.error(err);
    return res.status(500).send("Something blew up. We're looking into it.");
  }
});

export default router;

OK, that's quite a bit of code; let's break it down. Close to the top of the file we have this line:

import { reportsList } from './modules/reports';

We’re importing a reportsList that we’ll be using shortly. Notice we’re importing it from a reports module we haven’t created yet. We’ll do this soon.

The first route we create is /slack/command/report. This is the webhook that will handle the /report slash command; we configured this earlier in our Slack app. When a user sends a message beginning with /report, Slack will make a POST request to this route.

The first thing we do is capture what Slack has posted to our application in the slackReqObj. This object has a lot of data but all we need for now is the channel_id. This channel_id represents the channel the user on Slack sent the /report command on. This could be a public channel, private channel or a direct message. Our app doesn’t really care at this point, all we need is the channel_id so that we can return the response to the sender.
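For reference, the body Slack posts for a slash command looks roughly like this (field names follow Slack's slash-command documentation; all values below are made up):

```javascript
// Illustrative shape of req.body for a slash-command POST from Slack.
// The route above only needs channel_id, but the rest is available too.
const slackReqObj = {
  token: 'XXXXXXXXXXXX',        // verification token for the app
  team_id: 'T0001',
  team_domain: 'example',
  channel_id: 'C2147483705',    // channel the /report command was sent from
  channel_name: 'general',
  user_id: 'U2147483697',
  user_name: 'calvin',
  command: '/report',
  text: '',                     // anything the user typed after /report
  response_url: 'https://hooks.slack.com/commands/T0001/1234/abcd',
};

console.log(slackReqObj.channel_id); // C2147483705
```

The response_url field becomes important later: it lets us post follow-up messages back to the user well after the initial webhook response has been sent.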

Next we construct our response object to send back to Slack. I’d like to highlight four keys within this object:

  • response.channel – This is where we set the channel we want to send our response to.
  • response.attachments – This is where we’ll add our interactive component to allow the user to select a report.
  • response.attachments[0].callback_id – We’ll use this later after the user selects a report from the interactive component.
  • response.attachments[0].actions – This is where we create our interactive component. It's a select menu, and its options are in the reportsList object that we'll create shortly.

Let’s create that reportsList object. Inside the src folder, we’ll create a folder called modules and inside modules we’ll add another folder called reports with an index.js file.

mkdir src/modules src/modules/reports
touch src/modules/reports/index.js

Open src/modules/reports/index.js and copy the following into it:

import path from 'path';
import config from 'config';

import { log, delay, fileExists, getReportFilesDir } from '../../utils';

// Reports
import getUserActivity from './getUserActivity';

const slackConfig = config.get('slack');

const REPORTS_CONFIG = {
  userActivity: {
    name: 'User Activity',
    namePrefix: 'userActivity',
    type: 'csv',
    func: getUserActivity,
  },
};

export const reportsList = Object.entries(REPORTS_CONFIG)
  .map(([key, value]) => {
    const report = {
      text: value.name,
      value: key,
    };
    return report;
  });

Let’s break down the index.js file we just created. Think of this file as a control center for all the reports we could potentially have. Close to the top of the file we import a function called getUserActivity which as you might have guessed is a function that will generate a user activity report. We haven’t made this function yet but we’ll do so shortly.

The REPORTS_CONFIG variable is an object that has data on all the available reports. Every key within this object maps to a particular report; in this case, userActivity. The name, namePrefix and type keys are going to come in handy when we're uploading the report to Slack. In the func key, we store the function needed to generate the actual report. We'll call this function later, after the user selects a report.

To create the reportsList variable we use a handy little feature called Object.entries to loop through the REPORTS_CONFIG variable and get all available reports. Eventually reportsList will be an array that looks like this:

const reportsList = [{
  text: 'User Activity',
  value: 'userActivity'
}];

This is the reportsList object we used to configure our interactive component earlier. Every object in the reportsList corresponds to a select option the user will be presented with.

When we send this response back to Slack, our user should see a message that looks like this:

Begin get report flow

Select report

We’re halfway done, all we need to do is add the logic for what happens after a user selects a particular report. Before we move on, let’s create that getUserActivity function. This function will generate a simple csv report with some random user activity data. Inside the reports folder create a getUserActivity.js file:

touch src/modules/reports/getUserActivity.js

Open getUserActivity.js and copy the following into it:

import { log, writeToCsv } from '../../utils';

const generateData = async ({ startDate, endDate, totalRecords }) => {
  try {
    const userActivity = [];
    for (let index = 0; index < totalRecords; index += 1) {
      userActivity.push({
        username: `user_${index + 1}`,
        startDate,
        endDate,
        loginCount: Math.floor(Math.random() * 20),
        itemsPurchased: Math.floor(Math.random() * 15),
        itemsReturned: Math.floor(Math.random() * 5),
      });
    }

    return userActivity;
  } catch (err) {
    throw err;
  }
};

export default async (options) => {
  try {
    const {
      startDate = '2017-11-25',
      endDate = '2017-11-28',
      totalRecords = 20,
      reportFilePath,
    } = options;

    const userActivity = await generateData({
      startDate,
      endDate,
      totalRecords,
    });

    if (userActivity.length > 0) {
      const headers = [
        'Username',
        'Start Date',
        'End Date',
        'Login Count',
        'Items Purchased',
        'Items Returned',
      ];

      const records = userActivity.map(record => [
        record.username,
        record.startDate,
        record.endDate,
        record.loginCount,
        record.itemsPurchased,
        record.itemsReturned,
      ]);

      const filePath = reportFilePath;
      writeToCsv({ headers, records, filePath });
      log.info(`${records.length} records compiled into ${filePath}`);
    }
  } catch (err) {
    throw err;
  }
};

All we’re doing here is generating some random data and organizing it in a format that the csv-write-stream module we installed earlier can consume. Notice we’re making use of the writeToCsv function we created earlier in the utils.js file. One more thing to point out at the top of the default function is the use of Object destructuring. This allows us to do all sorts of cool tricks like setting default values for our parameters.
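Here's the destructuring-with-defaults pattern in isolation: any key missing from the options object falls back to its default instead of coming through as undefined. (getOptions is a throwaway name for this demo, not part of the app.)

```javascript
// Object destructuring with defaults, as used at the top of getUserActivity.
const getOptions = (options) => {
  const {
    startDate = '2017-11-25',
    endDate = '2017-11-28',
    totalRecords = 20,
  } = options;
  return { startDate, endDate, totalRecords };
};

// Only totalRecords is supplied; the dates fall back to their defaults.
console.log(getOptions({ totalRecords: 5 }));
// -> { startDate: '2017-11-25', endDate: '2017-11-28', totalRecords: 5 }
```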

OK, let’s keep moving. First, make sure your project file structure looks like this:

.
|-- config
    |-- default.json
    |-- development.json
|-- node_modules
|-- src
    |-- modules
        |-- reports
            |-- getUserActivity.js
            |-- index.js
    |-- bootstrap.js
    |-- index.js
    |-- routes.js
    |-- utils.js
|-- .babelrc
|-- .eslintrc
|-- package-lock.json
|-- package.json

The second route we’ll create is /slack/actions. This is the webhook that will handle actions on interactive components; we configured this earlier in our Slack app. When a user selects a report, Slack will make a POST request to this route. Copy the following into the routes.js file under the /slack/command/report route we just created:

router.post('/slack/actions', async (req, res) => {
  try {
    const slackReqObj = JSON.parse(req.body.payload);
    let response;
    if (slackReqObj.callback_id === 'report_selection') {
      response = await generateReport({ slackReqObj });
    }
    return res.json(response);
  } catch (err) {
    log.error(err);
    return res.status(500).send("Something blew up. We're looking into it.");
  }
});

Once again, we capture what Slack has posted to our app in the slackReqObj. Notice that this time round we derive the slackReqObj by parsing the payload key in the request body. Now we need to compose a response based on what action this request represents.

Remember the response.attachments[0].callback_id key I highlighted earlier? That's what we use to figure out which action we're responding to. The interactive components webhook can be used for more than one action; this callback_id lets us know which action we're handling.

If the callback_id === 'report_selection', we know that this is an action from the select-report interactive component we sent out earlier; this means our user has selected a report. We need to give the user feedback that we have received their request and begin generating the report they selected.

To do this we’ll use a generateReport function. Calling this function will return a confirmation message that we send back to the user as well as invoke the report func we saved earlier in the REPORTS_CONFIG to begin generating the selected report.

Open src/modules/reports/index.js and copy the following under the reportsList object:

export const generateReport = async (options) => {
  try {
    const { slackReqObj } = options;
    const reportKey = slackReqObj.actions[0].selected_options[0].value;
    const report = REPORTS_CONFIG[reportKey];

    if (report === undefined) {
      const slackReqObjString = JSON.stringify(slackReqObj);
      log.error(new Error(`reportKey: ${reportKey} did not match any reports. slackReqObj: ${slackReqObjString}`));
      const response = {
        response_type: 'in_channel',
        text: 'Hmmm :thinking_face: Seems like that report is not available. Please try again later as I look into what went wrong.',
      };
      return response;
    }

    const reportTmpName = `${report.namePrefix}_${Date.now()}.${report.type}`;
    const reportFilesDir = getReportFilesDir();
    const reportFilePath = path.join(reportFilesDir, reportTmpName);

    const reportParams = {
      reportName: report.name,
      reportTmpName,
      reportType: report.type,
      reportFilePath,
      reportFunc() {
        return report.func({ reportFilePath });
      },
    };

    // Begin async report generation
    generateReportImplAsync(reportParams, { slackReqObj });

    const response = {
      response_type: 'in_channel',
      text: `Got it :thumbsup: Generating requested report *${report.name}*\nPlease carry on, I'll notify you when I'm done.`,
      mrkdwn: true,
      mrkdwn_in: ['text'],
    };
    return response;
  } catch (err) {
    throw err;
  }
};

Now open the src/routes.js file and add the generateReport function to the same import statement as the reportsList:

import { reportsList, generateReport } from './modules/reports';

The first thing we do in the generateReport function is figure out which report the user selected. The object Slack posts to the interactive components webhook contains the selected value, which we read from slackReqObj.actions[0].selected_options[0].value. We extract this value into a reportKey variable, as it corresponds to a key in our REPORTS_CONFIG; in our case this will be userActivity. We use this key to get the report object from the REPORTS_CONFIG.

If the reportKey doesn’t match a particular report, we log an error and send the user a failure message. If it matches a report, we proceed to generate it.

First we create a temporary name for the report in the reportTmpName variable using the current timestamp; the temporary name will look like this: userActivity_1511870113670.csv. Next we get the directory where we'll be saving the report; we created a utility function earlier called getReportFilesDir for this. Finally, we compose the full reportFilePath by joining the reportFilesDir path and the reportTmpName.

Next we create a reportParams object with some metadata and the report function to call. We pass this object to a generateReportImplAsync function that we'll create shortly. The generateReportImplAsync function will execute the function to generate the report asynchronously. It's important we do this asynchronously, as we don't know how long the report generation will take. While the report is being generated, we send the user a confirmation message.
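Note that generateReport never awaits generateReportImplAsync: it kicks off the slow work and returns the acknowledgement right away. A minimal sketch of this fire-and-forget pattern (slowJob, handleRequest and events are hypothetical names; a short timer stands in for report generation):

```javascript
// Stand-in for the slow report generation.
const slowJob = () => new Promise((resolve) => {
  setTimeout(() => resolve('report ready'), 50);
});

const events = [];

const handleRequest = () => {
  // Not awaited: the handler does not block on the slow job.
  // Errors must be caught here, since nobody upstream will.
  slowJob()
    .then(result => events.push(result))
    .catch(err => events.push(`failed: ${err.message}`));
  // The acknowledgement therefore goes out first.
  events.push('Got it, generating your report...');
};

handleRequest();
setTimeout(() => console.log(events), 200);
// -> [ 'Got it, generating your report...', 'report ready' ]
```

This is exactly why the confirmation message reaches the user immediately even when the report takes a while to build.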

Let’s create the generateReportImplAsync function. Copy the following above the generateReport function:

const generateReportImplAsync = async (options, { slackReqObj }) => {
  const {
    reportName,
    reportTmpName,
    reportType,
    reportFilePath,
    reportFunc,
  } = options;

  try {
    // Initiate report function
    await reportFunc();

    /*
      FIX ME::
      Delay hack to ensure previous fs call is done processing file
    */
    await delay(250);
    const reportExists = await fileExists(reportFilePath);

    if (reportExists === false) {
      const message = {
        responseUrl: slackReqObj.response_url,
        replaceOriginal: false,
        text: `There's currently no data for report *${reportName}*`,
        mrkdwn: true,
        mrkdwn_in: ['text'],
      };
      return postChatMessage(message)
        .catch((ex) => {
          log.error(ex);
        });
    }

    /*
      FIX ME::
      Delay hack to ensure previous fs call is done processing file
    */
    await delay(250);
    const uploadedReport = await uploadFile({
      filePath: reportFilePath,
      fileTmpName: reportTmpName,
      fileName: reportName,
      fileType: reportType,
      channels: slackConfig.reporterBot.fileUploadChannel,
    });
    const message = {
      responseUrl: slackReqObj.response_url,
      replaceOriginal: false,
      text: 'Your report is ready!',
      attachments: [{
        text: `<${uploadedReport.file.url_private}|${reportName}>`,
        color: '#2c963f',
        footer: 'Click report link to open menu with download option',
      }],
    };
    return postChatMessage(message)
      .catch((err) => {
        log.error(err);
      });
  } catch (err) {
    log.error(err);
    const message = {
      responseUrl: slackReqObj.response_url,
      replaceOriginal: false,
      text: `Well this is embarrassing :sweat: I couldn't successfully get the report *${reportName}*. Please try again later as I look into what went wrong.`,
      mrkdwn: true,
      mrkdwn_in: ['text'],
    };
    return postChatMessage(message)
      .catch((ex) => {
        log.error(ex);
      });
  }
};

Let's break down what's going on here. First we initiate the actual report function with await reportFunc(); this corresponds to the getUserActivity function we made earlier. It will generate the report and save it in the reportFilePath we composed in the generateReport function.

Next we make use of a delay hack; this is the delay utility function we created earlier. We need this hack because the file system module behaves inconsistently: the report file would get created, yet the very next call checking whether it exists would resolve to false. Adding a slight delay between file system calls creates a buffer so the file system is actually done processing the file. I'm sure there's something I'm missing here, so if anyone knows anything, please comment or better yet create a PR 🙂

After the report function is done, we check that the report actually exists; if it doesn't, we assume there's no data for the report and respond to the user. To send this response we use a postChatMessage function that we haven't created yet. We'll do this shortly.

If the report exists we upload it to Slack using an uploadFile function that we’ll create soon. One interesting thing to note here is that we upload the file to the fileUploadChannel in our Slack config. This has to do with how Slack handles file permissions. By uploading a file to a public channel, the file automatically becomes public within the workspace and can be shared to anyone within the same workspace. You’ll need to create a channel on your Slack workspace to handle file uploads if you are looking to share the uploads with more than one user.

After the file gets uploaded, we need to send the user a notification that their report is ready and a link to the actual file. The response object from the upload function contains a file url that we can use to reference the file we just uploaded. We compose the message to send back to the user using some Slack message formatting features and finally use the postChatMessage function to send the message.

We are almost at the finish line! Let's create the postChatMessage and uploadFile functions. Inside the src/modules folder, we'll create a folder called slack with an index.js file:

mkdir src/modules/slack
touch src/modules/slack/index.js

We’ll also be using the request module to make HTTP calls. Install it by running:

npm i request

Now open src/modules/slack/index.js and copy the following into it:

import fs from 'fs';
import config from 'config';
import request from 'request';

const slackConfig = config.get('slack');

export const postChatMessage = message => new Promise((resolve, reject) => {
  const {
    responseUrl,
    channel = null,
    text = null,
    attachments = null,
    replaceOriginal = null,
  } = message;

  const payload = {
    response_type: 'in_channel',
  };

  if (channel !== null) payload.channel = channel;
  if (text !== null) payload.text = text;
  if (attachments !== null) payload.attachments = attachments;
  if (replaceOriginal !== null) payload.replace_original = replaceOriginal;

  request.post({
    url: responseUrl,
    body: payload,
    json: true,
  }, (err, response, body) => {
    if (err) {
      reject(err);
    } else if (response.statusCode !== 200) {
      reject(body);
    } else if (body.ok !== true) {
      const bodyString = JSON.stringify(body);
      reject(new Error(`Got non ok response while posting chat message. Body -> ${bodyString}`));
    } else {
      resolve(body);
    }
  });
});

export const uploadFile = options => new Promise((resolve, reject) => {
  const {
    filePath,
    fileTmpName,
    fileName,
    fileType,
    channels,
  } = options;

  const payload = {
    token: slackConfig.reporterBot.botToken,
    file: fs.createReadStream(filePath),
    channels,
    filetype: fileType,
    filename: fileTmpName,
    title: fileName,
  };

  request.post({
    url: slackConfig.fileUploadUrl,
    formData: payload,
    json: true,
  }, (err, response, body) => {
    if (err) {
      reject(err);
    } else if (response.statusCode !== 200) {
      reject(body);
    } else if (body.ok !== true) {
      const bodyString = JSON.stringify(body);
      reject(new Error(`Got non ok response while uploading file ${fileTmpName} Body -> ${bodyString}`));
    } else {
      resolve(body);
    }
  });
});

Let's break down the postChatMessage function. We begin by initializing some optional parameters to null. Note the responseUrl we're using: we get this from the object Slack posts to our webhook. Every message has a corresponding response_url that you can use to reply to it later. This works for us, as we told the user we were generating the report and would notify them when done. We then construct the payload to send to Slack and finally fire off the request.
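The payload-shaping half of the function can be pulled out on its own, which makes the optional-field handling easy to see (buildChatPayload is a name invented for this sketch):

```javascript
// Build the Slack payload the way postChatMessage does: start with the
// response type, then add only the optional fields that were provided.
const buildChatPayload = (message) => {
  const {
    channel = null,
    text = null,
    attachments = null,
    replaceOriginal = null,
  } = message;

  const payload = { response_type: 'in_channel' };
  if (channel !== null) payload.channel = channel;
  if (text !== null) payload.text = text;
  if (attachments !== null) payload.attachments = attachments;
  if (replaceOriginal !== null) payload.replace_original = replaceOriginal;
  return payload;
};

// Only text and replace_original make it into this payload.
const payload = buildChatPayload({ text: 'Your report is ready!', replaceOriginal: false });
```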

Next we have the uploadFile function. Here we simply stream the file to Slack at the file upload URL set in the slack config, using our botToken to authorize the request.

Now open src/modules/reports/index.js and add both the postChatMessage and uploadFile functions under the utils.js import:

import { postChatMessage, uploadFile } from '../slack';

Run it!

Time to run our bot. Open your package.json file and add these scripts:


{
  "name": "reporterbot",
  "version": "1.0.0",
  "main": "./dist/index.js",

  // add the scripts key to your package.json

  "scripts": {
    "dev": "NODE_ENV=development babel-node src/index.js",
    "build": "rm -rf ./dist/ && babel src --out-dir dist/ --copy-files",
    "prod": "NODE_ENV=production node dist/index.js",
    "lint": "eslint src"
  },
  "dependencies": {
    ...
  }
}

Let’s break down these scripts.

The dev script runs our process using babel-node. This is a handy tool during development: it runs our src files through Babel and starts the node process in one go. You should not use it in a production environment, as it is considerably slower.

The build script uses Babel to convert everything in the src folder into a dist folder. Make sure to add this dist folder to your .gitignore, as it doesn't make sense to push it up to git.

The prod script runs our process using the normal node binary. This however relies on the already built files. Notice that the node call points to the index.js file in the dist folder.

Finally, the lint script uses ESLint to check our src files for any linting errors. A better workflow is to integrate ESLint directly into your editor for faster feedback instead of manually running this command. Check out the ESLint integration docs for more info.

You can now run the dev command together with your ngrok secure tunnel and head on over to your Slack workspace to test things out. If everything checks out your reporterbot should be ready for use and is waiting to greet you with a warm smile! 😀

Conclusion

Slack can be used for all kinds of interesting things; reporting is one of the more common use cases. Slack has great documentation, so make sure to dig through it to see all the interesting features available to you. Hopefully this tutorial has taught you a thing or two about building Slack applications and Node.js development in general. If you have any improvements or thoughts, please drop me a comment.

Cheers!

Source: scotch.io

Build a Secure To-Do App with Vue, ASP.NET Core, and Okta

By Nate Barbettini

Check tool versions in terminal

I love lists. I keep everything I need to do (too many things, usually) in a big to-do list, and the list helps keep me sane throughout the day. It’s like having a second brain!

There are hundreds of to-do apps out there, but today I’ll show you how to build your own from scratch. Why? It’s the perfect exercise for learning a new language or framework! A to-do app is more complex than “Hello World”, but simple enough to build in an afternoon or on the weekend. Building a simple app is a great way to stretch your legs and try a language or framework you haven’t used before.

Why Vue.js and ASP.NET Core?

In this article, I’ll show you how to build a lightweight, secure to-do app with a Vue.js frontend and an ASP.NET Core backend. Not familiar with these frameworks? That’s fine! You’ll learn everything as you go. Here’s a short introduction to both:

Vue.js is a JavaScript framework for building applications that run in the browser. It borrows some good ideas from both Angular and React, and has been gaining popularity recently. I like Vue because it’s easy to pick up and get started. Compared to both Angular and React, the learning curve doesn’t feel as steep. Plus, it has great documentation! For this tutorial, I’ve borrowed from Matt Raible’s excellent article The Lazy Developer’s Guide to Authentication with Vue.js.

ASP.NET Core is Microsoft’s new open-source framework for building web apps and APIs. I like ASP.NET Core on the backend because it’s type-safe, super fast, and has a large ecosystem of packages available. If you want to learn the basics, I wrote a free ebook about version 2.0, which was released earlier this year.

Ready to build an app? Let’s get started!

Install the tools

You’ll need Node and npm installed for this tutorial, which you can install from the Node.js official site.

You’ll also need dotnet installed, which you can install from the Microsoft .NET site.

To double check that everything is installed correctly, run these commands in your terminal or shell:

npm -v

dotnet --version

I’m using Visual Studio Code for this project, but you can use whatever code editor you feel comfortable in. If you’re on Windows, you can also use Visual Studio 2017 or later.

Set up the project

Instead of starting from absolute zero, you can use a template to help you scaffold a basic, working application. Mark Pieszak has an excellent ASP.NET Core Vue SPA starter kit, which we’ll use as a starting point.

Download or clone the project from GitHub, and then open the folder in your code editor.

Initial project structure

Run npm install to restore and install all of the JavaScript packages (including Vue.js).

Configure the environment and run the app

You’ll need to make sure the ASPNETCORE_ENVIRONMENT variable is set on your machine. ASP.NET Core looks at this environment variable to determine whether it’s running in a development or production environment.

  • If you’re on Windows, use PowerShell to execute $Env:ASPNETCORE_ENVIRONMENT = "Development"
  • If you’re on Mac or Linux, execute export ASPNETCORE_ENVIRONMENT=Development

In Visual Studio Code, you can open the Integrated Terminal from the View menu to run the above commands.

Now you’re ready to run the app for the first time! Execute dotnet run in the terminal. After the app compiles, it should report that it’s running on http://localhost:5000:

Execute dotnet run in terminal

Open up a browser and navigate to http://localhost:5000:

Vue starter template app

Optional: Install Vue Devtools

If Chrome is your preferred browser, I’d highly recommend the Vue Devtools extension. It adds some great debugging and inspection features to Chrome that are super useful when you’re building Vue.js applications.

Build the Vue.js app

It’s time to start writing some real code. If dotnet run is still running in your terminal, press Ctrl-C to stop it.

Delete all the files and folders under the ClientApp folder, and create a new file called boot-app.js:

import Vue from 'vue'
import App from './components/App'
import router from './router'
import store from './store'
import { sync } from 'vuex-router-sync'

// Sync Vue router and the Vuex store
sync(store, router)

new Vue({
  el: '#app',
  store,
  router,
  template: '<App/>',
  components: { App }
})

store.dispatch('checkLoggedIn')

This file sets up Vue, and serves as the main entry point (or starting point) for the whole JavaScript application.

Next, create router.js:

import Vue from 'vue'
import Router from 'vue-router'
import store from './store'
import Dashboard from './components/Dashboard.vue'
import Login from './components/Login.vue'

Vue.use(Router)

function requireAuth (to, from, next) {
  if (!store.state.loggedIn) {
    next({
      path: '/login',
      query: { redirect: to.path }
    })
  } else {
    next()
  }
}

export default new Router({
  mode: 'history',
  base: __dirname,
  routes: [
    { path: '/', component: Dashboard, beforeEnter: requireAuth },
    { path: '/login', component: Login },
    { path: '/logout',
      async beforeEnter (to, from, next) {
        await store.dispatch('logout')
      }
    }
  ]
})

The Vue router keeps track of what page the user is currently viewing, and handles navigating between pages or sections of your app. This file configures the router with three paths (/, /login, and /logout) and associates each path with a Vue component.
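You can see the requireAuth guard's behavior in isolation by stubbing the store and the next callback; the makeRequireAuth wrapper below takes the store as an argument purely for this sketch:

```javascript
// Same logic as requireAuth, with the store injected so both branches
// can be exercised without a real Vuex store or router.
const makeRequireAuth = store => (to, from, next) => {
  if (!store.state.loggedIn) {
    next({ path: '/login', query: { redirect: to.path } })
  } else {
    next()
  }
}

// Logged out: the guard redirects to /login and remembers where we were going.
let redirect
makeRequireAuth({ state: { loggedIn: false } })({ path: '/' }, null, arg => { redirect = arg })
// redirect === { path: '/login', query: { redirect: '/' } }

// Logged in: the guard lets navigation continue (next called with no argument).
let passedThrough = false
makeRequireAuth({ state: { loggedIn: true } })({ path: '/' }, null, arg => { passedThrough = arg === undefined })
```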

You might be wondering what store, Dashboard, and Login are. Don’t worry, you’ll add them next!

Add components

Components are how Vue.js organizes pieces of your application. A component wraps up some functionality, from a simple button or UI element to entire pages and sections. Components can contain HTML, JavaScript, and CSS styles.

In the ClientApp folder, create a new folder called components. Inside the new folder, create a file called Dashboard.vue:

<template>
  <div class="dashboard">
    <h2>{{name}}, here's your to-do list</h2>

    <input class="new-todo"
        autofocus
        autocomplete="off"
        placeholder="What needs to be done?"
        @keyup.enter="addTodo">

    <ul class="todo-list">
      <todo-item v-for="(todo, index) in todos" :key="index" :item="todo"></todo-item>
    </ul>

    <p>{{ remaining }} remaining</p>
    <router-link to="/logout">Log out</router-link>
  </div>
</template>

<script>
import TodoItem from './TodoItem'

export default {
  components: { TodoItem },
  mounted() {
      this.$store.dispatch('getAllTodos')
  },
  computed: {
    name () {
      return this.$store.state.userName
    },
    todos () {
      return this.$store.state.todos
    },
    complete () {
      return this.todos.filter(todo => todo.completed).length
    },
    remaining () {
      return this.todos.filter(todo => !todo.completed).length
    }
  },
  methods: {
    addTodo (e) {
      var text = e.target.value || ''
      text = text.trim()

      if (text.length) {
        this.$store.dispatch('addTodo', { text })
      }

      e.target.value = ''
    },
  }
}
</script>

<style>
.new-todo {
  width: 100%;
  font-size: 18px;
  margin-bottom: 15px;
  border-top-width: 0;
  border-left-width: 0;
  border-right-width: 0;
  border-bottom: 1px solid rgba(0, 0, 0, 0.2);
}
</style>

The Dashboard component is responsible for displaying all the user’s to-do items, and rendering an input field that lets the user add a new item. In router.js, you told the Vue router to render this component on the / path, or the root route of the application.

This component has HTML in the <template> section, JavaScript in the <script> section, and CSS in the <style> section, all stored in one .vue file. If your Vue components become too large or unwieldy, you can choose to split them into separate HTML, JS, and CSS files as needed.

When you use {{mustaches}} or attributes like v-for in the component's HTML, Vue.js automatically inserts (or binds) data that's available to the component. In this case, you've defined a handful of JavaScript methods in the computed section that retrieve things like the user's name and the user's to-do list from the data store. That data is then automatically rendered by Vue. (You'll build the data store in a minute!)

Notice the components: { TodoItem } line? The Dashboard component relies on another component called TodoItem. Create a file called TodoItem.vue:

<template>
  <li class="todo" :class="{ completed: item.completed }">
    <input class="toggle"
      type="checkbox"
      :checked="item.completed"
      @change="toggleTodo({ id: item.id, completed: !item.completed })">

    <label v-text="item.text"></label>

    <button class="delete" @click="deleteTodo(item.id)">
      <span class="glyphicon glyphicon-trash"></span>
    </button>
  </li>
</template>

<script>
import { mapActions } from 'vuex'

export default {
  props: ['item'],
  methods: {
    ...mapActions([
      'toggleTodo',
      'deleteTodo'
    ])
  }
}
</script>

<style>
  .todo {
    list-style-type: none;
  }

  .todo.completed {
    opacity: 0.5;
  }

  .todo.completed label {
    text-decoration: line-through;
  }

  button.delete {
    color: red;
    opacity: 0.5;
    -webkit-appearance: none;
    -moz-appearance: none;
    outline: none;
    border: 0;
    background: transparent;
  }
</style>

The TodoItem component is only responsible for rendering a single to-do item. The props: ['item'] line declares that this component receives a prop or parameter called item, which contains the data about a to-do item (the text, whether it’s completed, and so on).

In the Dashboard component, this line creates a TodoItem component for each to-do item:

<todo-item v-for="(todo, index) in todos" :key="index" :item="todo"></todo-item>

There’s a lot going on in this syntax, but the important bits are:

  • The <todo-item> tag, which refers to the new TodoItem component.
  • The v-for directive, which tells Vue to loop through all the items in the todos array and render a <todo-item> for each one.

  • The :item="todo" attribute, which binds the value of todo (a single item from the todos array) to an attribute called item. This data is passed into the TodoItem component as the item prop.

Using components to split your app into small pieces makes it easier to organize and maintain your code. If you need to change how to-do items are rendered in the future, you just need to make changes to the TodoItem component.

Add another component called Login.vue:

<template>
  <div>
    <h2>Login</h2>
    <p v-if="$route.query.redirect">
      You need to login first.
    </p>

    <form @submit.prevent="login" autocomplete="off">
      <label for="email">Email</label>
      <input id="email" v-model="email" placeholder="you@example.com">
      <label for="password">Password</label>
      <input id="password" v-model="password" placeholder="password" type="password">
      <button type="submit">login</button>
      <p v-if="loginError" class="error">{{loginError}}</p>
    </form>
  </div>
</template>

<script>
export default {
  data () {
    return {
      email: '',
      password: '',
      error: false
    }
  },
  computed: {
    loginError () {
      return this.$store.state.loginError
    }
  },
  methods: {
    login () {
      this.$store.dispatch('login', {
        email: this.email,
        password: this.password
       })
    }
  }
}
</script>

<style scoped>
.error {
  color: red;
}

label {
  display: block;
}

input {
  display: block;
  margin-bottom: 10px;
}
</style>

The Login component renders a simple login form, and shows an error message if the login is unsuccessful.

Notice the scoped attribute on the <style> tag? That's a cool feature called scoped CSS. Marking a block of CSS as scoped means the CSS rules only apply to this component (otherwise, they apply globally). It's useful here to set display: block on the input and label elements in this component, without affecting how those elements are rendered elsewhere in the app.

The Dashboard and Login components (and the router configuration) refer to something called store. I’ll explain what the store is, and how to build it, in the next section.

Before you get there, you need to build one more component. Create a file called App.vue in the components folder:

<template>
  <div class="app-container">
    <div class="app-view">
      <template v-if="$route.matched.length">
        <router-view></router-view>
      </template>
    </div>
  </div>
</template>

<script>
export default {
  computed: {
    loggedIn () {
      return this.$store.state.loggedIn
    }
  }
}
</script>

<style>
html, body {
    margin: 0;
    padding: 0;
}

body {
    font: 14px -apple-system, BlinkMacSystemFont, Helvetica, Arial, sans-serif;
    line-height: 1.4em;
    background: #F3F5F6;
    color: #4d4d4d;
}

ul {
  padding: 0;
}

h1, h2 {
  text-align: center;
}

.app-container {
  display: flex;
  align-items: center;
  justify-content: center;
}

.app-view {
  background: #fff;
  min-width: 400px;
  padding: 20px 25px 15px 25px;
  margin: 30px;
    position: relative;
    box-shadow: 0 2px 4px 0 rgba(0, 0, 0, 0.2),
                0 5px 10px 0 rgba(0, 0, 0, 0.1);
}
</style>

The App component doesn't do much; it just provides the base HTML and CSS styles that the other components will be rendered inside. The <router-view> element loads a component provided by the Vue router, which will render either the Dashboard or Login component depending on the path in the address bar.

If you look back at boot-app.js, you’ll see this line:

import App from './components/App'

This statement loads the App component, which is then passed to Vue in new Vue(...).

You’re all done building components! It’s time to add some state management. First, I’ll explain what state management is and why it’s useful.

Add Vuex for state management

Components are a great way to break up your app into manageable pieces, but when you start passing data between many components, it can be hard to keep all of that data in sync.

The Vuex library helps solve this problem by creating a store that holds data in a central place. Then, any of your components can get the data they need from the store. Bonus: if the data in the store changes, your components get the updated data immediately!

If you’ve used Flux or Redux, you’ll find Vuex familiar: it’s a state management library with strict rules around how state can be mutated (modified) inside your app.

The template you started with already has Vuex installed. To keep things tidy, create a new folder under ClientApp called store. Then, create a file inside called index.js:

import Vue from 'vue'
import Vuex from 'vuex'
import { state, mutations } from './mutations'
import { actions } from './actions'

Vue.use(Vuex)

export default new Vuex.Store({
  state,
  mutations,
  actions
})

This file initializes Vuex and makes it available to your Vue components, and that’s it. The real meat of Vuex is in mutations and actions, but you’ll write those in separate files to keep everything organized.

Create a file called mutations.js:

import router from '../router'

export const state = {
  todos: [],
  loggedIn: false,
  loginError: null,
  userName: null
}

export const mutations = {
  loggedIn(state, data) {
    state.loggedIn = true
    state.userName = (data.name || '').split(' ')[0] || 'Hello'

    let redirectTo = state.route.query.redirect || '/'
    router.push(redirectTo)
  },

  loggedOut(state) {
    state.loggedIn = false
    router.push('/login')
  },

  loginError(state, message) {
    state.loginError = message
  },

  loadTodos(state, todos) {
    state.todos = todos || [];
  }
}

This file defines two things: the state or data that’s shared across the app, and mutations that change that state. Vuex follows a few simple rules:

  • State is immutable — it can’t be changed except by a mutation.
  • Mutations can change state, but they must be synchronous. Async code (like API calls) must run in an action instead.
  • Actions run asynchronous code, then commit mutations, which change state.

Enforcing this hierarchy of rules makes it easier to understand how data and changes flow through your app. If some piece of state changes, you know that a mutation caused the change.
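The whole contract can be sketched in a few lines of plain JavaScript; this is a toy store built for illustration, not the real Vuex API:

```javascript
// Toy store: actions may run async code and commit mutations;
// mutations are the only code that writes to state.
const state = { todos: [] }

const mutations = {
  loadTodos (state, todos) { state.todos = todos || [] }
}

const actions = {
  async getAllTodos ({ commit }) {
    const fromApi = [{ text: 'Fake to-do item' }] // pretend API call
    commit('loadTodos', fromApi)
  }
}

const store = {
  state,
  commit: (type, payload) => mutations[type](state, payload),
  dispatch: (type, payload) => actions[type](store, payload)
}

store.dispatch('getAllTodos') // state.todos now holds the fetched items
```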

As you can see, this app uses the Vuex store to keep track of both the to-do list (the state.todos array), and authentication state (whether the user is logged in, what their name is). The Dashboard and Login components access this data with computed properties like:

todos () {
  return this.$store.state.todos
},

The mutations defined here are only half the story, because they only handle updating the state after an action has run. Create another file called actions.js:

import axios from 'axios'

const sleep  = ms => {
  return new Promise(resolve => setTimeout(resolve, ms))
}

export const actions = {
  checkLoggedIn({ commit }) {
    // Todo: commit('loggedIn') if the user is already logged in
  },

  async login({ dispatch, commit }, data) {
    // Todo: log the user in
    commit('loggedIn', { userName: data.email })
  },

  async logout({ commit }) {
      // Todo: log the user out
      commit('loggedOut')
  },

  async loginFailed({ commit }, message) {
    commit('loginError', message)
    await sleep(3000)
    commit('loginError', null)
  },

  async getAllTodos({ commit }) {
    // Todo: get the user's to-do items
    commit('loadTodos', [{ text: 'Fake to-do item' }])
  },

  async addTodo({ dispatch }, data) {
    // Todo: save a new to-do item
    await dispatch('getAllTodos')
  },

  async toggleTodo({ dispatch }, data) {
    // Todo: toggle to-do item completed/not completed
    await dispatch('getAllTodos')
  },

  async deleteTodo({ dispatch }, id) {
    // Todo: delete to-do item
    await dispatch('getAllTodos')
  }
}

Most of these actions are marked with // Todo (no pun intended), because you’ll need to revisit them after you have the backend API in place. For now, the getAllTodos action commits the loadTodos mutation with a fake to-do item. Later, this action will call your API to retrieve the user’s to-do items and then commit the same mutation with the items returned from the API.
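To preview where this is headed, the final getAllTodos might look roughly like the sketch below. The makeGetAllTodos wrapper and the injected http client are inventions of this sketch so it stays self-contained; in the real action you'd call axios.get('/api/todo') directly:

```javascript
// Hypothetical final shape of getAllTodos: fetch the items from the
// backend API, then commit the same loadTodos mutation as before.
// `http` stands in for axios so this sketch needs no network access.
const makeGetAllTodos = http => async ({ commit }) => {
  const response = await http.get('/api/todo')
  commit('loadTodos', response.data)
}

// Exercise it with a stubbed client and a recording commit:
const fakeHttp = {
  get: async () => ({ data: [{ text: 'Learn Vue.js', completed: true }] })
}
const committed = []
const pending = makeGetAllTodos(fakeHttp)({
  commit: (type, payload) => committed.push([type, payload])
})
```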

Note: Because the starter template includes Babel and the transform-async-to-generator plugin, the new async/await syntax in ES2017 is available. I love async/await, because it makes dealing with async things like API calls much easier (no more big chains of Promises, or callback hell). As you’ll see in the next section, C# uses the same syntax!

Run the app

You still need to add the backend API and authentication bits, but let’s take a quick break and see what you’ve built so far. Start up the app with dotnet run and browse to http://localhost:5000. Log in with a fake username and password:

Logged in with a fake user

Tip: If you need to fix bugs or make changes, you don’t need to stop and restart the server with dotnet run again. As soon as you modify any of your Vue or JavaScript files, the frontend app will be recompiled automatically. Try making a change to the Dashboard component and see it appear instantly in your browser (like magic).

The to-do item is fake, but your app is very real! You’ve set up Vue.js, built components and routing, and added state management with Vuex. The next step is adding the backend API with ASP.NET Core. Grab a refill of coffee and let’s dive in!

Add APIs with ASP.NET Core

The user’s to-do items will be stored in an online database so they can be accessed anywhere. Your frontend app won’t access this data directly. Instead, the Vuex actions will call your backend API, which will retrieve the data and return it to the frontend.

This pattern (JavaScript code calling a backend API) is a common way to architect modern apps. The API can be written in any language you prefer. In this tutorial, you’ll write the API in C# using the ASP.NET Core framework.

Tip: If you want an introduction to ASP.NET Core from the ground up, check out my free Little ASP.NET Core Book!

The template you started from already includes the scaffolding you need for a basic ASP.NET Core project:

  • The Startup.cs file, which configures the project and defines the middleware pipeline. You’ll modify this file later.
  • A pair of controllers in the aptly-named Controllers folder.

In ASP.NET Core, Controllers handle requests to specific routes in your app or API. The HomeController contains boilerplate code that handles the root route / and renders your frontend app. You won’t need to modify it. The SampleDataController, on the other hand, can be deleted. Time to write a new controller!

Create a new file in the Controllers folder called TodoController.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

namespace Vue2Spa.Controllers
{
    [Route("api/[controller]")]
    public class TodoController : Controller
    {
        // Handles GET /api/todo
        [HttpGet]
        public async Task<IActionResult> GetAllTodos()
        {
            // TODO: Get to-do items and return to frontend
        }
    }
}

The Route attribute at the top of the controller with a value of "api/[controller]" tells ASP.NET Core that this controller will be handling the route http://yourdomain/api/todo. Inside the controller, the GetAllTodos method is decorated with the HttpGet attribute to indicate that it should handle an HTTP GET. When your frontend code makes a GET to /api/todo, the GetAllTodos method will run.

Define a model

Before you return to-do items from the method, you need to define the structure of the object, or model, you’ll return. Since C# is a statically-typed language (as opposed to a dynamic language like JavaScript), it’s typical to define these types in advance.

Create a new folder at the root of the project (next to the Controllers folder) called Models. Inside, create a file called TodoItemModel.cs:

using System;

namespace Vue2Spa.Models
{
    public class TodoItemModel
    {
        public Guid Id { get; set; }

        public string Text { get; set; }

        public bool Completed { get; set; }
    }
}

This model defines a few simple properties for all to-do items: an ID, some text, and a boolean indicating whether the to-do is complete. When you fill this model with data and return it from your controller, ASP.NET Core will automatically serialize these properties to JSON that your frontend code can easily consume.

Define a service

You could return a model directly from the GetAllTodos method, but it’s common to add another layer. In the next section, you’ll add code to look up the user’s profile in Okta and retrieve their to-do items. To keep everything organized, you can create a service that will wrap up the code that retrieves the user’s to-do items.

Create one more folder in the project root called Services, and add a file called ITodoItemService.cs:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Vue2Spa.Models;

namespace Vue2Spa.Services
{
    public interface ITodoItemService
    {
        Task<IEnumerable<TodoItemModel>> GetItems(string userId);

        Task AddItem(string userId, string text);

        Task UpdateItem(string userId, Guid id, TodoItemModel updatedData);

        Task DeleteItem(string userId, Guid id);
    }
}

This file describes an interface — a feature in C# (and many other languages) that defines the methods available in a particular class without providing the “concrete” implementation. This makes it easy to test and swap out implementations during development.

Create a new file called FakeTodoItemService.cs that will be a temporary implementation until you add the connection to Okta in the next section:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Vue2Spa.Models;

namespace Vue2Spa.Services
{
    public class FakeTodoItemService : ITodoItemService
    {
        public Task<IEnumerable<TodoItemModel>> GetItems(string userId)
        {
            var todos = new[]
            {
                new TodoItemModel { Text = "Learn Vue.js", Completed = true },
                new TodoItemModel { Text = "Learn ASP.NET Core" }
            };

            return Task.FromResult(todos.AsEnumerable());
        }

        public Task AddItem(string userId, string text)
        {
            throw new NotImplementedException();
        }

        public Task DeleteItem(string userId, Guid id)
        {
            throw new NotImplementedException();
        }

        public Task UpdateItem(string userId, Guid id, TodoItemModel updatedData)
        {
            throw new NotImplementedException();
        }
    }
}

This dummy service will always return the same to-do items. You’ll replace it later, but it will let you test the app and make sure everything is working.

Use the service

With the controller, model, and service all in place, all you need to do is connect them together. First, open up the Startup.cs file and add this line anywhere in the ConfigureServices method:

services.AddSingleton<ITodoItemService, FakeTodoItemService>();

You’ll also need to add this using statement to the top of the file:

using Vue2Spa.Services;

Adding the new service to the ConfigureServices method makes it available throughout the ASP.NET Core project. In your TodoController, add this code at the top of the class:

public class TodoController : Controller
{
    private readonly ITodoItemService _todoItemService;

    public TodoController(ITodoItemService todoItemService)
    {
        _todoItemService = todoItemService;
    }

    // Existing code...

Adding this code causes ASP.NET Core to inject an ITodoItemService object into the controller. Because you’re using the interface here, your controller doesn’t know (or care) which implementation of the ITodoItemService it receives. It’s currently the FakeTodoItemService, but later it’ll be a more interesting (and real) implementation.
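The same idea can be sketched in JavaScript (the names here are hypothetical, just to illustrate the pattern): the controller only depends on the shape of the service it's handed, so a fake can be swapped in without the controller changing.

```javascript
// Constructor injection sketched in JavaScript: the controller depends
// only on the service's method names, not on any concrete class.
class TodoController {
  constructor(todoItemService) {
    this.todoItemService = todoItemService
  }

  getAllTodos(userId) {
    return this.todoItemService.getItems(userId)
  }
}

// A fake implementation can be handed in without touching the controller
const fakeService = {
  getItems: (userId) => [{ text: 'Learn Vue.js', completed: true }]
}

const controller = new TodoController(fakeService)
```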

Add this using statement at the top of the file:

using Vue2Spa.Services;

Finally, add this code to the GetAllTodos method:

var userId = "123"; // TODO: Get actual user ID
var todos = await _todoItemService.GetItems(userId);

return Ok(todos);

When a request comes into the GetAllTodos method, the controller calls the ITodoItemService to get the to-do items for the current user, and then returns HTTP OK (200) with the to-do items.

That takes care of the backend API (for now). The frontend now needs to be updated to call the /api/todo route to get the user’s to-do items. In actions.js, update the getAllTodos function:

async getAllTodos({ commit }) {
  let response = await axios.get('/api/todo')

  if (response && response.data) {
    let updatedTodos = response.data
    commit('loadTodos', updatedTodos)
  }
},

The new action code uses the axios HTTP library to make a request to the backend on the /api/todo route, which will be handled by the GetAllTodos method on the TodoController. If data is returned, the loadTodos mutation is committed and the Vuex store is updated with the user’s to-do items. The Dashboard view will automatically see the updated data in the store and render the items in the browser.
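The loadTodos mutation itself lives in the project's Vuex store. A minimal sketch of what it might look like (the mutation name comes from the commit call above; the rest is assumed):

```javascript
// A minimal sketch of a Vuex-style mutation matching the commit above.
// The real mutation lives in the project's store; this only shows the shape.
const state = { todos: [] }

const mutations = {
  loadTodos(state, todos) {
    state.todos = todos
  }
}

mutations.loadTodos(state, [{ text: 'Learn Vue.js', completed: true }])
```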

Ready to test it out? Run the project with dotnet run and browse to http://localhost:5000:

Logged in with an Okta user

The data is still fake, but less fake than before! You’ve successfully connected the backend and frontend and have data flowing between them.

The final step is to add authentication and real data storage to the app. You’re almost there!

Add identity and security with Okta

Okta is a cloud-hosted identity API that makes it easy to add authentication, authorization, and user management to your web and mobile apps. You’ll use it in this project to:

  • Add functionality to the Login component
  • Require authentication on the backend API
  • Store each user’s to-do items securely

To get started, sign up for a free Okta Developer account. After you activate your new account (called an Okta Organization, or Org for short), click Applications at the top of the screen. Choose Single-Page App and change both the base URI and login redirect URI to http://localhost:5000:

Okta application settings

After you click Done, you’ll be redirected to the new application’s details. Scroll down and copy the Client ID — you’ll need it in a minute.

Add a custom user profile field

By default, Okta stores basic information about your users: first name, last name, email, and so on. If you want to store more, Okta supports custom profile fields that can store any type of user data you need. You can use this to store the to-do items for each user right on the user profile — no extra database needed!

To add a custom field, open the Users menu at the top of the screen and click on Profile Editor. On the first row (Okta User), click Profile to edit the default user profile. Add a string attribute called todos:

Add custom field in Okta profile

Next, you need to connect your frontend code to Okta.

Add the Okta Auth SDK

The Okta Auth SDK provides methods that make it easy to authenticate users from JavaScript code. Install it with npm:

npm install @okta/okta-auth-js@1.11.0

Create a file in the ClientApp folder called oktaAuth.js that holds the Auth SDK configuration and makes the client available to the rest of your Vue app:

import OktaAuth from '@okta/okta-auth-js'

const org = '{{yourOktaOrgUrl}}',
      clientId = '{{appClientId}}',
      redirectUri = 'http://localhost:5000',
      authorizationServer = 'default'

const oktaAuthClient = new OktaAuth({
  url: org,
  issuer: authorizationServer,
  clientId,
  redirectUri
})

export default {
  client: oktaAuthClient
}

Replace {{yourOktaOrgUrl}} with your Okta Org URL, which usually looks like this: https://dev-12345.oktapreview.com. You can find it in the top right corner of the Dashboard page.

Next, paste the Client ID you copied from the application you created a minute ago into the clientId property.

The checkLoggedIn, login, and logout actions can now be replaced with real implementations in actions.js:

checkLoggedIn({ commit }) {
  if (oktaAuth.client.tokenManager.get('access_token')) {
    let idToken = oktaAuth.client.tokenManager.get('id_token')
    commit('loggedIn', idToken.claims)
  }
},

async login({ dispatch, commit }, data) {
  let authResponse
  try {
    authResponse = await oktaAuth.client.signIn({
      username: data.email,
      password: data.password
    });
  }
  catch (err) {
    let message = err.message || 'Login error'
    dispatch('loginFailed', message)
    return
  }

  if (authResponse.status !== 'SUCCESS') {
    console.error('Login unsuccessful, or more info required', authResponse.status)
    dispatch('loginFailed', 'Login error')
    return
  }

  let tokens
  try {
    tokens = await oktaAuth.client.token.getWithoutPrompt({
      responseType: ['id_token', 'token'],
      scopes: ['openid', 'email', 'profile'],
      sessionToken: authResponse.sessionToken,
    })
  }
  catch (err) {
    let message = err.message || 'Login error'
    dispatch('loginFailed', message)
    return
  }

  // Verify ID token validity
  try {
    await oktaAuth.client.token.verify(tokens[0])
  } catch (err) {
    dispatch('loginFailed', 'An error occurred')
    console.error('id_token failed validation')
    return
  }

  oktaAuth.client.tokenManager.add('id_token', tokens[0]);
  oktaAuth.client.tokenManager.add('access_token', tokens[1]);

  commit('loggedIn', tokens[0].claims)
},

async logout({ commit }) {
  oktaAuth.client.tokenManager.clear()
  await oktaAuth.client.signOut()
  commit('loggedOut')
},

These actions delegate to the Okta Auth SDK, which calls the Okta authentication API to log the user in and get access and ID tokens for the user via OpenID Connect. The Auth SDK also stores and manages the tokens for your app.
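For context, the stored ID token that checkLoggedIn reads looks roughly like this (all values here are made up for illustration):

```javascript
// Roughly the shape of the stored ID token the checkLoggedIn action reads.
// All values are illustrative, not real Okta data.
const idToken = {
  idToken: 'header.payload.signature', // the raw JWT string
  claims: {
    sub: '00u1abcd',                   // a made-up Okta user ID
    email: 'user@example.com',
    name: 'Test User'
  }
}

// checkLoggedIn commits idToken.claims to the store
const committed = idToken.claims
```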

You’ll also need to add an import statement at the top of actions.js:

import oktaAuth from '../oktaAuth'

Try it out: run the server with dotnet run and try logging in with the email and password you used to sign up for Okta:

Logged in and sending real API requests

Try logging in, refreshing the page (you should still be logged in!), and logging out.

That takes care of authenticating the user on the frontend. The Vuex store will keep track of the authentication state, and the Okta Auth SDK will handle login, logout, and keeping the user’s tokens fresh.

To secure the backend API, you need to configure ASP.NET Core to use token authentication and require a token when the frontend code makes a request.

Add API token authentication

Under the hood, the Okta Auth SDK uses OpenID Connect to get access and ID tokens when the user logs in. The ID token is used to display the user’s name in the Vue app, and the access token can be used to secure the backend API.

Open up the Startup.cs file and add this code to the ConfigureServices method:

services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
.AddJwtBearer(options =>
{
    options.Authority = "{{yourOktaOrgUrl}}/oauth2/default";
    options.Audience = "api://default";
});

This code adds token authentication to the ASP.NET Core authentication system. With this in place, your frontend code will need to attach an access token to requests in order to access the API.

Make sure you replace {{yourOktaOrgUrl}} with your Okta Org URL (find it in the top-right of your Okta developer console’s Dashboard).

You’ll also need to add this using statement at the top of the file:

using Microsoft.AspNetCore.Authentication.JwtBearer;

And this line down in the Configure method:

app.UseStaticFiles();

// Add this:
app.UseAuthentication();

app.UseMvc(...

The Configure method defines the middleware pipeline for an ASP.NET Core project, or the list of handlers that modify incoming requests. Adding UseAuthentication() makes it possible for you to require authentication for your controllers.
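Ordering matters here: authentication runs before MVC, so controllers see the authenticated user. The pipeline idea can be sketched in plain JavaScript (a toy model, not ASP.NET Core's actual machinery):

```javascript
// The middleware-pipeline idea sketched in plain JavaScript: each handler
// sees the request and decides whether to pass it along via next().
const pipeline = [
  (req, next) => { req.servedStatic = false; next() },   // UseStaticFiles analogue
  (req, next) => { req.authenticated = true; next() },   // UseAuthentication analogue
  (req, next) => { req.handledBy = 'mvc' }               // UseMvc analogue (terminal)
]

function run(req) {
  let i = 0
  const next = () => { if (i < pipeline.length) pipeline[i++](req, next) }
  next()
  return req
}
```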

Controllers need to opt in to the authentication check by adding the [Authorize] attribute at the top of the controller. Add this in the TodoController:

[Route("api/[controller]")]
[Authorize] // Add this
public class TodoController : Controller
{
    // ...

If you tried running the app and looking at your browser’s network console, you’d see a failed API request:

API request returns 401

The TodoController is responding with 401 Unauthorized because it now requires a valid token to access the /api/todo route, and your frontend code isn’t sending a token.

Open up actions.js once more and add a small function below sleep that attaches the user’s token to the HTTP Authorization header:

const addAuthHeader = () => {
  return {
    headers: {
        'Authorization': 'Bearer '
            + oktaAuth.client.tokenManager.get('access_token').accessToken
    }
  }
}

Then, update the code that calls the backend in the getAllTodos function:

let response = await axios.get('/api/todo', addAuthHeader())

Refresh the page (or start the server) and the now-authenticated request will succeed once again.

Add the Okta .NET SDK

You’re almost done! The final task is to store and retrieve the user’s to-do items in the Okta custom profile attribute you set up earlier. You’ll use the Okta .NET SDK to do this in a few lines of backend code.

Stop the ASP.NET Core server (if it’s running), and install the Okta .NET SDK in your project with the dotnet tool:

dotnet add package Okta.Sdk --version 1.0.0-alpha4

Open the Startup.cs file again and add this code anywhere in the ConfigureServices method:

services.AddSingleton<IOktaClient>(new OktaClient(new OktaClientConfiguration
{
    OrgUrl = "{{yourOktaOrgUrl}}",
    Token = Configuration["okta:token"]
}));

This makes the Okta .NET SDK available to the whole project as a service. You’ll also need to add these lines to the top of the file:

using Okta.Sdk;
using Okta.Sdk.Configuration;

Remember to replace {{yourOktaOrgUrl}} with your Okta Org URL.

Get an Okta API token

The Okta SDK needs an Okta API token to call the Okta API. This is used for management tasks (like storing and retrieving user profile data), and is separate from the Bearer tokens you’re using for user authentication.

Generate an Okta API token in the Okta developer console by hovering on API and clicking Tokens. Create a token and copy the value.

The Okta API token is sensitive and should be protected, because it allows you to do any action in the Okta API, including deleting users and applications! Because of this, you shouldn’t store it in code that gets checked into source control. Instead, use the .NET Secret Manager tool.

Tip: If you’re using Visual Studio 2017 on Windows, you can right-click the project in the Solution Explorer and choose Manage user secrets. Then you can skip the installation steps and jump down to adding the secret value with dotnet user-secrets set.

Open up the Vue2Spa.csproj file and add this line near the existing DotNetCliToolReference line:

<DotNetCliToolReference Include="Microsoft.Extensions.SecretManager.Tools" Version="2.0.0" />

The .csproj file is the main project file for any ASP.NET Core application. It defines the packages that are installed in the project, and some other metadata. Adding this line installs the Secret Manager tool in this project. Run a package restore to make sure the tool gets installed, and test it out:

dotnet restore
dotnet user-secrets -h

Next, add another line near the top of the Vue2Spa.csproj file, right under the TargetFramework element:

<UserSecretsId>{{some random value}}</UserSecretsId>

Generate a random GUID as the ID value, and save the project file.

Grab the Okta API token value and store it using the Secret Manager:

dotnet user-secrets set okta:token {{oktaApiToken}}

To make the values stored in the Secret Manager available to your application, you need to add the Secret Manager as a configuration source in Startup.cs. At the top of the file, in the Startup (constructor) method, add this code:

// ... existing code
.AddEnvironmentVariables();

// Add this:
if (env.IsDevelopment())
{
    builder.AddUserSecrets<Startup>();
}

// Existing code... 
Configuration = builder.Build();

With that, the Okta .NET SDK will have an API token it can use to call the Okta API. You’ll use the SDK to store and retrieve the user’s to-do items.

Use Okta for user data storage

Remember the FakeTodoItemService you created before? It’s time to replace it with a new service that uses Okta to store the user’s to-do items. Create OktaTodoItemService.cs in the Services folder:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Newtonsoft.Json;
using Okta.Sdk;
using Vue2Spa.Models;

namespace Vue2Spa.Services
{
    public class OktaTodoItemService : ITodoItemService
    {
        private const string TodoProfileKey = "todos";

        private readonly IOktaClient _oktaClient;

        public OktaTodoItemService(IOktaClient oktaClient)
        {
            _oktaClient = oktaClient;
        }

        private IEnumerable<TodoItemModel> GetItemsFromProfile(IUser oktaUser)
        {
            if (oktaUser == null)
            {
                return Enumerable.Empty<TodoItemModel>();
            }

            var json = oktaUser.Profile.GetProperty<string>(TodoProfileKey);
            if (string.IsNullOrEmpty(json))
            {
                return Enumerable.Empty<TodoItemModel>();
            }

            return JsonConvert.DeserializeObject<TodoItemModel[]>(json);
        }

        private async Task SaveItemsToProfile(IUser user, IEnumerable<TodoItemModel> todos)
        {
            var json = JsonConvert.SerializeObject(todos.ToArray());

            user.Profile[TodoProfileKey] = json;
            await user.UpdateAsync();
        }

        public async Task AddItem(string userId, string text)
        {
            var user = await _oktaClient.Users.GetUserAsync(userId);

            var existingItems = GetItemsFromProfile(user)
                .ToList();

            existingItems.Add(new TodoItemModel
            {
                Id = Guid.NewGuid(),
                Completed = false,
                Text = text
            });

            await SaveItemsToProfile(user, existingItems);
        }

        public async Task DeleteItem(string userId, Guid id)
        {
            var user = await _oktaClient.Users.GetUserAsync(userId);

            var updatedItems = GetItemsFromProfile(user)
                .Where(item => item.Id != id);

            await SaveItemsToProfile(user, updatedItems);
        }

        public async Task<IEnumerable<TodoItemModel>> GetItems(string userId)
        {
            var user = await _oktaClient.Users.GetUserAsync(userId);
            return GetItemsFromProfile(user);
        }

        public async Task UpdateItem(string userId, Guid id, TodoItemModel updatedData)
        {
            var user = await _oktaClient.Users.GetUserAsync(userId);

            var existingItems = GetItemsFromProfile(user)
                .ToList();

            var itemToUpdate = existingItems
                .FirstOrDefault(item => item.Id == id);
            if (itemToUpdate == null)
            {
                return;
            }

            // Update the item with the new data
            itemToUpdate.Completed = updatedData.Completed;
            if (!string.IsNullOrEmpty(updatedData.Text))
            {
                itemToUpdate.Text = updatedData.Text;
            }

            await SaveItemsToProfile(user, existingItems);
        }
    }
}

Okta custom profile fields are limited to storing primitives like strings and numbers, but you’re using the TodoItemModel type to represent to-do items. This service serializes the strongly-typed items to a JSON array and stores them as a string. It’s not the fastest data storage mechanism, but it works!
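The round trip the service performs can be sketched with JavaScript's standard JSON functions (JsonConvert plays the equivalent role on the C# side):

```javascript
// What SaveItemsToProfile / GetItemsFromProfile do with the profile field,
// sketched with JavaScript's JSON functions.
const todos = [
  { id: '00000000-0000-0000-0000-000000000001', text: 'Learn Vue.js', completed: true }
]

const storedString = JSON.stringify(todos)  // goes into the 'todos' profile field
const recovered = JSON.parse(storedString)  // comes back out on the next read
```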

Since you’ve created a new service class, update the line in the Startup.cs file to use the OktaTodoItemService instead of the FakeTodoItemService:

services.AddSingleton<ITodoItemService, OktaTodoItemService>();

The TodoController will now use the new service when it interacts with the ITodoItemService interface. Update the controller with some new code and methods:

// GET /api/todo
[HttpGet]
public async Task<IActionResult> GetAllTodos()
{
    var userId = User.Claims.FirstOrDefault(c => c.Type == "uid")?.Value;
    if (string.IsNullOrEmpty(userId)) return BadRequest();

    var todos = await _todoItemService.GetItems(userId);
    var todosInReverseOrder = todos.Reverse();

    return Ok(todosInReverseOrder);
}

// POST /api/todo
[HttpPost]
public async Task<IActionResult> AddTodo([FromBody]TodoItemModel newTodo)
{
    if (string.IsNullOrEmpty(newTodo?.Text)) return BadRequest();

    var userId = User.Claims.FirstOrDefault(c => c.Type == "uid")?.Value;
    if (string.IsNullOrEmpty(userId)) return BadRequest();

    await _todoItemService.AddItem(userId, newTodo.Text);

    return Ok();
}

// POST /api/todo/{id}
[HttpPost("{id}")]
public async Task<IActionResult> UpdateTodo(Guid id, [FromBody]TodoItemModel updatedData)
{
    var userId = User.Claims.FirstOrDefault(c => c.Type == "uid")?.Value;
    if (string.IsNullOrEmpty(userId)) return BadRequest();

    await _todoItemService.UpdateItem(userId, id, updatedData);

    return Ok();
}

// DELETE /api/todo/{id}
[HttpDelete("{id}")]
public async Task<IActionResult> DeleteTodo(Guid id)
{
    var userId = User.Claims.FirstOrDefault(c => c.Type == "uid")?.Value;
    if (string.IsNullOrEmpty(userId)) return BadRequest();

    try
    {
        await _todoItemService.DeleteItem(userId, id);
    }
    catch (Exception ex)
    {
        return BadRequest(ex.Message);
    }

    return Ok();
}

And add one more using statement at the top:

using Vue2Spa.Models;

In each method, the first step is to extract the user’s ID from the Bearer token attached to the incoming request. The ID is then passed along to the service method, and the Okta .NET SDK uses it to find the right user’s profile.
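To make that concrete, here's a sketch (in JavaScript, with made-up values) of where a claim like uid lives inside a JWT: it's a field in the base64-encoded middle segment of the token. Real JWTs use the URL-safe base64 variant and are cryptographically signed; both details are omitted here.

```javascript
// A toy JWT: the claims live in the (base64-encoded) middle segment.
// Real tokens use base64url encoding and carry a signed header/signature.
const payload = { sub: 'user@example.com', uid: '00u1abcd' } // made-up claims
const middle = Buffer.from(JSON.stringify(payload)).toString('base64')
const toyJwt = `header.${middle}.signature`

// Decoding the middle segment recovers the claims that ASP.NET Core
// surfaces as User.Claims on the server side.
const claims = JSON.parse(Buffer.from(toyJwt.split('.')[1], 'base64').toString())
```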

Finish out the frontend code by adding the last few actions to actions.js:

  async addTodo({ dispatch }, data) {
    await axios.post(
      '/api/todo',
      { text: data.text },
      addAuthHeader())

    await dispatch('getAllTodos')
  },

  async toggleTodo({ dispatch }, data) {
    await axios.post(
      '/api/todo/' + data.id,
      { completed: data.completed },
      addAuthHeader())

    await dispatch('getAllTodos')
  },

  async deleteTodo({ dispatch }, id) {
    await axios.delete('/api/todo/' + id, addAuthHeader())
    await dispatch('getAllTodos')
  }

Start the server one more time with dotnet run and try adding a real to-do item to the list:

Final application

Learn More!

If you made it all the way to the end, congratulations! I’d love to hear about what you built. Shoot me a tweet @nbarbettini and tell me about it!

Feel free to download the final project’s code on GitHub.

If you want to keep building, here’s what you could do next:

  • Add a form in Vue (and a controller on the backend) to let a new user create an account
  • Store a timestamp when a new to-do item is added, and display it with each item
  • Speed up the frontend by storing changes in the Vuex store immediately (before the API response arrives)

For more Vue.js inspiration, check out our other recent posts.

Happy coding!

Source:: scotch.io