Mirror of https://github.com/goldbergyoni/nodebestpractices.git (synced 2025-10-30 00:57:04 +08:00)
Merge pull request #175 from utkarshbhatt12/master
fix grammar and formatting in all files
@ -10,7 +10,7 @@
<div align="center">
<img src="https://img.shields.io/badge/⚙%20Item%20count%20-%2052%20Best%20practices-blue.svg" alt="52 items"> <img src="https://img.shields.io/badge/%F0%9F%93%85%20Last%20update%20-%20Mar%2018%202018-green.svg" alt="Last update: Mar 18, 2018"> <img src="https://img.shields.io/badge/%E2%9C%94%20Updated%20For%20Version%20-%20Node%208.10-brightgreen.svg" alt="Updated for Node v.8.10">
</div>
</div>

<br/>
@ -117,7 +117,7 @@

## ![✔] 2.2 Use only the built-in Error object

**TL;DR:** Many developers throw errors as a string or as some custom type – this complicates the error handling logic and the interoperability between modules. Whether you reject a promise, throw an exception or emit an error – using only the built-in Error object will increase uniformity and prevent loss of information

**Otherwise:** When invoking some component while being uncertain which type of errors comes in return, proper error handling becomes much harder. Even worse, using custom types to describe errors might lead to loss of critical error information like the stack trace!
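As a hedged illustration of the practice (the `AppError` subclass and its fields are made up for this sketch, not prescribed by the guide):

```javascript
// Prefer the built-in Error (or a subclass of it) over throwing strings
class AppError extends Error {
  constructor(name, httpCode, description, isOperational) {
    super(description);       // keeps the message and the stack trace
    this.name = name;
    this.httpCode = httpCode;
    this.isOperational = isOperational;
  }
}

function addProduct(name) {
  if (!name) {
    // throw 'Product name is required';   // anti-pattern: no stack, no structured context
    throw new AppError('invalidInput', 400, 'Product name is required', true);
  }
}

// the same rule applies to promise rejections and emitted errors
Promise.reject(new Error('Fetching the product failed'));
```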
@ -194,9 +194,9 @@

## ![✔] 2.9 Discover errors and downtime using APM products

**TL;DR:** Monitoring and performance products (a.k.a. APM) proactively gauge your codebase or API so they can automagically highlight errors, crashes and slow parts that you were missing

**Otherwise:** You might spend great effort on measuring API performance and downtime, yet you’ll probably never be aware of which are your slowest code parts under a real-world scenario and how these affect the UX

🔗 [**Read More: using APM products**](/sections/errorhandling/apmproducts.md)
@ -217,9 +217,9 @@

## ![✔] 2.11 Fail fast, validate arguments using a dedicated library

**TL;DR:** This should be part of your Express best practices – assert API input to avoid nasty bugs that are much harder to track later. Validation code is usually tedious unless you are using a very cool helper library like Joi.

**Otherwise:** Consider this – your function expects a numeric argument “Discount” which the caller forgets to pass; later on, your code checks if Discount!=0 (amount of allowed discount is greater than zero) and then allows the user to enjoy a discount. OMG, what a nasty bug. Can you see it?

🔗 [**Read More: failing fast**](/sections/errorhandling/failfast.md)
@ -245,7 +245,7 @@

<br/><br/>

## ![✔] 3.3 Start a Codeblock's Curly Braces on the Same Line

**TL;DR:** The opening curly brace of a code block should be on the same line as the opening statement.
@ -263,7 +263,7 @@
}
```

**Otherwise:** Deviating from this best practice might lead to unexpected results, as seen in the StackOverflow thread below:

🔗 [**Read more:** "Why does a results vary based on curly brace placement?" (Stackoverflow)](https://stackoverflow.com/questions/3641519/why-does-a-results-vary-based-on-curly-brace-placement)
@ -287,9 +287,9 @@

## ![✔] 3.6 Naming conventions for variables, constants, functions and classes

**TL;DR:** Use ***lowerCamelCase*** when naming constants, variables and functions and ***UpperCamelCase*** (capital first letter as well) when naming classes. This will help you to easily distinguish between plain variables/functions, and classes that require instantiation. Use descriptive names, but try to keep them short.

**Otherwise:** JavaScript is the only language in the world which allows invoking a constructor ("Class") directly without instantiating it first. Consequently, Classes and function-constructors are differentiated by starting with UpperCamelCase.

### Code Example ###
```javascript
@ -310,7 +310,7 @@

## ![✔] 3.7 Prefer const over let. Ditch the var

**TL;DR:** Using `const` means that once a variable is assigned, it cannot be reassigned. Preferring const will help you to not be tempted to use the same variable for different uses, and make your code clearer. If a variable needs to be reassigned, in a for loop, for example, use `let` to declare it. Another important aspect of `let` is that a variable declared using it is only available in the block scope in which it was defined. `var` is function scoped, not block scoped, and [shouldn't be used in ES6](https://hackernoon.com/why-you-shouldnt-use-var-anymore-f109a58b9b70) now that you have const and let at your disposal.

**Otherwise:** Debugging becomes way more cumbersome when following a variable that frequently changes.
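A small illustrative sketch of the scoping rules described above (variable names are made up):

```javascript
const taxRate = 0.18;       // cannot be reassigned; doing so throws a TypeError
let total = 0;              // reassignment is expected here, so let is appropriate

for (let i = 0; i < 3; i++) {
  total += taxRate * 100;   // i is limited to the loop's block scope
}

// console.log(i);          // ReferenceError: i is not defined outside the block

var legacy = 'avoid me';    // function scoped and hoisted; prefer const/let instead
```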
@ -320,7 +320,7 @@

## ![✔] 3.8 Requires come first, and not inside functions

**TL;DR:** Require modules at the beginning of each file, before and outside of any functions. This simple best practice will not only help you easily and quickly tell the dependencies of a file right at the top but also avoids a couple of potential problems.

**Otherwise:** Requires are run synchronously by Node.js. If they are called from within a function, it may block other requests from being handled at a more critical time. Also, if a required module or any of its own dependencies throw an error and crash the server, it is best to find out about it as soon as possible, which might not be the case if that module is required from within a function.
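A minimal sketch of the contrast (the `./services/customer-service` path is hypothetical):

```javascript
// Good: dependencies are visible and loaded once, at startup
const fs = require('fs');
const customerService = require('./services/customer-service'); // hypothetical module

function saveReport(report) {
  return fs.promises.writeFile('./report.json', JSON.stringify(report));
}

// Avoid: the synchronous require runs on the first call, in the middle of a request
function saveReportLazily(report) {
  const lateFs = require('fs'); // hidden dependency, resolved at call time
  return lateFs.promises.writeFile('./report.json', JSON.stringify(report));
}
```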
@ -329,10 +329,10 @@
## ![✔] 3.9 Do Require on the folders, not directly on the files

**TL;DR:** When developing a module/library in a folder, place an index.js file that exposes the module's
internals so every consumer will pass through it. This serves as an 'interface' to your module and eases
future changes without breaking the contract.

**Otherwise:** Changing the internal structure of files or the signature may break the interface with
clients.

### Code example
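As a hedged sketch of such an index.js 'interface' (the `calculator` folder and its inner files are invented for illustration and are not the guide's own example):

```javascript
// calculator/index.js – the folder's single public entry point
const calculate = require('./calculate');            // internal file, free to move or rename
const rounding = require('./helpers/rounding');       // internal helper

module.exports = {
  add: calculate.add,
  round: rounding.round
};

// consumer.js – callers depend on the folder, never on its inner files
const calculator = require('./calculator');
calculator.add(2, 3);
```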
@ -386,7 +386,7 @@ All statements above will return false if used with `===`

## ![✔] 3.12 Use Fat (=>) Arrow Functions

**TL;DR:** Though it's recommended to use async-await and avoid function parameters, when dealing with older APIs that accept promises or callbacks – arrow functions make the code structure more compact and keep the lexical context of the root function (i.e. 'this').

**Otherwise:** Longer code (in ES5 functions) is more prone to bugs and cumbersome to read.
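A tiny sketch of the lexical `this` benefit (the `Cart` class is illustrative):

```javascript
class Cart {
  constructor() {
    this.items = [];
  }

  addLater(item) {
    // The arrow function keeps `this` bound to the Cart instance;
    // a classic function expression would lose it inside the callback.
    setTimeout(() => this.items.push(item), 100);
  }
}

new Cart().addLater('book');
```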
@ -402,7 +402,7 @@ All statements above will return false if used with `===`

## ![✔] 4.1 At the very least, write API (component) testing

**TL;DR:** Most projects just don't have any automated testing due to short timetables, or often the 'testing project' runs out of control and is abandoned. For that reason, prioritize and start with API testing, which is the easiest to write and provides more coverage than unit testing (you may even craft API tests without code using tools like [Postman](https://www.getpostman.com/)). Afterward, should you have more resources and time, continue with advanced test types like unit testing, DB testing, performance testing, etc

**Otherwise:** You may spend long days on writing unit tests to find out that you got only 20% system coverage
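A minimal API-test sketch, assuming an Express app exported from `../app` and the `supertest` + `mocha` packages (none of which are mandated by the guide):

```javascript
const request = require('supertest');
const app = require('../app'); // the Express app under test (assumed to exist)

describe('GET /api/orders', () => {
  it('returns 200 and a JSON list', async () => {
    const response = await request(app)
      .get('/api/orders')
      .expect('Content-Type', /json/)
      .expect(200);

    if (!Array.isArray(response.body)) {
      throw new Error('Expected an array of orders');
    }
  });
});
```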
@ -419,7 +419,7 @@ All statements above will return false if used with `===`

## ![✔] 4.3 Carefully choose your CI platform (Jenkins vs CircleCI vs Travis vs Rest of the world)

**TL;DR:** Your continuous integration platform (CICD) will host all the quality tools (e.g. test, lint) so it should come with a vibrant ecosystem of plugins. [Jenkins](https://jenkins.io/) used to be the default for many projects as it has the biggest community along with a very powerful platform, at the price of a complex setup that demands a steep learning curve. Nowadays, it has become much easier to set up a CI solution using SaaS tools like [CircleCI](https://circleci.com) and others. These tools allow crafting a flexible CI pipeline without the burden of managing the whole infrastructure. Eventually, it's a trade-off between robustness and speed - choose your side carefully.

**Otherwise:** Choosing some niche vendor might get you blocked once you need some advanced customization. On the other hand, going with Jenkins might burn precious time on infrastructure setup
@ -476,7 +476,7 @@ All statements above will return false if used with `===`

# `5. Going To Production Practices`

## ![✔] 5.1. Monitoring!

**TL;DR:** Monitoring is a game of finding out issues before customers do – obviously this should be assigned unprecedented importance. The market is overwhelmed with offers, thus consider starting with defining the basic metrics you must follow (my suggestions inside), then go over additional fancy features and choose the solution that ticks all boxes. Click ‘The Gist’ below for an overview of the solutions

**Otherwise:** Failure === disappointed customers. Simple.
@ -489,11 +489,11 @@ All statements above will return false if used with `===`

**TL;DR:** Logs can be a dumb warehouse of debug statements or the enabler of a beautiful dashboard that tells the story of your app. Plan your logging platform from day 1: how logs are collected, stored and analyzed to ensure that the desired information (e.g. error rate, following an entire transaction through services and servers, etc) can really be extracted

**Otherwise:** You end up with a black box that is hard to reason about, then you start re-writing all logging statements to add additional information

🔗 [**Read More: Increase transparency using smart logging**](/sections/production/smartlogging.md)

<br/><br/>

## ![✔] 5.3. Delegate anything possible (e.g. gzip, SSL) to a reverse proxy
@ -509,7 +509,7 @@ All statements above will return false if used with `===`

## ![✔] 5.4. Lock dependencies

**TL;DR:** Your code must be identical across all environments, but amazingly NPM lets dependencies drift across environments by default – when you install packages at various environments it tries to fetch packages’ latest patch version. Overcome this by using NPM config files, .npmrc, that tell each environment to save the exact (not the latest) version of each package. Alternatively, for finer grain control use NPM "shrinkwrap". *Update: as of NPM5, dependencies are locked by default. The new package manager in town, Yarn, also got us covered by default*

**Otherwise:** QA will thoroughly test the code and approve a version that will behave differently at production. Even worse, different servers at the same production cluster might run different code
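A minimal sketch of that configuration – an `.npmrc` committed to the project (the setting shown is the usual way to pin exact versions; adjust to taste):

```
# .npmrc – committed to the repository so every environment saves exact versions
save-exact=true
```

The same effect can be achieved once per machine with `npm config set save-exact true`.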
@ -520,9 +520,9 @@ All statements above will return false if used with `===`

## ![✔] 5.5. Guard process uptime using the right tool

**TL;DR:** The process must go on and get restarted upon failures. For simple scenarios, ‘restarter’ tools like PM2 might be enough, but in today’s ‘dockerized’ world – cluster management tools should be considered as well

**Otherwise:** Running dozens of instances without a clear strategy and too many tools together (cluster management, docker, PM2) might lead to DevOps chaos

🔗 [**Read More: Guard process uptime using the right tool**](/sections/production/guardprocess.md)
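For the simple 'restarter' route, a PM2 ecosystem file might look roughly like this (the app name and script path are placeholders):

```javascript
// ecosystem.config.js – consumed by `pm2 start ecosystem.config.js`
module.exports = {
  apps: [
    {
      name: 'orders-api',        // placeholder name
      script: './server.js',     // placeholder entry point
      instances: 'max',          // one worker per CPU core
      exec_mode: 'cluster',
      max_memory_restart: '300M' // restart a worker that grows past this size
    }
  ]
};
```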
@ -556,7 +556,7 @@ All statements above will return false if used with `===`

**TL;DR:** Monitoring and performance products (a.k.a. APM) proactively gauge codebase and API so they can automagically go beyond traditional monitoring and measure the overall user-experience across services and tiers. For example, some APM products can highlight a transaction that loads too slow on the end-user's side while suggesting the root cause

**Otherwise:** You might spend great effort on measuring API performance and downtime, yet you’ll probably never be aware of which are your slowest code parts under a real-world scenario and how these affect the UX

🔗 [**Read More: Discover errors and downtime using APM products**](/sections/production/apmproducts.md)
@ -569,7 +569,7 @@ All statements above will return false if used with `===`

**TL;DR:** Code with the end in mind, plan for production from day 1. This sounds a bit vague so I’ve compiled a few development tips that are closely related to production maintenance (click Gist below)

**Otherwise:** A world champion IT/DevOps guy won’t save a system that is badly written

🔗 [**Read More: Make your code production-ready**](/sections/production/productoncode.md)
@ -578,7 +578,7 @@ All statements above will return false if used with `===`

## ![✔] 5.10. Measure and guard the memory usage

**TL;DR:** Node.js has a controversial relationship with memory: the v8 engine has soft limits on memory usage (1.4GB) and there are known paths to leak memory in Node’s code – thus watching Node’s process memory is a must. In small apps, you may gauge memory periodically using shell commands but in medium-large apps, consider baking your memory watch into a robust monitoring system

**Otherwise:** Your process memory might leak a hundred megabytes a day, as happened at Walmart
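A bare-bones sketch of the 'small app' route using the built-in `process.memoryUsage()` (the threshold is an arbitrary example):

```javascript
// Log heap usage periodically and warn when it crosses an arbitrary threshold
const MB = 1024 * 1024;
const WARN_THRESHOLD_MB = 1200; // example value, tune per application

setInterval(() => {
  const { rss, heapUsed } = process.memoryUsage();
  console.log(`rss=${Math.round(rss / MB)}MB heapUsed=${Math.round(heapUsed / MB)}MB`);

  if (heapUsed / MB > WARN_THRESHOLD_MB) {
    console.warn('Heap usage is unusually high – investigate for leaks');
  }
}, 30 * 1000);
```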
@ -627,7 +627,7 @@ All statements above will return false if used with `===`

## ![✔] 5.14. Assign ‘TransactionId’ to each log statement

**TL;DR:** Assign the same identifier, transaction-id: {some value}, to each log entry within a single request. Then when inspecting errors in logs, you can easily conclude what happened before and after. Unfortunately, this is not easy to achieve in Node due to its async nature, see code examples inside

**Otherwise:** Looking at a production error log without the context – what happened before – makes it much harder and slower to reason about the issue
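A simplified Express sketch of the idea – note it passes the id explicitly on the request instead of using continuation-local storage, which is the harder part the linked section covers:

```javascript
const crypto = require('crypto');
const express = require('express');
const app = express();

// Attach a per-request transaction id and a tiny scoped log helper
app.use((req, res, next) => {
  req.transactionId = crypto.randomBytes(8).toString('hex');
  req.log = (message) => console.log(`transaction-id=${req.transactionId} ${message}`);
  next();
});

app.get('/orders', (req, res) => {
  req.log('fetching orders'); // every entry of this request carries the same id
  res.json([]);
});

app.listen(3000);
```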
@ -641,7 +641,7 @@ All statements above will return false if used with `===`

**TL;DR:** Set the environment variable NODE_ENV to ‘production’ or ‘development’ to flag whether production optimizations should get activated – many NPM packages determine the current environment and optimize their code for production

**Otherwise:** Omitting this simple property might greatly degrade performance. For example, when using Express for server-side rendering, omitting `NODE_ENV` makes it slower by a factor of three!

🔗 [**Read More: Set NODE_ENV=production**](/sections/production/setnodeenv.md)
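A small sketch of acting on the flag (the 'view cache' setting is just one example of a production-only optimization):

```javascript
// Start the app with: NODE_ENV=production node server.js
const express = require('express');
const app = express();

if (process.env.NODE_ENV === 'production') {
  // enable production-only behaviour, e.g. keep compiled views cached
  app.enable('view cache');
}

app.listen(3000);
```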
@ -707,7 +707,7 @@ All translations are contributed by the community. We will be happy to get any h

# Contributors

## `Yoni Goldberg`

Independent Node.js consultant who works with customers in the USA, Europe, and Israel on building large-scale scalable Node applications. Many of the best practices above were first published in his blog post at [http://www.goldbergyoni.com](http://www.goldbergyoni.com). Reach Yoni at @goldbergyoni or me@goldbergyoni.com

## `Ido Richter`

👨💻 Software engineer, 🌐 web developer, 🤖 emojis enthusiast.
@ -13,12 +13,13 @@

Welcome to the biggest compilation of Node.js best practices. The content below was gathered from all top-ranked books and posts and is updated constantly - when you read here, rest assured that no significant tip slipped away. Feel at home - we love to discuss via PRs, issues or Gitter.

## Table of Contents

* [Project Setup Practices (18)](#project-setup-practices)
* [Code Style Practices (11)](#code-style-practices)
* [Error Handling Practices (14)](#error-handling-practices)
* [Going To Production Practices (21)](#going-to-production-practices)
* [Testing Practices (9)](#deployment-practices)
* [Security Practices (8)](#security-practices)

<br/><br/>
@ -26,7 +27,7 @@ Welcome to the biggest compilation of Node.js best practices. The content below

## ✔ 1. Structure your solution by feature ('microservices')

**TL;DR:** The worst pitfall of large applications is a huge code base with hundreds of dependencies that slows down the developers as they try to incorporate new features. Partitioning into small units ensures that each unit is kept simple and easy to maintain. This strategy pushes the complexity to the higher level - designing the cross-component interactions.

**Otherwise:** Developing a new feature that changes only a few objects demands evaluating how these changes might affect dozens of dependants, and each deployment becomes a fear.
@ -54,11 +55,14 @@ Welcome to the biggest compilation of Node.js best practices. The content below

<br/><br/><br/>

# `Code Style Practices`

<br/><br/><br/>

# `Error Handling Practices`

<p align="right"><a href="#table-of-contents">⬆ Return to top</a></p>

## ✔ Use async-await for async error handling
@ -69,16 +73,14 @@ Welcome to the biggest compilation of Node.js best practices. The content below
🔗 [**Use async-await for async error handling**](/sections/errorhandling/asyncawait.md)

<br/><br/><br/>

# `Going To Production Practices`

<br/><br/><br/>

# `Deployment Practices`

<br/><br/><br/>

# `Security Practices`
@ -12,11 +12,11 @@ Major products and segments

### Understanding the APM marketplace

APM products constitute 3 major segments:

1. Website or API monitoring – external services that constantly monitor uptime and performance via HTTP requests. Can be set up in a few minutes. Following are a few selected contenders: [Pingdom](https://www.pingdom.com/), [Uptime Robot](https://uptimerobot.com/), and [New Relic](https://newrelic.com/application-monitoring)

2. Code instrumentation – a product family which requires embedding an agent within the application to use features like slow code detection, exception statistics, performance monitoring and many more. Following are a few selected contenders: New Relic, App Dynamics

3. Operational intelligence dashboard – this line of products is focused on facilitating the ops team with metrics and curated content that helps to easily stay on top of application performance. This usually involves aggregating multiple sources of information (application logs, DB logs, server logs, etc) and upfront dashboard design work. Following are a few selected contenders: [Datadog](https://www.datadoghq.com/), [Splunk](https://www.splunk.com/), [Zabbix](https://www.zabbix.com/)
@ -1,14 +1,11 @@
# Use Async-Await or promises for async error handling

### One Paragraph Explainer

Callbacks don’t scale well since most programmers are not familiar with them. They force you to check errors all over, deal with nasty code nesting and make it difficult to reason about the code flow. Promise libraries like BlueBird, async, and Q pack a standard code style using RETURN and THROW to control the program flow. Specifically, they support the favorite try-catch error handling style which allows freeing the main code path from dealing with errors in every function

### Code Example – using promises to catch errors

```javascript
doWork()
  .then(doWork)
@ -21,36 +18,45 @@ doWork()
### Anti pattern code example – callback style error handling

```javascript
getData(someParameter, function(err, result) {
    if(err !== null) {
        // do something like calling the given callback function and pass the error
        getMoreData(a, function(err, result) {
            if(err !== null) {
                // do something like calling the given callback function and pass the error
                getMoreData(b, function(c) {
                    getMoreData(d, function(e) {
                        if(err !== null) {
                            // you get the idea?
                        }
                    })
                });
            }
        });
    }
});
```
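For contrast, a hedged async/await version of the same flow, assuming promise-returning variants of the functions above:

```javascript
async function doWorkflow() {
  try {
    const result = await getData(someParameter);
    const more = await getMoreData(result);
    return await getMoreData(more);
  } catch (error) {
    // a single place to handle every failure in the chain
    console.error(error);
    throw error;
  }
}
```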
### Blog Quote: "We have a problem with promises"

From the blog pouchdb.com

> ……And in fact, callbacks do something even more sinister: they deprive us of the stack, which is something we usually take for granted in programming languages. Writing code without a stack is a lot like driving a car without a brake pedal: you don’t realize how badly you need it until you reach for it and it’s not there. The whole point of promises is to give us back the language fundamentals we lost when we went async: return, throw, and the stack. But you have to know how to use promises correctly in order to take advantage of them.
### Blog Quote: "The promises method is much more compact"

From the blog gosquared.com

> ………The promises method is much more compact, clearer and quicker to write. If an error or exception occurs within any of the ops it is handled by the single .catch() handler. Having this single place to handle all errors means you don’t need to write error checking for each stage of the work.
### Blog Quote: "Promises are native ES6, can be used with generators"

From the blog StrongLoop

> ….Callbacks have a lousy error-handling story. Promises are better. Marry the built-in error handling in Express with promises and significantly lower the chances of an uncaught exception. Promises are native ES6, can be used with generators, and ES7 proposals like async/await through compilers like Babel
### Blog Quote: "All those regular flow control constructs you are used to are completely broken"

From the blog Benno’s

> ……One of the best things about asynchronous, callback-based programming is that basically all those regular flow control constructs you are used to are completely broken. However, the one I find most broken is the handling of exceptions. Javascript provides a fairly familiar try…catch construct for dealing with exceptions. The problem with exceptions is that they provide a great way of short-cutting errors up a call stack, but end up being completely useless if the error happens on a different stack…
@ -1,10 +1,10 @@
# Catch unhandled promise rejections

<br/><br/>

### One Paragraph Explainer

Typically, most modern Node.js/Express application code runs within promises – whether within the .then handler, a function callback or in a catch block. Surprisingly, unless a developer remembered to add a .catch clause, errors thrown at these places are not handled by the uncaughtException event-handler and disappear. Recent versions of Node added a warning message when an unhandled rejection pops up; this might help to notice when things go wrong, but it's obviously not a proper error handling method. The straightforward solution is to never forget adding .catch clauses within each promise chain call and redirect to a centralized error handler. However, building your error handling strategy only on developer’s discipline is somewhat fragile. Consequently, it’s highly recommended using a graceful fallback and subscribing to `process.on(‘unhandledRejection’, callback)` – this will ensure that any promise error, if not handled locally, will get its treatment.

<br/><br/>
@ -13,12 +13,14 @@ Typically, most of modern Node.js/Express application code runs within promises
```javascript
DAL.getUserById(1).then((johnSnow) => {
    // this error will just vanish
    if(johnSnow.isAlive == false)
        throw new Error('ahhhh');
});
```

<br/><br/>

### Code example: Catching unresolved and rejected promises

```javascript
@ -34,10 +36,13 @@ process.on('uncaughtException', (error) => {
});
```

<br/><br/>

### Blog Quote: "If you can make a mistake, at some point you will"

From the blog James Nelson

> Let’s test your understanding. Which of the following would you expect to print an error to the console?

```javascript
@ -1,12 +1,9 @@
# Handle errors centrally, through but not within middleware

### One Paragraph Explainer

Without one dedicated object for error handling, greater are the chances that important errors will hide under the radar due to improper handling. The error handler object is responsible for making the error visible, for example by writing to a well-formatted logger, sending events to some monitoring product or to an admin directly via email. A typical error handling flow might be: some module throws an error -> the API router catches the error -> it propagates the error to the middleware (e.g. Express, KOA) which is responsible for catching errors -> a centralized error handler is called -> the middleware is told whether this error is an untrusted error (not operational) so it can restart the app gracefully. Note that it’s a common, yet wrong, practice to handle errors within Express middleware – doing so will not cover errors that are thrown in non-web interfaces

### Code Example – a typical error flow

```javascript
@ -15,7 +12,7 @@ DB.addDocument(newCustomer, (error, result) => {
    if (error)
        throw new Error("Great error explanation comes here", other useful parameters)
});

// API route code, we catch both sync and async errors and forward to the middleware
try {
    customerService.addNew(req.body).then((result) => {
@ -27,7 +24,7 @@ try {
catch (error) {
    next(error);
}

// Error handling middleware, we delegate the handling to the centralized error handler
app.use((err, req, res, next) => {
    errorHandler.handleError(err).then((isOperationalError) => {
@ -65,19 +62,19 @@ app.use((err, req, res, next) => {
```

### Blog Quote: "Sometimes lower levels can’t do anything useful except propagate the error to their caller"

From the blog Joyent, ranked 1 for the keywords “Node.js error handling”

> …You may end up handling the same error at several levels of the stack. This happens when lower levels can’t do anything useful except propagate the error to their caller, which propagates the error to its caller, and so on. Often, only the top-level caller knows what the appropriate response is, whether that’s to retry the operation, report an error to the user, or something else. But that doesn’t mean you should try to report all errors to a single top-level callback, because that callback itself can’t know in what context the error occurred…

### Blog Quote: "Handling each err individually would result in tremendous duplication"

From the blog JS Recipes, ranked 17 for the keywords “Node.js error handling”

> ……In Hackathon Starter api.js controller alone, there are over 79 occurrences of error objects. Handling each err individually would result in a tremendous amount of code duplication. The next best thing you can do is to delegate all error handling logic to an Express middleware…

### Blog Quote: "HTTP errors have no place in your database code"

From the blog Daily JS, ranked 14 for the keywords “Node.js error handling”

> ……You should set useful properties in error objects, but use such properties consistently. And, don’t cross the streams: HTTP errors have no place in your database code. Or for browser developers, Ajax errors have a place in the code that talks to the server, but not code that processes Mustache templates…
@ -1,15 +1,15 @@
# Document API errors using Swagger

### One Paragraph Explainer

REST APIs return results using HTTP status codes; it’s absolutely required for the API user to be aware not only of the API schema but also of potential errors – the caller may then catch an error and tactfully handle it. For example, your API documentation might state in advance that HTTP status 409 is returned when the customer name already exists (assuming the API registers new users) so the caller can correspondingly render the best UX for the given situation. Swagger is a standard that defines the schema of API documentation, offering an eco-system of tools that allow creating documentation easily online, see print screens below

### Blog Quote: "You have to tell your callers what errors can happen"

From the blog Joyent, ranked 1 for the keywords “Node.js logging”

> We’ve talked about how to handle errors, but when you’re writing a new function, how do you deliver errors to the code that called your function? …If you don’t know what errors can happen or don’t know what they mean, then your program cannot be correct except by accident. So if you’re writing a new function, you have to tell your callers what errors can happen and what they mean…

### Useful Tool: Swagger Online Documentation Creator

![Swagger API Scheme](/assets/images/swaggerDoc.png "API error handling")
@ -1,15 +1,12 @@
# Fail fast, validate arguments using a dedicated library

### One Paragraph Explainer

We all know how checking arguments and failing fast is important to avoid hidden bugs (see anti-pattern code example below). If not, read about explicit programming and defensive programming. In reality, we tend to avoid it due to the annoyance of coding it (e.g. think of validating a hierarchical JSON object with fields like email and dates) – libraries like Joi and Validator turn this tedious task into a breeze.

### Wikipedia: Defensive Programming

Defensive programming is an approach to improve software and source code, in terms of: General quality – reducing the number of software bugs and problems. Making the source code comprehensible – the source code should be readable and understandable so it is approved in a code audit. Making the software behave in a predictable manner despite unexpected inputs or user actions.

### Code example: validating complex JSON input using ‘Joi’
@ -19,7 +16,7 @@ var memberSchema = Joi.object().keys({
    birthyear: Joi.number().integer().min(1900).max(2013),
    email: Joi.string().email()
});

function addNewMember(newMember)
{
    // assertions come first
@ -38,13 +35,14 @@ function redirectToPrintDiscount(httpResponse, member, discount)
    if(discount != 0)
        httpResponse.redirect(`/discountPrintView/${member.id}`);
}

redirectToPrintDiscount(httpResponse, someMember);
// forgot to pass the parameter discount, why the heck was the user redirected to the discount screen?
```

### Blog Quote: "You should throw these errors immediately"

From the blog: Joyent

> A degenerate case is where someone calls an asynchronous function but doesn’t pass a callback. You should throw these errors immediately since the program is broken and the best chance of debugging it involves getting at least a stack trace and ideally a core file at the point of the error. To do this, we recommend validating the types of all arguments at the start of the function.
@ -1,18 +1,17 @@
# Monitoring

### One Paragraph Explainer

> At the very basic level, monitoring means you can *easily* identify when bad things happen in production. For example, by getting notified by email or Slack. The challenge is to choose the right set of tools that will satisfy your requirements without breaking the bank. May I suggest, start with defining the core set of metrics that must be watched to ensure a healthy state – CPU, server RAM, Node process RAM (less than 1.4GB), the number of errors in the last minute, number of process restarts, average response time. Then go over some advanced features you might fancy and add to your wish list. Some examples of luxury monitoring features: DB profiling, cross-service measuring (i.e. measure business transaction), front-end integration, expose raw data to custom BI clients, Slack notifications and many others.

Achieving the advanced features demands lengthy setup or buying a commercial product such as Datadog, New Relic and the like. Unfortunately, achieving even the basics is not a walk in the park as some metrics are hardware-related (CPU) and others live within the node process (internal errors), thus all the straightforward tools require some additional setup. For example, cloud vendor monitoring solutions (e.g. AWS CloudWatch, Google StackDriver) will tell you immediately about the hardware metrics but nothing about the internal app behavior. On the other end, log-based solutions such as ElasticSearch lack the hardware view by default. The solution is to augment your choice with the missing metrics; for example, a popular choice is sending application logs to the Elastic stack and configuring some additional agent (e.g. Beat) to share hardware-related information to get the full picture.

### Blog Quote: "We have a problem with promises"

From the blog pouchdb.com, ranked 11 for the keywords “Node Promises”

> … We recommend you to watch these signals for all of your services: Error Rate: Because errors are user facing and immediately affect your customers.
Response time: Because the latency directly affects your customers and business.
Throughput: The traffic helps you to understand the context of increased error rates and the latency too.
Saturation: It tells how “full” your service is. If the CPU usage is 90%, can your system handle more traffic?
…
@ -4,15 +4,13 @@

Distinguishing the following two error types will minimize your app downtime and help avoid crazy bugs: Operational errors refer to situations where you understand what happened and the impact of it – for example, a query to some HTTP service failed due to a connection problem. On the other hand, programmer errors refer to cases where you have no idea why and sometimes where an error came from – it might be some code that tried to read an undefined value or a DB connection pool that leaks memory. Operational errors are relatively easy to handle – usually logging the error is enough. Things become hairy when a programmer error pops up, the application might be in an inconsistent state and there’s nothing better you can do than to restart gracefully

### Code Example – marking an error as operational (trusted)

```javascript
// marking an error object as operational
var myError = new Error("How can I add new product when no value provided?");
myError.isOperational = true;

// or if you're using some centralized error factory (see other examples at the bullet "Use only the built-in Error object")
function appError(commonType, description, isOperational) {
    Error.call(this);
@ -21,31 +19,34 @@ function appError(commonType, description, isOperational) {
    this.description = description;
    this.isOperational = isOperational;
};

throw new appError(errorManagement.commonErrors.InvalidInput, "Describe here what happened", true);
```

### Blog Quote: "Programmer errors are bugs in the program"

From the blog Joyent, ranked 1 for the keywords “Node.js error handling”

> …The best way to recover from programmer errors is to crash immediately. You should run your programs using a restarter that will automatically restart the program in the event of a crash. With a restarter in place, crashing is the fastest way to restore reliable service in the face of a transient programmer error…

### Blog Quote: "No safe way to leave without creating some undefined brittle state"

From Node.js official documentation

> …By the very nature of how throw works in JavaScript, there is almost never any way to safely “pick up where you left off”, without leaking references, or creating some other sort of undefined brittle state. The safest way to respond to a thrown error is to shut down the process. Of course, in a normal web server, you might have many connections open, and it is not reasonable to abruptly shut those down because an error was triggered by someone else. The better approach is to send an error response to the request that triggered the error while letting the others finish in their normal time, and stop listening for new requests in that worker.

### Blog Quote: "Otherwise you risk the state of your application"

From the blog debugable.com, ranked 3 for the keywords “Node.js uncaught exception”

> …So, unless you really know what you are doing, you should perform a graceful restart of your service after receiving an “uncaughtException” exception event. Otherwise, you risk the state of your application, or that of 3rd party libraries to become inconsistent, leading to all kinds of crazy bugs…

### Blog Quote: "There are three schools of thoughts on error handling"

From the blog: JS Recipes

> …There are primarily three schools of thoughts on error handling:
1. Let the application crash and restart it.
2. Handle all possible errors and never crash.
3. A balanced approach between the two
@ -1,12 +1,9 @@
# Exit the process gracefully when a stranger comes to town

### One Paragraph Explainer

Somewhere within your code, an error handler object is responsible for deciding how to proceed when an error is thrown – if the error is trusted (i.e. operational error, see further explanation within best practice #3) then writing to the log file might be enough. Things get hairy if the error is not familiar – this means that some component might be in a faulty state and all future requests are subject to failure. For example, assuming a singleton, stateful token issuer service that threw an exception and lost its state – from now on it might behave unexpectedly and cause all requests to fail. Under this scenario, kill the process and use a ‘Restarter tool’ (like Forever, PM2, etc) to start over with a clean slate.

### Code example: deciding whether to crash

```javascript
@ -16,8 +13,7 @@ process.on('uncaughtException', function(error) {
    if(!errorManagement.handler.isTrustedError(error))
        process.exit(1)
});

// centralized error handler encapsulates error-handling related logic
function errorHandler() {
    this.handleError = function (error) {
@ -33,23 +29,23 @@ function errorHandler() {
|
||||
}
|
||||
```
|
||||
|
||||
|
||||
### Blog Quote: "The best way is to crash"
|
||||
|
||||
From the blog Joyent
|
||||
|
||||
> …The best way to recover from programmer errors is to crash immediately. You should run your programs using a restarter that will automatically restart the program in the event of a crash. With a restarter in place, crashing is the fastest way to restore reliable service in the face of a transient programmer error…
|
||||
|
||||
|
||||
|
||||
### Blog Quote: "There are three schools of thoughts on error handling"
|
||||
|
||||
From the blog: JS Recipes
|
||||
|
||||
|
||||
> …There are primarily three schools of thoughts on error handling:
|
||||
1. Let the application crash and restart it.
|
||||
2. Handle all possible errors and never crash.
|
||||
3. Balanced approach between the two
|
||||
|
||||
3. A balanced approach between the two
|
||||
|
||||
### Blog Quote: "No safe way to leave without creating some undefined brittle state"
|
||||
|
||||
From Node.js official documentation
|
||||
|
||||
> …By the very nature of how throw works in JavaScript, there is almost never any way to safely “pick up where you left off”, without leaking references, or creating some other sort of undefined brittle state. The safest way to respond to a thrown error is to shut down the process. Of course, in a normal web server, you might have many connections open, and it is not reasonable to abruptly shut those down because an error was triggered by someone else. The better approach is to send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.
|
||||
|
||||
> …By the very nature of how throw works in JavaScript, there is almost never any way to safely “pick up where you left off”, without leaking references, or creating some other sort of undefined brittle state. The safest way to respond to a thrown error is to shut down the process. Of course, in a normal web server, you might have many connections open, and it is not reasonable to abruptly shut those down because an error was triggered by someone else. The better approach is to send an error response to the request that triggered the error while letting the others finish in their normal time, and stop listening for new requests in that worker.
|
||||
|
||||
@ -1,12 +1,9 @@
|
||||
# Test error flows using your favorite test framework
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
Testing ‘happy’ paths is no better than testing failures. Good test coverage demands testing exceptional paths as well. Otherwise, there is no trust that exceptions are indeed handled correctly. Every unit testing framework, like [Mocha](https://mochajs.org/) & [Chai](http://chaijs.com/), supports exception testing (code examples below). If you find it tedious to test every inner function and exception, you may settle for testing only REST API HTTP errors.
|
||||
|
||||
|
||||
|
||||
### Code example: ensuring the right exception is thrown using Mocha & Chai
|
||||
|
||||
```javascript
|
||||
@ -38,5 +35,4 @@ it("Creates new Facebook group", function (done) {
|
||||
done();
|
||||
});
|
||||
});
|
||||
|
||||
```
|
||||
```
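For completeness, here is a minimal, self-contained sketch of asserting a thrown error with Mocha & Chai; the `addNewProduct` function and its error message are illustrative assumptions, not part of the original example:

```javascript
const { expect } = require('chai');

// hypothetical function under test – it throws when no product is provided
function addNewProduct(productToAdd) {
  if (!productToAdd)
    throw new Error('How can I add new product when no value provided?');
  return { status: 'saved' };
}

describe('Add new product', () => {
  it('When no product is passed, an error is thrown', () => {
    // expect(...).to.throw asserts that the function throws and that the message matches
    expect(() => addNewProduct()).to.throw('How can I add new product when no value provided?');
  });
});
```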
|
||||
|
||||
@ -2,7 +2,7 @@
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
We all loovve console.log but obviously a reputable and persisted Logger like [Winston][winston], [Bunyan][bunyan] (highly popular) or [Pino][pino] (the new kid in town which is focused on performance) is mandatory for serious projects. A set of practices and tools will help to reason about errors much quicker – (1) log frequently using different levels (debug, info, error), (2) when logging, provide contextual information as JSON objects, see example below. (3) watch and filter logs using a log querying API (built-in in most loggers) or a log viewer software
|
||||
We all love console.log but obviously, a reputable and persistent logger like [Winston][winston], [Bunyan][bunyan] (highly popular) or [Pino][pino] (the new kid in town which is focused on performance) is mandatory for serious projects. A set of practices and tools will help to reason about errors much quicker – (1) log frequently using different levels (debug, info, error), (2) when logging, provide contextual information as JSON objects, see example below. (3) watch and filter logs using a log querying API (built-in in most loggers) or a log viewer software
|
||||
(4) Expose and curate log statements for the operations team using operational intelligence tools like Splunk
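To illustrate points (1) and (2), a minimal sketch using Winston’s default logger; the field names and messages are illustrative and assume a console transport is configured:

```javascript
const winston = require('winston');

// (1) log at the appropriate level – debug, info, warn, error
// (2) attach contextual information as a JSON object so the operations team can filter on it
winston.info('Transaction started', {
  userId: 'user-123',            // illustrative contextual fields
  operation: 'addNewProduct',
  transactionId: 'abc-987'
});

try {
  // ... business logic goes here ...
} catch (error) {
  // errors get their own level plus the same contextual fields
  winston.error('Transaction failed', { transactionId: 'abc-987', reason: error.message });
}
```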
|
||||
|
||||
[winston]: https://www.npmjs.com/package/winston
|
||||
@ -43,13 +43,13 @@ var options = {
|
||||
winston.query(options, function (err, results) {
|
||||
// execute callback with results
|
||||
});
|
||||
|
||||
```
|
||||
|
||||
### Blog Quote: "Logger Requirements"
|
||||
|
||||
From the blog Strong Loop
|
||||
|
||||
> Lets identify a few requirements (for a logger):
|
||||
1. Time stamp each log line. This one is pretty self explanatory – you should be able to tell when each log entry occured.
|
||||
> Lets identify a few requirements (for a logger):
|
||||
1. Timestamp each log line. This one is pretty self-explanatory – you should be able to tell when each log entry occurred.
|
||||
2. Logging format should be easily digestible by humans as well as machines.
|
||||
3. Allows for multiple configurable destination streams. For example, you might be writing trace logs to one file but when an error is encountered, write to the same file, then into error file and send an email at the same time…
|
||||
|
||||
@ -1,6 +1,5 @@
|
||||
# Use only the built-in Error object
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
The permissive nature of JS along with its variety of code-flow options (e.g. EventEmitter, Callbacks, Promises, etc) pushes to great variance in how developers raise errors – some use strings, others define their own custom types. Using the Node.js built-in Error object helps to keep uniformity within your code and with 3rd party libraries; it also preserves significant information like the stack trace. When raising the exception, it’s usually a good practice to fill it with additional contextual properties like the error name and the associated HTTP error code. To achieve this uniformity and these practices, consider extending the Error object with additional properties, see code example below
|
||||
@ -11,17 +10,17 @@ The permissive nature of JS along with its variety code-flow options (e.g. Event
|
||||
// throwing an Error from typical function, whether sync or async
|
||||
if(!productToAdd)
|
||||
throw new Error("How can I add new product when no value provided?");
|
||||
|
||||
|
||||
// 'throwing' an Error from EventEmitter
|
||||
const myEmitter = new MyEmitter();
|
||||
myEmitter.emit('error', new Error('whoops!'));
|
||||
|
||||
|
||||
// 'throwing' an Error from a Promise
|
||||
return new Promise(function (resolve, reject) {
|
||||
Return DAL.getProduct(productToAdd.id).then((existingProduct) =>{
|
||||
if(existingProduct != null)
|
||||
reject(new Error("Why fooling us and trying to add an existing product?"));
|
||||
})
|
||||
return DAL.getProduct(productToAdd.id).then((existingProduct) => {
|
||||
if(existingProduct != null)
|
||||
reject(new Error("Why fooling us and trying to add an existing product?"));
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
@ -31,7 +30,6 @@ return new Promise(function (resolve, reject) {
|
||||
// throwing a string lacks any stack trace information and other important data properties
|
||||
if(!productToAdd)
|
||||
throw ("How can I add new product when no value provided?");
|
||||
|
||||
```
|
||||
|
||||
### Code example – doing it even better
|
||||
@ -48,29 +46,33 @@ function appError(name, httpCode, description, isOperational) {
|
||||
appError.prototype.__proto__ = Error.prototype;
|
||||
|
||||
module.exports.appError = appError;
|
||||
|
||||
|
||||
// client throwing an exception
|
||||
if(user == null)
|
||||
throw new appError(commonErrors.resourceNotFound, commonHTTPErrors.notFound, "further explanation", true)
|
||||
```
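A possible alternative sketch of the same idea using an ES2015 class (available on Node 8); the error catalogues here are illustrative stand-ins for the `commonErrors`/`commonHTTPErrors` objects used above:

```javascript
// illustrative error catalogues – any naming scheme will do
const commonErrors = { resourceNotFound: 'ResourceNotFound' };
const commonHTTPErrors = { notFound: 404 };

// AppError centralizes the contextual properties discussed above
class AppError extends Error {
  constructor(name, httpCode, description, isOperational) {
    super(description);
    this.name = name;
    this.httpCode = httpCode;
    this.isOperational = isOperational;
    Error.captureStackTrace(this, this.constructor); // keep a clean stack trace
  }
}

// client throwing an exception
throw new AppError(commonErrors.resourceNotFound, commonHTTPErrors.notFound, 'further explanation', true);
```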
|
||||
|
||||
### Blog Quote: "I don’t see the value in having lots of different types"
|
||||
From the blog Ben Nadel, ranked 5 for the keywords “Node.js error object”
|
||||
|
||||
From the blog, Ben Nadel ranked 5 for the keywords “Node.js error object”
|
||||
|
||||
>…”Personally, I don’t see the value in having lots of different types of error objects – JavaScript, as a language, doesn’t seem to cater to Constructor-based error-catching. As such, differentiating on an object property seems far easier than differentiating on a Constructor type…
|
||||
|
||||
### Blog Quote: "A string is not an error"
|
||||
From the blog devthought.com, ranked 6 for the keywords “Node.js error object”
|
||||
|
||||
> …passing a string instead of an error results in reduced interoperability between modules. It breaks contracts with APIs that might be performing instanceof Error checks, or that want to know more about the error. Error objects, as we’ll see, have very interesting properties in modern JavaScript engines besides holding the message passed to the constructor…
|
||||
|
||||
From the blog, devthought.com ranked 6 for the keywords “Node.js error object”
|
||||
|
||||
> …passing a string instead of an error results in reduced interoperability between modules. It breaks contracts with APIs that might be performing `instanceof` Error checks, or that want to know more about the error. Error objects, as we’ll see, have very interesting properties in modern JavaScript engines besides holding the message passed to the constructor…
|
||||
Blog Quote: “All JavaScript and System errors raised by Node.js inherit from Error”
|
||||
|
||||
### Blog Quote: "Inheriting from Error doesn’t add too much value"
|
||||
From the blog machadogj
|
||||
|
||||
> …One problem that I have with the Error class is that is not so simple to extend. Of course you can inherit the class and create your own Error classes like HttpError, DbError, etc. However that takes time, and doesn’t add too much value unless you are doing something with types. Sometimes, you just want to add a message, and keep the inner error, and sometimes you might want to extend the error with parameters, and such…
|
||||
|
||||
### Blog Quote: "All JavaScript and System errors raised by Node.js inherit from Error"
|
||||
From the blog machadogj
|
||||
|
||||
> …One problem that I have with the Error class is that is not so simple to extend. Of course, you can inherit the class and create your own Error classes like HttpError, DbError, etc. However, that takes time and doesn’t add too much value unless you are doing something with types. Sometimes, you just want to add a message and keep the inner error, and sometimes you might want to extend the error with parameters, and such…
|
||||
|
||||
### Blog Quote: "All JavaScript and System errors raised by Node.js inherit from Error"
|
||||
|
||||
From Node.js official documentation
|
||||
|
||||
> …All JavaScript and System errors raised by Node.js inherit from, or are instances of, the standard JavaScript Error class and are guaranteed to provide at least the properties available on that class. A generic JavaScript Error object that does not denote any specific circumstance of why the error occurred. Error objects capture a “stack trace” detailing the point in the code at which the Error was instantiated, and may provide a text description of the error.All errors generated by Node.js, including all System and JavaScript errors, will either be instances of, or inherit from, the Error class…
|
||||
|
||||
> …All JavaScript and System errors raised by Node.js inherit from, or are instances of, the standard JavaScript Error class and are guaranteed to provide at least the properties available on that class. A generic JavaScript Error object that does not denote any specific circumstance of why the error occurred. Error objects capture a “stack trace” detailing the point in the code at which the Error was instantiated, and may provide a text description of the error.All errors generated by Node.js, including all System and JavaScript errors, will either be instances of or inherit from, the Error class…
|
||||
|
||||
@ -1,19 +1,20 @@
|
||||
# Use an LTS release of Node.js in production
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
Ensure you are using an LTS (Long Term Support) version of Node.js in production to receive critical bug fixes, security updates and performance improvements.
|
||||
|
||||
LTS versions of Node.js are supported for at least 18 months, and are indicated by even version numbers (e.g. 4, 6, 8). They're best for production since the LTS release line is focussed on stability and security, whereas the 'Current' release line has a shorter lifespan and more frequent updates to the code. Changes to LTS versions are limited to bug fixes for stability, security updates, possible npm updates, documentation updates and certain performance improvements that can be demonstrated to not break existing applications.
|
||||
LTS versions of Node.js are supported for at least 18 months and are indicated by even version numbers (e.g. 4, 6, 8). They're best for production since the LTS release line is focussed on stability and security, whereas the 'Current' release line has a shorter lifespan and more frequent updates to the code. Changes to LTS versions are limited to bug fixes for stability, security updates, possible npm updates, documentation updates and certain performance improvements that can be demonstrated to not break existing applications.
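As a small illustration of the even-numbers convention, a sketch that warns at startup when the process is not running on an even-numbered (LTS line) major version; the message and the decision to merely warn are assumptions for demonstration:

```javascript
// process.versions.node looks like '8.10.0'
const major = Number(process.versions.node.split('.')[0]);

if (major % 2 !== 0) {
  // odd major versions belong to the short-lived 'Current' release line
  console.warn(`Node ${process.versions.node} is not on an LTS release line – prefer an even-numbered version in production`);
}
```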
|
||||
|
||||
<br/><br/>
|
||||
|
||||
### Read on:
|
||||
### Read on
|
||||
|
||||
🔗 [Node.js release definitions](https://nodejs.org/en/about/releases/)
|
||||
|
||||
🔗 [Node.js release schedule](https://github.com/nodejs/Release)
|
||||
|
||||
🔗 [Essential Steps: Long Term Support for Node.js by Rod Vagg](https://medium.com/@nodesource/essential-steps-long-term-support-for-node-js-8ecf7514dbd)
|
||||
> ...the schedule of incremental releases within each of these will be driven by the the availability of bug fixes, security fixes and other small but important changes. The focus will be on stability, but stability also includes minimizing the number of known bugs and staying on top of security concerns as they arise.
|
||||
|
||||
> ...the schedule of incremental releases within each of these will be driven by the availability of bug fixes, security fixes, and other small but important changes. The focus will be on stability, but stability also includes minimizing the number of known bugs and staying on top of security concerns as they arise.
|
||||
|
||||
<br/><br/>
|
||||
|
||||
@ -2,21 +2,19 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
APM (application performance monitoring) refers to a familiy of products that aims to monitor application performance from end to end, also from the customer perspective. While traditional monitoring solutions focuses on Exceptions and standalone technical metrics (e.g. error tracking, slow server endpoints, etc), in real world our app might create disappointed users without any code exceptions, for example if some middleware service performed real slow. APM products measure the user experience from end to end, for example, given a system that encompass frontend UI and multiple distributed services – some APM products can tell how fast a transaction that spans multiple tiers last. It can tell whether the user experience is solid and point to the problem. This attractive offering comes with a relatively high price tag hence it’s recommended for large-scale and complex products that require to go beyond straightforwd monitoring.
|
||||
APM (application performance monitoring) refers to a family of products that aims to monitor application performance from end to end, also from the customer perspective. While traditional monitoring solutions focus on Exceptions and standalone technical metrics (e.g. error tracking, slow server endpoints, etc), in the real world our app might create disappointed users without any code exceptions, for example, if some middleware service performs really slowly. APM products measure the user experience from end to end, for example, given a system that encompasses frontend UI and multiple distributed services – some APM products can tell how long a transaction that spans multiple tiers lasts. It can tell whether the user experience is solid and point to the problem. This attractive offering comes with a relatively high price tag hence it’s recommended for large-scale and complex products that require going beyond straightforward monitoring.
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### APM example – a commercial product that visualize cross-service app performance
|
||||
### APM example – a commercial product that visualizes cross-service app performance
|
||||
|
||||

|
||||
|
||||
<br/><br/>
|
||||
|
||||
### APM example – a commercial product that emphasize the user experience score
|
||||
### APM example – a commercial product that emphasizes the user experience score
|
||||
|
||||

|
||||
|
||||
|
||||
@ -2,19 +2,17 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
A typical log is a warehouse of entries from all components and requests. Upon detection of some suspicious line or error it becomes hairy to match other lines that belong to the same specific flow (e.g. the user “John” tried to buy something). This becomes even more critical and challenging in a microservice environment when a request/transaction might span across multiple computers. Address this by assigning a unique transaction identifier value to all the entries from the same request so when detecting one line one can copy the id and search for every line that has similar transaction Id. However, achieving this In Node is not straightforward as a single thread is used to serve all requests –consider using a library that that can group data on the request level – see code example on the next slide. When calling other microservice, pass the transaction Id using an HTTP header like “x-transaction-id” to keep the same context.
|
||||
A typical log is a warehouse of entries from all components and requests. Upon detection of some suspicious line or error, it becomes hairy to match other lines that belong to the same specific flow (e.g. the user “John” tried to buy something). This becomes even more critical and challenging in a microservice environment when a request/transaction might span across multiple computers. Address this by assigning a unique transaction identifier value to all the entries from the same request, so that when detecting one line one can copy the id and search for every line that has the same transaction Id. However, achieving this in Node is not straightforward as a single thread is used to serve all requests – consider using a library that can group data on the request level – see code example below. When calling other microservices, pass the transaction Id using an HTTP header like “x-transaction-id” to keep the same context.
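As a complement to the Express configuration example below, a minimal sketch of forwarding the transaction id to another microservice over an HTTP header; the hostname is illustrative and Node’s built-in http module is used to keep the sketch dependency-free:

```javascript
const http = require('http');

// 'session' is the continuation-local-storage namespace created in the example below
function callOtherService(session, path, callback) {
  const options = {
    hostname: 'inventory-service', // illustrative downstream service
    path,
    headers: {
      // propagate the same transaction id so both services log the same value
      'x-transaction-id': session.get('transactionId')
    }
  };
  http.get(options, (res) => callback(null, res)).on('error', callback);
}
```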
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Code example: typical Express configuration
|
||||
|
||||
```javascript
|
||||
// when receiving a new request, start a new isolated context and set a transaction Id. The following example is using the NPM library continuation-local-storage to isolate requests
|
||||
|
||||
|
||||
const { createNamespace } = require('continuation-local-storage');
|
||||
var session = createNamespace('my session');
|
||||
|
||||
@ -32,7 +30,7 @@ class someService {
|
||||
}
|
||||
}
|
||||
|
||||
// The logger can now append the transaction-id to each entry, so that entries from the same request will have the same value
|
||||
// The logger can now append the transaction-id to each entry so that entries from the same request will have the same value
|
||||
class logger {
|
||||
info (message)
|
||||
{console.log(`${message} ${session.get('transactionId')}`);}
|
||||
@ -42,5 +40,6 @@ class logger {
|
||||
<br/><br/>
|
||||
|
||||
### What Other Bloggers Say
|
||||
|
||||
From the blog [ARG! TEAM](http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load):
|
||||
> ...Although express.js has built in static file handling through some connect middleware, you should never use it. *Nginx can do a much better job of handling static files and can prevent requests for non-dynamic content from clogging our node processes*...
|
||||
> ...Although express.js has built-in static file handling through some connect middleware, you should never use it. *Nginx can do a much better job of handling static files and can prevent requests for non-dynamic content from clogging our node processes*...
|
||||
|
||||
@ -2,18 +2,16 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
Have you ever encountered a severe production issue where one server was missing some piece of configuration or data? That is probably due to some unnecessary dependency on some local asset that is not part of the deployment. Many successful products treat servers like a phoenix bird – it dies and is reborn periodically without any damage. In other words, a server is just a piece of hardware that executes your code for some time and is replaced after that.
|
||||
This approach
|
||||
|
||||
- allows to scale by adding and removing servers dynamically without any side-affect
|
||||
- allows scaling by adding and removing servers dynamically without any side-effects.
|
||||
- simplifies the maintenance as it frees our mind from evaluating each server state.
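A brief sketch of the preferred direction, keeping state in an external data store rather than in process memory; Redis is used here only as an illustrative choice of store:

```javascript
const redis = require('redis');
const client = redis.createClient(); // connection details come from configuration, not shown here

// instead of Global.someCacheLike.result = { somedata } (see the anti-pattern below),
// persist the value in an external store that survives a process restart or replacement
function cacheResult(key, value, callback) {
  client.set(key, JSON.stringify(value), callback);
}

function readCachedResult(key, callback) {
  client.get(key, (err, raw) => callback(err, raw ? JSON.parse(raw) : null));
}
```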
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Code example: anti-patterns
|
||||
|
||||
```javascript
|
||||
@ -37,7 +35,8 @@ Global.someCacheLike.result = { somedata };
|
||||
<br/><br/>
|
||||
|
||||
### What Other Bloggers Say
|
||||
|
||||
From the blog [Martin Fowler](https://martinfowler.com/bliki/PhoenixServer.html):
|
||||
> ...One day I had this fantasy of starting a certification service for operations. The certification assessment would consist of a colleague and I turning up at the corporate data center and setting about critical production servers with a baseball bat, a chainsaw, and a water pistol. The assessment would be based on how long it would take for the operations team to get all the applications up and running again. This may be a daft fantasy, but there’s a nugget of wisdom here. While you should forego the baseball bats, it is a good idea to virtually burn down your servers at regular intervals. A server should be like a phoenix, regularly rising from the ashes...
|
||||
|
||||
|
||||
<br/><br/>
|
||||
|
||||
@ -2,19 +2,17 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
A maintenance endpoint is a plain secured HTTP API that is part of the app code and its purpose is to be used by the ops/production team to monitor and expose maintenance functionality. For example, it can return a head dump (memory snapshot) of the process, report whether there are some memory leaks and even allow to execute REPL commands directly. This endpoint is needed where the conventional devops tools (monitoring products, logs, etc) fails to gather some specific type of information or you choose not to buy/install such tools. The golden rule is using professional and external tools for monitoring and maintaining the production, these are usually more robust and accurate. That said, there are likely to be cases where the generic tools will fail to extract information that is specific to Node or to your app – for example, should you wish to generate a memory snapshot at the moment GC completed a cycle – few NPM libraries will be glad to perform this for you but popular monitoring tools will be likely to miss this functionality
|
||||
A maintenance endpoint is a plain secured HTTP API that is part of the app code and whose purpose is to be used by the ops/production team to monitor and expose maintenance functionality. For example, it can return a heap dump (memory snapshot) of the process, report whether there are some memory leaks and even allow executing REPL commands directly. This endpoint is needed where the conventional DevOps tools (monitoring products, logs, etc) fail to gather some specific type of information or you choose not to buy/install such tools. The golden rule is to use professional and external tools for monitoring and maintaining production, as these are usually more robust and accurate. That said, there are likely to be cases where the generic tools will fail to extract information that is specific to Node or to your app – for example, should you wish to generate a memory snapshot at the moment the GC completed a cycle – a few NPM libraries will be glad to perform this for you, but popular monitoring tools are likely to miss this functionality
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Code example: generating a heap dump via code
|
||||
|
||||
```javascript
|
||||
var heapdump = require('heapdump');
|
||||
|
||||
|
||||
router.get('/ops/headump', (req, res, next) => {
|
||||
logger.info('About to generate headump');
|
||||
heapdump.writeSnapshot((err, filename) => {
|
||||
|
||||
@ -2,17 +2,15 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
It’s very tempting to cargo-cult Express and use its rich middleware offering for networking-related tasks like serving static files, gzip encoding, throttling requests, SSL termination, etc. This is a performance kill due to its single-threaded model which will keep the CPU busy for long periods (remember, Node’s execution model is optimized for short tasks or async IO-related tasks). A better approach is to use a tool that specializes in networking tasks – the most popular are nginx and HAProxy, which are also used by the biggest cloud vendors to lighten the incoming load on Node.js processes.
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Nginx Config Example – Using nginx to compress server responses
|
||||
|
||||
```
|
||||
```nginx
|
||||
# configure gzip compression
|
||||
gzip on;
|
||||
gzip_comp_level 6;
|
||||
@ -46,9 +44,8 @@ server {
|
||||
### What Other Bloggers Say
|
||||
|
||||
* From the blog [Mubaloo](http://mubaloo.com/best-practices-deploying-node-js-applications):
|
||||
> …It’s very easy to fall into this trap – You see a package like Express and think “Awesome! Let’s get started” – you code away and you’ve got an application that does what you want. This is excellent and, to be honest, you’ve won a lot of the battle. However, you will lose the war if you upload your app to a server and have it listen on your HTTP port, because you’ve forgotten a very crucial thing: Node is not a web server. **As soon as any volume of traffic starts to hit your application, you’ll notice that things start to go wrong: connections are dropped, assets stop being served or, at the very worst, your server crashes. What you’re doing is attempting to have Node deal with all of the complicated things that a proven web server does really well. Why reinvent the wheel?**
|
||||
> **This is just for one request, for one image and bearing in mind this is memory that your application could be using for important stuff like reading a database or handling complicated logic; why would you cripple your application for the sake of convenience?**
|
||||
|
||||
> …It’s very easy to fall into this trap – You see a package like Express and think “Awesome! Let’s get started” – you code away and you’ve got an application that does what you want. This is excellent and, to be honest, you’ve won a lot of the battle. However, you will lose the war if you upload your app to a server and have it listen on your HTTP port because you’ve forgotten a very crucial thing: Node is not a web server. **As soon as any volume of traffic starts to hit your application, you’ll notice that things start to go wrong: connections are dropped, assets stop being served or, at the very worst, your server crashes. What you’re doing is attempting to have Node deal with all of the complicated things that a proven web server does really well. Why reinvent the wheel?**
|
||||
> **This is just for one request, for one image and bearing in mind this is the memory that your application could be used for important stuff like reading a database or handling complicated logic; why would you cripple your application for the sake of convenience?**
|
||||
|
||||
* From the blog [Argteam](http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load):
|
||||
> Although express.js has built in static file handling through some connect middleware, you should never use it. **Nginx can do a much better job of handling static files and can prevent requests for non-dynamic content from clogging our node processes**…
|
||||
> Although express.js has built-in static file handling through some connect middleware, you should never use it. **Nginx can do a much better job of handling static files and can prevent requests for non-dynamic content from clogging our node processes**…
|
||||
|
||||
@ -14,6 +14,7 @@ The following tools automatically check for known security vulnerabilities in yo
|
||||
<br/><br/>
|
||||
|
||||
### What Other Bloggers Say
|
||||
|
||||
From the [StrongLoop](https://strongloop.com/strongblog/best-practices-for-express-in-production-part-one-security/) blog:
|
||||
|
||||
> ...Using to manage your application’s dependencies is powerful and convenient. But the packages that you use may contain critical security vulnerabilities that could also affect your application. The security of your app is only as strong as the “weakest link” in your dependencies. Fortunately, there are two helpful tools you can use to ensure of the third-party packages you use: and requireSafe. These two tools do largely the same thing, so using both might be overkill, but “better safe than sorry” are words to live by when it comes to security...
|
||||
> ...Using to manage your application’s dependencies is powerful and convenient. But the packages that you use may contain critical security vulnerabilities that could also affect your application. The security of your app is only as strong as the “weakest link” in your dependencies. Fortunately, there are two helpful tools you can use to ensure the third-party packages you use: and requireSafe. These two tools do largely the same thing, so using both might be overkill, but “better safe than sorry” are words to live by when it comes to security...
|
||||
|
||||
@ -2,7 +2,6 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
In a classic web app the backend serves the frontend/graphics to the browser, a very common approach in the Node’s world is to use Express static middleware for streamlining static files to the client. BUT – Node is not a typical webapp as it utilizes a single thread that is not optimized to serve many files at once. Instead, consider using a reverse proxy (e.g. nginx, HAProxy), cloud storage or CDN (e.g. AWS S3, Azure Blob Storage, etc) that utilizes many optimizations for this task and gain much better throughput. For example, specialized middleware like nginx embodies direct hooks between the file system and the network card and uses a multi-threaded approach to minimize intervention among multiple requests.
|
||||
@ -15,10 +14,9 @@ Your optimal solution might wear one of the following forms:
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Configuration example: typical nginx configuration for serving static files
|
||||
|
||||
```
|
||||
```nginx
|
||||
# configure gzip compression
|
||||
gzip on;
|
||||
keepalive 64;
|
||||
@ -39,6 +37,7 @@ expires max;
|
||||
<br/><br/>
|
||||
|
||||
### What Other Bloggers Say
|
||||
|
||||
From the blog [StrongLoop](https://strongloop.com/strongblog/best-practices-for-express-in-production-part-two-performance-and-reliability/):
|
||||
|
||||
>…In development, you can use [res.sendFile()](http://expressjs.com/4x/api.html#res.sendFile) to serve static files. But don’t do this in production, because this function has to read from the file system for every file request, so it will encounter significant latency and affect the overall performance of the app. Note that res.sendFile() is not implemented with the sendfile system call, which would make it far more efficient. Instead, use serve-static middleware (or something equivalent), that is optimized for serving files for Express apps. An even better option is to use a reverse proxy to serve static files; see Use a reverse proxy for more information…
|
||||
|
||||
@ -2,18 +2,16 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
At the base level, Node processes must be guarded and restarted upon failures. Simply put, for small apps and those who don’t use containers – tools like [PM2](https://www.npmjs.com/package/pm2-docker) are perfect as they bring simplicity, restarting capabilities and also rich integration with Node. Others with strong Linux skills might use systemd and run Node as a service. Things get more interesting for apps that use Docker or any container technology since those are usually accompanied by cluster management and orchestration tools (e.g. [AWS ECS](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html), [Kubernetes](https://kubernetes.io/), etc) that deploy, monitor and heal containers. Having all those rich cluster management features including container restart, why mess up with other tools like PM2? There’s no bullet proof answer. There are good reasons to keep PM2 within containers (mostly its containers specific version [pm2-docker](https://www.npmjs.com/package/pm2-docker)) as the first guarding tier – it’s much faster to restart a process and provide Node-specific features like flagging to the code when the hosting container asks to gracefully restart. Other might choose to avoid unnecessary layers. To conclude this write-up, no solution suits them all and getting to know the options is the important thing
|
||||
At the base level, Node processes must be guarded and restarted upon failures. Simply put, for small apps and those who don’t use containers – tools like [PM2](https://www.npmjs.com/package/pm2-docker) are perfect as they bring simplicity, restarting capabilities and also rich integration with Node. Others with strong Linux skills might use systemd and run Node as a service. Things get more interesting for apps that use Docker or any container technology since those are usually accompanied by cluster management and orchestration tools (e.g. [AWS ECS](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html), [Kubernetes](https://kubernetes.io/), etc) that deploy, monitor and heal containers. Having all those rich cluster management features including container restart, why mess with other tools like PM2? There’s no bulletproof answer. There are good reasons to keep PM2 within containers (mostly its container-specific version [pm2-docker](https://www.npmjs.com/package/pm2-docker)) as the first guarding tier – it’s much faster to restart a process and it provides Node-specific features like flagging to the code when the hosting container asks to gracefully restart. Others might choose to avoid unnecessary layers. To conclude this write-up, no single solution suits them all and getting to know the options is the important thing
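For the simple, non-container case, a minimal sketch of a PM2 process file; the app name, script path and instance count are illustrative:

```javascript
// ecosystem.config.js – started with `pm2 start ecosystem.config.js`
module.exports = {
  apps: [{
    name: 'api',
    script: './server.js',
    instances: 'max',   // cluster mode: one process per available CPU
    autorestart: true,  // restart the process automatically if it crashes
    env: {
      NODE_ENV: 'production'
    }
  }]
};
```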
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### What Other Bloggers Say
|
||||
|
||||
* From the [Express Production Best Practices](https://expressjs.com/en/advanced/best-practice-performance.html):
|
||||
> ... In development, you started your app simply from the command line with node server.js or something similar. **But doing this in production is a recipe for disaster. If the app crashes, it will be offline** until you restart it. To ensure your app restarts if it crashes, use a process manager. A process manager is a “container” for applications that facilitates deployment, provides high availability, and enables you to manage the application at runtime.
|
||||
> ... In development, you started your app simply from the command line with node server.js or something similar. **But doing this in production is a recipe for disaster. If the app crashes, it will be offline** until you restart it. To ensure your app restarts if it crashes, use a process manager. A process manager is a “container” for applications that facilitate deployment, provides high availability, and enables you to manage the application at runtime.
|
||||
|
||||
* From the Medium blog post [Understanding Node Clustering](https://medium.com/@CodeAndBiscuits/understanding-nodejs-clustering-in-docker-land-64ce2306afef#.cssigr5z3):
|
||||
> ... Understanding NodeJS Clustering in Docker-Land “Docker containers are streamlined, lightweight virtual environments, designed to simplify processes to their bare minimum. Processes that manage and coordinate their own resources are no longer as valuable. **Instead, management stacks like Kubernetes, Mesos, and Cattle have popularized the concept that these resources should be managed infrastructure-wide**. CPU and memory resources are allocated by “schedulers”, and network resources are managed by stack-provided load balancers.
|
||||
|
||||
@ -2,38 +2,33 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
Your code depends on many external packages, let’s say it ‘requires’ and uses momentjs-2.1.4, then by default when you deploy to production NPM might fetch momentjs 2.1.5 which unfortunately brings some new bugs to the table. Using NPM config files and the argument ```--save-exact=true``` instructs NPM to refer to the *exact* same version that was installed so the next time you run ```npm install``` (in production or within a Docker container you plan to ship forward for testing) the same dependent version will be fetched. An alternative and popular approach is using a `.shrinkwrap` file (easily generated using NPM) that states exactly which packages and versions should be installed so no environment can get tempted to fetch newer versions than expected.
|
||||
|
||||
|
||||
Your code depends on many external packages, let’s say it ‘requires’ and use momentjs-2.1.4, then by default when you deploy to production NPM might fetch momentjs 2.1.5 which unfortunately brings some new bugs to the table. Using NPM config files and the argument ```–save-exact=true``` instructs NPM to refer to the *exact* same version that was installed so the next time you run ```npm install``` (in production or within a Docker container you plan to ship forward for testing) the same dependent version will be fetched. An alternative and popular approach is using a .shrinkwrap file (easily generated using NPM) that states exactly which packages and versions should be installed so no environement can get tempted to fetch newer versions than expected.
|
||||
|
||||
* **Update:** as of NPM 5, dependencies are locked automatically using .shrinkwrap. Yarn, an emerging package manager, also locks down dependencies by default
|
||||
|
||||
* **Update:** as of NPM 5, dependencies are locked automatically using .shrinkwrap. Yarn, an emerging package manager, also locks down dependencies by default.
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Code example: .npmrc file that instructs NPM to use exact versions
|
||||
|
||||
```
|
||||
```npmrc
|
||||
# save this as an .npmrc file in the project directory
|
||||
save-exact=true
|
||||
```
|
||||
|
||||
<br/><br/>
|
||||
|
||||
### Code example: shirnkwrap.json file that distill the exact depedency tree
|
||||
### Code example: shrinkwrap.json file that distills the exact dependency tree
|
||||
|
||||
```javascript
|
||||
```json
|
||||
{
|
||||
"name": "A",
|
||||
"dependencies": {
|
||||
"B": {
|
||||
"version": "0.0.1",
|
||||
"dependencies": {
|
||||
"C": {
|
||||
"C": {
|
||||
"version": "0.1.0"
|
||||
}
|
||||
}
|
||||
@ -46,7 +41,7 @@ save-exact:true
|
||||
|
||||
### Code example: NPM 5 dependencies lock file – package.json
|
||||
|
||||
```javascript
|
||||
```json
|
||||
{
|
||||
"name": "package-name",
|
||||
"version": "1.0.0",
|
||||
|
||||
@ -2,10 +2,9 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
In a perfect world, a web developer shouldn’t deal with memory leaks. In reality, memory issues are a known Node’s gotcha one must be aware of. Above all, memory usage must be monitored constantly. In development and small production sites you may gauge manually using Linux commands or NPM tools and libraries like node-inspector and memwatch. The main drawback of this manual activities is that they require a human being actively monitoring – for serious production sites it’s absolutely vital to use robust monitoring tools e.g. (AWS CloudWatch, DataDog or any similar proactive system) that alerts when a leak happens. There are also few development guidelines to prevent leaks: avoid storing data on the global level, use streams for data with dynamic size, limit variables scope using let and const.
|
||||
In a perfect world, a web developer shouldn’t deal with memory leaks. In reality, memory issues are a known Node gotcha one must be aware of. Above all, memory usage must be monitored constantly. In development and on small production sites, you may gauge manually using Linux commands or NPM tools and libraries like node-inspector and memwatch. The main drawback of these manual activities is that they require a human being actively monitoring – for serious production sites, it’s absolutely vital to use robust monitoring tools (e.g. AWS CloudWatch, DataDog or any similar proactive system) that alert when a leak happens. There are also a few development guidelines to prevent leaks: avoid storing data on the global level, use streams for data with dynamic size, and limit variable scope using let and const.
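For the manual gauging mentioned above, a tiny sketch that samples the process memory periodically and warns above a threshold; the interval and threshold are arbitrary values for illustration:

```javascript
const MB = 1024 * 1024;
const MEMORY_THRESHOLD_MB = 300; // arbitrary alert threshold for this example

setInterval(() => {
  const heapUsedMb = process.memoryUsage().heapUsed / MB;
  if (heapUsedMb > MEMORY_THRESHOLD_MB) {
    // in a real system this would go to the logger / monitoring tool, not the console
    console.warn(`Heap usage is high: ${heapUsedMb.toFixed(0)}MB`);
  }
}, 30 * 1000);
```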
|
||||
|
||||
<br/><br/>
|
||||
|
||||
@ -20,7 +19,7 @@ Create heap dumps with some time and a fair amount of memory allocation in betwe
|
||||
Compare a few dumps to find out what’s growing”
|
||||
|
||||
* From the blog [Dynatrace](http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load):
|
||||
> ... “fault, Node.js will try to use about 1.5GBs of memory, which has to be capped when running on systems with less memory. This is the expected behaviour as garbage collection is a very costly operation.
|
||||
> ... “fault, Node.js will try to use about 1.5GBs of memory, which has to be capped when running on systems with less memory. This is the expected behavior as garbage collection is a very costly operation.
|
||||
The solution for it was adding an extra parameter to the Node.js process:
|
||||
node --max_old_space_size=400 server.js --production ”
|
||||
“Why is garbage collection expensive? The V8 JavaScript engine employs a stop-the-world garbage collector mechanism. In practice, it means that the program stops execution while garbage collection is in progress.”
|
||||
|
||||
|
||||
@ -4,14 +4,12 @@
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
At the very basic level, monitoring means you can *easily* identify when bad things happen at production. For example, by getting notified by email or Slack. The challenge is to choose the right set of tools that will satisfy your requirements without breaking your bank. May I suggest, start with defining the core set of metrics that must be watched to ensure a healthy state – CPU, server RAM, Node process RAM (less than 1.4GB), the amount of errors in the last minute, number of process restarts, average response time. Then go over some advanced features you might fancy and add to your wish list. Some examples of luxury monitoring feature: DB profiling, cross-service measuring (i.e. measure business transaction), frontend integration, expose raw data to custom BI clients, Slack notifications and many others.
|
||||
At the very basic level, monitoring means you can *easily* identify when bad things happen in production. For example, by getting notified by email or Slack. The challenge is to choose the right set of tools that will satisfy your requirements without breaking the bank. May I suggest starting with defining the core set of metrics that must be watched to ensure a healthy state – CPU, server RAM, Node process RAM (less than 1.4GB), the number of errors in the last minute, number of process restarts, average response time. Then go over some advanced features you might fancy and add to your wish list. Some examples of luxury monitoring features: DB profiling, cross-service measuring (i.e. measure business transaction), front-end integration, expose raw data to custom BI clients, Slack notifications and many others.
|
||||
|
||||
Achieving the advanced features demands lengthy setup or buying a commercial product such as Datadog, NewRelic and alike. Unfortunately, achieving even the basics is not a walk in the park as some metrics are hardware-related (CPU) and others live within the node process (internal errors) thus all the straightforward tools require some additional setup. For example, cloud vendor monitoring solutions (e.g. [AWS CloudWatch](https://aws.amazon.com/cloudwatch/), [Google StackDriver](https://cloud.google.com/stackdriver/)) will tell you immediately about the hardware metrics but not about the internal app behavior. On the other end, Log-based solutions such as ElasticSearch lack the hardware view by default. The solution is to augment your choice with missing metrics, for example, a popular choice is sending application logs to [Elastic stack](https://www.elastic.co/products) and configure some additional agent (e.g. [Beat](https://www.elastic.co/products)) to share hardware-related information to get the full picture.
|
||||
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Monitoring example: AWS cloudwatch default dashboard. Hard to extract in-app metrics
|
||||
|
||||

|
||||
@ -29,7 +27,9 @@ Achieving the advanced features demands lengthy setup or buying a commercial pro
|
||||

|
||||
|
||||
<br/><br/>
|
||||
|
||||
### What Other Bloggers Say
|
||||
|
||||
From the blog [Rising Stack](http://mubaloo.com/best-practices-deploying-node-js-applications/):
|
||||
|
||||
> …We recommend you to watch these signals for all of your services:
|
||||
|
||||
@ -2,7 +2,6 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
Following is a list of development tips that greatly affect the production maintenance and stability:
|
||||
|
||||
@ -2,21 +2,19 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
Process environment variables are a set of key-value pairs made available to any running program, usually for configuration purposes. Though any variables can be used, Node encourages the convention of using a variable called NODE_ENV to flag whether we’re in production right now. This determination allows components to provide better diagnostics during development, for example by disabling caching or emitting verbose log statements. Any modern deployment tool – Chef, Puppet, CloudFormation and others – supports setting environment variables during deployment
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Code example: Setting and reading the NODE_ENV environment variable
|
||||
|
||||
```javascript
|
||||
// Setting environment variables in bash before starting the node process
|
||||
$ export NODE_ENV=development
|
||||
$ node
|
||||
|
||||
|
||||
// Reading the environment variable using code
|
||||
if (process.env.NODE_ENV === 'production')
|
||||
useCaching = true;
|
||||
@ -24,13 +22,11 @@ if (process.env.NODE_ENV === “production”)
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### What Other Bloggers Say
|
||||
From the blog [dynatrace](https://www.dynatrace.com/blog/the-drastic-effects-of-omitting-node_env-in-your-express-js-applications/):
|
||||
> ...In Node.js there is a convention to use a variable called NODE_ENV to set the current mode. We see that it in fact reads NODE_ENV and defaults to ‘development’ if it isn’t set. We clearly see that by setting NODE_ENV to production the number of requests Node.js can handle jumps by around two-thirds while the CPU usage even drops slightly. *Let me emphasize this: Setting NODE_ENV to production makes your application 3 times faster!*
|
||||
|
||||
From the blog [dynatrace](https://www.dynatrace.com/blog/the-drastic-effects-of-omitting-node_env-in-your-express-js-applications/):
|
||||
> ...In Node.js there is a convention to use a variable called NODE_ENV to set the current mode. We see that it, in fact, reads NODE_ENV and defaults to ‘development’ if it isn’t set. We clearly see that by setting NODE_ENV to production the number of requests Node.js can handle jumps by around two-thirds while the CPU usage even drops slightly. *Let me emphasize this: Setting NODE_ENV to production makes your application 3 times faster!*
|
||||
|
||||

|
||||
|
||||
|
||||
<br/><br/>
|
||||
|
||||
@ -2,42 +2,39 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
Since you print out log statements anyway and you're obviously in need of some interface that wraps up production information where you can trace errors and core metrics (e.g. how many errors happen every hour and which is your slowest API endpoint), why not invest some moderate effort in a robust logging framework that will tick all the boxes? Achieving that requires a thoughtful decision on three steps:
|
||||
|
||||
**1. smart logging** – at the bare minimum you need to use a reputable logging library like [Winston](https://github.com/winstonjs/winston) or [Bunyan](https://github.com/trentm/node-bunyan) and write meaningful information at each transaction start and end. Consider also formatting log statements as JSON and providing all the contextual properties (e.g. user id, operation type, etc) so that the operations team can act on those fields. Also include a unique transaction ID at each log line; for more information refer to the bullet below “Write transaction-id to log”. One last point to consider is including an agent that logs system resources like memory and CPU, such as Elastic Beat.
|
||||
|
||||
**2. smart aggregation** – once you have comprehensive information within your servers file system, it’s time to periodically push these to a system that aggregates, facilities and visualizes this data. The Elastic stack, for example, is a popular and free choice that offers all the components to aggregate and visualize data. Many commercial products provide similar functionality only they greatly cut down the setup time and require no hosting.
|
||||
**2. smart aggregation** – once you have comprehensive information on your servers file system, it’s time to periodically push these to a system that aggregates, facilities and visualizes this data. The Elastic stack, for example, is a popular and free choice that offers all the components to aggregate and visualize data. Many commercial products provide similar functionality only they greatly cut down the setup time and require no hosting.
|
||||
|
||||
**3. smart visualization** – now the information is aggregated and searchable, one can be satisfied only with the power of easily searching the logs but this can go much further without coding or spending much effort. We can now show important operational metrics like error rate, average CPU throughout the day, how many new users opted-in in the last hour and any other metric that helps to govern and improve our app
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Visualization Example: Kibana (part of Elastic stack) facilitates advanced searching on log content
|
||||
### Visualization Example: Kibana (part of the Elastic stack) facilitates advanced searching on log content
|
||||
|
||||

|
||||
|
||||
<br/><br/>
|
||||
|
||||
### Visualization Example: Kibana (part of Elastic stack) visualizes data based on logs
|
||||
### Visualization Example: Kibana (part of the Elastic stack) visualizes data based on logs
|
||||
|
||||

|
||||
|
||||
<br/><br/>
|
||||
|
||||
### Blog Quote: Logger Requirements
|
||||
|
||||
From the blog [Strong Loop](https://strongloop.com/strongblog/compare-node-js-logging-winston-bunyan/):
|
||||
|
||||
> Lets identify a few requirements (for a logger):
|
||||
> 1. Time stamp each log line. This one is pretty self explanatory – you should be able to tell when each log entry occured.
|
||||
> 1. Timestamp each log line. This one is pretty self-explanatory – you should be able to tell when each log entry occurred.
|
||||
> 2. Logging format should be easily digestible by humans as well as machines.
|
||||
> 3. Allows for multiple configurable destination streams. For example, you might be writing trace logs to one file but when an error is encountered, write to the same file, then into error file and send an email at the same time…
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
|
||||
<br/><br/>
|
||||
|
||||
<br/><br/>
|
||||
|
||||
@ -2,14 +2,12 @@
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### One Paragraph Explainer
|
||||
|
||||
It might not come as a surprise that at its basic form, Node runs over a single thread=single process=single CPU. Paying for beefy hardware with 4 or 8 CPU and utilizing only one sounds crazy, right? The quickest solution which fits medium sized apps is using Node’s Cluster module which in 10 lines of code spawns a process for each logical core and route requests between the processes in a round-robin style. Even better, use PM2 which sugarcoats the clustering module with a simple interface and cool monitoring UI. While this solution works well for traditional applications, it might fall short for applications that require top-notch performance and robust devops flow. For those advanced use cases, consider replicating the NODE process using custom deployment script and balancing using a specialized tool such as nginx or use a container engine such as AWS ECS or Kubernetees that have advanced features for deployment and replication of processes.
|
||||
It might not come as a surprise that in its basic form, Node runs over a single thread=single process=single CPU. Paying for beefy hardware with 4 or 8 CPU and utilizing only one sounds crazy, right? The quickest solution which fits medium-sized apps is using Node’s Cluster module which in 10 lines of code spawns a process for each logical core and routes requests between the processes in a round-robin style. Even better, use PM2 which sugarcoats the clustering module with a simple interface and a cool monitoring UI. While this solution works well for traditional applications, it might fall short for applications that require top-notch performance and a robust DevOps flow. For those advanced use cases, consider replicating the Node process using a custom deployment script and balancing using a specialized tool such as nginx, or use a container engine such as AWS ECS or Kubernetes that has advanced features for deployment and replication of processes.
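A minimal sketch of the Cluster module approach described above (roughly the ‘10 lines’ mentioned); the port and response body are illustrative:

```javascript
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // spawn one worker per logical core; incoming connections are distributed among them
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} died, forking a replacement`);
    cluster.fork();
  });
} else {
  http.createServer((req, res) => res.end(`Handled by worker ${process.pid}`)).listen(3000);
}
```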
|
||||
|
||||
<br/><br/>
|
||||
|
||||
|
||||
### Comparison: Balancing using Node’s cluster vs nginx
|
||||
|
||||

|
||||
@ -17,11 +15,12 @@ It might not come as a surprise that at its basic form, Node runs over a single
|
||||
<br/><br/>
|
||||
|
||||
### What Other Bloggers Say
|
||||
|
||||
* From the [Node.js documentation](https://nodejs.org/api/cluster.html#cluster_how_it_works):
|
||||
> ... The second approach, Node clusters, should, in theory, give the best performance. In practice however, distribution tends to be very unbalanced due to operating system scheduler vagaries. Loads have been observed where over 70% of all connections ended up in just two processes, out of a total of eight ...
|
||||
> ... The second approach, Node clusters, should, in theory, give the best performance. In practice, however, distribution tends to be very unbalanced due to operating system scheduler vagaries. Loads have been observed where over 70% of all connections ended up in just two processes, out of a total of eight ...
* From the blog [StrongLoop](https://strongloop.com/strongblog/best-practices-for-express-in-production-part-two-performance-and-reliability/):
> ... Clustering is made possible with Node’s cluster module. This enables a master process to spawn worker processes and distribute incoming connections among the workers. However, rather than using this module directly, it’s far better to use one of the many tools out there that does it for you automatically; for example node-pm or cluster-service ...
> ... Clustering is made possible with Node’s cluster module. This enables a master process to spawn worker processes and distribute incoming connections among the workers. However, rather than using this module directly, it’s far better to use one of the many tools out there that do it for you automatically; for example node-pm or cluster-service ...
* From the Medium post [Node.js process load balance performance: comparing cluster module, iptables and Nginx](https://medium.com/@fermads/node-js-process-load-balancing-comparing-cluster-iptables-and-nginx-6746aaf38272)
> ... Node cluster is simple to implement and configure, things are kept inside Node’s realm without depending on other software. Just remember your master process will work almost as much as your worker processes and with a little less request rate then the other solutions ...
* From the Medium post [Node.js process load balance performance: comparing cluster module, iptables, and Nginx](https://medium.com/@fermads/node-js-process-load-balancing-comparing-cluster-iptables-and-nginx-6746aaf38272)
> ... Node cluster is simple to implement and configure, things are kept inside Node’s realm without depending on other software. Just remember your master process will work almost as much as your worker processes and with a little less request rate than the other solutions ...
@ -2,35 +2,36 @@
<br/><br/>
### One Paragraph Explainer
For medium sized apps and above, monoliths are really bad - having one big software with many dependencies is just hard to reason about and often leads to spaghetti code. Even smart architects — those who are skilled enough to tame the beast and 'modularize' it — spend great mental effort on design, and each change requires carefully evaluating the impact on other dependant objects. The ultimate solution is to develop small software: divide the whole stack into self-contained components that don't share files with others, each constitutes very few files (e.g. API, service, data access, test, etc.) so that it's very easy to reason about it. Some may call this 'microservices' architecture — it's important to understand that microservices is not a spec which you must follow, but rather a set of principles. You may adopt many principles into a full-blown microservices architecture or adopt only few. Both are good as long as you keep the software complexity low. The very least you should do is create basic borders between components, assign a folder in your project root for each business component and make it self contained - other components are allowed to consume its functionality only through its public interface or API. This is the foundation for keeping your components simple, avoid dependency hell and pave the way to full-blown microservices in the future once your app grows.

For medium sized apps and above, monoliths are really bad - having one big software with many dependencies is just hard to reason about and often leads to spaghetti code. Even smart architects — those who are skilled enough to tame the beast and 'modularize' it — spend great mental effort on design, and each change requires carefully evaluating the impact on other dependent objects. The ultimate solution is to develop small software: divide the whole stack into self-contained components that don't share files with others, each constitutes very few files (e.g. API, service, data access, test, etc.) so that it's very easy to reason about it. Some may call this 'microservices' architecture — it's important to understand that microservices are not a spec which you must follow, but rather a set of principles. You may adopt many principles into a full-blown microservices architecture or adopt only a few. Both are good as long as you keep the software complexity low. The very least you should do is create basic borders between components, assign a folder in your project root for each business component and make it self-contained - other components are allowed to consume its functionality only through its public interface or API. This is the foundation for keeping your components simple, avoiding dependency hell and paving the way to full-blown microservices in the future once your app grows.
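For illustration only (the folder and function names here are hypothetical, not from the guide), a component exposes one public entry point and consumers never reach into its internals:

```javascript
// orders/index.js – the only file other components are allowed to require
const orderService = require('./services/orderService');

module.exports = {
  addOrder: orderService.addOrder,
  getOrderById: orderService.getOrderById
};

// users/signupFlow.js – a consumer uses the public interface of the 'orders' component
const orders = require('../orders');
orders.addOrder({ userId: 1, productId: 42 });
```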
<br/><br/>
### Blog Quote: "Scaling requires scaling of the entire application"
From the blog MartinFowler.com
> Monolithic applications can be successful, but increasingly people are feeling frustrations with them - especially as more applications are being deployed to the cloud. Change cycles are tied together - a change made to a small part of the application, requires the entire monolith to be rebuilt and deployed. Over time it's often hard to keep a good modular structure, making it harder to keep changes that ought to only affect one module within that module. Scaling requires scaling of the entire application rather than parts of it that require greater resource.
> Monolithic applications can be successful, but increasingly people are feeling frustrations with them - especially as more applications are being deployed to the cloud. Change cycles are tied together - a change made to a small part of the application requires the entire monolith to be rebuilt and deployed. Over time it's often hard to keep a good modular structure, making it harder to keep changes that ought to only affect one module within that module. Scaling requires scaling of the entire application rather than parts of it that require greater resource.
<br/><br/>
### Blog Quote: "So what does the architecture of your application scream?"
From the blog [uncle-bob](https://8thlight.com/blog/uncle-bob/2011/09/30/Screaming-Architecture.html)
> ...if you were looking at the architecture of a library, you’d likely see a grand entrance, an area for check-in-out clerks, reading areas, small conference rooms, and gallery after gallery capable of holding bookshelves for all the books in the library. That architecture would scream: Library.<br/>
So what does the architecture of your application scream? When you look at the top level directory structure, and the source files in the highest level package; do they scream: Health Care System, or Accounting System, or Inventory Management System? Or do they scream: Rails, or Spring/Hibernate, or ASP?.
<br/><br/>
### Good: Structure your solution by self-contained components

<br/><br/>
<br/><br/>
### Bad: Group your files by technical role

@ -1,27 +1,26 @@
# Use environment aware, secure and hirearchical config
# Use environment aware, secure and hierarchical config
<br/><br/>
### One Paragraph Explainer
When dealing with configuration data, many things can just annoy and slow down:

When dealing with configuration data, many things can just annoy and slow you down:
(1) setting all the keys using process environment variables becomes very tedious when in need to inject 100 keys (instead of just committing those in a config file), however when dealing with files only the devops admins can not alter the behaviour without changing the code. A reliable config solution must combine both configuration files + overrides from the process variables

1. setting all the keys using process environment variables becomes very tedious when you need to inject 100 keys (instead of just committing those in a config file), however, when dealing with files only, DevOps admins cannot alter the behavior without changing the code. A reliable config solution must combine both configuration files + overrides from the process variables

(2) when specifying all keys in a flat JSON, it becomes frustrating to find and modify entries when the list grows bigger. A hierarchical JSON file that is grouped into sections can overcome this issue + few config libraries allow to store the configuration in multiple files and take care to union all in runtime. See example below

2. when specifying all keys in a flat JSON, it becomes frustrating to find and modify entries when the list grows bigger. A hierarchical JSON file that is grouped into sections can overcome this issue; in addition, a few config libraries allow storing the configuration in multiple files and take care of merging them at runtime. See example below

(3) storing sensitive information like DB password is obviously not recommended but no quick and handy solution exists for this challenge. Some configuration libraries allow to encrypt files, others encrypt those entries during GIT commits or simply don't store real values for those entries and specify the actual value during deployment via environment variables.

3. storing sensitive information like a DB password is obviously not recommended, but no quick and handy solution exists for this challenge. Some configuration libraries allow encrypting files, others encrypt those entries during GIT commits or simply don't store real values for those entries and specify the actual value during deployment via environment variables.

(4) some advanced configuration scenarios demand to inject configuration values via command line (vargs) or sync configuration info via a centralized cache like Redis so multiple servers will use the same configuration data.

4. some advanced configuration scenarios demand injecting configuration values via the command line (vargs) or syncing configuration info via a centralized cache like Redis so multiple servers will use the same configuration data.

Some configuration libraries can provide most of these features for free; have a look at NPM libraries like [rc](https://www.npmjs.com/package/rc), [nconf](https://www.npmjs.com/package/nconf) and [config](https://www.npmjs.com/package/config) which tick many of these requirements.
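Complementing the hierarchical file example below, here is a rough sketch (not taken from the guide) of combining a committed config file with environment-variable and command-line overrides, using nconf – one of the libraries mentioned above:

```javascript
const nconf = require('nconf');

// Precedence: command-line arguments, then environment variables, then the config file
nconf.argv()
  .env({ separator: '__' })       // e.g. database__password overrides the file value
  .file({ file: 'config.json' }); // hierarchical defaults committed to the repository

// Hierarchical lookup, e.g. { "database": { "host": "localhost" } }
const dbHost = nconf.get('database:host');
console.log(`Connecting to database at ${dbHost}`);
```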
<br/><br/>
### Code Example – hirearchical config helps to find entries and maintain huge config files
### Code Example – hierarchical config helps to find entries and maintain huge config files
```javascript
```json
{
// Customer module configs
"Customer": {
@ -1,11 +1,13 @@
# Layer your app, keep Express within its boundaries
<br/><br/>
### Separate component code into layers: web, services and DAL
### Separate component code into layers: web, services, and DAL

<br/><br/>
<br/><br/>
### 1 min explainer: The downside of mixing layers

@ -2,7 +2,6 @@
<br/><br/>
### One Paragraph Explainer
The latest Express generator comes with a great practice that is worth keeping - the API declaration is separated from the network-related configuration (port, protocol, etc). This allows testing the API in-process, without performing network calls, with all the benefits that this brings to the table: fast test execution and getting coverage metrics for the code. It also allows deploying the same API under flexible and different network conditions. Bonus: better separation of concerns and cleaner code
@ -16,7 +15,6 @@ var app = express();
app.use(bodyParser.json());
app.use("/api/events", events.API);
app.use("/api/forms", forms);
```
<br/><br/>
@ -39,10 +37,8 @@ app.set('port', port);
*/
var server = http.createServer(app);
```
### Example: test your API in-process using supertest (popular testing package)
```javascript
@ -2,25 +2,26 @@
<br/><br/>
### One Paragraph Explainer
For medium sized apps and above, monoliths are really bad - a one big software with many dependencies is just hard to reason about and often leads to code spaghetti. Even those smart architects who are skilled to tame the beast and 'modularize' it - spend great mental effort on design and each change requires to carefully evaluate the impact on other dependant objects. The ultimate solution is to develop small software: divide the whole stack into self-contained components that don't share files with others, each constitute very few files (e.g. API, service, data access, test, etc) so that it's very easy to reason about it. Some may call this 'microservices' architecture - it's important to understand that microservices is not a spec which you must follow rather a set of principles. You may adopt many principles into a full-blown microservices architecture or adopt only few. Both are good as long as you keep the software complexity low. The very least you should do is create a basic borders between components, assign a folder in your project root for each business component and make it self contained - other components are allowed to consumeits functionality only through its public interface or API. This is the foundation for keeping your components simple, avoid dependencies hell and pave the way to full-blown microservices in the future once your app grows

For medium sized apps and above, monoliths are really bad - one big software with many dependencies is just hard to reason about and often leads to code spaghetti. Even those smart architects who are skilled enough to tame the beast and 'modularize' it - spend great mental effort on design, and each change requires carefully evaluating the impact on other dependent objects. The ultimate solution is to develop small software: divide the whole stack into self-contained components that don't share files with others, each constitutes very few files (e.g. API, service, data access, test, etc) so that it's very easy to reason about it. Some may call this 'microservices' architecture - it's important to understand that microservices are not a spec which you must follow, but rather a set of principles. You may adopt many principles into a full-blown microservices architecture or adopt only a few. Both are good as long as you keep the software complexity low. The very least you should do is create basic borders between components, assign a folder in your project root for each business component and make it self-contained - other components are allowed to consume its functionality only through its public interface or API. This is the foundation for keeping your components simple, avoiding dependency hell and paving the way to full-blown microservices in the future once your app grows
<br/><br/>
### Blog Quote: "Scaling requires scaling of the entire application"
From the blog MartinFowler.com
> Monolithic applications can be successful, but increasingly people are feeling frustrations with them - especially as more applications are being deployed to the cloud . Change cycles are tied together - a change made to a small part of the application, requires the entire monolith to be rebuilt and deployed. Over time it's often hard to keep a good modular structure, making it harder to keep changes that ought to only affect one module within that module. Scaling requires scaling of the entire application rather than parts of it that require greater resource.
> Monolithic applications can be successful, but increasingly people are feeling frustrations with them - especially as more applications are being deployed to the cloud. Change cycles are tied together - a change made to a small part of the application requires the entire monolith to be rebuilt and deployed. Over time it's often hard to keep a good modular structure, making it harder to keep changes that ought to only affect one module within that module. Scaling requires scaling of the entire application rather than parts of it that require greater resource.
<br/><br/>
### Good: Structure your solution by self-contained components
### Good: Structure your solution by self-contained components

<br/><br/>
<br/><br/>
### Bad: Group your files by technical role

@ -2,13 +2,12 @@
<br/><br/>
### One Paragraph Explainer
Once you start growing and have different components on different servers which consumes similar utilities, you should start managing the dependencies - how can you keep 1 copy of your utility code and let multiple consumer components use and deploy it? well, there is a tool for that, it's called NPM... Start by wrapping 3rd party utility packages with your own code to make it easily replaceable in the future and publish your own code as private NPM package. Now, all your code base can import that code and benefit free dependency management tool. It's possible to publish NPM packages for your own private use without sharing it publicly using [private modules](https://docs.npmjs.com/private-modules/intro), [private registry](https://npme.npmjs.com/docs/tutorials/npm-enterprise-with-nexus.html) or [local NPM packages](https://medium.com/@arnaudrinquin/build-modular-application-with-npm-local-modules-dfc5ff047bcc)

Once you start growing and have different components on different servers which consume similar utilities, you should start managing the dependencies - how can you keep 1 copy of your utility code and let multiple consumer components use and deploy it? Well, there is a tool for that, it's called NPM... Start by wrapping 3rd party utility packages with your own code to make them easily replaceable in the future, and publish your own code as a private NPM package. Now, all your code base can import that code and benefit from a free dependency management tool. It's possible to publish NPM packages for your own private use without sharing them publicly using [private modules](https://docs.npmjs.com/private-modules/intro), a [private registry](https://npme.npmjs.com/docs/tutorials/npm-enterprise-with-nexus.html) or [local NPM packages](https://medium.com/@arnaudrinquin/build-modular-application-with-npm-local-modules-dfc5ff047bcc)
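A hedged sketch of the wrapping idea (the package and module names are made up, and pino is just one possible underlying logger): consumers depend on a thin internal facade, so the 3rd party dependency can later be swapped without touching them.

```javascript
// @my-org/logger/index.js – a private NPM package wrapping a 3rd party logger
const pino = require('pino'); // implementation detail, replaceable later
const instance = pino();

module.exports = {
  info: (message, context) => instance.info(context || {}, message),
  error: (message, context) => instance.error(context || {}, message)
};

// in any consuming service:
// const logger = require('@my-org/logger');
// logger.info('Server started', { port: 3000 });
```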
<br/><br/>
### Sharing your own common utilities across environments and components
### Sharing your own common utilities across environments and components

@ -2,14 +2,12 @@
<br/><br/>
### One Paragraph Explainer
Text
<br/><br/>
### Code Example – explanation
```javascript
@ -27,14 +25,15 @@ code here
<br/><br/>
### Blog Quote: "Title"
From the blog pouchdb.com, ranked 11 for the keywords “Node Promises”
From the blog, pouchdb.com ranked 11 for the keywords “Node Promises”
> …text here
<br/><br/>
<br/><br/>
### Image title

<br/><br/>
@ -1,11 +1,9 @@
# Title here
### One Paragraph Explainer
Text
### Code Example – explanation
```javascript
@ -19,12 +17,11 @@ code here
```
### Blog Quote: "Title"
From the blog pouchdb.com, ranked 11 for the keywords “Node Promises”
From the blog, pouchdb.com ranked 11 for the keywords “Node Promises”
> …text here
### Image title
### Image title

@ -2,15 +2,14 @@
<br/><br/>
### One Paragraph Explainer
The CI world used to be the flexibility of [Jenkins](https://jenkins.io/) vs the simplicity of SaaS vendors. The game is now changing as SaaS providers like [CircleCI](https://circleci.com/) and [Travis](https://travis-ci.org/) offer robust solutions including Docker containers with miniumum setup time while Jenkins tries to compete on 'simplicity' segment as well. Though one can setup rich CI solution in the cloud, should it required to control the finest details Jenkins is still the platform of choice. The choice eventually boils down to which extent the CI process should be customized: free and setup free cloud vendors allow to run custom shell commands, custom docker images, adjust the workflow, run matrix builds and other rich features. However if controlling the infrastructure or programming the CI logic using a formal programming language like Java is desired - Jenkins might still be the choice. Otherwise, consider opting for the simple and setup free cloud option

The CI world used to be the flexibility of [Jenkins](https://jenkins.io/) vs the simplicity of SaaS vendors. The game is now changing as SaaS providers like [CircleCI](https://circleci.com/) and [Travis](https://travis-ci.org/) offer robust solutions including Docker containers with minimum setup time, while Jenkins tries to compete in the 'simplicity' segment as well. Though one can set up a rich CI solution in the cloud, Jenkins is still the platform of choice when the finest details must be controlled. The choice eventually boils down to which extent the CI process should be customized: free and setup-free cloud vendors allow running custom shell commands, custom docker images, adjusting the workflow, running matrix builds and other rich features. However, if controlling the infrastructure or programming the CI logic using a formal programming language like Java is desired - Jenkins might still be the choice. Otherwise, consider opting for the simple and setup-free cloud option
<br/><br/>
### Code Example – a typical cloud CI configuration. Single .yml file and that's it
### Code Example – a typical cloud CI configuration. Single .yml file and that's it
```yaml
version: 2
jobs:
@ -41,13 +40,12 @@ jobs:
```
### Circle CI - almost zero setup cloud CI
### Circle CI - almost zero setup cloud CI

### Jenkins - sophisiticated and robust CI
### Jenkins - sophisticated and robust CI

<br/><br/>