	Merge pull request #117 from BrunoScheufler/patch-4
Optimized syntax and fixed typos
		| @ -23,4 +23,4 @@ Source: [https://github.com/prettier/prettier-eslint/issues/101](https://github. | ||||
|  | ||||
| ### Integrating ESLint and Prettier | ||||
|  | ||||
| ESLint and Prettier overlaps in the code formatting feature but it can be easily solved by using other packages like [prettier-eslint](https://github.com/prettier/prettier-eslint), [eslint-plugin-prettier](https://github.com/prettier/eslint-plugin-prettier), and [eslint-config-prettier](https://github.com/prettier/eslint-config-prettier). For more information about their differences, you can view the link [here](https://stackoverflow.com/questions/44690308/whats-the-difference-between-prettier-eslint-eslint-plugin-prettier-and-eslint). | ||||
| ESLint and Prettier overlap in the code formatting feature but can be easily combined by using other packages like [prettier-eslint](https://github.com/prettier/prettier-eslint), [eslint-plugin-prettier](https://github.com/prettier/eslint-plugin-prettier), and [eslint-config-prettier](https://github.com/prettier/eslint-config-prettier). For more information about their differences, you can view the link [here](https://stackoverflow.com/questions/44690308/whats-the-difference-between-prettier-eslint-eslint-plugin-prettier-and-eslint). | ||||
|  | ||||
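As a minimal sketch of one such combination (assuming eslint-config-prettier and eslint-plugin-prettier are installed; the file name and rule severity are only illustrative), an `.eslintrc.js` could look like:

```javascript
// turn off the ESLint rules that conflict with Prettier and report Prettier differences as ESLint errors
module.exports = {
  extends: ['prettier'],
  plugins: ['prettier'],
  rules: {
    'prettier/prettier': 'error'
  }
};
```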
| @ -3,7 +3,7 @@ | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| Exception != Error. Traditional error handling assumes the existence of Exception but application errors might come in the form of slow code paths, API downtime, lack of computational resources and more. This is where APM products come handy as they allow with minimal setup to detect a wide variety of ‘burried’ issues proactively. Among the common features of APM products are – alerting when HTTP API returns errors, detect when API response time drops below some threshold, detection of ‘code smells’, monitor server resources, operational intelligence dashboard with IT metrics and many other useful features. Most vendors offer a free plan. | ||||
| Exception != Error. Traditional error handling assumes the existence of Exceptions, but application errors might come in the form of slow code paths, API downtime, lack of computational resources and more. This is where APM products come in handy, as they make it possible to detect a wide variety of ‘buried’ issues proactively with minimal setup. Common features of APM products include alerting when the HTTP API returns errors, detecting when the API response time crosses some threshold, detection of ‘code smells’, monitoring of server resources, an operational intelligence dashboard with IT metrics and many other useful features. Most vendors offer a free plan. | ||||
|  | ||||
| ### Wikipedia about APM | ||||
|  | ||||
| @ -14,11 +14,11 @@ Major products and segments | ||||
|  | ||||
| APM products constitute 3 major segments: | ||||
|  | ||||
| 1. Website or API monitoring – external services that constantly monitor uptime and performance via HTTP requests. Can be setup in few minutes. Following are few selected contenders: Pingdom, Uptime Robot, and New Relic | ||||
| 1. Website or API monitoring – external services that constantly monitor uptime and performance via HTTP requests. Can be set up in a few minutes. A few selected contenders: [Pingdom](https://www.pingdom.com/), [Uptime Robot](https://uptimerobot.com/), and [New Relic](https://newrelic.com/application-monitoring) | ||||
|  | ||||
| 2. Code instrumentation – products family which require to embed an agent within the application to benefit feature slow code detection, exceptions statistics, performance monitoring and many more. Following are few selected contenders: New Relic, App Dynamics | ||||
| 2. Code instrumentation – product family that requires embedding an agent within the application to use features like slow code detection, exception statistics, performance monitoring and many more. A few selected contenders: New Relic, App Dynamics | ||||
|  | ||||
| 3. Operational intelligence dashboard – these line of products are focused on facilitating the ops team with metrics and curated content that helps to easily stay on top of application performance. This usually involves aggregating multiple sources of information (application logs, DB logs, servers log, etc) and upfront dashboard design work. Following are few selected contenders: Datadog, Splunk     | ||||
| 3. Operational intelligence dashboard – this line of products is focused on providing the ops team with metrics and curated content that help to easily stay on top of application performance. This usually involves aggregating multiple sources of information (application logs, DB logs, server logs, etc) and upfront dashboard design work. A few selected contenders: [Datadog](https://www.datadoghq.com/), [Splunk](https://www.splunk.com/), [Zabbix](https://www.zabbix.com/) | ||||
|  | ||||
|  | ||||
|  | ||||
|  | ||||
| @ -4,7 +4,7 @@ | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| Typically, most of modern Node.JS/Express application code runs within promises – whether within the .then handler, a function callback or in a catch block. Suprisingly, unless a developer remembered to add a .catch clause, errors thrown at these places are not handled  by the uncaughtException event-handler and disappear.  Recent versions of Node added a warning message when an unhandled rejection pops, though this might help to notice when things go wrong but it's obviously not a proper error handling. The straightforward solution is to never forget adding .catch clause within each promise chain call and redirect to a centralized error handler. However building your error handling strategy only on developer’s discipline is somewhat fragile. Consequently, it’s highly recommended using a graceful fallback and subscribe to process.on(‘unhandledRejection’, callback) – this will ensure that any promise error, if not handled locally, will get its treatment. | ||||
| Typically, most of modern Node.JS/Express application code runs within promises – whether within the .then handler, a function callback or in a catch block. Surprisingly, unless a developer remembered to add a .catch clause, errors thrown at these places are not handled by the uncaughtException event-handler and disappear. Recent versions of Node added a warning message when an unhandled rejection pops up; this might help to notice when things go wrong, but it's obviously not a proper error handling method. The straightforward solution is to never forget adding .catch clauses within each promise chain call and to redirect to a centralized error handler. However, building your error handling strategy only on developers’ discipline is somewhat fragile. Consequently, it’s highly recommended to use a graceful fallback and subscribe to `process.on(‘unhandledRejection’, callback)` – this will ensure that any promise error, if not handled locally, will get its treatment. | ||||
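As a minimal sketch of such a fallback (reusing the centralized `errorManagement.handler` object that appears in the code examples further below; `isTrustedError` is a hypothetical helper name):

```javascript
// route promise rejections that were not handled locally to the centralized error handler
process.on('unhandledRejection', (reason) => {
  errorManagement.handler.handleError(reason);
  // if the error is not a known operational error, restarting gracefully is the safest option
  if (!errorManagement.handler.isTrustedError(reason))
    process.exit(1);
});
```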
|  | ||||
| <br/><br/> | ||||
|  | ||||
|  | ||||
| @ -3,7 +3,7 @@ | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| REST APIs return results using HTTP code, it’s absolutely required for the API user to be aware not only about the API schema but also about potential errors – the caller may then catch an error and tactfully handle it. For example, your API documentation might state in advanced that HTTP status 409 is returned when the customer name already exist (assuming the API register new users) so the caller can correspondingly render the best UX for the given situation. Swagger is a standard that defines the schema of API documentation offering an eco-system of tools that allow creating documentation easily online, see print screens below | ||||
| REST APIs return results using HTTP status codes, so it’s absolutely required for the API user to be aware not only of the API schema but also of potential errors – the caller may then catch an error and tactfully handle it. For example, your API documentation might state in advance that HTTP status 409 is returned when the customer name already exists (assuming the API registers new users) so the caller can correspondingly render the best UX for the given situation. Swagger is a standard that defines the schema of API documentation, offering an ecosystem of tools that allow creating documentation easily online, see screenshots below | ||||
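To make the 409 example concrete, a hypothetical registration endpoint (assuming an Express app; `customerRepository` is an illustrative name) that returns the documented status code could look like:

```javascript
// return the documented 409 status when the customer name is already taken
app.post('/customers', async (req, res, next) => {
  try {
    const exists = await customerRepository.existsByName(req.body.name); // assumed data-access helper
    if (exists)
      return res.status(409).json({ error: 'Customer name already exists' });
    const customer = await customerRepository.create(req.body);
    return res.status(201).json(customer);
  } catch (error) {
    next(error); // delegate unexpected errors to the centralized error handler
  }
});
```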
|  | ||||
| ### Blog Quote: "You have to tell your callers what errors can happen" | ||||
| From the blog Joyent, ranked 1 for the keywords “Node.JS logging” | ||||
| @ -12,4 +12,4 @@ From the blog Joyent, ranked 1 for the keywords “Node.JS logging” | ||||
|  | ||||
|   | ||||
|  ### Useful Tool: Swagger Online Documentation Creator | ||||
|  | ||||
|  | ||||
| @ -1,4 +1,4 @@ | ||||
| # Title | ||||
| # Monitoring | ||||
|  | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| @ -2,7 +2,7 @@ | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| Distinguishing the following two error types will minimize your app downtime and helps avoid crazy bugs: Operational errors refer to situations where you understand what happened and the impact of it – for example, a query to some HTTP service failed due to connection problem. On the other hand, programmer errors refer to cases where you have no idea why and sometimes where an error came from – it might be some code that tried to read an undefined value or DB connection pool that leaks memory. Operational errors are relatively easy to handle – usually logging the error is enough. Things become hairy when a programmer error pops up, the application might be in an inconsistent state and there’s nothing better you can do than restart gracefully | ||||
| Distinguishing the following two error types will minimize your app downtime and help avoid crazy bugs: Operational errors refer to situations where you understand what happened and its impact – for example, a query to some HTTP service failed due to a connection problem. On the other hand, programmer errors refer to cases where you have no idea why, and sometimes where, an error came from – it might be some code that tried to read an undefined value or a DB connection pool that leaks memory. Operational errors are relatively easy to handle – usually logging the error is enough. Things become hairy when a programmer error pops up: the application might be in an inconsistent state and there’s nothing better you can do than to restart gracefully | ||||
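A minimal sketch of how the distinction can be encoded (the `AppError` class name is illustrative; the `isOperational` flag matches the one referenced in the uncaughtException example below):

```javascript
// mark known, expected failures as operational so a centralized handler can decide whether to crash
class AppError extends Error {
  constructor(message, isOperational) {
    super(message);
    this.isOperational = isOperational;
  }
}

// an operational error: understood and recoverable, logging it is usually enough
throw new AppError('Could not reach the payments service', true);
```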
|  | ||||
|  | ||||
|  | ||||
|  | ||||
| @ -1,4 +1,4 @@ | ||||
| # Shut the process gracefully when a stranger comes to town | ||||
| # Exit the process gracefully when a stranger comes to town | ||||
|  | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
| @ -10,7 +10,6 @@ Somewhere within your code, an error handler object is responsible for deciding | ||||
| ### Code example: deciding whether to crash | ||||
|  | ||||
| ```javascript | ||||
| //deciding whether to crash when an uncaught exception arrives | ||||
| // Assuming developers mark known operational errors with error.isOperational=true, read best practice #3 | ||||
| process.on('uncaughtException', function(error) { | ||||
|   errorManagement.handler.handleError(error); | ||||
|  | ||||
| @ -3,7 +3,7 @@ | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| Testing ‘happy’ paths is no better than testing failures. Good testing code coverage demands to test exceptional paths. Otherwise, there is no trust that exceptions are indeed handled correctly. Every unit testing framework, like Mocha & Chai, has a support for exception testing (code examples below). If you find it tedious to test every inner function and exception – you may settle with testing only REST API HTTP errors. | ||||
| Testing ‘happy’ paths is no better than testing failures. Good test coverage demands testing exceptional paths. Otherwise, there is no trust that exceptions are indeed handled correctly. Every unit testing framework, like [Mocha](https://mochajs.org/) & [Chai](http://chaijs.com/), supports exception testing (code examples below). If you find it tedious to test every inner function and exception, you may settle for testing only REST API HTTP errors. | ||||
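For instance, a minimal Chai assertion for an exceptional path (the `productService.addProduct` name is only illustrative) might look like:

```javascript
const { expect } = require('chai');

// assert that invalid input indeed throws, not only that valid input succeeds
describe('addProduct', () => {
  it('throws when no product is provided', () => {
    expect(() => productService.addProduct(null)).to.throw(Error);
  });
});
```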
|  | ||||
|  | ||||
|  | ||||
| @ -32,7 +32,7 @@ it("Creates new Facebook group", function (done) { | ||||
|     body: invalidGroupInfo, | ||||
|     json: true | ||||
|   }).then((response) => { | ||||
|     //oh no if we reached here than no exception was thrown | ||||
|     // if we were to execute the code in this block, no error was thrown in the operation above | ||||
|   }).catch(function (response) { | ||||
|     expect(400).to.equal(response.statusCode); | ||||
|     done(); | ||||
|  | ||||
| @ -3,7 +3,7 @@ | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| We all loovve console.log but obviously a reputable and persisted Logger like [Winston][winston], [Bunyan][bunyan] (highly popular) or [Pino][pino] (the new kid in town which is focused on performance) is mandatory for serious projects. A set of practices and tools will help to reason about errors much quicker – (1) log frequently using different levels (debug, info, error), (2) when logging, provide contextual information as JSON objects, see example below. (3) watch and filter logs using a log querying API (built-in in most loggers) or a log viewer software | ||||
| (4) Expose and curate log statement for the operation team using operational intelligence tool like Splunk | ||||
| (4) Expose and curate log statements for the operations team using operational intelligence tools like Splunk | ||||
|  | ||||
| [winston]: https://www.npmjs.com/package/winston | ||||
| [bunyan]: https://www.npmjs.com/package/bunyan | ||||
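As a small illustration of point (2), logging contextual information as JSON metadata (field names are hypothetical):

```javascript
// log at different levels and attach contextual JSON so entries can be filtered and queried later
const winston = require('winston');

winston.info('New customer registered', { customerId: 'abc-123', plan: 'free' });
winston.error('Payment failed', { orderId: 42, reason: 'card declined' });
```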
| @ -41,7 +41,7 @@ var options = { | ||||
|  | ||||
| // Find items logged between today and yesterday. | ||||
| winston.query(options, function (err, results) { | ||||
|   //callback with results | ||||
|   // execute callback with results | ||||
| }); | ||||
|  | ||||
| ``` | ||||
|  | ||||
| @ -33,7 +33,7 @@ return new Promise(function (resolve, reject) { | ||||
| ### Code example – Anti Pattern | ||||
|  | ||||
| ```javascript | ||||
| //throwing a String lacks any stack trace information and other important properties | ||||
| // throwing a string lacks any stack trace information and other important data properties | ||||
| if(!productToAdd) | ||||
|     throw ("How can I add new product when no value provided?"); | ||||
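// Editorial sketch (not part of this commit): the preferred alternative is to throw an Error object,
// which preserves the stack trace and can carry extra properties (e.g. error.isOperational)
if (!productToAdd)
    throw new Error('How can I add new product when no value provided?');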
|  | ||||
|  | ||||
| @ -5,7 +5,7 @@ | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| APM, application performance monitoring refers to a familiy of products that aims to monitor application performance from end to end, also from the customer perspective. While traditional monitoring solutions focuses on Exceptions and standalone technical metrics (e.g. error tracking, slow server endpoints, etc), in real world our app might create disappointed users without any code exceptions, for example if some middleware service performed real slow. APM products measure the user experience from end to end, for example, given a system that encompass frontend UI and multiple distributed services – some APM products can tell how fast a transaction that spans multiple tiers last. It can tell whether the user experience is solid and point to the problem. This attractive offering comes with a relatively high price tag hence it’s recommended for large-scale and complex products that require to go beyond straightforwd monitoring. | ||||
| APM (application performance monitoring) refers to a family of products that aim to monitor application performance from end to end, including the customer perspective. While traditional monitoring solutions focus on Exceptions and standalone technical metrics (e.g. error tracking, slow server endpoints, etc), in the real world our app might create disappointed users without any code exceptions, for example if some middleware service performs really slowly. APM products measure the user experience from end to end; for example, given a system that encompasses a frontend UI and multiple distributed services, some APM products can tell how long a transaction that spans multiple tiers lasts. They can tell whether the user experience is solid and point to the problem. This attractive offering comes with a relatively high price tag, hence it’s recommended for large-scale and complex products that need to go beyond straightforward monitoring. | ||||
|  | ||||
| <br/><br/> | ||||
|  | ||||
|  | ||||
| @ -5,7 +5,7 @@ | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| A typical log is a warehouse of entries from all components and requests. Upon detection of some suspicious line or error it becomes hairy to match other lines that belong to the same specific flow (e.g. the user “John” tried to buy something). This becomes even more critical and challenging in microservices environment when a request/transaction might span across multiple computers. Address this by assigning a unique transaction identifier value to all the entries from the same request so when detecting one line one can copy the id and search for every line that has similar transaction Id. However, achieving this In Node is not straightforward as a single thread is used to serve all requests –consider using a library that that can group data on the request level – see code example on the next slide. When calling other microservice, pass the transaction Id using an HTTP header “x-transaction-id” to keep the same context. | ||||
| A typical log is a warehouse of entries from all components and requests. Upon detection of some suspicious line or error, it becomes hairy to match other lines that belong to the same specific flow (e.g. the user “John” tried to buy something). This becomes even more critical and challenging in a microservice environment when a request/transaction might span multiple computers. Address this by assigning a unique transaction identifier value to all the entries from the same request, so that when detecting one line one can copy the id and search for every line that has the same transaction Id. However, achieving this in Node is not straightforward as a single thread is used to serve all requests – consider using a library that can group data on the request level – see the code example below. When calling another microservice, pass the transaction Id using an HTTP header like “x-transaction-id” to keep the same context. | ||||
|  | ||||
| <br/><br/> | ||||
|  | ||||
| @ -15,19 +15,24 @@ A typical log is a warehouse of entries from all components and requests. Upon d | ||||
| ```javascript | ||||
| // when receiving a new request, start a new isolated context and set a transaction Id. The following example is using the NPM library continuation-local-storage to isolate requests | ||||
|   | ||||
| var createNamespace = require('continuation-local-storage').createNamespace; | ||||
| const { createNamespace } = require('continuation-local-storage'); | ||||
| var session = createNamespace('my session'); | ||||
|  | ||||
| router.get('/:id', (req, res, next) => { | ||||
|     session.set('transactionId', 'some unique GUID'); | ||||
|     someService.getById(req.params.id); | ||||
|     logger.info('Starting now to get something by Id'); | ||||
| }//Now any other service or components can have access to the contextual, per-request, data | ||||
| }); | ||||
|  | ||||
| // Now any other service or components can have access to the contextual, per-request, data | ||||
| class someService { | ||||
|     getById(id) { | ||||
|         logger.info(“Starting now to get something by Id”); | ||||
|         logger.info('Starting to get something by Id'); | ||||
|         // other logic comes here | ||||
|     } | ||||
| }//Logger can now append transaction-id to each entry, so that entries from the same request will have the same value | ||||
| } | ||||
|  | ||||
| // The logger can now append the transaction-id to each entry, so that entries from the same request will have the same value | ||||
| class logger { | ||||
|     info (message) | ||||
|     {console.log(`${message} ${session.get('transactionId')}`);} | ||||
|  | ||||
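// Editorial sketch (assumption, not part of this commit): with continuation-local-storage, the request
// must run inside the namespace before session.set() can store per-request values, typically via middleware
app.use((req, res, next) => {
    session.bindEmitter(req);
    session.bindEmitter(res);
    session.run(() => {
        session.set('transactionId', 'some unique GUID'); // e.g. generate a uuid per request
        next();
    });
});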
| @ -6,9 +6,10 @@ | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| Have you ever encountered a severe production issue where one server was missing some piece of configuration or data? That is probably due to some unnecessary dependency on some local asset that is not part of the deployment. Many successful products treat servers like a phoenix bird – it dies and is reborn periodically without any damage. In other words, a server is just a piece of hardware that executes your code for some time and is replaced after that. | ||||
| This approach: | ||||
| 1. allows to scale by adding and removing servers dynamically without any side-affect  | ||||
| 2. simplifies the maintenance as it frees our mind from evaluating each server state. | ||||
| This approach | ||||
|  | ||||
| - allows scaling by adding and removing servers dynamically without any side effects | ||||
| - simplifies the maintenance as it frees our minds from evaluating each server’s state. | ||||
|  | ||||
| <br/><br/> | ||||
|  | ||||
| @ -16,16 +17,19 @@ This approach: | ||||
| ### Code example: anti-patterns | ||||
|  | ||||
| ```javascript | ||||
| //Typical mistake 1: saving uploaded files locally in a server | ||||
| var multer  = require('multer'); //express middleware for fetching uploads | ||||
| // Typical mistake 1: saving uploaded files locally on a server | ||||
| var multer = require('multer'); // express middleware for handling multipart uploads | ||||
| var upload = multer({ dest: 'uploads/' }); | ||||
|  | ||||
| app.post('/photos/upload', upload.array('photos', 12), function (req, res, next) {}); | ||||
|  | ||||
| // Typical mistake 2: storing authentication sessions (passport) in a local file or memory | ||||
| var FileStore = require('session-file-store')(session); | ||||
| app.use(session({ | ||||
|     store: new FileStore(options), | ||||
|     secret: 'keyboard cat' | ||||
| })); | ||||
|  | ||||
| // Typical mistake 3: storing information on the global object | ||||
| global.someCacheLike.result = { somedata }; | ||||
| ``` | ||||
|  | ||||
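For contrast, a sketch of the stateless alternative to mistake 2, keeping the session in an external store such as Redis (assuming the classic connect-redis API; connection details are placeholders):

```javascript
// keep session state outside the server so any instance can be replaced without losing data
const session = require('express-session');
const RedisStore = require('connect-redis')(session);

app.use(session({
    store: new RedisStore({ host: 'sessions.example.com', port: 6379 }),
    secret: 'keyboard cat',
    resave: false,
    saveUninitialized: false
}));
```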
| @ -10,13 +10,15 @@ It’s very tempting to cargo-cult Express and use its rich middleware offering | ||||
| <br/><br/> | ||||
|  | ||||
|  | ||||
| ### Code Example – explanation | ||||
| ### Nginx Config Example – Using nginx to compress server responses | ||||
|  | ||||
| ```javascript | ||||
| ``` | ||||
| # configure gzip compression | ||||
| gzip on; | ||||
| #defining gzip compression | ||||
| gzip_comp_level 6; | ||||
| gzip_vary on; | ||||
|  | ||||
| # configure upstream | ||||
| upstream myApplication { | ||||
|     server 127.0.0.1:3000; | ||||
|     server 127.0.0.1:3001; | ||||
| @ -25,10 +27,12 @@ upstream myApplication { | ||||
|  | ||||
| #defining web server | ||||
| server { | ||||
|     # configure server with ssl and error pages | ||||
|     listen 80; | ||||
|     listen 443 ssl; | ||||
|     ssl_certificate /some/location/sillyfacesociety.com.bundle.crt; | ||||
|     error_page 502 /errors/502.html; | ||||
|  | ||||
|     # handling static content | ||||
|     location ~ ^/(images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico) { | ||||
|     root /usr/local/silly_face_society/node/public; | ||||
|  | ||||
| @ -7,8 +7,9 @@ | ||||
| Modern Node applications have tens and sometimes hundreds of dependencies. If any of the dependencies | ||||
| you use has a known security vulnerability your app is vulnerable as well. | ||||
| The following tools automatically check for known security vulnerabilities in your dependencies: | ||||
| [nsp](https://www.npmjs.com/package/nsp) - Node Security Project | ||||
| [snyk](https://snyk.io/) - Continuously find & fix vulnerabilities in your dependencies | ||||
|  | ||||
| - [nsp](https://www.npmjs.com/package/nsp) - Node Security Project | ||||
| - [snyk](https://snyk.io/) - Continuously find & fix vulnerabilities in your dependencies | ||||
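As a rough illustration, both tools are typically run from the command line or a CI step (exact flags may vary between versions):

```
nsp check
snyk test
```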
|  | ||||
| <br/><br/> | ||||
|  | ||||
|  | ||||
| @ -5,25 +5,30 @@ | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| In a classic web app the backend serves the frontend/graphics to the browser, a very common approach in the Node’s world is to use Express static middleware for streamlining static files to the client. BUT – Node is not a typical webapp as it utilizes a single thread that is not optimized to serve many files at once. Instead, consider using a reverse proxy, cloud storage or CDN (e.g. Nginx, AWS S3, Azure Blob Storage, etc) that utilizes many optimizations for this task and gain much better throughput. For example, specialized middleware like nginx embodies direct hooks between the file system and the network card and use a multi-threaded approach to minimize intervention among multiple requests. | ||||
| In a classic web app the backend serves the frontend/graphics to the browser. A very common approach in the Node world is to use the Express static middleware for streaming static files to the client. BUT – Node is not a typical web server, as it utilizes a single thread that is not optimized to serve many files at once. Instead, consider using a reverse proxy (e.g. nginx, HAProxy), cloud storage or a CDN (e.g. AWS S3, Azure Blob Storage, etc) that utilizes many optimizations for this task to gain much better throughput. For example, specialized software like nginx embodies direct hooks between the file system and the network card and uses a multi-threaded approach to minimize intervention among multiple requests. | ||||
|  | ||||
| Your optimal solution might wear one of the following forms: | ||||
| 1. A reverse proxy – your static files will be located right next to your Node application, only requests to the static files folder will be served by a proxy that sits in front of your Node app such as nginx. Using this approach, your Node app is responsible deploying the static files but not to serve them. Your frontend’s colleague will love this approach as it prevents cross-origin-requests from the frontend.  | ||||
| 2. Cloud storage – your static files will NOT be part of your Node app content, else they will be uploaded to services like AWS S3, Azure BlobStorage, or other similar services that were born for this mission. Using this approach, your Node app is not responsible deploying the static files neither to serve them, hence a complete decoupling is drawn between Node and the Frontend which is anyway handled by different teams. | ||||
|  | ||||
| 1. Using a reverse proxy – your static files will be located right next to your Node application; only requests to the static files folder will be served by a proxy that sits in front of your Node app, such as nginx. Using this approach, your Node app is responsible for deploying the static files but not for serving them. Your frontend colleagues will love this approach as it prevents cross-origin requests from the frontend. | ||||
|  | ||||
| 2. Cloud storage – your static files will NOT be part of your Node app content; they will be uploaded to services like AWS S3, Azure Blob Storage, or other similar services that were born for this mission. Using this approach, your Node app is responsible neither for deploying the static files nor for serving them, hence a complete decoupling is drawn between Node and the frontend, which is anyway handled by different teams. | ||||
|  | ||||
| <br/><br/> | ||||
|  | ||||
|  | ||||
| ### Code example: typical nginx configuration for serving static files | ||||
| ### Configuration example: typical nginx configuration for serving static files | ||||
|  | ||||
| ```javascript | ||||
| ``` | ||||
| # configure gzip compression | ||||
| gzip on; | ||||
| #defining gzip compression | ||||
| keepalive 64; | ||||
| }#defining web server | ||||
|  | ||||
| # defining web server | ||||
| server { | ||||
| listen 80; | ||||
| listen 443 ssl;#handling static content | ||||
| listen 443 ssl; | ||||
|  | ||||
| # handle static content | ||||
| location ~ ^/(images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico) { | ||||
| root /usr/local/silly_face_society/node/public; | ||||
| access_log off; | ||||
|  | ||||
| @ -5,7 +5,7 @@ | ||||
|  | ||||
| ### One Paragraph Explainer | ||||
|  | ||||
| At the base level, Node processes must be guarded and restarted upon failures. Simply put, for small apps and those who don’t use containers – tools like [PM2](https://www.npmjs.com/package/pm2-docker) are perfect as they bring simplicity, restarting capabilities and also rich integration with Node. Others with strong Linux skills might use systemd and run Node as a service. Things get more interesting for apps that uses Docker or any container technology since those are usually accompanies by cluster management tools (e.g. [AWS ECS](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html), [Kubernetes](https://kubernetes.io/), etc) that deploy, monitor and heal containers. Having all those rich cluster management features including container restart, why mess up with other tools like PM2? There’s no bullet proof answer. There are good reasons to keep PM2 within containers (mostly its containers specific version [pm2-docker](https://www.npmjs.com/package/pm2-docker)) as the first guarding tier – it’s much faster to restart a process and provide Node-specific features like flagging to the code when the hosting container asks to gracefully restart. Other might choose to avoid unnecessary layers. To conclude this write-up, no solution suits them all and getting to know the options is the important thing | ||||
| At the base level, Node processes must be guarded and restarted upon failures. Simply put, for small apps and for those who don’t use containers – tools like [PM2](https://www.npmjs.com/package/pm2-docker) are perfect as they bring simplicity, restarting capabilities and also rich integration with Node. Others with strong Linux skills might use systemd and run Node as a service. Things get more interesting for apps that use Docker or any container technology since those are usually accompanied by cluster management and orchestration tools (e.g. [AWS ECS](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html), [Kubernetes](https://kubernetes.io/), etc) that deploy, monitor and heal containers. With all those rich cluster management features including container restart, why mess with other tools like PM2? There’s no bulletproof answer. There are good reasons to keep PM2 within containers (mostly its container-specific version [pm2-docker](https://www.npmjs.com/package/pm2-docker)) as the first guarding tier – it’s much faster to restart a process and it provides Node-specific features like flagging to the code when the hosting container asks to gracefully restart. Others might choose to avoid the unnecessary layer. To conclude this write-up, no solution suits every case and getting to know the options is the important thing | ||||
|  | ||||
| <br/><br/> | ||||
|  | ||||
|  | ||||
| @ -7,7 +7,7 @@ | ||||
|  | ||||
|  | ||||
|  | ||||
| Your code depends on many external packages, let’s say it ‘requires’ and use momentjs-2.1.4, then by default when you deploy to production NPM might fetch momentjs 2.1.5 which unfortunately brings some new bugs to the table. Using NPM config files and the argument ```–save-exact=true``` instructs NPM to refer to the *exact* same version that was installed so the next time you run ```npm install``` (at production or within a Docker container you plan to ship forward for testing) the same dependent version will be fetched. An alternative popular approach is using a .shrinkwrap file (easily generated using NPM) that states exactly which packages and versions should be installed so no environement can get tempted to fetch newer versions than expected. | ||||
| Your code depends on many external packages; let’s say it ‘requires’ and uses momentjs-2.1.4, then by default when you deploy to production NPM might fetch momentjs 2.1.5, which unfortunately brings some new bugs to the table. Using NPM config files and the argument ```--save-exact=true``` instructs NPM to refer to the *exact* same version that was installed, so the next time you run ```npm install``` (in production or within a Docker container you plan to ship forward for testing) the same dependent version will be fetched. An alternative and popular approach is using a .shrinkwrap file (easily generated using NPM) that states exactly which packages and versions should be installed so no environment can get tempted to fetch newer versions than expected. | ||||
|  | ||||
| * **Update:** as of NPM 5, dependencies are locked automatically using .shrinkwrap. Yarn, an emerging package manager, also locks down dependencies by default | ||||
|  | ||||
| @ -17,7 +17,7 @@ Your code depends on many external packages, let’s say it ‘requires’ and u | ||||
|  | ||||
| ### Code example: .npmrc file that instructs NPM to use exact versions | ||||
|  | ||||
| ```javascript | ||||
| ``` | ||||
| # save this as a .npmrc file in the project directory | ||||
| save-exact=true | ||||
| ``` | ||||
|  | ||||
| @ -6,7 +6,7 @@ | ||||
|  | ||||
| At the very basic level, monitoring means you can *easily* identify when bad things happen in production, for example by getting notified by email or Slack. The challenge is to choose the right set of tools that will satisfy your requirements without breaking the bank. May I suggest starting with defining the core set of metrics that must be watched to ensure a healthy state – CPU, server RAM, Node process RAM (less than 1.4GB), the number of errors in the last minute, number of process restarts, average response time. Then go over some advanced features you might fancy and add them to your wish list. Some examples of luxury monitoring features: DB profiling, cross-service measuring (i.e. measuring a business transaction), frontend integration, exposing raw data to custom BI clients, Slack notifications and many others. | ||||
|  | ||||
| Achieving the advanced features demands lengthy setup or buying a commercial product such as Datadog, newrelic and a like. Unfortunately, achieving even the basics is not a walk in the park as some metrics are hardware-related (CPU) and others live within the node process (internal errors) thus all the straightforward tools require some additional setup. For example, cloud vendor monitoring solutions (e.g. [AWS CloudWatch](https://aws.amazon.com/cloudwatch/), [Google StackDriver](https://cloud.google.com/stackdriver/)) will tell you immediately about the hardware metric but nothing about the internal app behavior. On the other end, Log-based solutions such as ElasticSearch lack the hardware view by default. The solution is to augment your choice with missing metrics, for example, a popular choice is sending application logs to [Elastic stack](https://www.elastic.co/products) and configure some additional agent (e.g. [Beat](https://www.elastic.co/products)) to share hardware-related information to get the full picture. | ||||
| Achieving the advanced features demands lengthy setup or buying a commercial product such as Datadog, New Relic and the like. Unfortunately, achieving even the basics is not a walk in the park as some metrics are hardware-related (CPU) and others live within the node process (internal errors), thus all the straightforward tools require some additional setup. For example, cloud vendor monitoring solutions (e.g. [AWS CloudWatch](https://aws.amazon.com/cloudwatch/), [Google StackDriver](https://cloud.google.com/stackdriver/)) will tell you immediately about the hardware metrics but not about the internal app behavior. On the other end, log-based solutions such as ElasticSearch lack the hardware view by default. The solution is to augment your choice with the missing metrics; for example, a popular choice is sending application logs to the [Elastic stack](https://www.elastic.co/products) and configuring some additional agent (e.g. [Beat](https://www.elastic.co/products)) to share hardware-related information to get the full picture. | ||||
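As a minimal, hypothetical illustration of exposing the in-process part of those core metrics for an external monitor to scrape (assuming an Express app):

```javascript
// expose basic process metrics so an external monitor or load balancer can watch the app's health
app.get('/health', (req, res) => {
    res.json({
        status: 'ok',
        uptimeSeconds: process.uptime(),
        memory: process.memoryUsage(), // keep an eye on heapUsed relative to the ~1.4GB default limit
        pid: process.pid
    });
});
```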
|  | ||||
|  | ||||
| <br/><br/> | ||||
|  | ||||
| @ -13,11 +13,12 @@ Process environment variables is a set of key-value pairs made available to any | ||||
| ### Code example: Setting and reading the NODE_ENV environment variable | ||||
|  | ||||
| ```javascript | ||||
| //Using a command line, initializing node process and setting before environment variables | ||||
| Set NODE_ENV=development&& set otherVariable=someValue&& node | ||||
| // Setting environment variables in bash before starting the node process | ||||
| $ export NODE_ENV=development | ||||
| $ node | ||||
|   | ||||
| // Reading the environment variable using code | ||||
| If(process.env.NODE_ENV === “production”) | ||||
| if (process.env.NODE_ENV === "production") | ||||
|     useCaching = true; | ||||
| ``` | ||||
|  | ||||
| @ -29,7 +30,7 @@ From the blog [dynatrace](https://www.dynatrace.com/blog/the-drastic-effects-of- | ||||
| > ...In Node.js there is a convention to use a variable called NODE_ENV to set the current mode. We see that it in fact reads NODE_ENV and defaults to ‘development’ if it isn’t set. We clearly see that by setting NODE_ENV to production the number of requests Node.js can handle jumps by around two-thirds while the CPU usage even drops slightly. *Let me emphasize this: Setting NODE_ENV to production makes your application 3 times faster!* | ||||
|  | ||||
|  | ||||
|  | ||||
|  | ||||
|  | ||||
|   | ||||
| <br/><br/> | ||||
|  | ||||