Fixed per the PR comments

This commit is contained in:
Yoni
2020-07-26 16:57:43 +03:00
parent 3af0b7295e
commit d0e16544dd
2 changed files with 50 additions and 1 deletion


@@ -0,0 +1,49 @@
# Use .dockerignore to prevent leaking secrets
<br/><br/>
### One Paragraph Explainer
The Docker build command copies the local files into the build context environment over a virtual network. Be careful - development and CI folders contain secrets like .npmrc, .aws, and .env files, among other sensitive material. Consequently, Docker images might hold secrets and expose them in unsafe territory (e.g. a Docker registry, partners' servers). Ideally, the Dockerfile should be explicit about what is being copied. On top of this, include a .dockerignore file that acts as the last safety net, filtering out unnecessary folders and potential secrets. Doing so also boosts the build speed - by leaving out common development folders that have no use in production (e.g. .git, test results, IDE configuration), the build can better utilize the cache and achieve better performance.
<br/><br/>
### Code Example - A good default .dockerignore for Node.js
<details>
<summary><strong>.dockerignore</strong></summary>
```
**/node_modules/
**/.git
**/README.md
**/LICENSE
**/.vscode
**/npm-debug.log
**/coverage
**/.env
**/.editorconfig
**/.aws
**/dist
```
</details>
<br/><br/>
### Code Example - Anti-pattern: recursive copy of all files
<details>
<summary><strong>Dockerfile</strong></summary>
```
FROM node:12-slim AS build
WORKDIR /usr/src/app
# The next line copies everything
COPY . .
# The rest comes here
```
</details>
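
By contrast, an explicit Dockerfile copies only what production needs. The sketch below assumes a conventional layout (a `src` folder and lockfile-based installs); adjust the paths to your project:

```
FROM node:12-slim AS build
WORKDIR /usr/src/app
# Copy only the dependency manifests first, so this layer is cached
# until package.json or package-lock.json actually changes
COPY package.json package-lock.json ./
RUN npm ci --only=production
# Copy only the application source - no .env, .git, or IDE folders
COPY src ./src
CMD ["node", "src/server.js"]
```

Together with .dockerignore, the explicit COPY lines make it obvious at a glance which files can end up inside the image.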


@@ -4,7 +4,7 @@
### One Paragraph Explainer
- Docker runtime orchestrators like Kubernetes are really good at making containers health and placement decisions: They will take care to maximize the number of containers, balance them across zones, and can conclude many cluster factors while making these decisions. Goes without words, they identify failing processes (i.e., containers) and restart them in the right place. Despite that, some may tempt to use custom code or tools to replicate the Node process for CPU utilization or restart the process upon failure (e.g., Cluster module, PM2). These local tools don't have the perspective and the data that is available on the cluster level. For example, when the instances resources can host 3 containers and given 2 regions or zones, Kubernetes will take care to spread the containers across zones. This way, in case of a zonal or regional failure, the app will stay alive. On the contrary side, When using local tools for restarting the process, the Docker orchestrator is not aware of the errors and can not make thoughtful decisions like relocating the container to a new instance or zone.
+ Docker runtime orchestrators like Kubernetes are really good at making container health and placement decisions: they strive to maximize the number of running containers, balance them across zones, and take into account many cluster factors while making these decisions. Needless to say, they identify failing processes (i.e., containers) and restart them in the right place. Despite that, some may be tempted to use custom code or tools to replicate the Node process for CPU utilization or to restart the process upon failure (e.g., the Cluster module, PM2). These local tools lack the perspective and the data that are available at the cluster level. For example, when the instance resources can host 3 containers and there are 2 regions or zones, Kubernetes will take care to spread the containers across the zones. This way, in case of a zonal or regional failure, the app stays alive. On the contrary, when local tools restart the process, the Docker orchestrator is not aware of the errors and cannot make thoughtful decisions like relocating the container to a new instance or zone.
<br/><br/>
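
As a sketch of the cluster-level alternative, a minimal Kubernetes Deployment (the name, image, and replica count below are illustrative, not from the source) lets the orchestrator handle replication and restarts instead of PM2 or the Cluster module:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-node-app        # illustrative name
spec:
  replicas: 3              # Kubernetes replicates the process, not PM2/Cluster
  selector:
    matchLabels:
      app: my-node-app
  template:
    metadata:
      labels:
        app: my-node-app
    spec:
      containers:
        - name: my-node-app
          image: my-node-app:1.0.0   # illustrative image tag
          # Run a single Node process per container - no cluster/PM2 wrapper,
          # so a crash exits the container and Kubernetes restarts or
          # reschedules it with full cluster awareness
          command: ["node", "server.js"]
```

With `restartPolicy` defaulting to `Always`, a failed container is restarted by the kubelet, and the scheduler can place replicas across zones - exactly the decisions a local process manager cannot make.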