diff --git a/README.md b/README.md
index dfeb8758..60612ac0 100644
--- a/README.md
+++ b/README.md
@@ -1131,13 +1131,13 @@ Bear in mind that with the introduction of the new V8 engine alongside the new E


-## ![βœ”] 8.6. Set Docker memory limits which are in-par with v8 memory limit
+## ![βœ”] 8.6. Set memory limits using Docker

-**TL;DR:**
+**TL;DR:** Always configure a memory limit using Docker, and optionally set the v8 limit as well. In practice, use the Docker flag 'docker run --memory' or set the right values within the platform that runs Docker. This way, the runtime can make better decisions about when to scale, prevent one citizen from starving the others, and make thoughtful crash decisions (e.g., Docker can allow slight burst deviations). Overall, it's always better to move hardware decisions to the Ops court

-**Otherwise:**
+**Otherwise:** When the limit is set only with the V8 flag --max-old-space-size, the Docker runtime won't be aware of the container's capacity and may blindly place it on an instance that doesn't have the right size

-πŸ”— [**Read More: Set Docker memory limits which are in-par with v8 memory limit**](/sections/docker/file.md)
+πŸ”— [**Read More: Set memory limits using Docker only**](/sections/docker/memory-limit.md)


diff --git a/sections/docker/memory-limit.md b/sections/docker/memory-limit.md
new file mode 100644
index 00000000..264dc89b
--- /dev/null
+++ b/sections/docker/memory-limit.md
@@ -0,0 +1,84 @@
+# Set memory limits using Docker only
+
+

+
+### One Paragraph Explainer
+
+A memory limit tells the process/container the maximum allowed memory usage - a request or usage beyond this number will kill the process (OOMKill). Applying this is a great practice to ensure one citizen doesn't drink all the juice alone and leave other components starving. Memory limits also allow the runtime to place a container on the right instance - placing a container that consumes 500MB on an instance with only 300MB of memory available will lead to failures. Two different options allow configuring this limit: the V8 flag (--max-old-space-size) and the Docker runtime. There is confusion around which of those to set: always configure the Docker runtime limits, as it has a much wider perspective for making the right decisions. First, given this limit, the runtime knows how to scale and create more resources. It can also make a thoughtful decision about when to crash - if a container has a short burst of memory demand and the hosting instance can support it, Docker will let the container stay alive. Docker also measures the memory that is actually used, not the allocated part (unlike v8 --max-old-space-size), so the container gets killed only when really needed. Last, with Docker the Ops experts can apply various production memory configurations that can be taken into account, like memory swap.
+

+
+### Code Example – Memory limit with Docker
+
+Bash
+
+```bash
+docker run --memory 512m my-node-app
+```
+
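+The explainer mentions that Ops experts may also take memory swap into account. As a hedged sketch (the image name `my-node-app` and the sizes are illustrative, not from the original), the swap ceiling can be set alongside the memory limit:
+
+```bash
+# Limit RAM to 512 MB; --memory-swap caps RAM + swap combined at 1 GB,
+# so up to 512 MB of the container's memory may be swapped out
+docker run --memory 512m --memory-swap 1g my-node-app
+```
+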
+
+

+
+### Code Example – Memory limit with Kubernetes
+
+Kubernetes deployment yaml
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-node-app
+spec:
+  containers:
+    - name: my-node-app
+      image: my-node-app
+      resources:
+        requests:
+          memory: "400Mi"
+        limits:
+          memory: "500Mi"
+      command: ["node", "index.js"]
+```
+
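+To confirm that the limit actually landed on the pod, a quick read-back can be done with kubectl (a sketch; it assumes the pod name used above):
+
+```bash
+# Print the memory limit Kubernetes applied to the first container
+kubectl get pod my-node-app -o jsonpath='{.spec.containers[0].resources.limits.memory}'
+```
+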
+
+

+
+### Code Example Anti-Pattern – Setting memory limit using V8 flags
+
+
+Kubernetes deployment yaml
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: my-node-app
+spec:
+  containers:
+    - name: my-node-app
+      image: my-node-app
+      command: ["node", "--max-old-space-size=400", "index.js"]
+```
+
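+If the optional v8 limit mentioned in the TL;DR is wanted as well, it should sit below the Docker limit so that Docker remains the source of truth. A hedged sketch (sizes and image name are illustrative; the ~75% ratio is a common rule of thumb, not a fixed rule):
+
+```bash
+# Docker enforces the 512 MB hard limit; v8's old space is capped below it (~75%)
+docker run --memory 512m my-node-app node --max-old-space-size=384 index.js
+```
+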
+
+

+
+### Kubernetes documentation: "If you do not specify a memory limit"
+
+From the [K8S documentation](https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/):
+
+> The Container has no upper bound on the amount of memory it uses. The Container could use all of the memory available on the Node where it is running which in turn could invoke the OOM Killer. Further, in case of an OOM Kill, a container with no resource limits will have a greater chance of being killed.
+

+
+### Docker documentation: "it throws an OOME and starts killing processes"
+
+From the [Docker official docs](https://docs.docker.com/config/containers/resource_constraints/):
+
+> It is important not to allow a running container to consume too much of the host machine’s memory. On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory.