To guarantee consistency, streamline maintenance, and eliminate the perpetual “it works on my machine” problem, a modern infrastructure is built following the immutable infrastructure methodology.
Immutable infrastructure is an approach to managing services and software deployments on IT resources in which components are replaced rather than modified in place: whenever a change is needed, the application or service is redeployed from a new artifact.
In this setup, container images are the fundamental building blocks. Constructing these images according to industry best practices is essential to keep the overall ecosystem stable and to take full advantage of immutable infrastructure.
Golden Rules
To keep our in-house container images relevant and to make the infrastructure easier to extend and maintain, our containerization strategy follows these key principles:
Enforce Consistency
To establish uniformity and simplify maintenance, we recommend using a repository template (such as a Cookiecutter template repository) for every container image we maintain. This repository encompasses the desired structure, the pipeline configuration, and the mandatory files needed to validate compliance with industry best practices.
Ensure Reusability
Across our infrastructure, all containers should inherit from the primary container base image. For example, if you base your images on “ubuntu:jammy,” it serves from then on as the starting point for everything built on top. You can also extend the “base” terminology to more specialized images tied to languages and runtimes such as Python or OpenJDK, or to process flows such as Semantic Release. These child images should remain specialized, focused on a single purpose. When the behavior of a desired container image can be generalized, it is advisable to create a fresh base image that holds the common attributes; the intended child image can then extend this base and tailor it to specific requirements.
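As a minimal sketch of this layering, assume a hypothetical internal registry named registry.example.com hosting a shared base image built on ubuntu:jammy; a specialized Python child image could then extend it (all image and file names below are placeholders):

# Dockerfile.base: hypothetical shared base image built once on top of ubuntu:jammy
FROM ubuntu:jammy
RUN apt-get update && apt-get install -y --no-install-recommends \
        ca-certificates \
        curl \
    && rm -rf /var/lib/apt/lists/*

# Dockerfile.python: hypothetical child image that inherits the shared base
# and stays focused on a single purpose, running Python workloads
FROM registry.example.com/base:jammy
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 \
        python3-pip \
    && rm -rf /var/lib/apt/lists/*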
Avoid Running as Root
Default security policies within Kubernetes clusters and other container orchestration engines commonly prohibit running containers as root. Resist the temptation to run as root simply to work around permission-related problems; instead, focus on addressing the underlying issues that cause them.
Include Health/Liveness Checks
Containers acting as long-running services should be based on images that incorporate health checks. This ensures that such services stay healthy and are restarted automatically when necessary. For images operating within Kubernetes clusters, it is also essential to define Liveness, Readiness, and Startup Probes:
- Liveness probes are designed to facilitate container restarts, especially in scenarios involving deadlocks or instances where an application is operational but unable to progress.
- Readiness probes are instrumental in determining when a container is prepared to accept incoming traffic. The readiness status of all containers within a Pod influences the Pod’s readiness as a whole. This status plays a role in governing which Pods serve as backends for Services. When a Pod is not ready, it is automatically excluded from Service load balancing.
- Startup probes are employed to ascertain when a container application has been successfully initiated. If a startup probe is configured, it suspends liveness and readiness checks until a successful startup is confirmed. This prevents these checks from interfering with the startup process. Such probes prove valuable for containers with slow startups, preventing premature termination by the kubelet before the application is fully operational.
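At the image level, a Docker HEALTHCHECK instruction is one way to bake a health check in. The sketch below assumes a hypothetical long-running web service exposing a /healthz endpoint on port 8080 and a base image that ships curl; note that Kubernetes ignores HEALTHCHECK and relies instead on the probes described above, which are configured in the Pod spec.

# Hypothetical service; the binary name, port, and /healthz endpoint are assumptions.
FROM registry.example.com/base:jammy
COPY ./my-service /usr/local/bin/my-service
# Docker-level health check: mark the container unhealthy if the endpoint stops responding.
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
    CMD curl -fsS http://localhost:8080/healthz || exit 1
# Arbitrary non-root UID, in line with the "avoid running as root" rule.
USER 10001
ENTRYPOINT ["/usr/local/bin/my-service"]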
Frequent Image Updates
Plan for regularly updated base images and rebuild your own images on top of them. Given the ongoing discovery of security vulnerabilities, adopting the latest security patches is a prudent security practice. While immediately migrating to the latest version is not always advisable due to potentially disruptive changes, it is recommended to establish a versioning strategy:
- Prioritize adherence to stable or long-term support versions, as they offer timely security fixes.
- Prepare for migration before the base image version reaches the end of its support lifecycle.
In addition, consider periodically rebuilding your images, using a similar approach to pick up the most recent packages from the base distribution and language environments (e.g., Node, Golang, Python). Common package or dependency managers like npm or go mod offer mechanisms to specify version ranges, making it easier to incorporate the latest security updates.
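As a sketch of such a versioning strategy, assuming a hypothetical Node.js service (the entry point server.js and the package files are placeholders): the base image is pinned to an LTS line rather than “latest,” so periodic rebuilds pick up the newest patches within that line, while dependency ranges in package.json pull in compatible security updates.

# Pin the base to an LTS line instead of "latest": rebuilds pick up security
# patches for that line without silently jumping to a new major version.
FROM node:20-bookworm-slim
WORKDIR /app
# package.json can constrain dependency ranges (e.g. "^4.18.2"), so a periodic
# rebuild pulls in the latest compatible security updates.
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
# The official Node image ships a non-root "node" user.
USER node
CMD ["node", "server.js"]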
Best practices for writing Dockerfiles
A team should consider various collections of rules when creating a new container image. We recommend looking at the following resources before defining your rule set. Keep in mind we’ll be showcasing Docker as the container runtime.
- Awesome Docker (source)
- Docker: Best practices for writing Dockerfiles (source)
- Sysdig: Top 20 Dockerfile best practices (source)
From our point of view, the following rules should be included in any Dockerfile guideline.
Use trusted base images
Thoughtfully selecting the foundation for your images (utilizing the FROM instruction) is paramount. Constructing your containers on top of untrusted or neglected images will inevitably inherit all the issues and vulnerabilities inherent in those images.
Adhere to these Dockerfile best practices when opting for your base images:
- Prioritize Verified and Official Images: It is advisable to favor authenticated and officially endorsed images sourced from reputable repositories and providers. These should take precedence over images crafted by unfamiliar users.
- Scrutinize Custom Images: Exercise caution by verifying the image source and reviewing the associated Dockerfile when employing customized images. Where customizations are essential, consider building your own base image. The mere presence of an image in a public registry does not guarantee that it was built from the published Dockerfile, nor does it guarantee ongoing updates to its content.
- Assess Official Image Suitability: While official images are often preferred, there might be instances where their compatibility with security standards and resource minimalism is in question. In such cases, exploring alternative options that align better with your requirements may be necessary.
Ultimately, meticulous consideration of your image foundation can significantly influence the security and stability of your containerized environment.
Avoid unnecessary privileges
A recent report highlighted that 58% of images are running the container entry point as root (UID 0).
Nevertheless, it is considered a Dockerfile best practice to steer clear of such an approach. Instances where a container must operate with root privileges are exceedingly rare. Therefore, it’s imperative to remember to incorporate the USER instruction, which alters the default effective User ID (UID) to that of a non-root user.
Additionally, it’s worth noting that your operational environment might automatically disallow the execution of containers as root. This could be enforced as part of security policies, as observed in environments like AKS (Azure Kubernetes Service) Security Policy.
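A minimal sketch of a non-root image, assuming a hypothetical application binary; the user name and numeric UID below are arbitrary placeholders:

FROM ubuntu:jammy
# Create a dedicated, unprivileged user (name and UID are placeholders).
RUN groupadd --gid 10001 app \
    && useradd --uid 10001 --gid app --create-home --shell /usr/sbin/nologin app
COPY ./my-app /usr/local/bin/my-app
# Switch the default effective UID away from root for everything that runs from here on.
USER 10001
ENTRYPOINT ["/usr/local/bin/my-app"]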
Prevent confidential data leaks
Exercise extreme caution when handling sensitive data within containers. Under no circumstances should any secrets or credentials be included in Dockerfile instructions—whether in environment variables, arguments, or hard-coded within commands.
Furthermore, exercise heightened vigilance when dealing with files copied into the container. Even if a file is deleted in a subsequent Dockerfile instruction, it remains accessible in previous layers. It is not truly eradicated; rather, it is merely concealed in the final filesystem. To navigate these challenges and ensure security, adhere to the following practices during image construction:
- If your application supports configuration through environment variables, leverage this functionality to set secrets during execution (using the -e option in docker run). Alternatively, consider utilizing Docker or Kubernetes secrets to supply values as environment variables.
- When working with configuration files, opt to bind mount these files within Docker, or alternatively mount them from a Kubernetes secret. It is crucial to ensure that your images do not contain any confidential information or configuration values that tie them to specific environments (e.g., production, staging).
- Instead, design your images to be adaptable by allowing runtime injection of values, particularly secrets. Configuration files within the image should solely contain secure or placeholder values, serving as examples rather than actual sensitive data.
By adhering to these guidelines, you can bolster the security of your containers and safeguard sensitive information from exposure.
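The sketch below illustrates the idea for a hypothetical service that reads DB_PASSWORD from its environment: the image ships only placeholder values, and the real secret is injected at runtime (for example with docker run -e, or a Docker/Kubernetes secret).

FROM registry.example.com/base:jammy
COPY ./my-service /usr/local/bin/my-service
# Only an empty placeholder is baked in; never hard-code the real secret here.
# The actual value is supplied at runtime (docker run -e, Docker/Kubernetes secrets).
ENV DB_PASSWORD=""
# Example configuration with dummy values only; the environment-specific file is
# bind-mounted or provided via a Kubernetes Secret at deploy time.
COPY config/config.example.yaml /etc/my-service/config.yaml
USER 10001
ENTRYPOINT ["/usr/local/bin/my-service"]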
Avoid tag mutability
Tag mutability can introduce multiple functional and security issues. In container land, tags are a volatile reference to a concrete image version at a specific point in time. Tags can change unexpectedly and at any moment, which may cause, among other things, a Time-of-check vs. Time-of-use (TOCTOU) problem: the image verified during the CI/CD pipeline or the Kubernetes admission phase may differ from the image deployed in the cluster, bypassing image-scanning security checks.
Using immutable tags would prevent these problems, but tag mutability is very convenient in many scenarios, and immutable tags are not widely supported in the registries.
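Where the registry does not offer immutable tags, pinning the base image by digest is one way to obtain an immutable reference; the digest below is a placeholder, not a real value:

# "ubuntu:jammy" alone is a mutable tag and may point to a different image tomorrow.
# Appending the digest pins the build to the exact image that was verified
# (replace the placeholder with the digest reported by your registry or scanner).
FROM ubuntu:jammy@sha256:<digest-of-the-verified-image>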
Use version pinning
Using

RUN apt-get update && apt-get install -y

ensures your Dockerfile installs the latest package versions with no further coding or manual intervention. This technique is known as “cache-busting.” You can also achieve cache-busting by specifying a package version. This is known as version pinning, for example:

RUN apt-get update && apt-get install -y \
    package-bar \
    package-baz \
    package-foo=1.3.*
Implementing version pinning serves multiple important purposes in the context of building container images. This practice mandates retrieving a specific version, irrespective of cached content, during the build process. Consequently, it not only aids in mitigating failures arising from unforeseen alterations in required packages but also ensures consistency by preventing the inclusion of different versions of third-party components across consecutive builds associated with the same container image tag.
Use metadata labels
It is a Dockerfile best practice to include metadata labels when building your image. Labels help with image management by recording details such as the application version, a link to the project website, how to contact the maintainer, and more.
You can look at the predefined annotations from the OCI image spec, which deprecate the earlier Label Schema draft standard.
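A short sketch using the predefined OCI annotation keys; the label values below are placeholders for a hypothetical project:

FROM registry.example.com/base:jammy
# Predefined org.opencontainers.image.* annotation keys; values are placeholders.
LABEL org.opencontainers.image.title="my-service" \
      org.opencontainers.image.version="1.4.2" \
      org.opencontainers.image.source="https://example.com/acme/my-service" \
      org.opencontainers.image.authors="platform-team@example.com" \
      org.opencontainers.image.description="Hypothetical example service"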
Use multistage builds if possible
Make use of multi-stage build features to keep your builds reproducible inside containers.
In a multi-stage build, you create an intermediate container – or stage – with all the required tools to compile or produce your final artifacts (i.e., the final executable). Then, you copy only the resulting artifacts to the final image without additional development dependencies, temporary build files, etc.
A well-crafted multistage build includes only the minimal required binaries and dependencies in the final image and does not build tools or intermediate files. This reduces the attack surface, decreasing vulnerabilities. It is safer, and it also reduces image size.
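As a minimal sketch, assuming a hypothetical Go service (the module path, binary name, and image tags are assumptions): the first stage carries the full toolchain, while the final image contains only the compiled binary.

# Build stage: contains the Go toolchain and everything needed to compile.
FROM golang:1.21-bookworm AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/my-service ./cmd/my-service

# Final stage: only the compiled binary, no compilers or intermediate build files.
# The :nonroot tag already runs as a non-root user.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /out/my-service /usr/local/bin/my-service
ENTRYPOINT ["/usr/local/bin/my-service"]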
By Alex Coman – Software Architect, and Gabriel Paiu – Infrastructure Architect
If you are interested in DevOps or infrastructure and cloud, then you may also be interested in:
Navigating towards Kubernetes by the same writing duo, Alex and Gabriel.