Software Containers: Defending Against the Security Risks

Software developers are making ever greater use of software containers to deploy applications, observes Marco Rottigni. Analyst firm Gartner has predicted that around 50% of all global businesses will be running containers in production, up from just 20% in 2017. The reason is simple: containers support faster deployments that can scale up rapidly, and running applications in containers can provide more flexibility and agility than traditional application deployments.

However, this agility and speed can be problematic if deployments are not managed well. As security professionals, we have to develop a thorough understanding of these environments in order to keep implementations secure. Taking a ‘security by design’ approach to containers from the start can help.

Similar to virtualisation, containers provide a way to package applications and decouple them from the underlying software and hardware. What makes containerisation different to virtualisation is that containers include only the specific components needed to support a particular application. Based on LXC – a Linux operating system-level virtualisation method for running multiple isolated Linux systems on a single host – containers offer a convenient way to encapsulate a small application component.

This makes it very easy to develop and deploy ‘microservices’ applications, where multiple containers run in clusters to deliver more work in response to demand. Any growth in demand can be met by adding more container instances to the cluster, while any reduction in demand means you can run fewer containers, reducing your costs when running in the cloud. For developers and operations teams, this approach makes it easier to manage and automate the growth of application infrastructures over time.

However, this ability to grow and shrink cluster volumes can be more problematic for security. The number of containers can vary over time, the individual containers may be ephemeral and changes can take place extremely rapidly. This makes the security and risk impact evaluation process a lot harder, as you don’t have a concrete list of IT assets and usage patterns over time for review and compliance purposes.

To remove this risk, it’s not enough to rely on container tools like Docker and orchestration platforms like Kubernetes alone. Instead, it’s important to build security and data management processes into the container orchestration process from the start so that all of your containers remain secure over time.

Getting this right involves educating developers and operations teams on what issues exist and what rules have to be followed, then helping them design their own processes to follow those rules automatically. By helping these teams to help themselves, we can avoid being perceived as an obstacle to container adoption and minimise any potential risk around these environments.

Building security management processes

To comprehend the new security challenges that containers bring, it’s first crucial to understand the phases of a container’s lifecycle. This overview is based on Docker as one of the most popular container management and creation tools.

All containers are based on a ‘Dockerfile’ – a text document that contains all the configuration information and commands needed to assemble a container image. This container image is a read-only template and acts as a static specification of what the container should include when it’s started and at run-time.
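
As an illustration, a minimal Dockerfile for a small web service might look like the following sketch. The base image, file names and port are hypothetical choices, not a prescription:

    # Hypothetical example: build a small Python web service image
    FROM python:3.11-slim

    # Install dependencies at build time so they're part of the image
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt

    # Copy the application code into the image
    COPY app.py .

    # Document the listening port and define the start command
    EXPOSE 8080
    CMD ["python", "app.py"]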

Each Dockerfile specifies the application code and configuration settings needed to start the service. Every container created from the resulting image will be identical, with a new writable layer used to store any changes made after the initial build. To manage these images, developers can use registries: public or private stores to which images can be pushed and from which they can be pulled. If a new container is required, the image is pulled from the registry and then assigned to the appropriate cluster.

Finally, there’s the container itself. Each container is created from a Docker image and holds everything needed for an application to run. It’s an isolated and secure application platform. To run containers, you need an engine that can run on top of any server hardware with an operating system or, alternatively, on a hypervisor in a private or public cloud.
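
Put together, the lifecycle looks something like the following sketch using the standard Docker CLI. The registry address and image name are hypothetical:

    # Build an image from the Dockerfile in the current directory
    docker build -t registry.example.internal/myapp:1.0 .

    # Push the image to a (hypothetical) internal registry
    docker push registry.example.internal/myapp:1.0

    # On another host: pull the image and start a container from it
    docker pull registry.example.internal/myapp:1.0
    docker run -d --name myapp -p 8080:8080 registry.example.internal/myapp:1.0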

Checking and validation

From a security perspective, there are three areas that have to be checked and validated. The first is the Dockerfile configuration itself. Container images are not patched in the traditional sense. Rather than making changes to the same software or machine whenever a new patch is released, container images are essentially rebuilt from scratch each time.

From an update perspective, this means that making a change to the Dockerfile, rebuilding the image and then restarting the containers should update every container in use with the new security settings or updated code.
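
In practice, ‘patching’ therefore means editing the Dockerfile (for instance, moving to a newer base image tag), rebuilding and redeploying. A sketch of that flow, with hypothetical names:

    # After pointing the Dockerfile's FROM line at a patched base image:
    docker build -t registry.example.internal/myapp:1.1 .
    docker push registry.example.internal/myapp:1.1

    # Tear down the old container and start one from the rebuilt image
    docker stop myapp && docker rm myapp
    docker run -d --name myapp -p 8080:8080 registry.example.internal/myapp:1.1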

However, this makes it essential to maintain an accurate audit log around the Dockerfiles themselves. As new versions of each Dockerfile are created, images can easily deviate from the initial build to include new code and new components. These changes have to be verified and audited over time. The containers themselves also have to be torn down and rebuilt so that the new image is in place.

Second, each container should be based solely on its Dockerfile image. Developers may be tempted to add further components to a container once it’s running, for example by installing a third party package or pulling in additional code after start-up. However, this should not become a permanent approach to the images themselves. If your containers need those additional software assets over time, they should be added into the base configuration so that they can be adequately tested and validated for security. If this doesn’t take place, compliance and security risks can be introduced into the images at a later stage.
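
To illustrate the difference, compare installing a package into a running container (a change that bypasses review and disappears on rebuild) with baking it into the image. Container and package names here are hypothetical:

    # Anti-pattern: modifying a running container directly.
    # The change is unaudited and is lost when the container is rebuilt.
    docker exec -it myapp pip install some-extra-library

    # Preferred: add the dependency to the Dockerfile so it is built,
    # scanned and versioned with the image itself. In the Dockerfile:
    #   RUN pip install --no-cache-dir some-extra-library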

Third, there’s the registry. This collection can include both images taken from public sources and private internal images. However, it’s also possible to pull images directly from a public registry, which can bypass your security check process entirely. It’s therefore better to take images only from an internal registry that can be adequately validated over time. Pointing at an out-of-date image – whether public or private – can lead to out-of-date and insecure containers running in production.

Containers and security by design

To build security by design into your container deployments, you should develop your visibility across all the environments where your services might run. By adopting this approach around containers from the start, you should be able to support better management around this new platform, whether you’re running on internal IT services, in public cloud environments or on a hybrid cloud mix.

To design security into container deployments, begin at the level of the individual container image. It’s important to prevent any potentially vulnerable images from being added to the organisation’s repositories without adequate checks. Images developed internally should be relatively secure, as they’re based on known components.

That said, many developers want to work with public images rather than those created internally as this can save valuable time in the software development process. These images should be automatically scanned before being added to any internal repository.
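
A minimal sketch of that flow, using the open source Trivy scanner as one example (any equivalent scanner would do; the registry path is hypothetical):

    # Pull the public image a developer wants to use
    docker pull nginx:1.25

    # Scan it and fail if critical vulnerabilities are found
    trivy image --severity CRITICAL --exit-code 1 nginx:1.25

    # Only if the scan passes: retag and push to the internal registry
    docker tag nginx:1.25 registry.example.internal/approved/nginx:1.25
    docker push registry.example.internal/approved/nginx:1.25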

To make this process both secure and simple for developers, any container image should pass through a workflow. Using REST APIs or native plug-ins, security checks can run automatically within that workflow, based on your DevOps team’s preferred tools for Continuous Integration/Continuous Deployment (CI/CD).
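
As an illustration of what this looks like inside a pipeline, here’s a hedged sketch of a GitHub Actions workflow that builds an image and gates it on a scan. The scanner choice and names are examples only, and the job assumes Trivy is available on the runner:

    # Hypothetical CI workflow: build the image, then fail the pipeline
    # if the scanner reports severe vulnerabilities
    name: build-and-scan
    on: push
    jobs:
      scan:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build image
            run: docker build -t myapp:${{ github.sha }} .
          - name: Gate on vulnerability scan
            run: trivy image --severity HIGH,CRITICAL --exit-code 1 myapp:${{ github.sha }}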

However, this should not be a case of simply blocking all images that might have out-of-date components. Indeed, there may be elements that require updating, but don’t represent security risks due to other configuration choices. In these cases, employing a ‘security by design’ approach should help developers achieve more flexibility over how they may use specific images or components.

By defining specific parameters for blocking images – from stopping the use of images with vulnerabilities rated as severe through to preventing the use of images that don’t adhere to known compliance standards – developers can see where there are issues and fix them before images are deployed to production.

These integrations should also work in reverse. Rather than relying on security teams to carry out security and vulnerability scans, developers should be able to build this activity into their own work patterns. This will let developers run scans automatically before images go into repositories, or before the applications running on these images pass through QA.

This approach makes it easier to manage shipping software out to the business, as well as preventing some problems from entering production deployment. However, registries and repositories should still be inventoried over time. Scheduling regular automated scans for all images in the repository can help pick up when existing software components have issues, and provides another check on all new images added to the repositories every day.

If developers do make use of public container images, it’s worth making sure that they’re familiar with how those images are kept up-to-date and trustworthy. Docker, for example, provides a notary service to sign container images and provide proof that the image can be trusted. Adding rules on image signing and provenance to your rule set on whether containers are allowed to execute can help ensure that these steps are not ignored.
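
With Docker, for instance, enabling Docker Content Trust makes the client refuse to pull images that haven’t been signed via the Notary service. The image reference below is hypothetical:

    # Enable Docker Content Trust for this shell session
    export DOCKER_CONTENT_TRUST=1

    # Pulls now succeed only for images with valid signatures;
    # unsigned images are rejected
    docker pull registry.example.internal/approved/nginx:1.25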

Run-time security by design

Even with all of this preparation work, it’s important to look at how container images behave in practice. Once built, an image should remain the same over time, and every container started from it should be a faithful instance of that image. In practice, however, running containers can drift away from their original secure states.

For security teams, spotting rogue or vulnerable containers relies on having good insight into which images are running at any point, identifying where they are and assessing their status over time. Once running, containers that change and break away from the ‘immutable’ behaviour of their parent container image could indicate a potential vulnerability or breach.

To achieve this, it’s important to identify at the outset the typical behaviour of the containerised application and how the containers are implemented. By regularly scanning this set of containers and the results they provide, it’s possible to detect any deviations from that behaviour over time. Examples could include unexpected system calls, or processes and communications with other containers that are not normally required of those containers or of the application as a whole.
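
Run-time security tools can encode this expected behaviour as rules. As an illustration, here’s a sketch of a rule in the style of the open source Falco engine that flags an unexpected shell being spawned inside a container, something a well-behaved immutable image should rarely need:

    # Sketch of a Falco-style rule: alert when a shell starts
    # inside a running container
    - rule: Unexpected shell in container
      desc: Detect a shell spawned inside a running container
      condition: spawned_process and container and proc.name in (bash, sh)
      output: "Shell spawned in container (container=%container.name command=%proc.cmdline)"
      priority: WARNING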

By tracking this activity, security teams can determine where these images are cached in the run-time environment and investigate them when required. This can help to identify both active containers and dormant ones that have not been removed completely. These dormant containers may be based on old images and therefore include vulnerabilities that can be exploited.

For these rogue containers, countermeasures like blocking or quarantining can be applied to any behaviour that’s out of the ordinary. While the anomaly or change in the specific container is investigated, service operations can continue as normal, as a fresh container can be spun up from the trusted image to replace the rogue one.

Defending against the risk

To improve defence against this kind of risk, all images should be automatically validated against security policies before they are allowed to run in the first place. This should make it easier to block unapproved images from being spun up as containers. Here, you can leverage orchestrators like Kubernetes to keep rogue containers out of the environment via admission controllers. Second, developers should be encouraged to update their images in the registry rather than adding further code or software changes to the images after the fact.
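
As one illustration of such an admission control, here’s a sketch of a Kyverno-style policy that rejects pods whose images don’t come from the internal registry. Kyverno is just one option among admission controllers with image policy support, and the registry address is hypothetical:

    # Sketch of a Kyverno ClusterPolicy: reject any pod whose
    # containers pull images from outside the internal registry
    apiVersion: kyverno.io/v1
    kind: ClusterPolicy
    metadata:
      name: restrict-image-registries
    spec:
      validationFailureAction: Enforce
      rules:
        - name: internal-registry-only
          match:
            any:
              - resources:
                  kinds:
                    - Pod
          validate:
            message: "Images must be pulled from the internal registry."
            pattern:
              spec:
                containers:
                  - image: "registry.example.internal/*"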


In addition, orchestration environments like Kubernetes have their own best practices for security. Applying a ‘least access’ model here can make it far easier to prevent security risks, as developers should only have access to the operating environments that are directly related to their work.
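
In Kubernetes, that ‘least access’ model maps naturally onto role-based access control (RBAC). A minimal sketch that limits a developer to managing pods in a single namespace, with hypothetical names throughout:

    # Grant rights over pods only, in the 'team-a' namespace only
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: team-a
      name: pod-manager
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch", "create", "delete"]
    ---
    # Bind the role to one developer; they hold no rights elsewhere
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: team-a
      name: pod-manager-binding
    subjects:
      - kind: User
        name: jane.developer
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-manager
      apiGroup: rbac.authorization.k8s.io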

For those developers that need to have access to the underlying host infrastructure or cloud implementation, additional security steps can be implemented to ensure that access is secure.

This process involves more collaboration and oversight to prevent some of the most common issues around security. From tracking containers so that they don’t drift away from their original images through to assessing vulnerabilities and planning remediation, putting effective security processes in place can ensure that containers deliver the value that this technology promises.

More importantly, security teams can design these processes in from the start so that all teams involved can work together more effectively in the future.

Marco Rottigni is Chief Technical Security Officer at Qualys

