
Kubernetes and containers are fast becoming the standard for deploying and scaling applications in production. Both technologies are still relatively young, however, and they open organizations up to a range of security risks: a misconfigured cluster can expose services directly to the internet and give attackers a path to sensitive data or privileged service accounts. In this post, we’ll explore 10 simple but effective ways to keep your Kubernetes clusters and containers secure.

1. Use predefined environments

One of the easiest ways to keep your Kubernetes and containers secure is to use predefined environments. With the rise of stateful applications and microservices, spinning up a new environment for each application has become the norm, which increases the likelihood that teams skip the security review process altogether. Predefined environments help you avoid this pitfall because they limit the resources and privileges an application starts with; if a developer needs more resources or privileges, they can request a new environment specific to that need. In Kubernetes, the practical way to predefine environments is to create dedicated namespaces with resource quotas and limit ranges, including a production environment and a staging environment for testing and auditing. A cloud-managed Kubernetes cluster is the easiest way to provision these resources with minimal administration; the top managed offerings are listed below, and a minimal namespace-plus-quota sketch follows the list.
1. AWS EKS — https://aws.amazon.com/eks/
2. AWS EKS Anywhere (on-prem deployments) — https://aws.amazon.com/eks/eks-anywhere/
3. Google Cloud GKE — https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview
4. Azure AKS — https://azure.microsoft.com/en-us/products/kubernetes-service/
5. Linode Kubernetes Engine (LKE) — https://www.linode.com/docs/guides/kubernetes/
6. DigitalOcean Kubernetes (DOKS) — https://www.digitalocean.com/products/kubernetes
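
As a minimal sketch of a predefined environment, the manifest below creates a hypothetical staging namespace with a resource quota that caps what workloads in it may consume. The names and limits are placeholders to adapt to your own environments, and you would apply the file with kubectl apply -f.

apiVersion: v1
kind: Namespace
metadata:
  name: staging                # placeholder environment name
  labels:
    environment: staging
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: staging-quota
  namespace: staging
spec:
  hard:
    requests.cpu: "4"          # total CPU requests allowed in the environment
    requests.memory: 8Gi       # total memory requests allowed
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"                 # cap on the number of pods

A production environment would get its own namespace and quota, typically with tighter RBAC and admission rules layered on top.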

2. Enable auditing

One of the easiest ways to identify a security breach is to enable auditing for your Kubernetes cluster. Despite this, surprisingly few organizations actually do it. According to a study by Varonis, only 25 percent of organizations have auditing enabled, even though the same study shows that 85 percent of organizations believe they have been victims of some form of cyber-attack. If you’re not enabling auditing, you’re losing out on a lot of valuable information and making it much harder to identify and investigate sensitive data breaches. You enable auditing by writing an audit policy file, passing it to the kube-apiserver, and configuring a log or webhook backend to receive the events; a minimal policy sketch follows the considerations below.

A few top considerations when looking to enable auditing on your clusters are:
1. What are your audit policies? (what events do you log?)
2. Where do you save your logged events? (Locally or outside the cluster?)
3. Can your audit events be sent to an HTTP webhook backend instead of local storage? (This is how you connect third-party logging tools such as Falco, Elastic, Splunk, etc.)
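
As a rough sketch, and assuming a control plane where you can pass flags to the kube-apiserver (managed services such as EKS, GKE, and AKS expose audit logs through their own logging settings instead), an audit policy file might look like the following; the rules and paths are illustrative only.

apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Record only metadata for Secret and ConfigMap access so secret values never land in the log
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
# Record full request and response bodies for RBAC changes
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
# Catch-all rule: record metadata for everything else
- level: Metadata

You would reference the file with --audit-policy-file and point --audit-log-path (or an --audit-webhook-config-file) at the backend where events should be stored.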

Link to Kubernetes Auditing Docs

3. Use Network Isolation

While Kubernetes ships with some built-in network controls, pod-to-pod traffic is unrestricted by default, and a carelessly exposed Service can put container ports directly on the internet. While this makes it easy for you to reach the application, it also leaves it vulnerable to various types of cyber-attack. To protect your containers from these attacks, use network isolation: Kubernetes NetworkPolicy objects define which pods and namespaces are allowed to communicate. You can create these policies through the Kubernetes Dashboard with a small amount of configuration, or through the command line by applying NetworkPolicy manifests, provided your CNI plugin enforces them. Network isolation is particularly useful if you have multiple tenants on the same Kubernetes cluster. A minimal sketch follows below.
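
Here is a minimal isolation sketch, assuming your CNI plugin (Calico, Cilium, and similar) enforces NetworkPolicy; the namespace name is a placeholder. The first policy denies all inbound traffic to every pod in the namespace, and the second re-allows traffic that originates inside the same namespace.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: staging
spec:
  podSelector: {}              # selects every pod in the namespace
  policyTypes:
  - Ingress                    # no ingress rules listed, so all inbound traffic is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: staging
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}          # only pods in this namespace may connect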

Link to Kubernetes Network Policy Docs

4. Ensure Kubernetes and containers are up to date

While keeping systems up to date matters for every application, Kubernetes and containers are no exception. In fact, since Kubernetes and the container runtime sit underneath every application you deploy, keeping them patched is even more important. You can keep Kubernetes and your container images up to date with a CI/CD pipeline that checks their versions and triggers an update whenever a newer release or patched image is available. Within the cluster, you control how those updates roll out by defining an update strategy on your workloads, as in the sketch below.
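
As an illustrative fragment (the workload, image, and registry names are placeholders), a StatefulSet can declare a RollingUpdate strategy so that a CI/CD pipeline only has to bump the image tag and Kubernetes replaces the pods one at a time.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: example-db
spec:
  serviceName: example-db
  replicas: 3
  selector:
    matchLabels:
      app: example-db
  updateStrategy:
    type: RollingUpdate        # replace pods one at a time, highest ordinal first
    rollingUpdate:
      partition: 0             # 0 updates all pods; a higher value updates only ordinals >= partition (canary style)
  template:
    metadata:
      labels:
        app: example-db
    spec:
      containers:
      - name: db
        image: registry.example.com/example-db:1.2.3   # pin a specific, patched tag rather than :latest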

Link to Kubernetes Stateful Set Update Strategy Docs

5. Block known bad containers and kubectl commands

While you can block known bad IP addresses and domains, attackers have recently started to target Kubernetes and containers directly, because access to the Kubernetes API lets them perform destructive actions such as deleting pods and nodes or reading secrets. You can block known bad containers and risky operations by defining authorization and admission policies for your Kubernetes cluster; an example admission policy sketch follows the list below.

Kubernetes API access can be broken out by 3 operations:
1. Authentication (user account credentials)
2. Authorization (RBAC policies and actions requested by the user)
3. Admission Control (modules that intercept requests after authorization and can validate or mutate the objects involved)
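
As one hedged example of admission control, and assuming Kubernetes 1.30 or newer where ValidatingAdmissionPolicy is generally available (older clusters typically use a webhook-based policy engine such as OPA Gatekeeper or Kyverno instead), the policy below rejects any pod whose images do not come from an approved registry. The registry address is a placeholder.

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: allowed-registries
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  validations:
  - expression: "object.spec.containers.all(c, c.image.startsWith('registry.example.com/'))"
    message: "Images must be pulled from the approved registry."
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: allowed-registries-binding
spec:
  policyName: allowed-registries
  validationActions: ["Deny"]   # reject non-compliant requests outright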

Link to Kubernetes API Access Control Docs
Link to Kubernetes Kubectl Command Docs

6. Use Role Based Access Control (RBAC)

Another way to keep your Kubernetes and containers secure is to use RBAC. RBAC restricts access to sensitive resources and API verbs by binding Roles and ClusterRoles to users, groups, and service accounts. You can also create cluster-wide bindings that apply across every namespace, which lets you restrict access to sensitive operations such as deleting pods or nodes. A minimal Role and RoleBinding sketch follows below.
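
A minimal sketch, with placeholder names: a namespaced Role that grants read-only access to pods, bound to a single service account so that nothing using it can create or delete workloads.

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: staging
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]   # read-only; no create, update, or delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
- kind: ServiceAccount
  name: ci-reporter                 # placeholder service account
  namespace: staging
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader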

Link to Kubernetes RBAC Best Practice Docs

7. Install a trustworthy container registry

While Kubernetes and containers reduce the need for on-premise infrastructure, they also make it easy to push container images to any public or private registry. Unfortunately, not all container registries are trustworthy. In fact, a recent study found that more than half of container images are vulnerable to remote code execution. You can avoid this issue by using a trustworthy container registry. You can run a private registry inside your Kubernetes cluster and use it to push and pull images, or you can use a managed registry from a third-party service such as Google Container Registry or Amazon ECR. Enabling vulnerability scanning for your registry will add another security layer to your container management by ensuring you know the risks and vulnerabilities associated with public images. A pull-secret sketch follows the list of recommended registries below.

We recommend the following community (free) container image repositories for self-hosted deployments:
Quay
JFrog Artifactory
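
Whichever registry you choose, workloads authenticate to it with an image pull secret. A rough sketch, with a placeholder registry address and secret name (the secret itself would be created first, for example with kubectl create secret docker-registry regcred --docker-server=registry.example.com --docker-username=... --docker-password=...):

apiVersion: v1
kind: Pod
metadata:
  name: example-app
spec:
  imagePullSecrets:
  - name: regcred                # credentials for the private registry
  containers:
  - name: app
    image: registry.example.com/example-app:1.0.0   # pulled from the trusted registry only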

8. Use Authenticated Channels for Container Communication

Even with a trusted container registry in place, attackers can still abuse unauthenticated Docker or Kubernetes API endpoints to reach your workloads and registry. You can prevent this by using authenticated channels and explicit network rules for container communication: define ingress rules that only allow specific, trusted sources to reach each container in your cluster. Installing a network policy provider enables granular control over both “north/south” and “east/west” network traffic; the provider enforces the ingress and egress rules you define per pod, namespace, and port. A short egress-restriction sketch follows below.
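
To make the idea concrete, here is a hedged egress sketch (again assuming a CNI that enforces NetworkPolicy; labels and names are placeholders): the selected pods may only talk to one internal API workload, plus DNS.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: staging
spec:
  podSelector:
    matchLabels:
      app: example-app           # the pods being locked down
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: example-api       # the only workload these pods may call
  - ports:                       # allow DNS lookups to any destination
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53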

Link to Kubernetes Network Policy Provider Options

9. Encrypt Your Data at Rest

Although it’s important to encrypt communications over the internet, it’s also important to encrypt sensitive data at rest. You can do this by encrypting the storage volumes where the data lives, and by using your database’s own encryption features for sensitive data held in a database. In Kubernetes, you can encrypt Secrets at rest in etcd with an EncryptionConfiguration on the API server, and you can define a StorageClass that provisions encrypted volumes so that every persistent volume created from it is encrypted, as in the sketch below.
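
As a minimal sketch for encrypted volumes, assuming an EKS cluster with the AWS EBS CSI driver installed (other clouds and CSI drivers expose equivalent parameters), a StorageClass can require that every volume it provisions is encrypted:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: encrypted-gp3
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
  encrypted: "true"              # every PersistentVolume from this class is encrypted at rest
  # kmsKeyId: <customer-managed KMS key ARN>   # optional placeholder for a specific key
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer

Any PersistentVolumeClaim that references storageClassName: encrypted-gp3 then gets an encrypted volume without the application having to change.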

Link to Kubernetes Data at Rest Policy Options

10. Rotate Your API Keys and Kube Configs

While you can use RBAC to restrict access to sensitive functions and resources, attackers can use stolen or misused API keys and kubeconfig files to bypass these restrictions. You can prevent this by rotating credentials regularly: rotate service account tokens and kubeconfig files on a schedule (for example, with a cron job) and enable automatic rotation of kubelet certificates. Enabling TLS bootstrapping ensures that nodes obtain trusted, cluster-signed certificates before they connect to the API server, and kubelet certificate rotation is enabled automatically on Kubernetes versions 1.8.0 and higher. A minimal kubelet configuration sketch follows below.
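
For self-managed nodes where you control the kubelet configuration (managed node pools usually handle this for you), a minimal sketch of the relevant settings looks like this; serving-certificate requests still have to be approved, either manually or by an approver running in the cluster.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
rotateCertificates: true         # renew the kubelet client certificate automatically before it expires
serverTLSBootstrap: true         # request the kubelet serving certificate through the cluster's CSR API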

Conclusion

Kubernetes and containers are powerful tools that help organizations deploy, scale, and manage their applications in production. Unfortunately, a misconfigured cluster can also expose sensitive data and functionality to the internet, making it easy for attackers to target sensitive information or compromise privileged service accounts. Fortunately, there are container security frameworks that specify concrete controls for hardening Kubernetes clusters, such as:
NIST SP 800-190 (Application Container Security Guide)
DISA Kubernetes Security Technical Implementation Guide (STIG)
CIS Kubernetes Benchmark