I have a soft spot for early adopters. They are the ones who voice those - sometimes pesky - requirements that push container projects forward, making the cluster more mature and the workflow better tailored to real-life use cases.
Like on my current Kubernetes project, when a few weeks back an early adopter came to me (hey Jonas!) and raised a concern about secret management: “Look Laszlo, I’m a bit worried about the keys I’m uploading to Kubernetes. It seems devs from other teams are able to see them. You know, they are kind of important… they allow access to confidential data.”
He had a point. At the time there was only a single shared namespace, and developers could see each other’s deployments; nothing prevented them from wrongdoing. Except their morals, of course.
It was time to iterate on the cluster and nail access control.
Upstream Kubernetes has had Role-Based Access Control (RBAC) since the 1.6 release in late March, and the timing couldn’t have been better for Rancher - my distribution of choice - to finally support that major release. No wonder I was so pumped when I saw the announcement.
Access control is quite an important feature if you ask me. When I worked with the OpenShift Kubernetes distribution back in January, it felt natural that it had the feature, and only recently did I learn that at the time neither upstream Kubernetes nor my distribution had it.
The only workaround I could offer until now was to physically separate the clusters along the security boundaries. That is still the best practice for test and production environments: even though one can label nodes and set up namespaces accordingly, life is easier if those environments remain physically separate.
For this article I quickly checked what’s up in the Docker Swarm world: “Docker’s out-of-the-box authorization model is all or nothing. Any user with permission to access the Docker daemon can run any Docker client command.”
But not to worry, since “Anyone with the appropriate skills can develop an authorization plugin.” - well, thanks, Docker!
Kubernetes RBAC allows cluster administrators to selectively grant particular users or service accounts fine-grained access to specific resources on a per-namespace basis.
The key primitives are:
Role and ClusterRole to represent a set of permissions to be granted together, in a single Kubernetes Namespace or on the whole cluster
RoleBinding and ClusterRoleBinding to associate the defined Roles and ClusterRoles with users or Kubernetes ServiceAccounts
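As a sketch of how these primitives fit together, here is a namespaced Role that allows reading Pods, plus a RoleBinding granting it to a single user. The namespace, role, and user names are made up for illustration; this uses the v1beta1 RBAC API that shipped with Kubernetes 1.6 (later releases promote it to v1):

```yaml
# Role: allows reading Pods, only inside the "team-a" namespace
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: associates the Role above with a user
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: User
  name: jonas                # hypothetical user name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A ClusterRole and ClusterRoleBinding look the same, minus the `namespace` field.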
The permissions can be fine-grained: granting rights to create, get, list, update, patch, or delete any of the Kubernetes API resources, be those Pods, Secrets, Namespaces, etc.
But luckily, if you don’t want to write your own page-long permission set, Kubernetes also provides sensible defaults for system components and for three common use cases: admin, edit and view.
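A useful trick with these defaults: a RoleBinding may reference a ClusterRole, which scopes that cluster-wide role definition down to a single namespace. A sketch, with an illustrative namespace and group name, granting read-only access via the built-in view role:

```yaml
# RoleBinding that reuses the default "view" ClusterRole,
# scoped down to the "team-b" namespace only
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: team-b-viewers
  namespace: team-b
subjects:
- kind: Group
  name: team-b-devs          # hypothetical group name
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole          # a ClusterRole referenced from a RoleBinding
  name: view                 # one of the defaults: admin, edit, view
  apiGroup: rbac.authorization.k8s.io
```

This way one shared role definition serves every team’s namespace, instead of a copy-pasted Role per namespace.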
Another nice property of Roles is that they are additive: there are no deny rules, so a subject’s effective permissions are simply the union of everything granted across its bindings.
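To illustrate additivity (user and namespace names are again made up): a user can appear in several bindings at once, for example read-only access everywhere plus full edit rights in their own team’s namespace, and ends up with the combination of both:

```yaml
# Cluster-wide read-only access for jonas
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: jonas-view-all
subjects:
- kind: User
  name: jonas
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view
  apiGroup: rbac.authorization.k8s.io
---
# Plus edit rights, but only inside the team-a namespace
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jonas-edit-team-a
  namespace: team-a
subjects:
- kind: User
  name: jonas
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit
  apiGroup: rbac.authorization.k8s.io
```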
Using these primitives I was able to build a secure, multi-tenant container platform that multiple teams can use without seeing or interfering with each other’s deployments or secrets.
And the icing on the cake was Rancher’s nice touch. Since they already support many authentication providers, from Active Directory to GitHub, I was able to bind the RBAC roles to the GitHub teams that already existed in my client’s organization. How cool is that?
So the setup as of today follows these principles:
and to further protect the API keys and certificates, the following workflow is introduced for secrets:
Are you running Kubernetes? You can bootstrap logging, metrics and ingress with my new project. Head over to 1clickinfra.com and see if it helps you.