Docker is an amazing container platform: it makes deploying and scaling services across multiple servers fast. However, Docker itself is not good at managing instances spread over different servers, so DevOps engineers need an additional tool. Thanks to Google, Kubernetes (K8s) is well suited for
automating deployment, scaling, and management of containerized applications.
In this post I will share my first impressions of Kubernetes, show how I set up a deployment of Nginx with an SSL certificate assigned automatically by cert-manager, and walk through the steps of my troubleshooting.
Kubernetes is mainly built for cloud platforms that need to manage many clusters, and the control-plane node (also called the master node) is by default not supposed to run any workload containers. For a bare-metal server (like a VPS without upstream K8s support) it is too heavy. A lightweight K8s distribution like MicroK8s is the best choice.
MicroK8s is developed by Canonical, the company behind Ubuntu, and is advertised as
The smallest, simplest, pure production K8s.
For clusters, laptops, IoT and Edge, on Intel and ARM.
Installing it on an Ubuntu system is very simple:
sudo snap install microk8s --classic
and it's done. You can watch the installation status with
microk8s status --wait-ready if you want. For more details about the installation, see the Official Docs.
For convenience, I recommend setting the following alias:
alias kubectl='microk8s kubectl'
Unlike with Docker, you will face lots of new concepts in order to start even a single service:
ConfigMap for providing configuration
Secret for storing private or sensitive information
Volume for providing storage space
Deployment for deploying and scaling a service
Pod for running the service instance in a container
Ingress for exposing the service to the public network
These descriptions are based on my own understanding and may not be perfectly accurate. It is really hard to understand how all these pieces work at the beginning, but they really help to separate configuration from instances, allowing you to generate different configs from a template for deployment on different nodes in a cluster. But talk is cheap; let me show you how I set up a cluster with MySQL, PhpMyAdmin, and Nginx.
Let's set up a MySQL service as a first try. Since a container loses its data after shutdown, I need a
PersistentVolume to persist the database. A
PersistentVolume is like a disk for containers: every container can claim some space from it, so an additional
PersistentVolumeClaim is needed.
All configuration files are written in YAML format, and a single YAML file can contain multiple configs. The following code shows a
PersistentVolume and a
PersistentVolumeClaim for the MySQL database storage:
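A minimal sketch of such a file; the storage class, capacity, and hostPath are illustrative and should be adapted to your server:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi          # adjust to your needs
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data/mysql # directory on the node backing the volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi        # claim the whole volume
```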
Assuming the above code is written in
mysql-pv.yaml, run following code to create the actual resources:
kubectl apply -f mysql-pv.yaml
and check whether the resource was created successfully:
$ kubectl get pv
The next step is to create a
Deployment configuration and a
Service configuration. The
PersistentVolumeClaim created above will be mounted into the
Deployment. Since the database is a stateful application running as a single instance, we don't need to care about scaling here.
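The two configs might look roughly like this; it's a sketch along the lines of the standard single-instance MySQL example, assuming the Secret and PVC names used in this post. The Service name mysql is what clients connect to:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
    - port: 3306
  selector:
    app: mysql
  clusterIP: None   # headless service, resolves directly to the Pod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate  # never run two instances against the same volume
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: mysql:5.6
          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret
                  key: root_password
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-persistent-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: mysql-persistent-storage
          persistentVolumeClaim:
            claimName: mysql-pv-claim
```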
To keep the root password safe, it is not written directly in the
env section but retrieved from
mysql-secret, a
Secret resource created from the following config:
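A minimal sketch; the password value is a placeholder:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  # base64 of the real password, e.g. echo -n 'changeme' | base64
  root_password: Y2hhbmdlbWU=
```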
Note that the value of
root_password must be base64 encoded.
Assuming the two configs above are saved (the latter as
mysql-secret.yaml), apply each of them with kubectl apply, e.g.:
kubectl apply -f mysql-secret.yaml
By creating the
Deployment resource, a
Pod will also be created to run the instance. The
Pod wraps the actual container, backed by
containerd by default. You can inspect the created Secret with:
$ kubectl describe secret mysql-secret
Now try connecting to the MySQL instance to see if the deployment succeeded.
kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -p[your password]
This runs a throwaway
Pod with MySQL 5.6 and calls the mysql client to connect to the local mysql service. If you see an error message like
Unknown MySQL server host 'mysql', either your deployment is not correct or you didn't enable the dns addon. You can follow the steps in the official guide to check that the dns addon is working. When everything works, you will see the following prompt:
If you don't see a command prompt, try pressing enter.
You can execute some MySQL commands here to check whether the MySQL server really works.
It's time to deploy more services. The next one is
PhpMyAdmin, a popular database management web application written in PHP. I recommend using the default Docker image rather than the
fpm variant; at least I didn't manage to get the
fpm image working properly.
With the following code you can deploy a pma service in one configuration file. Remember to set
PMA_ABSOLUTE_URI to the real public URI you want to use in development or production.
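A sketch of such a combined file; the image tag, the URI value, and the names are assumptions (the service is called pma-service, matching the ingress later on):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pma
  labels:
    app: pma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pma
  template:
    metadata:
      labels:
        app: pma
    spec:
      containers:
        - name: pma
          image: phpmyadmin/phpmyadmin
          env:
            - name: PMA_HOST
              value: mysql              # the MySQL service created above
            - name: PMA_ABSOLUTE_URI
              value: https://pma.example.com/  # replace with your public URI
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: pma-service
spec:
  selector:
    app: pma
  ports:
    - port: 80
      targetPort: 80
```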
Then run the following command to apply the configuration:
kubectl apply -f pma-deployment.yaml
and check the status of this deployment:
$ kubectl get deploy -l app=pma
So far the deployed service cannot be reached from the external network, not even from localhost. To allow external access, an
Ingress resource will be created. Furthermore, the web service will be secured with an SSL certificate.
Nginx-Ingress lets you deploy an nginx service with a few simple commands. It builds on
Kubernetes Ingress and uses a
ConfigMap to configure nginx automatically. All you need to do is install it, write an
Ingress config file, and you are done.
I recommend using
helm to install it: enable it with
microk8s enable helm (replace helm with helm3 if you want to use
Helm 3). For Helm 2 you will also need a tiller service account:
$ kubectl create serviceaccount tiller --namespace=kube-system
Run
helm repo update to update the official repo. Assuming the release name is nginx, run
helm install stable/nginx-ingress --name nginx to install the
Nginx-Ingress controller. But by default it requires a
LoadBalancer to assign an external IP to the controller. On a bare-metal server, the provider will not give you upstream
LoadBalancer support. If you really want a
LoadBalancer, you can install
MetalLB, which is still in beta and requires some spare IP addresses. For convenience, I recommend
NodePort mode rather than
LoadBalancer.
Helm supports config overrides using values; the configurable values are listed on the chart's
Helm Hub page. Values can be set on the command line, like
--set controller.service.type=NodePort, or in a file:
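For example, a values file might look like this; the keys follow the stable/nginx-ingress chart, while the ports and the IP are placeholders:

```yaml
controller:
  service:
    type: NodePort
    nodePorts:
      http: 30123       # external HTTP port on every node
      https: 30124      # external HTTPS port
    externalIPs:
      - 203.0.113.10    # replace with your server IP
```

Pass the file with helm install stable/nginx-ingress --name nginx -f values.yaml (or helm upgrade with -f for an existing release).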
Remember to set the external IP to your server's IP. Then check the controller service status:
$ kubectl get svc -l app=nginx-ingress
You can now access
http://[your server ip]:30123 and should get a default backend response with a 404.
The Nginx-Ingress controller supports exposing a service via an
Ingress configuration, where I just need to specify the desired host, path, and backend (the reverse proxy target). A sample yaml is shown below:
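A sketch of pma-ingress.yaml (the host is a placeholder), laid out so that the ingress class annotation sits on line 5, the cert-manager issuer annotation on line 6, and the tls section on lines 18-21, which are referred to further on:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/issuer: letsencrypt-prod
  name: pma-ingress
spec:
  rules:
    - host: pma.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: pma-service
              servicePort: 80

  tls:
    - hosts:
        - pma.example.com
      secretName: pma-tls
```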
and comment out line 6 and lines 18-21 to disable TLS for now. This config exposes the backend service
pma-service at the given host, reverse-proxying any request sent to that host (domain) to port 80 of
pma-service. Apply this yaml using:
kubectl apply -f pma-ingress.yaml
and check the ingress status:
kubectl get ingress
The address can stay pending for a while, because the
Ingress config is sent to the
Nginx-Ingress controller and has to be activated there. Once your server IP is shown in the
ADDRESS field, you can access the host you set in
pma-ingress.yaml to test whether the ingress works. Remember to point the host to your server IP at your DNS provider.
Normally I use
Let's Encrypt to secure connections, running the
acme.sh script and importing the private key and certificate into the Nginx virtual host config. But with Kubernetes I can use
cert-manager to automate this whole process.
First, run the following commands to install cert-manager (replace the version in the manifest URL with the current release from the cert-manager docs):
kubectl create namespace cert-manager
kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/vX.Y.Z/cert-manager.yaml
Verify the installation with:
$ kubectl get pods --namespace cert-manager
Then create two
Issuer resources; Issuer is a new resource type introduced by
cert-manager. One is for testing, using the staging ACME server; the other is for production, using the real ACME server.
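A sketch of the staging issuer, le-staging.yaml (the email is a placeholder), laid out so that privateKeySecretRef.name sits on line 13 and the solver's ingress class on line 18:

```yaml
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: letsencrypt-staging
  namespace: default
spec:
  acme:
    # staging ACME server, for testing only
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com
    # Secret that stores the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
      - http01:
          ingress:
            # must match the controller's ingress class
            class: nginx
```

The production issuer letsencrypt-prod is identical apart from its name and the server URL https://acme-v02.api.letsencrypt.org/directory.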
The ingress class defined at line 18 must match the class name set in
pma-ingress.yaml at line 5. The Secret named at line 13 (privateKeySecretRef.name) stores the ACME account private key; the issued certificate itself will be stored in the Secret named in the Ingress tls section. Apply them using:
kubectl apply -f le-staging.yaml
You can check the status of issuer with
kubectl describe issuer letsencrypt-staging or
kubectl describe issuer letsencrypt-prod.
Uncomment line 6 and lines 18-21 in
pma-ingress.yaml, and set the value at line 6 to letsencrypt-staging for testing purposes. Then apply the ingress config again and check the status of the certificate:
$ kubectl get certificate
and wait until the READY field becomes
True. If it stays
False, you can check detailed information about the certificate using
kubectl describe certificate pma-tls.
The certificate is expected to be stored in the pma-tls Secret:
$ kubectl describe secret pma-tls
If you now access
https://[your hostname]/, you will get a certificate warning; that's normal, because the certificate is signed by the staging ACME server. It means the certificate issuer is working.
Now change the value at line 6 in
pma-ingress.yaml to letsencrypt-prod, delete the staging secret:
kubectl delete secret pma-tls
and apply
pma-ingress.yaml again. Then wait a few minutes until the new certificate is ready.
Now you should be able to access
https://[your hostname]/ without any certificate warning. Otherwise, check whether you forgot to delete the old
pma-tls secret, or whether the certificate issuing process failed (run
kubectl describe certificate pma-tls to check the status).
At the very beginning, Kubernetes seemed a little bit scary and complicated: I had to write several YAML configuration files just to set up a single service. But the payoff is great: I don't need to set every config by hand, write nginx configs, or run acme.sh commands, and I can deploy another cluster with the same configuration files in just a few minutes. With
kustomize it's quite easy to generate and reuse configurations among clusters (see the GitHub repo and this blog post).
An obvious disadvantage is the relatively high memory usage: my
Kubernetes setup, for example, eats up to 1.5 GiB of memory. The recommended memory, according to the
microk8s official docs, is 4 GiB. But anyway,
Kubernetes is worth a try.