As I start to investigate Kubernetes (K8s) as a replacement for Docker Swarm, I have been looking at how to integrate it with our cloud infrastructure. I found a plugin that integrates volumes, networking and so on natively with OpenStack. Instead of doing a "volume mount" in Docker Swarm, with the directories pinned to a single host and creating single points of failure, you can bind the volume to a pod; the pod can then float around the nodes in the cluster, and if a node goes down, the volume mount automatically moves over to another node in OpenStack!
It's pretty neat... on paper. The documentation was lacklustre and very outdated, so I thought I would write up an article in case anyone else goes down the same path as I did. More will be written about K8s in the future, I'm sure, as I go along and learn the magic tricks.
Install package repositories
sudo apt update
sudo apt -y install curl apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update
Install packages
sudo apt update
sudo apt -y install vim git curl wget kubelet kubeadm kubectl
(Optional) Mark package versions on hold
sudo apt-mark hold kubelet kubeadm kubectl
Disable swap
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
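If you want a quick sanity check that swap really is off, you can run:
swapon --show    # should produce no output
free -h | grep -i swap    # should report 0B of swap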
Configure sysctl
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system
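Optionally, confirm the settings actually took effect; each of these should report 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward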
Install and configure the docker runtime
# Add repo and Install packages
sudo apt update
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt update
sudo apt install -y containerd.io docker-ce docker-ce-cli
# Create required directories
sudo mkdir -p /etc/systemd/system/docker.service.d
# Create daemon json config file
sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
# Start and enable Services
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl enable docker
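If you want to confirm Docker picked up the systemd cgroup driver from daemon.json (kubelet and Docker need to agree on this), a quick check:
sudo docker info | grep -i "cgroup driver"    # should report: Cgroup Driver: systemd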
(Optional) Add your user to the docker group
sudo adduser <username> docker
Double-check that the br_netfilter module is loaded
lsmod | grep br_netfilter

Enable kubelet on the master node
sudo systemctl enable kubelet
Pre-download the K8s Docker images
sudo kubeadm config images pull
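If you would rather see which images kubeadm is going to fetch before downloading them, you can list them first:
sudo kubeadm config images list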

Create your kube cluster init configuration
Create a file in /etc/kubernetes/ called init-config with the following contents:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: "v1.22.1"
apiServer:
  extraArgs:
    enable-admission-plugins: NodeRestriction
    runtime-config: "storage.k8s.io/v1=true"
controllerManager:
  extraArgs:
    external-cloud-volume-plugin: openstack
  extraVolumes:
  - name: "cloud-config"
    hostPath: "/etc/kubernetes/cloud-config"
    mountPath: "/etc/kubernetes/cloud-config"
    readOnly: true
    pathType: File
networking:
  serviceSubnet: "10.96.0.0/12"
  podSubnet: "10.224.0.0/16"
  dnsDomain: "cluster.local"
Feel free to change the CIDRs and dnsDomain, of course.
Create your Kubernetes OpenStack configuration file
In /etc/kubernetes/, create a file called cloud-config and add the following (replace values as necessary):
[Global]
region=RegionOne
username=username
password=password
auth-url=https://openstack.cloud:5000/v3
tenant-id=14ba698c0aec4fd6b7dc8c310f664009
domain-id=default
ca-file=/etc/kubernetes/ca.pem
[BlockStorage]
bs-version=v2
ignore-volume-az=true
rescan-on-resize=true
Initiate the cluster
sudo kubeadm init --config=/etc/kubernetes/init-config
Copy the admin.conf to your home dir
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
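At this point kubectl should be talking to the API server. A quick check (the master will report NotReady until we deploy a CNI further down):
kubectl cluster-info
kubectl get nodes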
Check the master node has the correct taints
The master node won't have connected to OpenStack just yet; to verify this you can run:
kubectl describe node <servername>
You will see the following lines:
Taints: node-role.kubernetes.io/master:NoSchedule
node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Deploy a secret to the cluster containing the cloud config
kubectl create secret -n kube-system generic cloud-config --from-literal=cloud.conf="$(cat /etc/kubernetes/cloud-config)" --dry-run=client -o yaml > cloud-config-secret.yaml
kubectl apply -f cloud-config-secret.yaml
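You can confirm the secret landed where the controller expects it:
kubectl -n kube-system get secret cloud-config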
Deploy the CA secret (use an empty file if you are not using a custom CA file)
kubectl create secret -n kube-system generic openstack-ca-cert --from-literal=ca.pem="$(cat /etc/kubernetes/ca.pem)" --dry-run=client -o yaml > openstack-ca-cert.yaml
kubectl apply -f openstack-ca-cert.yaml
Create the OpenStack cloud controller RBAC resources
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-roles.yaml
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/cluster/addons/rbac/cloud-controller-manager-role-bindings.yaml
Create cloud controller deployment
Make a directory called deployments; this helps keep things nice and tidy. For my cluster I will use /opt/kube/deployments.
In there, create a file called openstack-cloud-controller-manager-ds.yml with the following contents:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: openstack-cloud-controller-manager
  namespace: kube-system
  labels:
    k8s-app: openstack-cloud-controller-manager
spec:
  selector:
    matchLabels:
      k8s-app: openstack-cloud-controller-manager
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: openstack-cloud-controller-manager
    spec:
      nodeSelector:
        node-role.kubernetes.io/master: ""
      securityContext:
        runAsUser: 1001
      tolerations:
      - key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
      serviceAccountName: cloud-controller-manager
      containers:
        - name: openstack-cloud-controller-manager
          image: docker.io/k8scloudprovider/openstack-cloud-controller-manager:v1.15.0
          args:
            - /bin/openstack-cloud-controller-manager
            - --v=1
            - --cloud-config=$(CLOUD_CONFIG)
            - --cloud-provider=openstack
            - --use-service-account-credentials=true
            - --address=127.0.0.1
          volumeMounts:
            - mountPath: /etc/kubernetes/pki
              name: k8s-certs
              readOnly: true
            - mountPath: /etc/ssl/certs
              name: ca-certs
              readOnly: true
            - mountPath: /etc/config
              name: cloud-config-volume
              readOnly: true
            - mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
              name: flexvolume-dir
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
          resources:
            requests:
              cpu: 200m
          env:
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
      hostNetwork: true
      volumes:
      - hostPath:
          path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
          type: DirectoryOrCreate
        name: flexvolume-dir
      - hostPath:
          path: /etc/kubernetes/pki
          type: DirectoryOrCreate
        name: k8s-certs
      - hostPath:
          path: /etc/ssl/certs
          type: DirectoryOrCreate
        name: ca-certs
      - name: cloud-config-volume
        secret:
          secretName: cloud-config
      - name: ca-cert
        secret:
          secretName: openstack-ca-cert
Apply that with kubectl apply -f /opt/kube/deployments/openstack-cloud-controller-manager-ds.yml
If you now run kubectl get pods -n kube-system, you will see the openstack-cloud-controller-manager pod sat at ContainerCreating; you can see why by running kubectl describe node <servername>.
This is perfectly normal at this stage, as we haven’t deployed a CNI just yet.
Deploy a CNI to the cluster
For this run-through we will deploy the Calico network plugin. There are others out there, such as Weave or Flannel, but for this we'll stick with Calico.
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
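The Calico pods take a minute or two to come up; you can watch them with:
kubectl get pods -n kube-system -w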
Verify the controller is working
Check by running kubectl get pods -n kube-system; the openstack-cloud-controller-manager pod (and the Calico pods) should now be Running. Then run kubectl describe node <servername> again and look at the ProviderID: this should show the instance ID of the server within OpenStack.
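If you just want the ProviderID without wading through the full describe output, a jsonpath query does the trick (the output below only illustrates the format):
kubectl get node <servername> -o jsonpath='{.spec.providerID}'
# openstack:///<instance uuid>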
Add worker nodes to the cluster
Generate a new join token with
kubeadm token create --print-join-command
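The output is a ready-made join command along the lines of the following (placeholders shown instead of real values):
kubeadm join <MASTERIP>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<CERT HASH>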
Create the file below on each worker node, replacing the placeholders with the values from the command above; it should be fairly obvious which one goes where.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: <MASTERIP>:6443
    token: <TOKEN>
    caCertHashes: ["sha256:<CERT HASH>"]
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
On the worker nodes, repeat the steps of:
- Install updates
- Install package repo
- Install kube packages
- Install docker
- Install the docker systemctl config and sysctl netfilter parts
Then copy the cloud config to /etc/kubernetes/cloud.conf AND /etc/config/cloud.conf on ALL nodes. To do this, I just made a little bash script:
sudo mkdir /etc/config/
sudo tee /etc/config/cloud.conf << EOF
[Global]
region=regionOne
username=XXXXXX
password=XXXXXX
auth-url=https://XXXXXXXXXXX
tenant-id=XXXXXXXXXX
domain-id=default
[BlockStorage]
bs-version=v2
EOF
sudo cp /etc/config/cloud.conf /etc/kubernetes/cloud.conf
sudo cp /etc/config/cloud.conf /etc/kubernetes/cloud-init.conf
sudo tee init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: XXXXXXX:6443
    token: XXXXXXXXXXXX
    caCertHashes: ["sha256:XXXXXXXXXXXXXXXXX"]
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "external"
EOF
Then run:
sudo kubeadm join --config <filename>
And then, back on the master, run kubectl get nodes to confirm the new worker has joined.
Check containers / pods are working:
For this I like to use a DNS container; this way you can see it spawn correctly and confirm that networking is working properly:
kubectl apply -f https://k8s.io/examples/admin/dns/dnsutils.yaml
kubectl get pods dnsutils
And we should see the dnsutils pod up and running.
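Once the pod is Running, you can also exec into it and confirm that cluster DNS resolution works; kubernetes.default should resolve to the API service's cluster IP:
kubectl exec -i -t dnsutils -- nslookup kubernetes.default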
Deploy the Cinder CSI Integration RBAC resources
Apply these on the master node
kubectl apply -f https://raw.githubusercontent.com/kubernetes/cloud-provider-openstack/release-1.15/manifests/cinder-csi-plugin/cinder-csi-controllerplugin-rbac.yaml
kubectl apply -f https://github.com/kubernetes/cloud-provider-openstack/raw/release-1.15/manifests/cinder-csi-plugin/cinder-csi-nodeplugin-rbac.yaml
Create the cinder controller deployment
Change back into your deployments directory and create the following files. First, cinder-csi-controllerplugin.yaml:
kind: Service
apiVersion: v1
metadata:
  name: csi-cinder-controller-service
  namespace: kube-system
  labels:
    app: csi-cinder-controllerplugin
spec:
  selector:
    app: csi-cinder-controllerplugin
  ports:
    - name: dummy
      port: 12345
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: csi-cinder-controllerplugin
  namespace: kube-system
spec:
  serviceName: "csi-cinder-controller-service"
  replicas: 1
  selector:
    matchLabels:
      app: csi-cinder-controllerplugin
  template:
    metadata:
      labels:
        app: csi-cinder-controllerplugin
    spec:
      serviceAccount: csi-cinder-controller-sa
      containers:
        - name: csi-attacher
          image: quay.io/k8scsi/csi-attacher:v1.0.1
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-provisioner
          image: quay.io/k8scsi/csi-provisioner:v1.0.1
          args:
            - "--provisioner=csi-cinderplugin"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /var/lib/csi/sockets/pluginproxy/
        - name: csi-snapshotter
          image: quay.io/k8scsi/csi-snapshotter:v1.0.1
          args:
            - "--connection-timeout=15s"
            - "--csi-address=$(ADDRESS)"
          env:
            - name: ADDRESS
              value: /var/lib/csi/sockets/pluginproxy/csi.sock
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /var/lib/csi/sockets/pluginproxy/
              name: socket-dir
        - name: cinder-csi-plugin
          image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
          args:
            - /bin/cinder-csi-plugin
            - "--v=5"
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
            - "--cluster=$(CLUSTER_NAME)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
            - name: CLUSTER_NAME
              value: kubernetes
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/csi/sockets/pluginproxy/
            type: DirectoryOrCreate
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
        - name: ca-cert
          secret:
            secretName: openstack-ca-cert
And then cinder-csi-nodeplugin.yaml:
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: csi-cinder-nodeplugin
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: csi-cinder-nodeplugin
  template:
    metadata:
      labels:
        app: csi-cinder-nodeplugin
    spec:
      serviceAccount: csi-cinder-node-sa
      hostNetwork: true
      containers:
        - name: node-driver-registrar
          image: quay.io/k8scsi/csi-node-driver-registrar:v1.1.0
          args:
            - "--v=5"
            - "--csi-address=$(ADDRESS)"
            - "--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)"
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "rm -rf /registration/cinder.csi.openstack.org /registration/cinder.csi.openstack.org-reg.sock"]
          env:
            - name: ADDRESS
              value: /csi/csi.sock
            - name: DRIVER_REG_SOCK_PATH
              value: /var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock
            - name: KUBE_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: registration-dir
              mountPath: /registration
        - name: cinder-csi-plugin
          securityContext:
            privileged: true
            capabilities:
              add: ["SYS_ADMIN"]
            allowPrivilegeEscalation: true
          image: docker.io/k8scloudprovider/cinder-csi-plugin:v1.15.0
          args:
            - /bin/cinder-csi-plugin
            - "--nodeid=$(NODE_ID)"
            - "--endpoint=$(CSI_ENDPOINT)"
            - "--cloud-config=$(CLOUD_CONFIG)"
          env:
            - name: NODE_ID
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CSI_ENDPOINT
              value: unix://csi/csi.sock
            - name: CLOUD_CONFIG
              value: /etc/config/cloud.conf
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: socket-dir
              mountPath: /csi
            - name: pods-mount-dir
              mountPath: /var/lib/kubelet/pods
              mountPropagation: "Bidirectional"
            - name: kubelet-dir
              mountPath: /var/lib/kubelet
              mountPropagation: "Bidirectional"
            - name: pods-cloud-data
              mountPath: /var/lib/cloud/data
              readOnly: true
            - name: pods-probe-dir
              mountPath: /dev
              mountPropagation: "HostToContainer"
            - name: secret-cinderplugin
              mountPath: /etc/config
              readOnly: true
            - mountPath: /etc/kubernetes
              name: ca-cert
              readOnly: true
      volumes:
        - name: socket-dir
          hostPath:
            path: /var/lib/kubelet/plugins/cinder.csi.openstack.org
            type: DirectoryOrCreate
        - name: registration-dir
          hostPath:
            path: /var/lib/kubelet/plugins_registry/
            type: Directory
        - name: kubelet-dir
          hostPath:
            path: /var/lib/kubelet
            type: Directory
        - name: pods-mount-dir
          hostPath:
            path: /var/lib/kubelet/pods
            type: Directory
        - name: pods-cloud-data
          hostPath:
            path: /var/lib/cloud/data
            type: Directory
        - name: pods-probe-dir
          hostPath:
            path: /dev
            type: Directory
        - name: secret-cinderplugin
          secret:
            secretName: cloud-config
        - name: ca-cert
          secret:
            secretName: openstack-ca-cert
And finally the storage class, in cinder-csi-storageclass.yml:
NOTE: the Kubernetes documentation has the provisioner name wrong; it changed in v1.13.0 and this took me ages to figure out. It should be cinder.csi.openstack.org, NOT csi-cinderplugin!
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-sc-cinderplugin
provisioner: cinder.csi.openstack.org
Apply all of these with the kubectl apply -f <filename> command.
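Once applied, you should be able to see the CSI pods and the new storage class:
kubectl get pods -n kube-system | grep csi-cinder
kubectl get storageclass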
Test your persistent volume
Now everything should be working. The best way to test is to create a PVC (persistent volume claim):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myvol
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-sc-cinderplugin
Apply this and then check with kubectl get pvc; the claim should show up in the Bound state, with a matching volume created in OpenStack.
Voila!
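To go one step further, you can mount the claim in a throwaway pod and write to it. A minimal sketch (the pod name and busybox image are just examples, not part of the setup above):
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
    - name: app
      image: busybox
      # write a file onto the Cinder-backed volume, then idle
      command: ["sh", "-c", "echo hello > /data/hello.txt && sleep 3600"]
      volumeMounts:
        - name: myvol
          mountPath: /data
  volumes:
    - name: myvol
      persistentVolumeClaim:
        claimName: myvol
Once it is Running, the Cinder volume should show as attached to whichever node the pod landed on.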
(Optional) Set the OpenStack driver as the default for PVCs
kubectl patch storageclass csi-sc-cinderplugin -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'