Introduction

I’m a big fan of Kubernetes. This site runs on it! It’s no secret that I run a sizeable bare-metal cluster in my house.

Exposing various HTTP/S services back into my home network has always been a pain, especially with today’s much stricter browser security settings, and I have grown tired of installing my own certificate authority, configuring HAProxy and more, just to get a basic application deployed and available.

My current architecture is:

Client Device > DNS Cluster > HAproxy Cluster > Kubernetes Service exposed via MetalLB > Application Pods

This always felt a bit wasteful: two HAProxy servers running outside of my Kubernetes stack just to manage inbound connectivity. It wasn’t too bad to look after with Ansible playbooks and other automation tools, but I knew it could be far better.

Breakdown of the services in play:

Nginx-Ingress

Nginx-Ingress is a Kubernetes ingress controller that exposes websites or applications via NGINX pods acting as load balancers. It is invoked and configured via Kubernetes annotations, and supports endless possibilities when it comes to configuration.

MetalLB

MetalLB is another very useful service; it exposes LoadBalancer-type Services to my external LAN via BGP. I have BGP peering configured between my K8s cluster and my home routers (dual pfSense via CARP/VRRP). When I expose a Service of type LoadBalancer, MetalLB announces an IP address from a supplied range as a /32 to the router pair, with each Kubernetes worker node as a next hop, allowing me to announce a single IP address for a service from all six worker nodes.

ExternalDNS

External-DNS is a Kubernetes operator that lets us specify DNS records via Kubernetes annotations. When it sees an annotation, it reaches out to external DNS servers and dynamically registers those hostnames within the DNS zones. It supports many different DNS providers, but here we will be doing some clever things with split-horizon DNS via BIND9 and Azure DNS Zones.
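
For example, adding an annotation like this to a Service or Ingress (the hostname here is a placeholder) is enough for External-DNS to create the matching A record:

metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: myapp.example.com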

Cert-Manager

Cert-Manager again uses annotations, this time to provision Let’s Encrypt (or any ACME-compatible provider) certificates into our cluster as Kubernetes secrets. Nginx-Ingress can then read these secrets to expose our applications over HTTP/S. Cert-Manager supports all of the standard DNS challenges, webhooks and so on, conforming to the ACME standard for SSL certificate provisioning.

Azure DNS Zones

Finally, we have Azure DNS Zones. I chose these because I already have a fair few services in an Azure tenant for DR, and Azure DNS hosting is extremely cheap; however, this could be replaced with any provider that works with the External-DNS operator.

New Topology

New Process Flow

Getting it all set up

We need to get all of our various operators installed and configured, so let’s do that first.

Installation of MetalLB

The best thing to do is follow the docs here to install the operator. Once installed, you need to configure MetalLB to interact with your BGP peers. The documentation on how to do this is pretty verbose, but as an example, you can provide a configuration like this:

Define an address pool:

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lanaddresspool
  namespace: metallb-system
spec:
  addresses:
  - XXX.XXX.XXX.XXX/24

Define the BGP peers:

apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: peer-lan-router-1
  namespace: metallb-system
spec:
  holdTime: 9m0s
  keepaliveTime: 0s
  myASN: XXXXX
  peerASN: XXXXXXX
  peerAddress: XXX.XXX.XXX.XXX

And finally, the advertisement:

apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: empty
  namespace: metallb-system

Once these are all applied, and you have configured the routing engine on your BGP peer, you can announce services from Kubernetes on “LAN” IP addresses.

To do this, simply add type: LoadBalancer to your existing Service definition.
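
For example, a minimal Service might look like this sketch (the name, namespace, selector and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app              # placeholder name
  namespace: my-namespace   # placeholder namespace
spec:
  type: LoadBalancer        # MetalLB allocates an IP from the pool and announces it via BGP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080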

Installation of nginx-ingress

This one I found to be a bit tricky to get perfect. In my instance I already had it deployed and needed to change some settings; if you are deploying fresh, it should be a little easier!

I use the official NGINX repos for the installation. I didn’t use Helm for my deployment; I used the manifests, as I find them a little easier to version control.

There are some good installation instructions here

In short: clone the repo (I am using version 3.1.1) and...

Open the deployments/daemon-set/nginx-ingress.yaml file and edit the args section to include the following:

  - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
  - -enable-prometheus-metrics
  - -enable-external-dns
  - -enable-cert-manager
  - -report-ingress-status
  - -external-service=nginx-ingress
  - -health-status
  - -enable-custom-resources
  - -enable-latency-metrics

You can find out what each of these settings does by looking at the docs here

The settings that made the main difference for me were these:

- -enable-external-dns
- -enable-cert-manager
- -report-ingress-status
- -external-service=nginx-ingress
- -enable-custom-resources

I found that there is another similar nginx ingress controller out there (the community-maintained ingress-nginx project), which uses different documentation and parameters. Don’t fall into the same trap I did!

Then:

cd /path/to/cloned/repo
cd deployments
kubectl apply -f common/ns-and-sa.yaml
kubectl apply -f rbac/rbac.yaml
kubectl apply -f common/nginx-config.yaml
kubectl apply -f common/ingress-class.yaml
kubectl apply -f common/crds/k8s.nginx.org_virtualservers.yaml
kubectl apply -f common/crds/k8s.nginx.org_virtualserverroutes.yaml
kubectl apply -f common/crds/k8s.nginx.org_transportservers.yaml
kubectl apply -f common/crds/k8s.nginx.org_policies.yaml
kubectl apply -f common/crds/k8s.nginx.org_globalconfigurations.yaml
kubectl apply -f daemon-set/nginx-ingress.yaml
kubectl apply -f service/loadbalancer.yaml 

The final line is key: it deploys the central nginx-ingress Service and announces it via MetalLB.

For me, this is what it looked like:

kubectl get services --namespace nginx-ingress
NAME            TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
nginx-ingress   LoadBalancer   10.102.237.73   XXX.XXX.XXX.XXX    80:32335/TCP,443:31026/TCP   122m

If you visit the IP listed under EXTERNAL-IP in your browser, you should see a 404 response from NGINX.
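
You can also test this from the command line; substituting your external IP, a plain curl against it should return that same 404:

curl -i http://XXX.XXX.XXX.XXX/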

Installation of External-DNS

This one is nice and simple. There are no huge manifests for this one, just some role setups and containers. I have it split into three files:

  1. role.yaml
  2. internal-dns.yaml
  3. external-dns.yaml

Here are links to my examples:

role.yaml

internal-dns.yaml

external-dns.yaml

You will see that there are various placeholders marked XXXXX; these will need to be changed to suit your network.

Here is a useful document on how to configure the external-dns.yaml file to work with Azure.

There are a lot of examples in that repo; you will need to find the one best suited to your situation and follow it.
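
For reference, here is a minimal sketch of what the Azure-facing deployment can look like, based on the upstream External-DNS Azure tutorial; the namespace, image tag, domain, resource group and ServiceAccount name are placeholders you should match to your own files:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns-azure        # placeholder name
  namespace: external-dns         # placeholder namespace
spec:
  selector:
    matchLabels:
      app: external-dns-azure
  template:
    metadata:
      labels:
        app: external-dns-azure
    spec:
      serviceAccountName: external-dns   # the ServiceAccount from role.yaml
      containers:
      - name: external-dns
        image: registry.k8s.io/external-dns/external-dns:v0.14.0
        args:
        - --source=ingress                  # watch Ingress resources for hostnames
        - --provider=azure
        - --domain-filter=example.com       # placeholder: your public zone
        - --azure-resource-group=my-dns-rg  # placeholder: resource group holding the zone
        - --txt-owner-id=homelab            # marks the records this instance owns
        volumeMounts:
        - name: azure-config-file
          mountPath: /etc/kubernetes
          readOnly: true
      volumes:
      - name: azure-config-file
        secret:
          secretName: azure-config-file     # contains azure.json with the SP credentials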

Installation of Cert-Manager

Again, install the manifests and the CRDs; these can be found here. I am running version v1.13.3.

There are a few ways of configuring Cert-Manager: you can have it watch only specific namespaces, or you can install it cluster-wide. I run many namespaces for segregation, so I opted for cluster-wide by using a ClusterIssuer resource.

I configured two resources:

  1. letsencrypt-staging
  2. letsencrypt-prod

To make this work on internally hosted sites, we must use a DNS challenge: Let’s Encrypt cannot hit the usual /.well-known/acme-challenge path on our servers, as we don’t have them exposed to the internet. This is where the Azure DNS comes in: our External-DNS service publishes the records within the Azure DNS service, which Let’s Encrypt can then query, as it is a public DNS zone. All of the A records point to internal IPs, but LE doesn’t care.

An example staging config:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: XXXXXXXXX
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - dns01:
        azureDNS:
          clientID: XXXXX-XXXXXXX_XXXXXXXX-XXXXXXXX
          clientSecretSecretRef:
            # The secret we created in Kubernetes; the issuer uses it to present the challenge to Azure DNS.
            name: azuredns-config
            key: client-secret
          subscriptionID: XXXXXXXXXXXX
          tenantID: XXXXXXXXXXXXXX
          resourceGroupName: XXXXXXXXXXXX
          hostedZoneName: XXXXXXXXX
          environment: AzurePublicCloud

The same applies for prod; the only difference is that server becomes https://acme-v02.api.letsencrypt.org/directory

You can use the same Service Principal credentials you created for External-DNS in the configuration file.

To create the client-secret secret for authentication, run the following command:

kubectl create secret generic azuredns-config --from-literal=client-secret=YOUR_APP_PASSWORD_FROM_THE_SP_HERE -n cert-manager
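
If you have not yet created a Service Principal, something along these lines does the job (the SP name, zone name and resource group are placeholders):

# Create the Service Principal; note the appId and password in the output
az ad sp create-for-rbac --name external-dns-homelab

# Scope its permissions to the DNS zone only
az role assignment create \
  --assignee <appId-from-above> \
  --role "DNS Zone Contributor" \
  --scope $(az network dns zone show --name example.com \
      --resource-group my-dns-rg --query id --output tsv)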

Using it all together

Now that we have all of our moving parts in place, we can create an Ingress for an application. Here is the syntax you would use:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: website-ingress
  namespace: somethinghere
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy_ssl_server_name: "on"
    nginx.ingress.kubernetes.io/proxy_ssl_name: "something.domain.name"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    external-dns.alpha.kubernetes.io/hostname: something.domain.name
spec:
  tls:
  - hosts:
    - something.domain.name
    secretName: something-tls-secret
  rules:
  - host: something.domain.name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: something
            port:
              number: 80
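
Once applied, you can watch Cert-Manager work through the DNS-01 challenge; the Certificate resource it creates (named after the secretName above) should flip to Ready once Let’s Encrypt has validated the record:

kubectl get certificate -n somethinghere
kubectl describe certificate something-tls-secret -n somethinghere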

Testing

And now we can see that the LE certificate is present, and we can access our service (my git server, for example):

openssl s_client -showcerts -servername git.XXXXXX.XXXXX -connect git.XXXXXXX.XXXXXX:443 </dev/null
CONNECTED(00000005)
depth=2 C = US, O = Internet Security Research Group, CN = ISRG Root X1
verify return:1
depth=1 C = US, O = Let's Encrypt, CN = R3
verify return:1
depth=0 CN = git.XXXXXXX.XXXXXX
verify return:1
---
Certificate chain
 0 s:CN = git.XXXXXXX.XXXXXX
   i:C = US, O = Let's Encrypt, CN = R3
   a:PKEY: rsaEncryption, 2048 (bit); sigalg: RSA-SHA256
   v:NotBefore: Dec 22 00:19:13 2023 GMT; NotAfter: Mar 21 00:19:12 2024 GMT