We have this semi-unique requirement with our EKS cluster. We need a publicly accessible ingress for APIs and public-facing services, and we also want a private ingress that only we can access over the VPN. This lets us access services like Prometheus and Grafana privately and keep public APIs on a separate load balancer.

We’re using ingress-nginx, which is great; it’s like nginx on steroids. Before working with Kubernetes I had a lot of experience with nginx, setting up servers by hand or with automated scripts and templates. This is a whole other level though. ingress-nginx does everything I used to do, and a lot more, with just a few simple parameters passed to a helm chart.

Figuring out how to get an internal-only load balancer while keeping a public-facing load balancer was not a simple task. Eventually, though, I settled on two deployments of the standard helm chart.

What I wanted to achieve is this: the default ingress class is nginx. Any Ingress resource created without a class will be picked up by nginx and placed on the internal-only network load balancer, while any Ingress resource associated with the nginx-public class will be linked to the public load balancer.
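
As a concrete sketch of that split (the hostnames, namespaces, and service names below are placeholders, not taken from our cluster), an internal-only service simply omits the class, while a public one names nginx-public explicitly:

# Internal: no ingressClassName, so the default "nginx" class claims it
# and it is published on the internal NLB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: monitoring
spec:
  rules:
    - host: grafana.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: grafana
                port:
                  number: 80
---
# Public: explicitly classed, so only the public controller and its NLB serve it.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-api
spec:
  ingressClassName: nginx-public
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: public-api
                port:
                  number: 80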

The configuration below creates an internal-only load balancer. Since I am using AWS, I have set the annotations to give me a Network Load Balancer.

controller:
  ingressClassByName: true

  ingressClassResource:
    name: nginx
    enabled: true
    default: true
    controllerValue: "k8s.io/ingress-nginx"

  service:
    external:
      enabled: false
    internal:
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
        service.beta.kubernetes.io/aws-load-balancer-name: "load-balancer-internal"
        service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
        service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
        service.beta.kubernetes.io/aws-load-balancer-type: nlb

Then deploy the load balancer using helm:

helm upgrade \
  --install \
  --create-namespace \
  ingress-nginx \
  ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  -f values.yaml
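
Once the chart is up you can sanity-check that only an internal NLB was created. The Service names follow the chart's usual <release>-controller naming, so yours may differ slightly:

# Only the internal controller Service should exist; its address column
# shows the internal NLB's DNS name.
kubectl get svc -n ingress-nginx

# Confirm the internal/NLB annotations landed on the Service.
kubectl describe svc ingress-nginx-controller-internal -n ingress-nginx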

Our second load balancer needs to be created in its own namespace; this time the external load balancer is enabled and the internal one stays disabled (its default).

controller:
  ingressClassByName: true

  ingressClassResource:
    name: nginx-public
    enabled: true
    default: false
    controllerValue: "k8s.io/ingress-nginx-public"

  ingressClass: nginx-public
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # Enable the external (public) LB; the internal service is disabled by default
    external:
      enabled: true

Then deploy the load balancer using helm:

helm upgrade \
  --install \
  --create-namespace \
  ingress-nginx-public \
  ingress-nginx/ingress-nginx \
  --namespace ingress-nginx-public \
  -f values-public.yaml
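
At this point both controllers are registered as separate IngressClass resources, which is easy to confirm:

# Expect two classes, "nginx" and "nginx-public", each with its own controller value.
kubectl get ingressclass

# The default-class marker is an annotation on the nginx class.
kubectl get ingressclass nginx -o yaml | grep is-default-class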

If you look closely at the public load balancer configuration, you will see I have included an extra setting:

controller:
  ingressClass: nginx-public

If this is not set, the default value is nginx. This leads to some really weird behaviour when combined with external-dns.

external-dns is hooked up to Route53 and will modify the Route53 CNAME entries for each ingress. The problem is that each deployed ingress-nginx controller watches the defined Ingress resources and assigns its own load balancer address to them based on the ingressClass setting. If both controllers are looking for Ingress definitions tagged with nginx, then the assigned address starts flip-flopping!

You can see this by running the watch command every 2 seconds and checking the differences.

watch -n 2 -d kubectl get ingress

external-dns then updates the CNAME entries, and your ingress endpoints keep jumping between load balancers, meaning services flip between public and private every few seconds 😵‍💫
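
You can watch the same flip-flop from the DNS side too. Assuming the AWS CLI is configured, something like this (the hosted zone ID and record name are placeholders) shows the record that external-dns keeps rewriting:

# Re-running this during a flip-flop shows the target alternating between
# the internal and public NLB hostnames.
aws route53 list-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --query "ResourceRecordSets[?Name=='api.example.com.']"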

The solution is simple: just make sure that you explicitly set the ingressClass value. Setting this alone is not enough:

controller:
  ingressClassResource:
    name: nginx-public
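
The class resource and the controller's watch class have to line up, as in the values-public.yaml above:

controller:
  ingressClass: nginx-public          # what this controller instance watches for
  ingressClassResource:
    name: nginx-public                # the IngressClass object it owns
    controllerValue: "k8s.io/ingress-nginx-public"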