Home Lab: Chapter 7 — Kubernetes DNS and SSL

Originally published at techquests.dev

Howdy,

Note: This post has been shortened due to character limitations. For complete patch configurations and detailed explanations, refer to the original post.

Our environment is starting to take shape. We have a Kubernetes cluster up and running, an Ingress Controller managing external access to our services, and a way to handle secrets. The next step is making sure our services are accessible from the outside world. To do this, we need to configure DNS and SSL.

Getting a Domain

Register a domain from any registrar (Namecheap, GoDaddy, etc.) with full DNS management access. Using your own domain is recommended for straightforward SSL certificate issuance and HTTPS access without browser warnings.

DNS and SSL

DNS (Domain Name System) is what translates human-readable domain names into IP addresses, allowing us to access websites and services without memorizing numbers. We've touched on DNS before, but it's worth revisiting since it plays a crucial role in exposing our services. In a previous chapter, we set up a DNS server to resolve some of our internal services - mostly infrastructure-related. Now, we want to extend DNS to resolve the domain names for services that will be accessible both externally and internally.

Because we have two different scenarios, we'll need two DNS setups:

  • Internal-facing applications - accessible only within our network.
  • Public-facing applications - accessible from the internet.

To keep things simple, I'll use two separate DNS servers for these scenarios. One server will manage public records, and the other will manage internal records. This isn't strictly required - we could use a single DNS server for both - but separating them helps avoid conflicts and keeps things organized.

Some configuration details will vary depending on the DNS solution you choose. In this guide, I'll be using Bind9 for internal-facing applications and Cloudflare for public-facing applications. You can pick whichever DNS servers you prefer, as long as you can manage both internal and external records without conflicts.

Below is a high-level overview of the DNS and SSL setup for our Homelab:

flowchart TD
    subgraph Firewall["Firewall"]
        Unbound["Unbound"]
    end
    subgraph Internal_Facing["Kubernetes"]
        Bind9["Bind9 Authoritative DNS"]
        Internal_Services["Internal Services"]
        Internal_Ingresses["Internal Ingresses"]
        ExternalDNS["ExternalDNS"]
        CertManager["Cert-Manager"]
    end
    subgraph Internal["Homelab"]
        Internal_Facing
        Firewall
    end
    subgraph Public["Public"]
        CF["Cloudflare DNS"]
        CFIP["Cloudflare Anycast IP"]
        CF_Tunnel["Cloudflare Tunnel"]
        PublicResolvers
    end
    Unbound -- Forwards internal --> Bind9
    Unbound -- Forwards public --> PublicResolvers[(Public DNS)]
    Bind9 --> Internal_Ingresses
    Internal_Ingresses --> Internal_Services
    ExternalDNS -- RFC2136 Updates --> Bind9
    CertManager --> Internal_Services
    CF -- A/CNAME Record --> CFIP
    CFIP --> CF_Tunnel
    CF_Tunnel --> Internal_Services
    ExternalDNS -. Sync Records .-> Bind9
    CertManager -. "DNS-01 Challenge" .-> CF

Internal-facing DNS records

For internal-facing applications, we'll be using Bind9, an open-source authoritative DNS server. This setup allows us to:

  • Host internal DNS records for services accessible only within our network (e.g., nginx.<INTERNAL_DOMAIN>).
  • Resolve public domains by forwarding requests to external resolvers such as 1.1.1.1 (Cloudflare) or 8.8.8.8 (Google).

With Unbound forwarding queries to it, Bind9 serves as our primary internal DNS server, capable of resolving both internal and external domains.

Bind9 Setup

Install Bind9 via Helm and ArgoCD with basic configuration:

helm:
  valuesObject:
    image:
      repository: internetsystemsconsortium/bind9
      tag: "9.21"
    service:
      dns-udp:
        type: NodePort
        ports:
          dns-udp:
            nodePort: 30053
    chartMode: authoritative
    persistence:
      enabled: true
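
For reference, with ArgoCD in the mix these values would normally sit inside an Application manifest. A minimal sketch, assuming a chart repository, chart name, and target namespace that are not in the original post:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: dns-bind9
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://charts.example.com   # hypothetical chart repository
    chart: bind9
    targetRevision: "*"
    helm:
      valuesObject:
        # ... the values shown above go here ...
        chartMode: authoritative
  destination:
    server: https://kubernetes.default.svc
    namespace: dns
  syncPolicy:
    automated: {}
```

ArgoCD then reconciles the chart whenever the manifest changes in Git, in line with the GitOps flow from earlier chapters.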

Bind9 Configuration

Define zones using a named configuration file with TSIG keys for secure updates:

named.conf.local: |
    key "tsig-key" {
        algorithm hmac-sha512;
        secret "<SECRET>";
    };
    zone "<INTERNAL_DOMAIN>" in {
        type master;
        file "/named_config/<INTERNAL_DOMAIN>.zone";
        allow-transfer { key "tsig-key"; };
        update-policy { grant tsig-key zonesub ANY; };
    };
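
The zone file referenced above needs at least an SOA and NS record before Bind9 will load it. A minimal sketch in the same ConfigMap style, where the serial, timers, and the ns hostname/IP are illustrative rather than taken from the original post:

```yaml
<INTERNAL_DOMAIN>.zone: |
    $TTL 300
    @   IN SOA ns.<INTERNAL_DOMAIN>. admin.<INTERNAL_DOMAIN>. (
            2024010101 ; serial
            3600       ; refresh
            600        ; retry
            604800     ; expire
            300 )      ; negative-caching TTL
        IN NS  ns.<INTERNAL_DOMAIN>.
    ns  IN A   x.x.x.101
```

ExternalDNS will append its own records to this zone via dynamic updates, so only the bootstrap records need to live here.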

After creating the configuration files, push to Git and ArgoCD will deploy automatically.

Record Creation

ExternalDNS automatically manages DNS records for Kubernetes resources via RFC2136, a DNS update protocol supported by Bind9. Install ExternalDNS via Helm:

provider: rfc2136
rfc2136:
  host: dns-bind9-dns-tcp.dns.svc.cluster.local
  port: 53
  zone: <INTERNAL_DOMAIN>
  secretName: external-dns-tsig-key

Generate TSIG keys with tsig-keygen -a hmac-sha512 tsig-key and store as a Kubernetes secret for secure updates.
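
As a sketch, the key generation and secret creation might look like this; the secret's key name ("tsig-secret") and namespace are assumptions, so match whatever your ExternalDNS chart actually expects:

```shell
# Prints a key {} block containing the base64-encoded secret
tsig-keygen -a hmac-sha512 tsig-key

# Store the secret value for ExternalDNS; replace <SECRET> with the
# value printed above (key name within the secret is assumed here)
kubectl create secret generic external-dns-tsig-key \
  --namespace dns \
  --from-literal=tsig-secret='<SECRET>'
```

The same `<SECRET>` value must also appear in the `named.conf.local` key block shown earlier, or updates will be rejected.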

TLS Certificates

Install Cert-Manager to automate certificate issuance from Let's Encrypt using DNS-01 challenges. Configure a ClusterIssuer:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: "<EMAIL>"
    privateKeySecretRef:
      name: letsencrypt-production
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cert-manager-cf-api-token
              key: api-token # key inside the secret; required by the schema (name assumed)

Annotate Ingress resources with cert-manager.io/cluster-issuer: letsencrypt to request certificates automatically.
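
Putting it together, an annotated Ingress might look like the sketch below; the host, backend service, and port are illustrative, while the `nginx-tls` secret name matches the verification step later on:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: default
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt # triggers certificate issuance
spec:
  ingressClassName: nginx
  rules:
    - host: nginx.<INTERNAL_DOMAIN>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 8080
  tls:
    - hosts:
        - nginx.<INTERNAL_DOMAIN>
      secretName: nginx-tls # Cert-Manager stores the issued certificate here
```

Cert-Manager watches the annotation, solves the DNS-01 challenge, and populates `nginx-tls` automatically.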

Testing the Internal Setup

Deploy a test nginx application with an Ingress annotation for ExternalDNS and cert-manager to automatically create DNS records and TLS certificates:

kubectl apply -f nginx-internal-test.yaml

Verify the setup with:

kubectl get ingress nginx -n default
kubectl get certificate nginx-tls -n default
curl "https://nginx.<INTERNAL_DOMAIN>"

Unbound Forwarder

Configure Unbound to forward internal domain queries to Bind9 via OpnSense:

  1. Navigate to Services -> Unbound DNS -> Query Forwarding
  2. Add forwarding for <INTERNAL_DOMAIN> to Bind9 IP x.x.x.101 on port 30053

This enables DNS resolution for all internal services from anywhere in the network without specifying the DNS server.
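
To confirm the forwarding works, you can query Bind9 directly and then through Unbound; a quick sketch using the placeholder IP from the steps above:

```shell
# Query Bind9 directly on its NodePort
dig @x.x.x.101 -p 30053 nginx.<INTERNAL_DOMAIN> +short

# Query via the firewall's Unbound resolver (your network's default DNS)
dig nginx.<INTERNAL_DOMAIN> +short
```

Both queries should return the Ingress controller's address; if only the first succeeds, the Unbound forwarding rule needs another look.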

Public-facing DNS records

Use Cloudflare for public-facing applications. It provides free DNS services and Cloudflare Tunnels, which securely expose internal services to the internet without revealing your IP address.

Cloudflare Tunnels

Cloudflare Tunnels establish a secure, outbound-only connection between your services and Cloudflare's edge, hiding your infrastructure's IP address and providing built-in attack protection. Use the cloudflare-operator to manage tunnels as Kubernetes resources.

Install the operator and create a tunnel:

apiVersion: networking.cfargotunnel.com/v1alpha1
kind: ClusterTunnel
metadata:
  name: cf-tunnel
spec:
  newTunnel:
    name: cf-tunnel
  cloudflare:
    email: "<EMAIL>"
    domain: "<PUBLIC_DOMAIN>"
    secret: cf-api-token
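
The secret referenced as `cf-api-token` has to exist before the tunnel can reconcile. A sketch of creating it, where the namespace and the key name inside the secret are assumptions to check against the operator's documentation:

```shell
kubectl create secret generic cf-api-token \
  --namespace cloudflare-operator-system \
  --from-literal=CLOUDFLARE_API_TOKEN='<TOKEN>'
```

The token needs DNS edit permissions on `<PUBLIC_DOMAIN>` so the operator can create tunnel records on your behalf.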

Expose applications using TunnelBinding:

apiVersion: networking.cfargotunnel.com/v1alpha1
kind: TunnelBinding
metadata:
  name: expose-nginx
subjects:
  - name: nginx
    spec:
      fqdn: nginx.<PUBLIC_DOMAIN>
      target: http://nginx.default.svc.cluster.local:8080
tunnelRef:
  kind: ClusterTunnel
  name: cf-tunnel

DNS records and TLS certificates are created automatically by Cloudflare.

Testing the Public Setup

Deploy a test nginx service and create a TunnelBinding to expose it:

kubectl apply -f nginx-external-test.yaml

Verify tunnel status:

kubectl describe tunnelbinding expose-nginx -n default

Access the application at https://nginx.<PUBLIC_DOMAIN>.

Conclusion

In this chapter, we tackled one of the most important steps in making our cluster truly usable from anywhere: DNS and SSL. We mapped out the architecture, set up Bind9 for rock-solid internal DNS, and leaned on Cloudflare for public-facing names - all with automation in mind. Thanks to ExternalDNS and Cert-Manager, record creation and TLS issuance now happen without manual intervention, keeping everything secure and up to date.

With this in place, our homelab services have:

  • A clean separation between internal and public DNS management.
  • Automated DNS updates directly from Kubernetes resources.
  • Seamless HTTPS access - internally and externally - without scary browser warnings.

The end result? Any service we spin up can be securely exposed, tested, and shared with almost no extra work. We're no longer manually juggling DNS zones or dealing with certificate renewal headaches - it's all declarative, reproducible, and in sync with our GitOps flow.

From here, we can focus on deploying more useful applications, knowing that they'll just work whether we're inside the lab or halfway across the world. In the next chapter, we'll start putting this setup to use by deploying real workloads and integrating them into our automated homelab stack.
