
<!--
SEO Title: Debugging Distroless Containers: kubectl debug, Ephemeral Containers, and When to Use Each
Meta Description: Distroless containers have no shell, no tools, no debugger. Learn every technique to debug them: kubectl debug, ephemeral containers, copy-to strategy, cdebug, and node-level access — with RBAC patterns for dev and production.
Focus Keywords: kubectl debug distroless,debug distroless container kubernetes,distroless container debugging,kubectl debug ephemeral container
Suggested Slug: debugging-distroless-containers
-->
The container works fine in CI. It deploys successfully to staging. Then something goes wrong in production and you type the command you always type: kubectl exec -it my-pod -- /bin/bash. The response is immediate: OCI runtime exec failed: exec failed: unable to start container process: exec: "/bin/bash": stat /bin/bash: no such file or directory.
You try /bin/sh. Same error. You try ls. Same error. The container image is distroless — it ships only your application binary and its runtime dependencies, with no shell, no package manager, no debugging tools of any kind. This is intentional and correct from a security standpoint. It is also a significant operational challenge the first time you face it in production.
This article covers every practical technique for debugging distroless containers in Kubernetes: kubectl debug with ephemeral containers (the standard approach), pod copy strategy (for Kubernetes versions without ephemeral container support, or when you need to modify the running pod spec), debug image variants (the pragmatic developer shortcut), cdebug (a purpose-built tool that simplifies the process), and node-level debugging (the last resort with the most power). For each technique I will explain what it can and cannot do, what Kubernetes version or RBAC permissions it requires, and in which scenario — developer in local, platform engineer in staging, ops in production — it is the appropriate choice.
Why Distroless Breaks the Normal Debugging Workflow
Traditional container debugging assumes you can exec into the container and use shell tools: ps, netstat, strace, curl, a text editor. Distroless images remove all of this by design. The Google distroless project, Chainguard’s Wolfi-based images, and the broader minimal image ecosystem deliberately exclude everything that is not required to run the application. The result is a dramatically smaller attack surface: no shell means no RCE via shell injection, no package manager means no easy escalation path, fewer binaries means fewer CVEs in the image scan.
The tradeoff is operational: when something goes wrong, you cannot use the tools that the process itself is not allowed to run. A Java application in gcr.io/distroless/java17-debian12 has the JRE and nothing else. A Go binary compiled with CGO disabled and shipped in gcr.io/distroless/static-debian12 has literally only the binary and the necessary CA certificates and timezone data. There is no wget to download a debug binary, no apt to install one, no bash to run a script.
Kubernetes solves this at the platform level with ephemeral containers, added as stable in Kubernetes 1.25. The principle is that a debug container — which can have a full shell and any tools you want — can be injected into a running pod and share its process namespace, network namespace, and filesystem mounts without modifying the original container or restarting the pod.
Option 1: kubectl debug with Ephemeral Containers
Ephemeral containers are the canonical solution. Since Kubernetes 1.25 (stable), kubectl debug can inject a temporary container into a running pod. The container shares the target pod’s network namespace by default, and with --target it can also share the process namespace of a specific container, allowing you to inspect its running processes and open file descriptors.
The basic invocation is:
kubectl debug -it my-pod \
--image=busybox:latest \
--target=my-container
The --target flag is the critical piece. Without it, the ephemeral container gets its own process namespace. With it, it shares the process namespace of the specified container — meaning you can run ps aux and see the application’s processes, use ls -la /proc/<pid>/fd to inspect open file descriptors, and read the application’s environment via cat /proc/<pid>/environ.
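Once attached with --target, a typical first pass looks something like this — a sketch assuming the application is the container's main process and therefore PID 1 in the shared namespace:

```shell
# List the target container's processes (visible via the shared PID namespace)
ps aux

# Inspect the application's open file descriptors (sockets, log files, pipes)
ls -la /proc/1/fd

# Dump its environment variables; /proc/<pid>/environ is NUL-separated,
# so translate the separators into newlines for readability
cat /proc/1/environ | tr '\0' '\n'

# Check memory and thread usage as the kernel sees it
grep -i -e vmrss -e threads /proc/1/status
```

If the application is not PID 1 (for example, when the pod sets shareProcessNamespace), read the PID from ps output first.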
For a more capable debug environment, replace busybox with a richer image:
kubectl debug -it my-pod \
--image=nicolaka/netshoot \
--target=my-container
nicolaka/netshoot includes tcpdump, curl, dig, nmap, ss, iperf3, and dozens of other network diagnostic tools, making it the standard choice for network debugging scenarios.
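A typical network triage session from inside the netshoot ephemeral container might look like the following sketch — the service names and port 8080 are placeholders, not part of any real deployment:

```shell
# Which sockets is the application listening on?
ss -tlnp

# Capture a sample of traffic to the assumed application port; the ephemeral
# container sees exactly the same interfaces as the application
tcpdump -i any -nn port 8080 -c 20

# Verify DNS resolution using the pod's own resolver configuration
dig +short my-service.my-namespace.svc.cluster.local

# Exercise an endpoint the application itself would call
curl -sv http://my-service.my-namespace/healthz
```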
What You Can and Cannot Do
Ephemeral containers share the pod’s network namespace and, when --target is used, the process namespace. This gives you:
- Full visibility into the application’s network traffic from inside the pod (tcpdump, ss, netstat)
- Process inspection via /proc/<pid> — open files, memory maps, environment variables, CPU/memory usage
- Access to the pod’s DNS resolution context — exactly the same /etc/resolv.conf the application sees
- Ability to make outbound network calls from the same network namespace (testing service endpoints, DNS resolution)
What you do not get with ephemeral containers:
- Access to the application container’s filesystem. The ephemeral container has its own root filesystem. You cannot cat /app/config.yaml from the application container’s filesystem unless you access it via /proc/<pid>/root/.
- Ability to remove the container once added. Ephemeral containers are permanent until the pod is deleted. This is by design — the Kubernetes API does not allow removing them after creation.
- Volume mount modifications via CLI. You cannot add volume mounts to an ephemeral container via kubectl debug (though the API spec supports it, the CLI does not expose this).
- Resource limits. Ephemeral containers do not support resource requests and limits in the kubectl debug CLI, though this is evolving.
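Because the API spec does support volume mounts, one workaround is to write to the pods/ephemeralcontainers subresource directly instead of going through kubectl debug. The following is a hedged sketch, not an official recipe — the pod name, namespace, and volume name are assumptions, and the payload shape shown here is the one accepted by recent Kubernetes versions (1.23+), where the subresource takes the full Pod object:

```shell
# Fetch the pod, append an ephemeral container that mounts an existing
# pod volume, and write it back through the ephemeralcontainers subresource
kubectl get pod my-pod -n default -o json \
  | jq '.spec.ephemeralContainers += [{
      "name": "debugger",
      "image": "busybox:latest",
      "command": ["sleep", "3600"],
      "stdin": true, "tty": true,
      "volumeMounts": [{"name": "config-volume", "mountPath": "/config"}]
    }]' \
  | kubectl replace --raw /api/v1/namespaces/default/pods/my-pod/ephemeralcontainers -f -

# Then attach to the new container as usual
kubectl attach -it my-pod -c debugger
```

The volume referenced in volumeMounts must already exist in the pod spec; you still cannot add new volumes to a running pod.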
Accessing the Application Filesystem
The most common surprise for developers new to ephemeral containers is that they cannot directly browse the application container’s filesystem. The workaround is the /proc filesystem:
# Find the application's PID
ps aux
# Browse its filesystem via /proc
ls /proc/1/root/app/
cat /proc/1/root/etc/config.yaml
# Or set the root to the application's root
chroot /proc/1/root /bin/sh # only if /bin/sh exists in the app image
The /proc/<pid>/root path is a symlink to the container’s root filesystem as seen from the process namespace. Because the ephemeral container shares the process namespace with --target, the application’s PID is typically 1, and /proc/1/root gives you full read access to its filesystem.
RBAC Requirements
Ephemeral containers require the pods/ephemeralcontainers subresource permission. This is separate from pods/exec, which controls kubectl exec. A common mistake is to grant pods/exec for debugging purposes without realizing that ephemeral containers require an additional grant:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ephemeral-debugger
rules:
  - apiGroups: [""]
    resources: ["pods/ephemeralcontainers"]
    verbs: ["update", "patch"]
  - apiGroups: [""]
    resources: ["pods/attach"]
    verbs: ["create", "get"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
In production environments, this permission should be tightly scoped: time-limited via RoleBinding rather than permanent ClusterRoleBinding, restricted to specific namespaces, and ideally gated behind an approval workflow. The debug container runs as root by default, which can create privilege escalation paths if the application container runs as a non-root user with shared process namespace — the debug container can attach to the application’s processes with higher privileges.
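A namespace-scoped, short-lived grant might look like the following RoleBinding sketch. The group name and namespace are placeholders, and the expiry annotation assumes an external cleanup job or controller that prunes expired bindings — Kubernetes does not expire RBAC objects natively:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ephemeral-debugger-oncall
  namespace: production
  annotations:
    # Consumed by an external cleanup job, not by Kubernetes itself
    debug.example.com/expires-at: "2024-06-01T12:00:00Z"
subjects:
  - kind: Group
    name: oncall-engineers
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: ephemeral-debugger
  apiGroup: rbac.authorization.k8s.io
```

Binding to a Role or ClusterRole at namespace scope, as here, keeps the blast radius limited to a single namespace even though the ClusterRole itself is cluster-wide.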
Option 2: kubectl debug --copy-to (Pod Copy Strategy)
When you need to modify the pod’s container spec — replace the image, change environment variables, add a sidecar with a shared filesystem — the --copy-to flag creates a full copy of the pod with your modifications applied:
kubectl debug my-pod \
-it \
--copy-to=my-pod-debug \
--image=my-app:debug \
--share-processes
This creates a new pod named my-pod-debug that is a copy of my-pod but with the container image replaced by my-app:debug. If my-app:debug is your application image built with debug tooling included (or a debug variant from your registry), this lets you interact with the exact same binary in the exact same configuration as the original pod.
A more common use of --copy-to is to attach a debug container alongside the existing application container while keeping the original image unchanged:
kubectl debug my-pod \
-it \
--copy-to=my-pod-debug \
--image=busybox \
--share-processes \
--container=debugger
This creates the copy-pod with both the original containers and a new debugger container sharing the process namespace. Unlike ephemeral containers, this approach supports volume mounts and resource limits, and the debug pod can be deleted cleanly when you are done.
Limitations of the Copy Strategy
The pod copy approach has a critical limitation: it is not debugging the original pod. It creates a new pod that may behave differently because:
- It does not share the original pod’s in-memory state — if the issue is a goroutine leak or heap corruption that has been accumulating for hours, the fresh copy will not exhibit it immediately
- It creates a new Pod UID, which means any admission webhooks, network policies, or pod-level security contexts that depend on pod identity may apply differently
- If the original pod is crashing (CrashLoopBackOff), the copy will also crash — this technique does not help for crash debugging unless you also change the entrypoint
For crash debugging specifically, combine --copy-to with a modified entrypoint to keep the container alive:
kubectl debug my-crashing-pod \
-it \
--copy-to=my-pod-debug \
--image=busybox \
--share-processes \
-- sleep 3600
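With the copy held alive by sleep, you can exec in — busybox, unlike the distroless original, has a shell — and inspect the same mounted volumes, ConfigMaps, and Secrets the crashing container saw, since volume definitions are preserved in the copy. The mount path and file name below are assumptions:

```shell
# The busybox copy has a shell, so plain exec works again
kubectl exec -it my-pod-debug -- sh

# Inside: inspect the mounted configuration the crashing container was reading
ls -la /etc/config/
cat /etc/config/app.yaml

# Clean up when done — unlike ephemeral containers, the copy deletes cleanly
kubectl delete pod my-pod-debug
```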
Option 3: Debug Image Variants
The most pragmatic approach — and the one most appropriate for developer workflows — is to maintain a debug variant of your application image that includes shell tooling. Both the Google distroless project and Chainguard provide this pattern officially.
Google distroless images have a :debug tag that adds BusyBox to the image:
# Production image
FROM gcr.io/distroless/java17-debian12
# Debug variant — identical but with BusyBox shell
FROM gcr.io/distroless/java17-debian12:debug
Chainguard images follow a similar convention with :latest-dev variants that include apk, a shell, and common utilities:
# Production (zero shell, minimal footprint)
FROM cgr.dev/chainguard/go:latest
# Development/debug variant
FROM cgr.dev/chainguard/go:latest-dev
If you build your own base images, the recommended approach is to use multi-stage builds and maintain separate build targets:
FROM golang:1.22 AS builder
WORKDIR /app
COPY . .
RUN go build -o myapp .
# Production: static distroless image
FROM gcr.io/distroless/static-debian12 AS production
COPY --from=builder /app/myapp /myapp
ENTRYPOINT ["/myapp"]
# Debug variant: same binary, with shell tools
FROM gcr.io/distroless/static-debian12:debug AS debug
COPY --from=builder /app/myapp /myapp
ENTRYPOINT ["/myapp"]
In your CI/CD pipeline, build both targets and push my-app:${VERSION} (production) and my-app:${VERSION}-debug (debug variant) to your registry. The debug image is never deployed to production by default, but it exists and is ready to be used with kubectl debug --copy-to when needed.
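In a pipeline, that typically reduces to two builds against the same Dockerfile, one per stage — the registry name and VERSION variable here are placeholders:

```shell
# Build and push the production image from the "production" stage
docker build --target production -t registry.example.com/my-app:${VERSION} .
docker push registry.example.com/my-app:${VERSION}

# Build and push the debug variant from the "debug" stage; both stages
# copy the identical binary from the shared builder stage, so behavior
# differences between the two images come only from the base layer
docker build --target debug -t registry.example.com/my-app:${VERSION}-debug .
docker push registry.example.com/my-app:${VERSION}-debug
```

Because both targets share the builder stage, the second build reuses the cached compilation and adds little CI time.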
Security Considerations for Debug Variants
Debug image variants defeat much of the security benefit of distroless if they are used in production, even temporarily. Track usage carefully: log when debug images are deployed, require explicit approval, and ensure they are removed after the debugging session. In regulated environments, consider whether deploying a debug variant to production namespaces is permitted by your security policy — in many cases it is not, and you must use ephemeral containers (which add a debug process to the pod without modifying the application image) instead.
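One way to enforce the "never in production by default" rule mechanically is an admission policy. The following sketch uses Kyverno — a policy engine this article has not otherwise assumed — and matches the :debug tag convention of the Google distroless images shown above; the namespace name is a placeholder:

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-debug-image-variants
spec:
  validationFailureAction: Enforce
  rules:
    - name: no-debug-tags-in-production
      match:
        any:
          - resources:
              kinds: ["Pod"]
              namespaces: ["production"]
      validate:
        message: "Debug image variants are not permitted in production."
        pattern:
          spec:
            containers:
              - image: "!*:debug"
```

A rule like this only catches :debug tags; if your registry uses a ${VERSION}-debug suffix convention instead, you would need an additional rule matching that pattern.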
Option 4: cdebug
cdebug is an open-source CLI tool that simplifies distroless debugging by wrapping kubectl debug with more ergonomic defaults and additional capabilities. Its primary value is in making ephemeral container debugging feel like a native shell experience:
# Install
brew install cdebug
# or: go install github.com/iximiuz/cdebug@latest
# Debug a running pod
cdebug exec -it my-pod
# Specify a namespace and container
cdebug exec -it -n production my-pod -c my-container
# Use a specific debug image
cdebug exec -it my-pod --image=nicolaka/netshoot
What cdebug adds over raw kubectl debug:
- Automatic filesystem chroot. cdebug exec automatically sets the filesystem root of the debug container to the target container’s filesystem, so you browse / and see the application’s files — not the debug image’s files. This addresses the most common friction point with kubectl debug.
- Docker integration. cdebug exec works identically for Docker containers (cdebug exec -it <container-id>), making it the same muscle memory for local and cluster debugging.
- No RBAC complications for Docker-based local development — useful for developer workflows before the code reaches Kubernetes.
The tradeoff: cdebug is a third-party dependency and requires installation. In environments with strict tooling policies (regulated industries, air-gapped clusters), it may not be an option. In those cases, the raw kubectl debug workflow with /proc/1/root filesystem navigation is the baseline.
Option 5: Node-Level Debugging
When everything else fails — the pod is in CrashLoopBackOff too fast to attach to, the issue is a kernel-level problem, or you need tools like strace that require elevated privileges — node-level debugging gives you direct access to the container’s processes from the host node.
kubectl debug node/ creates a privileged pod on the target node that mounts the node’s root filesystem under /host and runs in the host’s PID, network, and IPC namespaces:
kubectl debug node/my-node-name \
-it \
--image=nicolaka/netshoot
From this privileged pod, you can use nsenter to enter the namespaces of any container running on the node:
# Find the container's PID on the node (from within the node debug pod).
# crictl lives on the node's filesystem, which is mounted at /host,
# so run it via chroot if the debug image does not ship it
chroot /host crictl ps | grep my-container
chroot /host crictl inspect <container-id> | grep -i pid
# Enter the container's network namespace with nsenter. Omitting --mount
# keeps the debug image's binaries available, since the distroless
# container's own filesystem has no tools to run
nsenter --target <pid> --net -- ss -tlnp