Description
Add hostNetwork and dnsPolicy fields to config.webhook in DevWorkspaceOperatorConfig, allowing clusters that use a custom CNI (e.g. Cilium on EKS) to configure the webhook server pod to bind to the node network so it remains reachable from the managed control plane.
Additional context
On EKS clusters running Cilium (or any custom CNI where pod CIDRs are not routable from the managed control plane), the API server cannot reach webhook server pods via their pod IP. This breaks all admission webhooks with errors like:
failed calling webhook "validate-exec.devworkspace-controller.svc":
Post "https://devworkspace-webhookserver.devworkspace-controller.svc:443/...":
dial tcp <pod-ip>:8443: connect: no route to host
The fix is to set hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet on the webhook server Deployment. However, the operator currently reconciles this Deployment without exposing these fields — meaning any external patch (ArgoCD, kubectl) is overwritten on every controller restart, creating a permanent race condition.
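For context, this is the shape of the change needed on the webhook server Deployment's pod spec (a minimal sketch; the Deployment name and namespace are illustrative, taken from the error message above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: devworkspace-webhookserver        # illustrative; matches the service name in the error above
  namespace: devworkspace-controller
spec:
  template:
    spec:
      # Bind the pod to the node network so the managed control plane can reach it
      hostNetwork: true
      # Keep resolving cluster-internal names (e.g. the webhook Service) while on the host network
      dnsPolicy: ClusterFirstWithHostNet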
Solution
Extend config.webhook in OperatorConfiguration with two new optional fields, mirroring the existing pattern for nodeSelector and tolerations:
// HostNetwork controls whether the webhook server pod uses the host's
// network namespace. Required on clusters where the control plane cannot
// reach pod-CIDR IPs (e.g. EKS with Cilium CNI).
// Defaults to false.
// +optional
HostNetwork bool `json:"hostNetwork,omitempty"`
// DNSPolicy sets the DNS policy for the webhook server pod.
// Should be set to ClusterFirstWithHostNet when HostNetwork is true.
// Defaults to ClusterFirst.
// +optional
DNSPolicy corev1.DNSPolicy `json:"dnsPolicy,omitempty"`
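A cluster administrator would then opt in via DevWorkspaceOperatorConfig, for example (sketch; metadata values are illustrative):

apiVersion: controller.devfile.io/v1alpha1
kind: DevWorkspaceOperatorConfig
metadata:
  name: devworkspace-operator-config      # illustrative
  namespace: devworkspace-controller      # illustrative; the operator's namespace
config:
  webhook:
    hostNetwork: true
    dnsPolicy: ClusterFirstWithHostNet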
Prior Art
The operator already exposes nodeSelector and tolerations under
config.webhook for the same class of infrastructure-level pod
placement concerns. This PR extends that pattern consistently.
config.webhook reference:
https://github.com/devfile/devworkspace-operator/blob/main/deploy/deployment/kubernetes/combined.yaml
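For comparison, the existing infrastructure-level fields under config.webhook are set the same way (illustrative values):

config:
  webhook:
    nodeSelector:
      kubernetes.io/os: linux             # illustrative value
    tolerations:
      - key: dedicated                    # illustrative value
        operator: Equal
        value: infra
        effect: NoSchedule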
Testing
With hostNetwork: true and dnsPolicy: ClusterFirstWithHostNet set, webhooks remain reachable from the control plane after a controller restart.
Related Issues
aws/containers-roadmap#2744