feat(k8s): Add optional canary Deployment to PipelineStep#313
Conversation
Emit a second Deployment when with_canary is set and replicas > 1, sharing the same ConfigMap and applying emergency_patch to both manifests.

Co-authored-by: Cursor <cursoragent@cursor.com>
    Assembles a k8s deployment by layering these structures on top of the base deployment
    manifest:
    1. deployment_template: provided by the user
    2. the steraming platform specific additions (including the container)
Suggested change:
-    2. the steraming platform specific additions (including the container)
+    2. the streaming platform specific additions (including the container)
    emergency_patch=emergency_patch,
    deployment_name=main_deployment_name,
    replica_count=replicas - 1,
    step_labels=labels,
Should these labels have env: primary added to separate them from the canary deployments?
Indeed it should.
It turns out sbc does not do that, so the main deployment also redeploys the canary.
I guess that is a mistake.
Use disjoint matchLabels (primary vs canary) to avoid overlapping selectors.
Also fix a docstring typo ("streaming").
Co-authored-by: Cursor <cursoragent@cursor.com>
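A hedged sketch of the disjoint-selector idea: a Kubernetes selector matches a pod when every entry in matchLabels is present on the pod with the same value, so distinct `env` values keep the primary and canary selectors from overlapping. The names below are illustrative, not the macro's real API.

```python
def selector_matches(match_labels: dict, pod_labels: dict) -> bool:
    # A selector matches when every matchLabels entry appears on the pod
    # with an identical value.
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

base = {"pipeline": "my-pipeline", "service": "consumer"}
primary_selector = {**base, "env": "primary"}
canary_selector = {**base, "env": "canary"}
canary_pod = {**base, "env": "canary"}

# Disjoint env values: the primary selector must not adopt canary pods.
assert not selector_matches(primary_selector, canary_pod)
assert selector_matches(canary_selector, canary_pod)
```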
    emergency_patch=emergency_patch,
    deployment_name=main_deployment_name,
    replica_count=replicas,
    step_labels={**labels, "env": "primary"},
Bug: The env: primary label is unconditionally added to Kubernetes deployments, which will break updates for existing non-canary deployments due to an immutable field.
Severity: HIGH
Suggested Fix
Remove the addition of the env: primary label from the else block at line 534. The step_labels dictionary should only be assigned labels when add_canary is False. This will ensure the label is only added for canary deployments as intended. Consider adding a test to assert the env label is absent when canaries are disabled.
Prompt for AI Agent
Review the code at the location below. A potential bug has been identified by an AI
agent. Verify if this is a real issue. If it is, propose a fix; if not, explain why it's
not valid.
Location: sentry_streams_k8s/sentry_streams_k8s/pipeline_step.py#L534
Potential issue: The `env: primary` label is unconditionally added to the
`spec.selector.matchLabels`, `metadata.labels`, and `spec.template.metadata.labels` of
all Kubernetes deployments, even when the `with_canary` flag is `False`. Because the
`spec.selector.matchLabels` field is immutable in Kubernetes, attempting to apply this
updated configuration to any existing deployment will be rejected. This forces users who
are not using the canary feature to manually delete and recreate their deployments,
causing a service interruption. The logic in the `else` branch at line 534 incorrectly
adds this label when `add_canary` is `False`.
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Reviewed by Cursor Bugbot for commit 337de6a. Configure here.
    step_labels={**labels, "env": "primary"},
    container=container,
    volumes=volumes,
    )
New selector label breaks existing non-canary deployments
High Severity
The else branch (non-canary path) unconditionally adds "env": "primary" to step_labels, which propagates into spec.selector.matchLabels. Before this PR, the selector only contained pipeline-app, pipeline, and service. Since Kubernetes Deployment selectors are immutable after creation, any existing deployment that was created without the env label will fail to update — kubectl apply (or equivalent) will be rejected by the API server. This affects all deployments, not just those opting into canary. The env label in the selector is only needed when canary splitting is active to differentiate primary from canary pods; for non-canary deployments, the original label set suffices.
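A minimal sketch of the suggested fix, using hypothetical names that mirror the review comments: only widen the label set when canary splitting is active, so the immutable spec.selector.matchLabels of existing non-canary Deployments is left untouched.

```python
def step_labels_for(labels: dict, add_canary: bool) -> dict:
    # Hypothetical helper mirroring the reviewed code, not the real API.
    if add_canary:
        # Canary-enabled path: add the discriminator so primary pods are
        # distinguishable from canary pods.
        return {**labels, "env": "primary"}
    # Non-canary path: return the original labels unchanged, keeping the
    # existing Deployment selector valid.
    return dict(labels)
```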


The `PipelineStep` sentry-kube macro can optionally emit a stable Deployment and a single-replica canary Deployment so pipeline steps can run split replicas while sharing one ConfigMap.

`with_canary` and replica split

When `with_canary` is true and `replicas` is at least two, the main manifest uses `replicas - 1` and the canary uses one replica. The canary resource name gets a `-canary` suffix and carries `env: canary` on deployment metadata labels, pod template labels, and `spec.selector.matchLabels` so callers can target it separately. If `replicas` is one with canary enabled, the macro emits only the main Deployment at full replica count and omits `canary_deployment`.

Return shape and patches

The macro still returns `deployment` and `configmap`. When canary splitting applies, it also returns `canary_deployment`, which Jinja or other renderers should emit as a second manifest. An `emergency_patch`, when provided, is merged into both Deployments.

Made with Cursor
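The replica-split rule described above can be sketched as follows; `split_replicas` is a hypothetical helper for illustration, not the macro's actual API.

```python
def split_replicas(replicas: int, with_canary: bool) -> tuple[int, int]:
    """Return (main_replicas, canary_replicas)."""
    if with_canary and replicas >= 2:
        # The canary always runs exactly one replica; the main
        # Deployment keeps the remainder.
        return replicas - 1, 1
    # Canary disabled, or replicas == 1 with canary enabled: only the
    # main Deployment is emitted, at the full replica count.
    return replicas, 0
```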