[OCPNODE-4516]: Migrate testcase 67564 from OTP to origin #31161
BhargaviGudi wants to merge 1 commit into
Conversation
Pipeline controller notification: For optional jobs, comment. This repository is configured in: automatic mode
Skipping CI for Draft Pull Request.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
Walkthrough: Adds a disruptive E2E test that verifies node drain blocking, plus supporting test helpers.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Caution: Pre-merge checks failed. Please resolve all errors before merging. Addressing warnings is optional.
❌ Failed checks (1 error, 1 warning)
✅ Passed checks (10 passed)
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: BhargaviGudi. The full list of commands accepted by this bot can be found here. Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing
Actionable comments posted: 4
🧹 Nitpick comments (1)
test/extended/node/node_e2e/node.go (1)
222-234: ⚡ Quick win: Use `wait.PollUntilContextTimeout` and reference `replicas` instead of the hardcoded `6`.

`wait.Poll` is deprecated in favor of the context-aware variant `wait.PollUntilContextTimeout` in `k8s.io/apimachinery`. Also, `deploy.Status.ReadyReplicas == 6` ignores the local `replicas` variable defined at the start of this test, so future changes to the replica count will silently drift.

♻️ Suggested refactor
```diff
-	err = wait.Poll(3*time.Second, 5*time.Minute, func() (bool, error) {
-		deploy, pollErr := oc.KubeClient().AppsV1().Deployments(namespace).Get(ctx, "hello-openshift", metav1.GetOptions{})
+	err = wait.PollUntilContextTimeout(ctx, 3*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
+		deploy, pollErr := oc.KubeClient().AppsV1().Deployments(namespace).Get(ctx, "hello-openshift", metav1.GetOptions{})
 		if pollErr != nil {
 			e2e.Logf("Error getting deployment: %v", pollErr)
 			return false, nil
 		}
-		if deploy.Status.ReadyReplicas == 6 {
+		if deploy.Status.ReadyReplicas == replicas {
 			e2e.Logf("Deployment is ready with %d replicas", deploy.Status.ReadyReplicas)
 			return true, nil
 		}
-		e2e.Logf("Waiting for deployment, ready replicas: %d/6", deploy.Status.ReadyReplicas)
+		e2e.Logf("Waiting for deployment, ready replicas: %d/%d", deploy.Status.ReadyReplicas, replicas)
 		return false, nil
 	})
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/extended/node/node_e2e/node.go` around lines 222 - 234, Replace the deprecated wait.Poll call with the context-aware wait.PollUntilContextTimeout variant and use the local replicas variable instead of the hardcoded 6: change the wait invocation to wait.PollUntilContextTimeout(ctx, 3*time.Second, 5*time.Minute, func(...) { ... }) (or the exact k8s signature) and inside the closure compare deploy.Status.ReadyReplicas == replicas; keep the same oc.KubeClient().AppsV1().Deployments(namespace).Get(...) call and existing logging but update messages to reference replicas so the check follows the test's replicas variable.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@test/extended/node/node_e2e/node.go`:
- Around line 259-262: The pod-count assertion is wrong because
strings.Split(podsInWorker, " ") yields [""] for empty output; update the check
after the oc.AsAdmin().WithoutNamespace().Run(...).Output() call so it verifies
podsInWorker is non-empty or uses strings.Fields(podsInWorker) and asserts
len(...) > 0 (e.g., replace len(strings.Split(podsInWorker, " ")) check with
len(strings.Fields(podsInWorker)) > 0 or
o.Expect(podsInWorker).NotTo(o.BeEmpty())), referencing the podsInWorker
variable and the oc.AsAdmin().WithoutNamespace().Run(...) call that produces it
and keeping the existing o.Expect(err).NotTo(o.HaveOccurred()).
- Around line 165-167: This test assumes dedicated worker nodes; add a topology
guard (skip) before this g.It("[OTP] node's drain...") so it does not run on
SNO, TNF or TNA topologies — follow the pattern used in the existing BeforeEach
MicroShift guard: detect cluster topology (the same helper used elsewhere in the
file) and call Skipf when topology is SingleNode (SNO), TNF, or TNA so
GetSingleWorkerNode won't return the only schedulable host; place the guard at
the start of the test or in the surrounding BeforeEach to ensure the drain logic
(and GetSingleWorkerNode usage) is not exercised on those topologies.
- Around line 264-267: The current check reads .status.conditions[0].status
immediately which can be empty; instead poll until the DisruptionAllowed
condition exists and then assert its status is "False". Replace the single
immediate get that sets pdbStatus with a short retry/poll (e.g.,
wait.PollImmediate or Ginkgo Eventually) that runs
oc.AsAdmin().WithoutNamespace().Run("get").Args("poddisruptionbudget", "my-pdb",
"-n", namespace,
"-o=jsonpath={.status.conditions[?(@.type==\"DisruptionAllowed\")].status}")
until the jsonpath returns a non-empty value, then assert that the returned
value equals "False" (avoid strings.Contains on possibly empty string).
In `@test/extended/node/node_utils.go`:
- Around line 754-767: The WaitClusterOperatorAvailable helper currently can
return false positives when availableCOStatus is empty and uses an excessive
120-minute timeout; update WaitClusterOperatorAvailable to verify the output is
non-empty and that every token equals "True" (or, better, replace the jsonpath
approach by using the typed configv1 ClusterOperator client to iterate all
ClusterOperators and assert each has an Available condition with
Status==configv1.ConditionTrue), and reduce the timeout constant from
120*time.Minute to a shorter cap (e.g., 30–45 minutes) so
wait.PollUntilContextTimeout enforces the tighter limit; adjust the polling
closure (where availableCOStatus, err :=
oc.AsAdmin().WithoutNamespace().Run("get").Args(...).Output() is used) to return
false on empty output or any non-True condition and surface real errors via
waitErr.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: edfa1ee0-5c80-4ee0-9bdd-d0a819f8f1bc
📒 Files selected for processing (2)
test/extended/node/node_e2e/node.go, test/extended/node/node_utils.go
Force-pushed from bbcfd5b to f6be5a1
/pipeline required
Scheduling required tests: |
Force-pushed from 656bc72 to b1da642
Scheduling required tests: |
Force-pushed from b1da642 to d360edd
Scheduling required tests: |
Migrate testcase OCP-67564 from OTP to origin