
[OCPNODE-4516]: Migrate testcase 67564 from OTP to origin#31161

Open
BhargaviGudi wants to merge 1 commit into openshift:main from BhargaviGudi:migrate-67564

Conversation

@BhargaviGudi
Contributor

@BhargaviGudi BhargaviGudi commented May 12, 2026

Migrate testcase OCP-67564 from OTP to origin

  Will run 1 of 1 specs
  ------------------------------
  [sig-node] [Jira:Node/Kubelet] Kubelet, CRI-O, CPU manager [OTP] node's drain should block when PodDisruptionBudget minAvailable equals 100 percentage and selector is empty [Disruptive] [OCP-67564]
  github.com/openshift/origin/test/extended/node/node_e2e/node.go:170
    STEP: Creating a kubernetes client @ 05/12/26 18:09:30.969
  I0512 18:09:30.970449 1575364 discovery.go:214] Invalidating discovery information
  I0512 18:09:30.971109 1575364 framework.go:2330] [precondition-check] checking if cluster is MicroShift
  I0512 18:09:31.311301 1575364 framework.go:2353] IsMicroShiftCluster: microshift-version configmap not found, not MicroShift
  I0512 18:09:36.863684 1575364 client.go:293] configPath is now "/tmp/configfile852360194"
  I0512 18:09:36.863726 1575364 client.go:368] The user is now "e2e-test-node-5r77j-user"
  I0512 18:09:36.863734 1575364 client.go:370] Creating project "e2e-test-node-5r77j"
  I0512 18:09:37.208074 1575364 client.go:378] Waiting on permissions in project "e2e-test-node-5r77j" ...
  I0512 18:09:39.468148 1575364 client.go:407] DeploymentConfig capability is enabled, adding 'deployer' SA to the list of default SAs
  I0512 18:09:39.847683 1575364 client.go:422] Waiting for ServiceAccount "default" to be provisioned...
  I0512 18:09:40.630808 1575364 client.go:422] Waiting for ServiceAccount "builder" to be provisioned...
  I0512 18:09:41.631735 1575364 client.go:422] Waiting for ServiceAccount "deployer" to be provisioned...
  I0512 18:09:42.249637 1575364 client.go:432] Waiting for RoleBinding "system:image-pullers" to be provisioned...
  I0512 18:09:42.632416 1575364 client.go:432] Waiting for RoleBinding "system:image-builders" to be provisioned...
  I0512 18:09:43.012047 1575364 client.go:432] Waiting for RoleBinding "system:deployers" to be provisioned...
  I0512 18:09:44.541813 1575364 client.go:469] Project "e2e-test-node-5r77j" has been fully provisioned.
    STEP: Create a deployment with 6 replicas @ 05/12/26 18:09:44.541
    STEP: Wait for deployment to be ready @ 05/12/26 18:09:44.798
  I0512 18:09:45.174059 1575364 node.go:245] Waiting for deployment, ready replicas: 0/6
  I0512 18:09:48.171823 1575364 node.go:245] Waiting for deployment, ready replicas: 4/6
  I0512 18:09:51.056030 1575364 node.go:242] Deployment is ready with 6 replicas
    STEP: Create PodDisruptionBudget with 100% minAvailable @ 05/12/26 18:09:51.056
    STEP: Get a single worker node @ 05/12/26 18:09:51.316
  I0512 18:09:51.924434 1575364 node_utils.go:750] Worker Node Name is ip-10-0-122-55.ec2.internal
  I0512 18:09:51.924513 1575364 node.go:270] Selected worker node: ip-10-0-122-55.ec2.internal
    STEP: Obtain the pods running on node ip-10-0-122-55.ec2.internal @ 05/12/26 18:09:51.924
    STEP: Make sure that PDB's DisruptionAllowed condition is False @ 05/12/26 18:09:54.23
    STEP: Drain the node ip-10-0-122-55.ec2.internal @ 05/12/26 18:09:55.013
  I0512 18:10:40.574192 1575364 client.go:1094] Error running oc --kubeconfig=/home/bgudi/Downloads/cluster/4.21/auth/kubeconfig adm drain ip-10-0-122-55.ec2.internal --ignore-daemonsets --delete-emptydir-data --timeout=30s:
  StdOut>
  node/ip-10-0-122-55.ec2.internal cordoned
  Warning: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5f9pt, openshift-cluster-node-tuning-operator/tuned-wgjfr, openshift-dns/dns-default-l9dxp, openshift-dns/node-resolver-lx4j6, openshift-image-registry/node-ca-d8mcx, openshift-ingress-canary/ingress-canary-4zjbk, openshift-insights/insights-runtime-extractor-hcb6j, openshift-machine-config-operator/machine-config-daemon-xf5f4, openshift-monitoring/node-exporter-2497d, openshift-multus/multus-99wzl, openshift-multus/multus-additional-cni-plugins-g47p5, openshift-multus/network-metrics-daemon-q9zdn, openshift-network-diagnostics/network-check-target-xdldx, openshift-network-operator/iptables-alerter-2tz5k, openshift-ovn-kubernetes/ovnkube-node-m9fcz
  evicting pod openshift-network-console/networking-console-plugin-77b766d84f-phbjp
  evicting pod openshift-monitoring/alertmanager-main-0
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  evicting pod openshift-ingress/router-default-5f887fc9-87j9j
  evicting pod openshift-image-registry/image-registry-7959dc7696-swqgf
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod openshift-monitoring/prometheus-operator-admission-webhook-fbcddc5f7-jgdjq
  evicting pod openshift-monitoring/prometheus-k8s-0
  evicting pod openshift-monitoring/monitoring-plugin-56ccf75868-tfznk
  evicting pod openshift-monitoring/thanos-querier-5cb4bdfbb5-966pg
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  pod/prometheus-operator-admission-webhook-fbcddc5f7-jgdjq evicted
  pod/monitoring-plugin-56ccf75868-tfznk evicted
  pod/networking-console-plugin-77b766d84f-phbjp evicted
  pod/thanos-querier-5cb4bdfbb5-966pg evicted
  pod/alertmanager-main-0 evicted
  pod/prometheus-k8s-0 evicted
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  pod/image-registry-7959dc7696-swqgf evicted
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  There are pending pods in node "ip-10-0-122-55.ec2.internal" when an error occurred: [error when waiting for pod "router-default-5f887fc9-87j9j" in namespace "openshift-ingress" to terminate: context deadline exceeded, error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j": global timeout reached: 30s, error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j": global timeout reached: 30s]
  pod/hello-openshift-8db55bd55-gwt6s
  pod/hello-openshift-8db55bd55-prg47
  pod/router-default-5f887fc9-87j9j
  error: unable to drain node "ip-10-0-122-55.ec2.internal" due to error: [error when waiting for pod "router-default-5f887fc9-87j9j" in namespace "openshift-ingress" to terminate: context deadline exceeded, error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j": global timeout reached: 30s, error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j": global timeout reached: 30s], continuing command...
  There are pending nodes to be drained:
   ip-10-0-122-55.ec2.internal
  error when waiting for pod "router-default-5f887fc9-87j9j" in namespace "openshift-ingress" to terminate: context deadline exceeded
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j": global timeout reached: 30s
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j": global timeout reached: 30s
  StdErr>
  node/ip-10-0-122-55.ec2.internal cordoned
  Warning: ignoring DaemonSet-managed Pods: openshift-cluster-csi-drivers/aws-ebs-csi-driver-node-5f9pt, openshift-cluster-node-tuning-operator/tuned-wgjfr, openshift-dns/dns-default-l9dxp, openshift-dns/node-resolver-lx4j6, openshift-image-registry/node-ca-d8mcx, openshift-ingress-canary/ingress-canary-4zjbk, openshift-insights/insights-runtime-extractor-hcb6j, openshift-machine-config-operator/machine-config-daemon-xf5f4, openshift-monitoring/node-exporter-2497d, openshift-multus/multus-99wzl, openshift-multus/multus-additional-cni-plugins-g47p5, openshift-multus/network-metrics-daemon-q9zdn, openshift-network-diagnostics/network-check-target-xdldx, openshift-network-operator/iptables-alerter-2tz5k, openshift-ovn-kubernetes/ovnkube-node-m9fcz
  evicting pod openshift-network-console/networking-console-plugin-77b766d84f-phbjp
  evicting pod openshift-monitoring/alertmanager-main-0
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  evicting pod openshift-ingress/router-default-5f887fc9-87j9j
  evicting pod openshift-image-registry/image-registry-7959dc7696-swqgf
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod openshift-monitoring/prometheus-operator-admission-webhook-fbcddc5f7-jgdjq
  evicting pod openshift-monitoring/prometheus-k8s-0
  evicting pod openshift-monitoring/monitoring-plugin-56ccf75868-tfznk
  evicting pod openshift-monitoring/thanos-querier-5cb4bdfbb5-966pg
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  pod/prometheus-operator-admission-webhook-fbcddc5f7-jgdjq evicted
  pod/monitoring-plugin-56ccf75868-tfznk evicted
  pod/networking-console-plugin-77b766d84f-phbjp evicted
  pod/thanos-querier-5cb4bdfbb5-966pg evicted
  pod/alertmanager-main-0 evicted
  pod/prometheus-k8s-0 evicted
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
  pod/image-registry-7959dc7696-swqgf evicted
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-gwt6s
  evicting pod e2e-test-node-5r77j/hello-openshift-8db55bd55-prg47
  There are pending pods in node "ip-10-0-122-55.ec2.internal" when an error occurred: [error when waiting for pod "router-default-5f887fc9-87j9j" in namespace "openshift-ingress" to terminate: context deadline exceeded, error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j": global timeout reached: 30s, error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j": global timeout reached: 30s]
  pod/hello-openshift-8db55bd55-gwt6s
  pod/hello-openshift-8db55bd55-prg47
  pod/router-default-5f887fc9-87j9j
  error: unable to drain node "ip-10-0-122-55.ec2.internal" due to error: [error when waiting for pod "router-default-5f887fc9-87j9j" in namespace "openshift-ingress" to terminate: context deadline exceeded, error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j": global timeout reached: 30s, error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j": global timeout reached: 30s], continuing command...
  There are pending nodes to be drained:
   ip-10-0-122-55.ec2.internal
  error when waiting for pod "router-default-5f887fc9-87j9j" in namespace "openshift-ingress" to terminate: context deadline exceeded
  error when evicting pods/"hello-openshift-8db55bd55-gwt6s" -n "e2e-test-node-5r77j": global timeout reached: 30s
  error when evicting pods/"hello-openshift-8db55bd55-prg47" -n "e2e-test-node-5r77j": global timeout reached: 30s

    STEP: Verify that the pods were not drained from the node @ 05/12/26 18:10:40.574
  node/ip-10-0-122-55.ec2.internal uncordoned
  I0512 18:10:46.245171 1575364 client.go:689] Deleted {user.openshift.io/v1, Resource=users  e2e-test-node-5r77j-user}, err: <nil>
  I0512 18:10:46.486835 1575364 client.go:689] Deleted {oauth.openshift.io/v1, Resource=oauthclients  e2e-client-e2e-test-node-5r77j}, err: <nil>
  I0512 18:10:47.146367 1575364 client.go:689] Deleted {oauth.openshift.io/v1, Resource=oauthaccesstokens  sha256~Asr0jX2TaAo3ybwiQNkuEHsOhgX9YZ90sDxkPK5-_CA}, err: <nil>
    STEP: Destroying namespace "e2e-test-node-5r77j" for this suite. @ 05/12/26 18:10:47.146
  • [76.679 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 76.679 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped

Summary by CodeRabbit

  • Tests
    • Added an end-to-end test verifying node drain is blocked when a PodDisruptionBudget requires 100% availability; confirms drain fails with expected PDB-related output and that pods on the drained node remain unchanged.
    • Added test helpers to reliably select a single worker node and to wait until cluster operators report Available, improving test stability and readiness checks.

@openshift-merge-bot
Contributor

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests are triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller automatically detects which contexts are required and uses /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode

@openshift-ci openshift-ci Bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 12, 2026
@openshift-ci
Contributor

openshift-ci Bot commented May 12, 2026

Skipping CI for Draft Pull Request.
If you want CI signal for your change, please convert it to an actual PR.
You can still manually trigger a test run with /test all

@openshift-ci openshift-ci Bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 12, 2026
@coderabbitai

coderabbitai Bot commented May 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a disruptive E2E that verifies oc adm drain is blocked by a PodDisruptionBudget with minAvailable: 100% and empty selector, plus two test helpers: GetSingleWorkerNode(ctx, oc) (string, error) and WaitClusterOperatorAvailable(ctx, oc).

Node drain blocking E2E + supporting test helpers

  • Worker node selection helper (test/extended/node/node_utils.go): Adds GetSingleWorkerNode(ctx context.Context, oc *exutil.CLI) (string, error) — lists nodes with the label node-role.kubernetes.io/worker, errors if none are found, and logs and returns the first worker node's name.
  • ClusterOperator readiness poll helper (test/extended/node/node_utils.go): Adds WaitClusterOperatorAvailable(ctx context.Context, oc *exutil.CLI) — polls oc get clusteroperator with a JSONPath that extracts every Available condition's .status value and waits until each extracted status equals "True", failing the expectation on timeout.
  • Disruptive E2E: drain blocked by PDB (test/extended/node/node_e2e/node.go): Adds typed Kubernetes API imports and a new [OTP] Ginkgo test that skips on SingleReplica/External control-plane topologies; creates a 6-replica Deployment and waits for readiness; creates a PodDisruptionBudget with minAvailable: "100%" and an empty selector; selects a single worker node and records the pods on it; polls until the PDB's DisruptionAllowed condition is present and equals "False"; runs oc adm drain --ignore-daemonsets --delete-emptydir-data --timeout=30s expecting failure with PDB-violation output substrings; re-fetches pods and asserts the pod set on the node is unchanged.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


Caution

Pre-merge checks failed

Please resolve all errors before merging. Addressing warnings is optional.


❌ Failed checks (1 error, 1 warning)

  • Stable And Deterministic Test Names: ❌ Error. g.By() statements at lines 273 and 296 include dynamic node names via the workerNode variable, which changes between test runs. Resolution: remove the workerNode variable from g.By() descriptions and use static text such as "Obtain the pods running on the selected node" instead of concatenating the node name.
  • Test Structure And Quality: ⚠️ Warning. The test fails requirement #4: 10 of 14 assertions lack meaningful failure messages (lines 175, 231, 265, 270, 275, 276, 302, 303, 307, 308 violate the pattern seen in lines 248, 293, 294, 301). Resolution: add meaningful failure messages to those assertions, following the existing pattern of messages describing what each failure indicates.
✅ Passed checks (10 passed)
  • Description Check: ✅ Passed. Check skipped; CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title accurately describes the main change: migrating test case OCP-67564 from OTP to origin, which is reflected in the added Ginkgo test.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, above the required 80.00% threshold.
  • Linked Issues Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check: ✅ Passed. Check skipped because no linked issues were found for this pull request.
  • MicroShift Test Compatibility: ✅ Passed. The new test is protected by the parent Describe's BeforeEach hook, which skips all tests on MicroShift clusters via an exutil.IsMicroShiftCluster() check with g.Skip().
  • Single Node OpenShift (SNO) Test Compatibility: ✅ Passed. The new test includes proper SNO protection via an explicit ControlPlaneTopology check that skips on SingleReplica/External topologies; no multi-node assumptions detected.
  • Topology-Aware Scheduling Compatibility: ✅ Passed. The test skips SNO/External, hardcodes replicas, has no anti-affinity or control-plane selectors, and handles missing workers with proper errors.
  • OTE Binary Stdout Contract: ✅ Passed. No process-level stdout writes detected; all logging uses framework.Logf (safe for the OTE contract). The test is properly encapsulated in a g.It() block, and helper functions are only called from test blocks.
  • IPv6 And Disconnected Network Test Compatibility: ✅ Passed. The test contains no IPv4-specific assumptions, IP parsing logic, or external connectivity requirements; the image reference uses a sha256 digest and is standard for OpenShift test suites.

@openshift-ci
Contributor

openshift-ci Bot commented May 12, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: BhargaviGudi
Once this PR has been reviewed and has the lgtm label, please assign rphillips for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.

@BhargaviGudi BhargaviGudi changed the title Migrate testcase 67564 from OTP to origin [OCPNODE-4516]: Migrate testcase 67564 from OTP to origin May 12, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
test/extended/node/node_e2e/node.go (1)

222-234: ⚡ Quick win

Use wait.PollUntilContextTimeout and reference replicas instead of hardcoded 6.

wait.Poll is deprecated in favor of the context-aware variant wait.PollUntilContextTimeout in k8s.io/apimachinery. Also, deploy.Status.ReadyReplicas == 6 ignores the local replicas variable defined at the start of this test, so future changes to the replica count will silently drift.

♻️ Suggested refactor
-		err = wait.Poll(3*time.Second, 5*time.Minute, func() (bool, error) {
-			deploy, pollErr := oc.KubeClient().AppsV1().Deployments(namespace).Get(ctx, "hello-openshift", metav1.GetOptions{})
+		err = wait.PollUntilContextTimeout(ctx, 3*time.Second, 5*time.Minute, true, func(ctx context.Context) (bool, error) {
+			deploy, pollErr := oc.KubeClient().AppsV1().Deployments(namespace).Get(ctx, "hello-openshift", metav1.GetOptions{})
 			if pollErr != nil {
 				e2e.Logf("Error getting deployment: %v", pollErr)
 				return false, nil
 			}
-			if deploy.Status.ReadyReplicas == 6 {
+			if deploy.Status.ReadyReplicas == replicas {
 				e2e.Logf("Deployment is ready with %d replicas", deploy.Status.ReadyReplicas)
 				return true, nil
 			}
-			e2e.Logf("Waiting for deployment, ready replicas: %d/6", deploy.Status.ReadyReplicas)
+			e2e.Logf("Waiting for deployment, ready replicas: %d/%d", deploy.Status.ReadyReplicas, replicas)
 			return false, nil
 		})
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@test/extended/node/node_e2e/node.go` around lines 222 - 234, Replace the
deprecated wait.Poll call with the context-aware wait.PollUntilContextTimeout
variant and use the local replicas variable instead of the hardcoded 6: change
the wait invocation to wait.PollUntilContextTimeout(ctx, 3*time.Second,
5*time.Minute, func(...) { ... }) (or the exact k8s signature) and inside the
closure compare deploy.Status.ReadyReplicas == replicas; keep the same
oc.KubeClient().AppsV1().Deployments(namespace).Get(...) call and existing
logging but update messages to reference replicas so the check follows the
test's replicas variable.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@test/extended/node/node_e2e/node.go`:
- Around line 259-262: The pod-count assertion is wrong because
strings.Split(podsInWorker, " ") yields [""] for empty output; update the check
after the oc.AsAdmin().WithoutNamespace().Run(...).Output() call so it verifies
podsInWorker is non-empty or uses strings.Fields(podsInWorker) and asserts
len(...) > 0 (e.g., replace len(strings.Split(podsInWorker, " ")) check with
len(strings.Fields(podsInWorker)) > 0 or
o.Expect(podsInWorker).NotTo(o.BeEmpty())), referencing the podsInWorker
variable and the oc.AsAdmin().WithoutNamespace().Run(...) call that produces it
and keeping the existing o.Expect(err).NotTo(o.HaveOccurred()).
- Around line 165-167: This test assumes dedicated worker nodes; add a topology
guard (skip) before this g.It("[OTP] node's drain...") so it does not run on
SNO, TNF or TNA topologies — follow the pattern used in the existing BeforeEach
MicroShift guard: detect cluster topology (the same helper used elsewhere in the
file) and call Skipf when topology is SingleNode (SNO), TNF, or TNA so
GetSingleWorkerNode won't return the only schedulable host; place the guard at
the start of the test or in the surrounding BeforeEach to ensure the drain logic
(and GetSingleWorkerNode usage) is not exercised on those topologies.
- Around line 264-267: The current check reads .status.conditions[0].status
immediately which can be empty; instead poll until the DisruptionAllowed
condition exists and then assert its status is "False". Replace the single
immediate get that sets pdbStatus with a short retry/poll (e.g.,
wait.PollImmediate or Ginkgo Eventually) that runs
oc.AsAdmin().WithoutNamespace().Run("get").Args("poddisruptionbudget", "my-pdb",
"-n", namespace,
"-o=jsonpath={.status.conditions[?(@.type==\"DisruptionAllowed\")].status}")
until the jsonpath returns a non-empty value, then assert that the returned
value equals "False" (avoid strings.Contains on possibly empty string).

In `@test/extended/node/node_utils.go`:
- Around line 754-767: The WaitClusterOperatorAvailable helper currently can
return false positives when availableCOStatus is empty and uses an excessive
120-minute timeout; update WaitClusterOperatorAvailable to verify the output is
non-empty and that every token equals "True" (or, better, replace the jsonpath
approach by using the typed configv1 ClusterOperator client to iterate all
ClusterOperators and assert each has an Available condition with
Status==configv1.ConditionTrue), and reduce the timeout constant from
120*time.Minute to a shorter cap (e.g., 30–45 minutes) so
wait.PollUntilContextTimeout enforces the tighter limit; adjust the polling
closure (where availableCOStatus, err :=
oc.AsAdmin().WithoutNamespace().Run("get").Args(...).Output() is used) to return
false on empty output or any non-True condition and surface real errors via
waitErr.

---

Nitpick comments:
In `@test/extended/node/node_e2e/node.go`:
- Around line 222-234: Replace the deprecated wait.Poll call with the
context-aware wait.PollUntilContextTimeout variant and use the local replicas
variable instead of the hardcoded 6: change the wait invocation to
wait.PollUntilContextTimeout(ctx, 3*time.Second, 5*time.Minute, func(...) { ...
}) (or the exact k8s signature) and inside the closure compare
deploy.Status.ReadyReplicas == replicas; keep the same
oc.KubeClient().AppsV1().Deployments(namespace).Get(...) call and existing
logging but update messages to reference replicas so the check follows the
test's replicas variable.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository YAML (base), Central YAML (inherited)

Review profile: CHILL

Plan: Enterprise

Run ID: edfa1ee0-5c80-4ee0-9bdd-d0a819f8f1bc

📥 Commits

Reviewing files that changed from the base of the PR and between 16bf93d and bbcfd5b.

📒 Files selected for processing (2)
  • test/extended/node/node_e2e/node.go
  • test/extended/node/node_utils.go

@openshift-ci openshift-ci Bot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label May 12, 2026
@BhargaviGudi BhargaviGudi marked this pull request as ready for review May 12, 2026 09:06
@openshift-ci openshift-ci Bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 12, 2026
@BhargaviGudi BhargaviGudi changed the title [OCPNODE-4516]: Migrate testcase 67564 from OTP to origin WIP [OCPNODE-4516]: Migrate testcase 67564 from OTP to origin May 12, 2026
@openshift-ci openshift-ci Bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 12, 2026
@openshift-ci openshift-ci Bot requested review from asahay19 and cpmeadors May 12, 2026 09:06
@BhargaviGudi
Contributor Author

/pipeline required

@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@openshift-merge-bot openshift-merge-bot Bot added the ready-for-human-review Indicates a PR has been reviewed by automated tools and is ready for human review label May 12, 2026
@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi

@BhargaviGudi BhargaviGudi changed the title WIP [OCPNODE-4516]: Migrate testcase 67564 from OTP to origin [OCPNODE-4516]: Migrate testcase 67564 from OTP to origin May 12, 2026
@openshift-ci openshift-ci Bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 12, 2026
@openshift-merge-bot
Contributor

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi


Labels

ready-for-human-review Indicates a PR has been reviewed by automated tools and is ready for human review



2 participants