OCPNODE-4505: Automation of OCP-57401 #31142
Conversation
Pipeline controller notification: for optional jobs, comment. This repository is configured in automatic mode.

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID:
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Walkthrough
Adds a Ginkgo serial disruptive-longrunning e2e that creates ImageDigestMirrorSet and ImageTagMirrorSet, waits for worker/master MCP rollouts, and reads /etc/containers/registries.conf on a worker node to verify the mirror entries.

Changes
ImageMirrorSet E2E Test & Node Helpers
sequenceDiagram
participant Test as Test Runner
participant API as API Server
participant MCP as MachineConfigPool
participant Node as Worker Node
participant FS as Node Filesystem
Test->>API: create ImageDigestMirrorSet & ImageTagMirrorSet
API->>MCP: MCP spec updated (worker, master)
MCP->>Node: trigger config rollout
Node->>FS: write /etc/containers/registries.conf
Test->>Node: oc exec/read /etc/containers/registries.conf
Test->>Test: verify locations, pull-from-mirror, and blocked entries
Test->>API: delete mirror sets (DeferCleanup)
API->>MCP: MCP spec revert triggers
MCP->>Node: revert config, rollout complete
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 11 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (11 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@test/extended/node/node_e2e/node.go`:
- Around line 331-332: In verifyRegistryBlocked, replace the fixed "j >= i-5"
backward line-limit with a scan that continues until you hit the previous
registry boundary or start of file: starting from j := i-1 loop while j >= 0 and
!strings.Contains(lines[j], "[[registry]]") (stop before the prior
"[[registry]]") and check for strings.Contains(lines[j], "blocked = true"); do
the same for the symmetric forward scan (the other occurrence around lines
340-341) so both searches stop at the neighboring "[[registry]]" block marker
instead of a hard ±5 line window.
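The boundary-aware scan described above can be sketched as a small, self-contained function. This is an illustrative sketch of the suggested fix, not the exact code in `verifyRegistryBlocked`; the helper name and the sample lines are hypothetical.

```go
package main

import (
	"fmt"
	"strings"
)

// registryBlocked reports whether the [[registry]] block containing line i
// has a "blocked = true" entry. It scans backward and forward from i and
// stops at the neighboring "[[registry]]" markers, rather than using a
// fixed ±5-line window that can miss entries in long blocks.
func registryBlocked(lines []string, i int) bool {
	// Backward: stop before the previous "[[registry]]" header.
	for j := i - 1; j >= 0 && !strings.Contains(lines[j], "[[registry]]"); j-- {
		if strings.Contains(lines[j], "blocked = true") {
			return true
		}
	}
	// Forward: stop before the next "[[registry]]" header.
	for j := i + 1; j < len(lines) && !strings.Contains(lines[j], "[[registry]]"); j++ {
		if strings.Contains(lines[j], "blocked = true") {
			return true
		}
	}
	return false
}

func main() {
	conf := []string{
		"[[registry]]",
		`location = "registry.example.com"`,
		"blocked = true",
		"[[registry]]",
		`location = "quay.io"`,
	}
	fmt.Println(registryBlocked(conf, 1)) // first block contains blocked = true
	fmt.Println(registryBlocked(conf, 4)) // second block does not
}
```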
- Around line 182-183: The test uses fixed cluster-scoped mirror-set names
"digest-mirror" and "tag-mirror" which can collide between runs; change the code
that constructs those mirror-set resource Names to append a unique suffix (e.g.,
test run id, timestamp, or random string) so each invocation creates
uniquely-named mirror-sets, and propagate that generated name to all places that
reference "digest-mirror" and "tag-mirror" (the mirror-set creation structs and
any subsequent lookups/assertions) so tests no longer fail with AlreadyExists
from leftover artifacts.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 2dc75fc1-8480-42ae-b540-eab4ad3c4e55
📒 Files selected for processing (1)
test/extended/node/node_e2e/node.go
Scheduling required tests:

initialWorkerSpec := imagepolicy.GetMCPCurrentSpecConfigName(oc, "worker")
initialMasterSpec := imagepolicy.GetMCPCurrentSpecConfigName(oc, "master")

createdIDMS, err := configClient.ImageDigestMirrorSets().Create(ctx, idms, metav1.CreateOptions{})
CodeRabbit pointed out that this could fail with AlreadyExists. Why not use Apply() rather than Create()? Or check whether the resource exists before creating it.
Yeah, I addressed CodeRabbit's comments. Resource names now include a random suffix via utilrand.String(5) (e.g., digest-mirror-xk4qz), so AlreadyExists collisions from leftover artifacts won't happen.
Using Apply() would work too, but it could silently overwrite a resource from a concurrent test run rather than failing explicitly, which is less safe. With unique names + Create(), we get a clean resource every time and would catch genuine conflicts.
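For illustration, here is a minimal, dependency-free sketch of the naming scheme described above. The `randSuffix` helper is a local stand-in for `k8s.io/apimachinery/pkg/util/rand.String` (the alphabet shown is an assumption about that package's character set, which deliberately omits vowels and look-alike characters); the real test would call `utilrand.String(5)` directly.

```go
package main

import (
	"fmt"
	"math/rand"
)

// randSuffix mimics utilrand.String(n): a short random string drawn from a
// restricted lowercase-alphanumeric alphabet, safe for Kubernetes resource
// names. Local stand-in only, to keep this sketch self-contained.
func randSuffix(n int) string {
	const alphanums = "bcdfghjklmnpqrstvwxz2456789"
	b := make([]byte, n)
	for i := range b {
		b[i] = alphanums[rand.Intn(len(alphanums))]
	}
	return string(b)
}

func main() {
	// Unique mirror-set names per test run, e.g. "digest-mirror-xk4qz",
	// so Create() never hits AlreadyExists from leftover artifacts.
	idmsName := fmt.Sprintf("digest-mirror-%s", randSuffix(5))
	itmsName := fmt.Sprintf("tag-mirror-%s", randSuffix(5))
	fmt.Println(idmsName, itmsName)
}
```

The generated name is then propagated to every lookup and assertion that previously hard-coded "digest-mirror" or "tag-mirror".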
|
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: asahay19
The full list of commands accepted by this bot can be found here.

Details
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/extended/node/node_utils.go (1)
746-764: ⚡ Quick win
`GetFirstReadyWorkerNode` is inconsistent with the rest of the file: no context, N+1 subprocess calls.

Every other function in this file takes a `context.Context` and uses the Kubernetes client directly. `GetFirstReadyWorkerNode` instead:
- Takes no context — callers cannot propagate cancellation or timeouts.
- Issues one subprocess call to list node names, then one more per node to read its `Ready` condition — the existing `getNodesByLabel` + `isNodeInReadyState` helpers already cover this in a single API round-trip.

Consider implementing it with the existing helpers:
♻️ Proposed refactor
-func GetFirstReadyWorkerNode(oc *exutil.CLI) string {
-	nodeNames, err := oc.AsAdmin().WithoutNamespace().Run("get").Args(
-		"nodes", "-l", "node-role.kubernetes.io/worker",
-		"-o=jsonpath={.items[*].metadata.name}",
-	).Output()
-	o.Expect(err).NotTo(o.HaveOccurred())
-	workers := strings.Fields(nodeNames)
-	o.Expect(workers).NotTo(o.BeEmpty(), "no worker nodes found")
-
-	for _, w := range workers {
-		status, statusErr := oc.AsAdmin().WithoutNamespace().Run("get").Args(
-			"nodes", w,
-			"-o=jsonpath={.status.conditions[?(@.type=='Ready')].status}",
-		).Output()
-		if statusErr == nil && status == "True" {
-			return w
-		}
-	}
-	return ""
-}
+func GetFirstReadyWorkerNode(ctx context.Context, oc *exutil.CLI) string {
+	workers, err := getNodesByLabel(ctx, oc, "node-role.kubernetes.io/worker")
+	o.Expect(err).NotTo(o.HaveOccurred())
+	o.Expect(workers).NotTo(o.BeEmpty(), "no worker nodes found")
+
+	for _, w := range workers {
+		if isNodeInReadyState(&w) {
+			return w.Name
+		}
+	}
+	o.Expect(false).To(o.BeTrue(), "no Ready worker node found")
+	return "" // unreachable
+}

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/extended/node/node_utils.go` around lines 746 - 764, GetFirstReadyWorkerNode should accept a context.Context and use the existing Kubernetes client helpers instead of spawning kubectl per-node: change the signature of GetFirstReadyWorkerNode to take ctx context.Context, call getNodesByLabel(ctx, oc, "node-role.kubernetes.io/worker") to get nodes, then iterate and call isNodeInReadyState(ctx, oc, node) (or the equivalent helper names present) to find the first Ready node and return its name; remove the subprocess Run("get") calls and preserve the same return semantics (empty string if none found).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@test/extended/node/node_utils.go`:
- Around line 755-764: GetFirstReadyWorkerNode currently returns an empty string
when no worker with Ready=True is found, causing downstream cryptic failures;
change the post-loop behavior in GetFirstReadyWorkerNode (the function that
iterates workers and calls oc.AsAdmin().WithoutNamespace().Run("get").Args(...))
to fail the test immediately with a clear message (e.g. call the test
framework's fail/assert function or t.Fatalf/Failf) indicating "no ready worker
node found" instead of returning "" so callers never receive an invalid node
name.
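The fail-fast behavior requested above can be illustrated with a dependency-free sketch. Since Ginkgo/Gomega aren't available in a standalone snippet, this version returns an explicit error in place of calling the framework's fail function; the `node` type and `firstReadyWorker` name are hypothetical stand-ins, and in the real helper the error path would instead fail the test directly.

```go
package main

import (
	"errors"
	"fmt"
)

// node is a minimal stand-in for corev1.Node, carrying only what this
// sketch needs.
type node struct {
	Name  string
	Ready bool
}

// firstReadyWorker returns the first Ready node's name, or a descriptive
// error when none is Ready. Callers can therefore never proceed with an
// invalid (empty) node name, which is the point of the review comment.
func firstReadyWorker(nodes []node) (string, error) {
	for _, n := range nodes {
		if n.Ready {
			return n.Name, nil
		}
	}
	return "", errors.New("no ready worker node found")
}

func main() {
	nodes := []node{{Name: "worker-0", Ready: false}, {Name: "worker-1", Ready: true}}
	name, err := firstReadyWorker(nodes)
	fmt.Println(name, err)

	_, err = firstReadyWorker(nil)
	fmt.Println(err)
}
```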
---
Nitpick comments:
In `@test/extended/node/node_utils.go`:
- Around line 746-764: GetFirstReadyWorkerNode should accept a context.Context
and use the existing Kubernetes client helpers instead of spawning kubectl
per-node: change the signature of GetFirstReadyWorkerNode to take ctx
context.Context, call getNodesByLabel(ctx, oc, "node-role.kubernetes.io/worker")
to get nodes, then iterate and call isNodeInReadyState(ctx, oc, node) (or the
equivalent helper names present) to find the first Ready node and return its
name; remove the subprocess Run("get") calls and preserve the same return
semantics (empty string if none found).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: ef32c950-ef14-4009-bfa5-3caf3802f43c
📒 Files selected for processing (2)
test/extended/node/node_e2e/node.go
test/extended/node/node_utils.go
🚧 Files skipped from review as they are similar to previous changes (1)
- test/extended/node/node_e2e/node.go
Scheduling required tests:
@asahay19: This pull request references OCPNODE-4505 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the sub-task to target the "5.0.0" version, but no target version was set.

Details
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository. |
# Conflicts:
#	test/extended/node/node_e2e/node.go
Scheduling required tests:
@asahay19: all tests passed! Full PR test history. Your PR dashboard.

Details
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
PR Summary
This test automates manual test case OCP-57401 by verifying end-to-end creation of both ImageDigestMirrorSet (IDMS) and ImageTagMirrorSet (ITMS) resources. It creates an IDMS and ITMS with both
AllowContactingSourceandNeverContactSourcemirror policies, waits for MachineConfigPool rollouts, and then validates that/etc/containers/registries.confon a worker node contains the correct digest-only and tag-only mirror entries along with blocked = true for NeverContactSource registries.This test is already present in openshift-test-private. As part of migrating test cases , I am automating it origin.
Polarion link: https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-57401
Jira task : https://redhat.atlassian.net/browse/OCPNODE-4505
Output
Tested locally on a 4.22 OpenShift cluster and it passed:
./openshift-tests run-test '[sig-node][Suite:openshift/disruptive-longrunning][Disruptive][Serial] ImageTagMirrorSet and ImageDigestMirrorSet [OTP] Create ImageDigestMirrorSet and ImageTagMirrorSet and verify registries.conf [OCP-57401]'

PTAL @cpmeadors @BhargaviGudi