Refs #38953 - Remove import pool event consumption #11647
jeremylenz merged 2 commits into Katello:master
Conversation
Force-pushed from 0745621 to cc44f7c
Force-pushed from cc44f7c to a95a954
Hey - I've found 4 issues, and left some high level feedback:
- The `AddFieldsToKatelloPool` migration’s `down` method doesn’t restore the `multi_entitlement` column default to its prior state, so the migration isn’t fully reversible; consider explicitly changing the default back (or making the migration explicitly irreversible).
- The new `katello:upgrades:4.21:import_pools` task will run `Katello::Pool.import_all` on upgrade without any long-running/skip-failure flags; it may be worth revisiting whether this should be marked as long-running or otherwise guarded for large installations.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The `AddFieldsToKatelloPool` migration’s `down` method doesn’t restore the `multi_entitlement` column default to its prior state, so the migration isn’t fully reversible; consider explicitly changing the default back (or making the migration explicitly irreversible).
- The new `katello:upgrades:4.21:import_pools` task will run `Katello::Pool.import_all` on upgrade without any long-running/skip-failure flags; it may be worth revisiting whether this should be marked as long-running or otherwise guarded for large installations.
## Individual Comments
### Comment 1
<location path="app/models/katello/glue/candlepin/pool.rb" line_range="77-86" />
<code_context>
+ pool_attributes = {}
</code_context>
<issue_to_address>
**issue (bug_risk):** Mixed symbol access on `pool_json` and `attr` is likely incompatible with string-keyed backend data.
The previous version used string keys on `pool_json`/`attr`, which matches typical Candlepin JSON. The new symbol-based access will return `nil` unless `backend_data` is deep-symbolized elsewhere, leading to `NoMethodError` when you use `+` or iterate. Either keep string-based access or explicitly symbolize the keys when `backend_data` is constructed.
</issue_to_address>
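To make the failure mode concrete, here is a plain-Ruby sketch (no ActiveSupport; the payload shape is hypothetical) of why symbol reads miss string-keyed JSON, with a shallow `transform_keys` standing in for the deep-symbolize / `with_indifferent_access` normalization the comment mentions:

```ruby
require 'json'

# JSON.parse yields string-keyed hashes by default, matching typical Candlepin payloads.
pool_json = JSON.parse('{"productAttributes": [{"name": "virt_only", "value": "true"}]}')

pool_json[:productAttributes]   # => nil -- symbol lookup misses the string key
pool_json['productAttributes']  # => the attribute array, as expected

# Normalizing the top-level keys once makes later symbol-based reads safe.
# Note: transform_keys is shallow; nested hashes stay string-keyed.
normalized = pool_json.transform_keys(&:to_sym)
normalized[:productAttributes].first['name']  # => "virt_only"
```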
### Comment 2
<location path="app/models/katello/glue/candlepin/pool.rb" line_range="80-85" />
<code_context>
-
- if pool_attributes.key?(:virtual)
- pool_attributes[:virt_only] = pool_attributes["virtual"] == 'true'
+ IMPORT_ROOT_ATTRIBUTES.each do |name|
+ pool_attributes[name.underscore] = pool_json[name]
end
- if pool_attributes.key?("requires_host")
- pool_attributes[:hypervisor_id] = ::Katello::Host::SubscriptionFacet.find_by(:uuid => pool_attributes["requires_host"])
- .try(:host_id)
- end
-
- if pool_attributes.key?(:unmapped_guests_only) && pool_attributes[:unmapped_guests_only] == 'true'
- pool_attributes[:unmapped_guest] = true
+ pool_attributes[:pool_type] = pool_json[:type]
+ pool_attributes[:stacking_id] = pool_json[:stackId]
+
+ combined_attributes = pool_json[:productAttributes] + pool_json[:attributes]
</code_context>
<issue_to_address>
**question (bug_risk):** Some previously-imported root fields (e.g. `consumed`, `type`) have changed handling and may no longer be populated as expected.
Previously we imported `%w(accountNumber contractNumber quantity startDate endDate accountNumber consumed)` and then set `:pool_type`/`:upstream_pool_id`. Now we only import `IMPORT_ROOT_ATTRIBUTES` and set `:pool_type`/`:stacking_id` from `pool_json[:type]`/`[:stackId]`, so:
* `consumed` is no longer imported.
* `type` is now mapped into `:pool_type` via a symbol key, which may be wrong given how other keys are accessed.
If `consumed`/`type` are still used elsewhere (reporting, UI, etc.), consider restoring `consumed` import or removing its usages to avoid a silent behavior change.
</issue_to_address>
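For illustration, a standalone sketch of the camelCase-to-snake_case mapping at issue (the `underscore` helper below is a minimal stand-in for ActiveSupport's `String#underscore`, and the attribute list and payload are hypothetical). Any name dropped from the list, like `consumed`, silently stops being populated:

```ruby
# Minimal stand-in for ActiveSupport's String#underscore (enough for these keys).
def underscore(name)
  name.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase
end

ROOT_ATTRIBUTES = %w(accountNumber contractNumber quantity startDate endDate consumed)

pool_json = { 'accountNumber' => '12345', 'startDate' => '2026-01-01', 'consumed' => 4 }

pool_attributes = {}
ROOT_ATTRIBUTES.each do |name|
  pool_attributes[underscore(name)] = pool_json[name]
end

pool_attributes['account_number']  # => "12345"
pool_attributes['consumed']        # => 4 -- populated only because it is still in the list
```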
### Comment 3
<location path="app/models/katello/glue/candlepin/pool.rb" line_range="17" />
<code_context>
module ClassMethods
def candlepin_data(cp_id)
Katello::Resources::Candlepin::Pool.find(cp_id)
- rescue Katello::Errors::CandlepinPoolGone
- {}
end
</code_context>
<issue_to_address>
**question:** Removing the `CandlepinPoolGone` rescue changes how missing pools are handled and may surface exceptions in existing flows.
With the rescue removed, any lookup for a deleted or expired pool will now raise instead of returning `{}`. If any remaining callers still expect missing pools to be tolerated (e.g., cleanup/upgrade paths), this could cause new errors. Consider reviewing call sites or adding targeted handling where missing pools are acceptable.
</issue_to_address>
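One shape the suggested "targeted handling" could take, sketched with stand-in names (`PoolGone` mirrors the removed `Katello::Errors::CandlepinPoolGone`, and the Candlepin lookup is simulated with a hash): the strict lookup keeps raising, and tolerance lives only at the call sites that want it:

```ruby
class PoolGone < StandardError; end

# Strict lookup: raises when the pool no longer exists upstream.
def candlepin_data(cp_id, pools)
  pools.fetch(cp_id) { raise PoolGone, "pool #{cp_id} is gone" }
end

# A cleanup-style caller that tolerates missing pools explicitly.
def data_or_empty(cp_id, pools)
  candlepin_data(cp_id, pools)
rescue PoolGone
  {}
end

pools = { 'abc' => { 'quantity' => 5 } }
data_or_empty('abc', pools)   # => {"quantity"=>5}
data_or_empty('gone', pools)  # => {}
```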
### Comment 4
<location path="db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb" line_range="10" />
<code_context>
+ add_column :katello_pools, :upstream_entitlement_id, :string
+ add_column :katello_pools, :description, :string
+
+ change_column :katello_pools, :multi_entitlement, :boolean, default: false
+ end
+
</code_context>
<issue_to_address>
**issue (bug_risk):** `down` does not restore the original `multi_entitlement` column definition.
The `up` migration changes `multi_entitlement` to a boolean with default `false`, but `down` only drops the new columns and leaves `multi_entitlement` in its modified state. Rolling back this migration will not fully restore the schema. Please either add a `change_column` in `down` to restore the original type/default, or refactor to a reversible `change` with `reversible do |dir|` so the migration is symmetric.
</issue_to_address>
```ruby
  end

  # rubocop:disable Metrics/CyclomaticComplexity
  def import_data
```
This method was (imo) kind of a disaster. I took the opportunity to rewrite it because it was doing strange, unnecessary things and I couldn't understand it.
```ruby
  # rubocop:disable Metrics/CyclomaticComplexity
  def import_data
    pool_attributes = {}.with_indifferent_access
```
Idk if sourcery knows about .with_indifferent_access, but it sure gets mad that the keys might not be strings..
@jeremylenz you'll definitely want this in place (db migrated + upgrade task run) for those expiring subscriptions you have in a few days. One last (I think) scenario to consider here is to do with expired subscriptions on the Subscriptions page. Modifying their entitlements won't work since the upstream pool and entitlement are likely gone by the time someone would try to do that. Need to take a look at that, but I think there's plenty here to test / review in the meantime if you don't want to wait.
```ruby
  end

  if pool_attributes.key?(:virtual)
    pool_attributes[:virt_only] = pool_attributes["virtual"] == 'true'
```
I think this was a bug? I don't see an attribute called `virtual`, but I do see `virt-only`. I changed it below to look at `virt-only` instead.
```ruby
  rescue ActiveRecord::RecordInvalid
    # if we hit this block it's likely that the pool's subscription, product are being created
    # as a result of manifest import/refresh or custom product creation
    Rails.logger.warn("Unable to import pool. It will likely be created by another process.")
```
I sure hope it's created by another process now! 😄
Force-pushed from f10d52a to 036b911
📝 Walkthrough

Removed event-driven pool import behavior triggered by […] (changes table and sequence diagrams collapsed in the original page).

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks: ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Just a rebase to ensure the build is still good
Actionable comments posted: 4
🧹 Nitpick comments (1)
db/seeds.d/111-upgrade_tasks.rb (1)
14-14: Consider marking this upgrade task as long-running.

Given this performs a global pool import, adding `:long_running => true` would improve upgrade scheduling/timeout handling.

🔧 Suggested tweak

```diff
- {:name => 'katello:upgrades:4.21:import_pools'},
+ {:name => 'katello:upgrades:4.21:import_pools', :long_running => true},
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@db/seeds.d/111-upgrade_tasks.rb` at line 14, The upgrade task entry with name 'katello:upgrades:4.21:import_pools' should be marked as long-running; update the hash that defines this task to include :long_running => true (i.e., add the :long_running key to the same hash that contains :name => 'katello:upgrades:4.21:import_pools') so the upgrade scheduler treats the global pool import with extended timeout/handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/models/katello/glue/candlepin/pool.rb`:
- Around line 77-107: The import must clear optional cached fields before
applying current payload so stale DB values get removed; initialize
pool_attributes with default nil/false values for the optional keys you expect
(e.g. :multi_entitlement, :hypervisor_id, :unmapped_guest, :virt_only,
:virt_who, :roles, etc.) immediately after pool_json is set and before you
iterate combined_attributes/IMPORT_ROOT_ATTRIBUTES, then continue populating
values from IMPORT_ROOT_ATTRIBUTES, IMPORT_ATTRIBUTES and the
combined_attributes case statement and call update! as before so absent
attributes will be written as nil/false and clear previous cached values.
- Around line 84-87: The code reads symbol keys from plain JSON hashes
(pool_json) which can be string-keyed, causing pool_attributes[:pool_type],
[:stacking_id] and combined_attributes = pool_json[:productAttributes] +
pool_json[:attributes] to crash; fix by normalizing pool_json to indifferent
access before those reads (e.g., in import_data right before using pool_json
call pool_json = pool_json.with_indifferent_access) and ensure
productAttributes/attributes default to empty arrays (e.g., set product =
pool_json[:productAttributes] || [] and attrs = pool_json[:attributes] || []
before concatenating) so combined_attributes calculation in
import_candlepin_records/import_data and the references in
Katello::Resources::Candlepin::Pool#get_for_owner/find behave consistently.
In `@app/models/katello/pool.rb`:
- Around line 65-67: The expired? method can raise when end_date is nil; update
the method in class Pool (expired?) to guard against nil by returning false (or
safe boolean) when end_date is nil—e.g., change the logic to check end_date
presence first (use end_date.present? or a nil-safe conditional like end_date &&
end_date < Time.zone.now) so serialization won't blow up on records with no
end_date.
In `@db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb`:
- Around line 13-20: The migration’s down method currently removes several
columns but does not revert the default change applied to
katello_pools.multi_entitlement in up; update the down method to explicitly
restore the previous default for the multi_entitlement column (i.e., add a
change_column_default / revert default call for :katello_pools,
:multi_entitlement to what it was before this migration — for example revert
from true back to the prior value, commonly false or nil) so rolling back fully
restores the prior schema state; locate the change in the same migration file
and modify the down method to include that revert alongside the existing
remove_column calls.
---
Nitpick comments:
In `@db/seeds.d/111-upgrade_tasks.rb`:
- Line 14: The upgrade task entry with name 'katello:upgrades:4.21:import_pools'
should be marked as long-running; update the hash that defines this task to
include :long_running => true (i.e., add the :long_running key to the same hash
that contains :name => 'katello:upgrades:4.21:import_pools') so the upgrade
scheduler treats the global pool import with extended timeout/handling.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: bbb7e318-5fbe-4ca6-b861-8ea02a0d3f50
📒 Files selected for processing (18)
- app/lib/katello/resources/candlepin/upstream_consumer.rb
- app/models/katello/events/import_pool.rb
- app/models/katello/glue/candlepin/pool.rb
- app/models/katello/pool.rb
- app/services/katello/candlepin/event_handler.rb
- app/services/katello/candlepin/message_handler.rb
- app/views/katello/api/v2/subscriptions/base.json.rabl
- db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb
- db/seeds.d/111-upgrade_tasks.rb
- lib/katello/engine.rb
- lib/katello/tasks/upgrades/4.21/import_pools.rake
- test/fixtures/candlepin_messages/pool.created.json
- test/models/pool_test.rb
- test/services/candlepin/message_handler_test.rb
- test/services/katello/candlepin/event_handler_test.rb
- test/services/katello/event_monitor/poller_thread_test.rb
- webpack/components/pf3Table/formatters/EntitlementsInlineEditFormatter.js
- webpack/scenes/Subscriptions/components/SubscriptionsTable/SubscriptionsTable.js
💤 Files with no reviewable changes (8)
- app/services/katello/candlepin/message_handler.rb
- test/fixtures/candlepin_messages/pool.created.json
- app/lib/katello/resources/candlepin/upstream_consumer.rb
- app/models/katello/events/import_pool.rb
- app/services/katello/candlepin/event_handler.rb
- test/services/candlepin/message_handler_test.rb
- test/services/katello/candlepin/event_handler_test.rb
- lib/katello/engine.rb
```diff
       pool_attributes = {}
       pool_json = self.backend_data

-      product_attributes = pool_json["productAttributes"] + pool_json["attributes"]
-      product_attributes.map { |attr| pool_attributes[attr["name"].underscore.to_sym] = attr["value"] }
-
-      %w(accountNumber contractNumber quantity startDate endDate accountNumber consumed).each do |json_attribute|
-        pool_attributes[json_attribute.underscore] = pool_json[json_attribute]
-      end
-      pool_attributes[:pool_type] = pool_json["type"] if pool_json.key?("type")
-      pool_attributes[:upstream_pool_id] = pool_json["upstreamPoolId"] if pool_json.key?("upstreamPoolId")
-
-      if pool_attributes.key?(:multi_entitlement)
-        pool_attributes[:multi_entitlement] = pool_attributes[:multi_entitlement] == "yes"
-      end
-
-      if pool_attributes.key?(:virtual)
-        pool_attributes[:virt_only] = pool_attributes["virtual"] == 'true'
+      IMPORT_ROOT_ATTRIBUTES.each do |name|
+        pool_attributes[name.underscore] = pool_json[name]
       end

-      if pool_attributes.key?("requires_host")
-        pool_attributes[:hypervisor_id] = ::Katello::Host::SubscriptionFacet.find_by(:uuid => pool_attributes["requires_host"])
-                                                                            .try(:host_id)
-      end
-
-      if pool_attributes.key?(:unmapped_guests_only) && pool_attributes[:unmapped_guests_only] == 'true'
-        pool_attributes[:unmapped_guest] = true
+      pool_attributes[:pool_type] = pool_json[:type]
+      pool_attributes[:stacking_id] = pool_json[:stackId]
+
+      combined_attributes = pool_json[:productAttributes] + pool_json[:attributes]
+      combined_attributes.each do |attr|
+        case attr[:name]
+        when 'multi-entitlement'
+          pool_attributes[:multi_entitlement] = attr[:value] == 'yes'
+        when 'requires_host'
+          pool_attributes[:hypervisor_id] = ::Katello::Host::SubscriptionFacet.find_by(uuid: attr[:value])&.host_id
+        when 'unmapped_guests_only'
+          pool_attributes[:unmapped_guest] = attr[:value] == 'true'
+        when 'virt_only'
+          pool_attributes[:virt_only] = attr[:value] == 'true'
+        when 'virt_limit'
+          pool_attributes[:virt_who] = attr[:value].to_i > 0
+        else
+          if IMPORT_ATTRIBUTES.include?(attr[:name])
+            pool_attributes[attr[:name]] = attr[:value]
+          end
+        end
       end

-      pool_attributes[:virt_who] = (pool_attributes['virt_limit'].present? && pool_attributes['virt_limit'] != "0")
-
-      pool_attributes['stack_id'] = pool_json['stackId']
-      exceptions = pool_attributes.keys.map(&:to_sym) - self.attribute_names.map(&:to_sym)
-      self.update(pool_attributes.except!(*exceptions))
-      self.save!
-      self.create_product_associations
+      update!(pool_attributes)
```
Reset optional cached fields before each re-import.
pool_attributes only gets keys for attributes that are present in the current Candlepin payload. If a later full reconcile drops requires_host, virt_limit, roles, etc., update! leaves the previous DB values behind, so the local cache can drift from Candlepin instead of reflecting the new state.
Proposed fix

```diff
   def import_data
-    pool_attributes = {}
+    pool_attributes = {
+      multi_entitlement: false,
+      hypervisor_id: nil,
+      unmapped_guest: false,
+      virt_only: false,
+      virt_who: false,
+      ram: nil,
+      arch: nil,
+      support_type: nil,
+      roles: nil,
+      usage: nil,
+      description: nil,
+    }
     pool_json = self.backend_data
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/models/katello/glue/candlepin/pool.rb` around lines 77 - 107, The import
must clear optional cached fields before applying current payload so stale DB
values get removed; initialize pool_attributes with default nil/false values for
the optional keys you expect (e.g. :multi_entitlement, :hypervisor_id,
:unmapped_guest, :virt_only, :virt_who, :roles, etc.) immediately after
pool_json is set and before you iterate
combined_attributes/IMPORT_ROOT_ATTRIBUTES, then continue populating values from
IMPORT_ROOT_ATTRIBUTES, IMPORT_ATTRIBUTES and the combined_attributes case
statement and call update! as before so absent attributes will be written as
nil/false and clear previous cached values.
```ruby
  def expired?
    end_date < Time.zone.now
  end
```
Guard expired? against nil end_date.
Line 66 can raise when end_date is null. Since this method is now serialized to API, one bad record can fail response generation.
🔧 Proposed fix

```diff
   def expired?
-    end_date < Time.zone.now
+    return false if end_date.blank?
+    end_date < Time.zone.now
   end
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ruby
  def expired?
    return false if end_date.blank?
    end_date < Time.zone.now
  end
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/models/katello/pool.rb` around lines 65 - 67, The expired? method can
raise when end_date is nil; update the method in class Pool (expired?) to guard
against nil by returning false (or safe boolean) when end_date is nil—e.g.,
change the logic to check end_date presence first (use end_date.present? or a
nil-safe conditional like end_date && end_date < Time.zone.now) so serialization
won't blow up on records with no end_date.
If end_date is nil then let the error raise because there's a data consistency problem that needs to be known.
@jturel, that's a fair point — failing loudly on a nil end_date is a reasonable data-integrity stance, and silently returning false would mask a corrupted record. I'll defer to your judgment here.
✏️ Learnings added
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/pool.rb:65-67
Timestamp: 2026-04-02T22:11:04.461Z
Learning: In `app/models/katello/pool.rb` (Ruby on Rails, Katello engine), the `expired?` method on `Katello::Pool` intentionally does NOT guard against a nil `end_date`. A nil `end_date` is considered a data consistency problem that should surface as an exception rather than being silently ignored. Do not suggest adding a nil guard to this method.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
```ruby
  def down
    remove_column :katello_pools, :arch
    remove_column :katello_pools, :roles
    remove_column :katello_pools, :usage
    remove_column :katello_pools, :support_type
    remove_column :katello_pools, :upstream_entitlement_id
    remove_column :katello_pools, :description
  end
```
Make rollback fully reversible for multi_entitlement.
up changes the default for katello_pools.multi_entitlement (Line 10), but down never restores the prior default. A rollback will leave schema state altered.
🔧 Proposed fix

```diff
   def down
     remove_column :katello_pools, :arch
     remove_column :katello_pools, :roles
     remove_column :katello_pools, :usage
     remove_column :katello_pools, :support_type
     remove_column :katello_pools, :upstream_entitlement_id
     remove_column :katello_pools, :description
+    change_column_default :katello_pools, :multi_entitlement, from: false, to: nil
   end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb` around lines
13 - 20, The migration’s down method currently removes several columns but does
not revert the default change applied to katello_pools.multi_entitlement in up;
update the down method to explicitly restore the previous default for the
multi_entitlement column (i.e., add a change_column_default / revert default
call for :katello_pools, :multi_entitlement to what it was before this migration
— for example revert from true back to the prior value, commonly false or nil)
so rolling back fully restores the prior schema state; locate the change in the
same migration file and modify the down method to include that revert alongside
the existing remove_column calls.
This is fine - this won't be rolled back.
@jturel, understood — acknowledged and moving on!
✏️ Learnings added
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb:13-20
Timestamp: 2026-04-02T22:11:20.312Z
Learning: In `db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb`, the `down` method intentionally does not revert the `change_column :katello_pools, :multi_entitlement, :boolean, default: false` applied in `up`. This migration is not intended to be rolled back, so the incomplete `down` method is acceptable by design.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/glue/candlepin/pool.rb:77-107
Timestamp: 2026-04-02T22:10:30.149Z
Learning: In `app/models/katello/glue/candlepin/pool.rb`, the `import_data` method in `Katello::Glue::Candlepin::Pool::InstanceMethods` intentionally does NOT reset optional cached fields (e.g., `hypervisor_id`, `virt_only`, `virt_who`, `roles`, `arch`, `support_type`, etc.) to nil/false before applying the current Candlepin payload. Fields absent from the payload are left as-is in the DB; this is expected behavior by design.
Added testing steps on how to create an expired pool from an existing Red Hat pool.

Okay, I still see the subscriptions but I cannot edit the entitlements by hovering over them, unlike all the others. I'm assuming that's the expected behavior? The subscription will be gone when I refresh my manifest, right?

The rh_cloud failures should be fixed after a rebase: theforeman/foreman_rh_cloud#1193
Force-pushed from 036b911 to 1b9561e
Right - the reason you can't edit them is because the upstream pool and entitlement are also expired/gone so it would be a 404. Refreshing cleans it up - but don't take my word for it :)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/models/katello/glue/candlepin/pool.rb`:
- Around line 77-78: The code in Pool#import_all is re-querying Candlepin via
self.backend_data (which falls back to Pool.find) causing N+1 lookups and aborts
when a pool vanishes; change the assignment in
app/models/katello/glue/candlepin/pool.rb so pool_json is taken from the
already-preloaded payload (the per-item backend_data set by get_for_owner) and
do not call Pool.find or re-fetch; if backend_data is nil treat the pool as
missing/skip or destroy it (mirror the pattern shown in CandlepinObject where
item.backend_data = candlepin_records[item.cp_id] and item.import_data) and
remove any reliance on CandlepinPoolGone around this lookup.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: dd38cb6a-d272-4306-9cdb-75cbbe9b45cd
📒 Files selected for processing (18)
- app/lib/katello/resources/candlepin/upstream_consumer.rb
- app/models/katello/events/import_pool.rb
- app/models/katello/glue/candlepin/pool.rb
- app/models/katello/pool.rb
- app/services/katello/candlepin/event_handler.rb
- app/services/katello/candlepin/message_handler.rb
- app/views/katello/api/v2/subscriptions/base.json.rabl
- db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb
- db/seeds.d/111-upgrade_tasks.rb
- lib/katello/engine.rb
- lib/katello/tasks/upgrades/4.21/import_pools.rake
- test/fixtures/candlepin_messages/pool.created.json
- test/models/pool_test.rb
- test/services/candlepin/message_handler_test.rb
- test/services/katello/candlepin/event_handler_test.rb
- test/services/katello/event_monitor/poller_thread_test.rb
- webpack/components/pf3Table/formatters/EntitlementsInlineEditFormatter.js
- webpack/scenes/Subscriptions/components/SubscriptionsTable/SubscriptionsTable.js
💤 Files with no reviewable changes (8)
- app/services/katello/candlepin/event_handler.rb
- lib/katello/engine.rb
- app/models/katello/events/import_pool.rb
- test/fixtures/candlepin_messages/pool.created.json
- app/services/katello/candlepin/message_handler.rb
- test/services/candlepin/message_handler_test.rb
- test/services/katello/candlepin/event_handler_test.rb
- app/lib/katello/resources/candlepin/upstream_consumer.rb
✅ Files skipped from review due to trivial changes (3)
- db/seeds.d/111-upgrade_tasks.rb
- webpack/scenes/Subscriptions/components/SubscriptionsTable/SubscriptionsTable.js
- lib/katello/tasks/upgrades/4.21/import_pools.rake
🚧 Files skipped from review as they are similar to previous changes (5)
- app/views/katello/api/v2/subscriptions/base.json.rabl
- app/models/katello/pool.rb
- webpack/components/pf3Table/formatters/EntitlementsInlineEditFormatter.js
- db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb
- test/services/katello/event_monitor/poller_thread_test.rb
```ruby
  pool_attributes = {}
  pool_json = self.backend_data
```
Avoid re-fetching pool JSON during batch import.
import_all already has each pool payload from get_for_owner, but self.backend_data here still falls back to Pool.find(cp_id) when it has not been preloaded. The updated tests stub both calls, so this extra lookup is live today. With the CandlepinPoolGone rescue removed, a pool that disappears between the bulk fetch and this per-pool fetch now aborts the org import/upgrade, and every full reconcile pays an N+1 Candlepin cost.
Suggested direction

```ruby
# app/models/katello/glue/candlepin/candlepin_object.rb
objects.each do |item|
  exists_in_candlepin = candlepin_records.key?(item.cp_id)
  if exists_in_candlepin
    item.backend_data = candlepin_records[item.cp_id]
    item.import_data
  else
    item.destroy
  end
end
```
end🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/models/katello/glue/candlepin/pool.rb` around lines 77 - 78, The code in
Pool#import_all is re-querying Candlepin via self.backend_data (which falls back
to Pool.find) causing N+1 lookups and aborts when a pool vanishes; change the
assignment in app/models/katello/glue/candlepin/pool.rb so pool_json is taken
from the already-preloaded payload (the per-item backend_data set by
get_for_owner) and do not call Pool.find or re-fetch; if backend_data is nil
treat the pool as missing/skip or destroy it (mirror the pattern shown in
CandlepinObject where item.backend_data = candlepin_records[item.cp_id] and
item.import_data) and remove any reliance on CandlepinPoolGone around this
lookup.
I agree with this feedback but that wasn't changed as part of this PR and shouldn't be addressed here. Pools are in fact loaded from Candlepin twice during import_all: once via the /owners API and again individually.
@jturel, understood — thanks for the clarification. The double-fetch (once via the /owners bulk API and once individually per pool via backend_data) is pre-existing behavior and out of scope here. I'll stop pressing on this.
Would you like me to open a follow-up GitHub issue to track the N+1 Candlepin lookup optimization so it isn't lost?
🧠 Learnings used
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/glue/candlepin/pool.rb:77-107
Timestamp: 2026-04-02T22:10:30.149Z
Learning: In `app/models/katello/glue/candlepin/pool.rb`, the `import_data` method in `Katello::Glue::Candlepin::Pool::InstanceMethods` intentionally does NOT reset optional cached fields (e.g., `hypervisor_id`, `virt_only`, `virt_who`, `roles`, `arch`, `support_type`, etc.) to nil/false before applying the current Candlepin payload. Fields absent from the payload are left as-is in the DB; this is expected behavior by design.
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/glue/candlepin/pool.rb:84-87
Timestamp: 2026-04-02T22:10:17.185Z
Learning: In Katello (app/models/katello/glue/candlepin/pool.rb), `pool_json` (i.e., `self.backend_data`) in `import_data` is always a `HashWithIndifferentAccess` because it is sourced exclusively through `Katello::Resources::Candlepin::Pool.find`, which normalizes the Candlepin JSON response with `.with_indifferent_access`. Symbol-key reads on `pool_json` (e.g., `pool_json[:productAttributes]`, `pool_json[:attributes]`, `pool_json[:type]`, `pool_json[:stackId]`) are therefore safe.
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/pool.rb:65-67
Timestamp: 2026-04-02T22:11:04.461Z
Learning: In `app/models/katello/pool.rb` (Ruby on Rails, Katello engine), the `expired?` method on `Katello::Pool` intentionally does NOT guard against a nil `end_date`. A nil `end_date` is considered a data consistency problem that should surface as an exception rather than being silently ignored. Do not suggest adding a nil guard to this method.
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb:13-20
Timestamp: 2026-04-02T22:11:20.312Z
Learning: In `db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb`, the `down` method intentionally does not revert the `change_column :katello_pools, :multi_entitlement, :boolean, default: false` applied in `up`. This migration is not intended to be rolled back, so the incomplete `down` method is acceptable by design.
I'm not so sure.

katello/app/models/katello/concerns/organization_extensions.rb, lines 256 to 258 in 6c0c016

Testing that against an expired pool: […] Let me know if I missed your point!
Can you show which pools were included in that notification? If the list is empty, compare the end dates of any expired pools vs the timestamps (from DB - not the UI) of that notification, and share here.

Turns out my notification is for an unrelated subscription, so all is good!
jeremylenz left a comment:

ok, I'm good with this! Works as expected and the refresh cleaned up the expired subscription.

ACK 👍
Thanks! Merge when ready then


What are the changes introduced in this pull request?
Removes both the "candlepin events" pool import and the "katello events" pool import. These aren't necessary because:

- Custom product creation creates its own pool (app/models/katello/glue/candlepin/product.rb, line 121 in 79fa7bd)
- Manifest import & refresh do a full reconciliation against Candlepin pools for the given org (app/models/katello/glue/provider.rb, line 153 in 79fa7bd)

Since this covers the import side, I added a number of attributes to `Katello::Pool` so that all necessary data is cached locally. When the pool is removed in Candlepin, Katello won't be affected.

Considerations taken when implementing this change?
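The full-reconciliation idea the description leans on can be sketched generically in plain Ruby (hypothetical names and hash-backed storage — this is not Katello's actual `import_all`): upsert everything upstream reports for the org, then drop local records upstream no longer knows about:

```ruby
# local: cp_id => attributes hash; upstream: cp_id => payload from Candlepin
def reconcile(local, upstream)
  upstream.each do |cp_id, payload|
    local[cp_id] = payload                     # create or update from upstream state
  end
  (local.keys - upstream.keys).each do |cp_id| # upstream no longer has it
    local.delete(cp_id)
  end
  local
end

local    = { 'a' => { qty: 1 }, 'stale' => { qty: 9 } }
upstream = { 'a' => { qty: 2 }, 'b' => { qty: 3 } }
result = reconcile(local, upstream)
# => {"a"=>{:qty=>2}, "b"=>{:qty=>3}}
```

Because the reconcile pass covers creates, updates, and deletes in one sweep, per-pool event consumption becomes redundant — which is the premise of this PR.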
What are the testing steps for this pull request?
Expired pool testing steps
Testing an expired pool without waiting for one to actually expire can be done like so, given an imported manifest: […]

You'll see it in your subscriptions list and can view its details without any issue because there are no calls to upstream or downstream related to it.
Summary by Sourcery

Remove Candlepin pool-created event-based pool import and rely on full pool import while expanding locally cached pool attributes.

Enhancements: […]
Tests: […]
Chores: […]
Summary by CodeRabbit

New Features
- `expired` attribute.

New Fields […]

Chores […]