
Refs #38953 - Remove import pool event consumption#11647

Merged
jeremylenz merged 2 commits into Katello:master from jturel:candlepin_import_events
Apr 6, 2026

Conversation

@jturel
Member

@jturel jturel commented Feb 20, 2026

What are the changes introduced in this pull request?

Removes both the "candlepin events" pool import and the "katello events" pool import. These aren't necessary because:

  • Custom product creation creates its own pool

    pools.each { |pool_json| ::Katello::Pool.import_pool(pool_json['id']) }

  • Manifest import & refresh do a full reconciliation against Candlepin pools for the given org

    Katello::Pool.import_all(organization)

  • Since this covers the import side, I added a number of attributes to Katello::Pool so that all necessary data is cached locally. When the pool is removed in Candlepin, Katello won't be affected.
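The reconciliation idea behind Katello::Pool.import_all can be sketched in plain Ruby. This is an illustration only, not the actual implementation; reconcile_pools and its set-difference logic are hypothetical stand-ins for treating Candlepin's pool list for an org as the source of truth:

```ruby
# Hypothetical sketch: given the locally cached pool IDs and the pool IDs
# Candlepin currently reports for the org, a full reconciliation imports the
# missing pools and drops the stale ones.
def reconcile_pools(local_ids, backend_ids)
  {
    import: backend_ids - local_ids, # pools Candlepin has that we don't
    remove: local_ids - backend_ids, # cached pools Candlepin no longer has
  }
end

reconcile_pools(%w[a b c], %w[b c d])
# => {:import=>["d"], :remove=>["a"]}
```

Because each manifest import/refresh runs this kind of full pass, the incremental pool-created events become redundant.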

Considerations taken when implementing this change?

  • Maintain consistency

What are the testing steps for this pull request?

  • Run the migration and upgrade rake task to update your local db
  • Create a custom product and verify that the correct pools are created (i.e., the same before and after this change)
  • Import a manifest with subscriptions and verify that the correct pools are created (i.e., the same before and after this change)
  • Check out the subscriptions API and/or details page to see that all data is there (some may be absent depending on the subscription)

Expired pool testing steps

Testing an expired pool without waiting for one to actually expire can be done like so, given an imported manifest:

p = Katello::Pool.redhat.last # Take any red hat pool
new = Katello::Pool.new(p.attributes.except('id')) # build a new pool with the same attributes
new.products = p.products

# override any attributes that map it to upstream or downstream candlepin
new.cp_id = 'FAKE'
new.upstream_entitlement_id = 'FAKE'
new.upstream_pool_id = 'FAKE'

# make it expired
new.end_date = 1.day.ago

new.save!

You'll see it in your subscriptions list and can view its details without any issue because there are no calls to upstream or downstream related to it.
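A quick way to sanity-check the expired state on such a record is via the expired? predicate this PR adds. A plain-Ruby sketch of its behavior; FakePool is a hypothetical stand-in for Katello::Pool, and the real method compares against Time.zone.now:

```ruby
# Hypothetical stand-in for Katello::Pool#expired?, which this PR adds.
# Plain Time.now is used instead of Time.zone.now to keep the sketch
# dependency-free.
FakePool = Struct.new(:end_date) do
  def expired?
    end_date < Time.now
  end
end

FakePool.new(Time.now - 86_400).expired?  # => true  (ended yesterday)
FakePool.new(Time.now + 86_400).expired?  # => false (ends tomorrow)
```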

Summary by Sourcery

Remove Candlepin pool-created event-based pool import and rely on full pool import while expanding locally cached pool attributes.

Enhancements:

  • Expand Katello pool import to persist additional Candlepin attributes (architecture, roles, usage, support type, upstream entitlement, description, and related flags) locally for offline robustness.
  • Simplify pool import logic by removing lazy attribute loading and direct Candlepin pool-gone handling, using a single import path based on backend data.

Tests:

  • Update and extend pool model tests to validate the new import_data behavior and attribute mapping, and adjust event monitor tests to reflect the removal of import_pool events.

Chores:

  • Remove the ImportPool event type, associated Candlepin pool-created message handling, and related fixtures and tests.
  • Add a migration and upgrade task to backfill new pool attributes via a one-time pool import during the 4.21 upgrade.

Summary by CodeRabbit

  • New Features

    • Subscriptions API now includes an expired attribute.
    • Expired subscriptions are treated as read-only in the UI.
  • New Fields

    • Pool records now include additional attributes (architecture, roles, usage, support type, upstream entitlement id, description) and a boolean multi-entitlement flag.
  • Chores

    • Added an upgrade task to import pools and populate cached pool attributes.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've reviewed your changes and they look great!


Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@jturel jturel force-pushed the candlepin_import_events branch from 0745621 to cc44f7c on February 24, 2026 at 19:49
@jturel jturel changed the title Remove pool import events Refs #38953 - Remove import pool event consumption Feb 24, 2026
@jturel jturel marked this pull request as draft March 11, 2026 16:41
@jturel jturel force-pushed the candlepin_import_events branch from cc44f7c to a95a954 on March 20, 2026 at 01:47
@jturel jturel marked this pull request as ready for review March 20, 2026 14:03
Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 4 issues and left some high-level feedback:

  • The AddFieldsToKatelloPool migration’s down method doesn’t restore the multi_entitlement column default to its prior state, so the migration isn’t fully reversible; consider explicitly changing the default back (or making the migration explicitly irreversible).
  • The new katello:upgrades:4.21:import_pools task will run Katello::Pool.import_all on upgrade without any long-running/skip-failure flags; it may be worth revisiting whether this should be marked as long-running or otherwise guarded for large installations.
## Individual Comments

### Comment 1
<location path="app/models/katello/glue/candlepin/pool.rb" line_range="77-86" />
<code_context>
+        pool_attributes = {}
</code_context>
<issue_to_address>
**issue (bug_risk):** Mixed symbol access on `pool_json` and `attr` is likely incompatible with string-keyed backend data.

The previous version used string keys on `pool_json`/`attr`, which matches typical Candlepin JSON. The new symbol-based access will return `nil` unless `backend_data` is deep-symbolized elsewhere, leading to `NoMethodError` when you use `+` or iterate. Either keep string-based access or explicitly symbolize the keys when `backend_data` is constructed.
</issue_to_address>
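The failure mode described here is easy to reproduce in plain Ruby. The sample payload below is hypothetical, but the key behavior is exactly what JSON.parse produces:

```ruby
require 'json'

# Candlepin responses parsed with JSON.parse come back string-keyed.
pool_json = JSON.parse('{"productAttributes":[{"name":"roles","value":"server"}],"attributes":[]}')

pool_json[:productAttributes]   # => nil: symbol access misses string keys
pool_json['productAttributes']  # => [{"name"=>"roles", "value"=>"server"}]

# With symbol access, `pool_json[:productAttributes] + pool_json[:attributes]`
# would raise NoMethodError (nil + array). String keys with a guard work:
combined = (pool_json['productAttributes'] || []) + (pool_json['attributes'] || [])
```

In a Rails codebase, converting the payload with ActiveSupport's with_indifferent_access makes both access styles resolve, which is the direction a later revision of this PR takes.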

### Comment 2
<location path="app/models/katello/glue/candlepin/pool.rb" line_range="80-85" />
<code_context>
-
-        if pool_attributes.key?(:virtual)
-          pool_attributes[:virt_only] = pool_attributes["virtual"] == 'true'
+        IMPORT_ROOT_ATTRIBUTES.each do |name|
+          pool_attributes[name.underscore] = pool_json[name]
         end

-        if pool_attributes.key?("requires_host")
-          pool_attributes[:hypervisor_id] = ::Katello::Host::SubscriptionFacet.find_by(:uuid => pool_attributes["requires_host"])
-                                                                              .try(:host_id)
-        end
-
-        if pool_attributes.key?(:unmapped_guests_only) && pool_attributes[:unmapped_guests_only] == 'true'
-          pool_attributes[:unmapped_guest] = true
+        pool_attributes[:pool_type] = pool_json[:type]
+        pool_attributes[:stacking_id] = pool_json[:stackId]
+
+        combined_attributes = pool_json[:productAttributes] + pool_json[:attributes]
</code_context>
<issue_to_address>
**question (bug_risk):** Some previously-imported root fields (e.g. `consumed`, `type`) have changed handling and may no longer be populated as expected.

Previously we imported `%w(accountNumber contractNumber quantity startDate endDate accountNumber consumed)` and then set `:pool_type`/`:upstream_pool_id`. Now we only import `IMPORT_ROOT_ATTRIBUTES` and set `:pool_type`/`:stacking_id` from `pool_json[:type]`/`[:stackId]`, so:

* `consumed` is no longer imported.
* `type` is now mapped into `:pool_type` via a symbol key, which may be wrong given how other keys are accessed.

If `consumed`/`type` are still used elsewhere (reporting, UI, etc.), consider restoring `consumed` import or removing its usages to avoid a silent behavior change.
</issue_to_address>

### Comment 3
<location path="app/models/katello/glue/candlepin/pool.rb" line_range="17" />
<code_context>
     module ClassMethods
       def candlepin_data(cp_id)
         Katello::Resources::Candlepin::Pool.find(cp_id)
-      rescue Katello::Errors::CandlepinPoolGone
-        {}
       end
</code_context>
<issue_to_address>
**question:** Removing the `CandlepinPoolGone` rescue changes how missing pools are handled and may surface exceptions in existing flows.

With the rescue removed, any lookup for a deleted or expired pool will now raise instead of returning `{}`. If any remaining callers still expect missing pools to be tolerated (e.g., cleanup/upgrade paths), this could cause new errors. Consider reviewing call sites or adding targeted handling where missing pools are acceptable.
</issue_to_address>
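One way to act on this is to keep the model raising and add tolerance only at call sites where a missing pool is acceptable. A self-contained sketch; PoolGone, fetch_pool, and the hash-backed store are hypothetical stand-ins for Katello::Errors::CandlepinPoolGone and the Candlepin pool lookup:

```ruby
# Hypothetical stand-in for Katello::Errors::CandlepinPoolGone.
class PoolGone < StandardError; end

# Stand-in for the Candlepin lookup: raises when the pool no longer exists.
def fetch_pool(store, id)
  store.fetch(id) { raise PoolGone, "pool #{id} is gone" }
end

# A specific call site (e.g. a cleanup/upgrade path) opts in to tolerating
# missing pools, instead of candlepin_data swallowing the error for everyone.
def pool_data_or_empty(store, id)
  fetch_pool(store, id)
rescue PoolGone
  {}
end

backend = { 'p1' => { 'id' => 'p1' } }
pool_data_or_empty(backend, 'p1')   # => {"id"=>"p1"}
pool_data_or_empty(backend, 'gone') # => {}
```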

### Comment 4
<location path="db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb" line_range="10" />
<code_context>
+    add_column :katello_pools, :upstream_entitlement_id, :string
+    add_column :katello_pools, :description, :string
+
+    change_column :katello_pools, :multi_entitlement, :boolean, default: false
+  end
+
</code_context>
<issue_to_address>
**issue (bug_risk):** `down` does not restore the original `multi_entitlement` column definition.

The `up` migration changes `multi_entitlement` to a boolean with default `false`, but `down` only drops the new columns and leaves `multi_entitlement` in its modified state. Rolling back this migration will not fully restore the schema. Please either add a `change_column` in `down` to restore the original type/default, or refactor to a reversible `change` with `reversible do |dir|` so the migration is symmetric.
</issue_to_address>


Comment thread on app/models/katello/glue/candlepin/pool.rb:

      # rubocop:disable Metrics/CyclomaticComplexity
      def import_data
Member Author


This method was (imo) kind of a disaster. I took the opportunity to rewrite it because it was doing strange, unnecessary things and I couldn't understand it.


# rubocop:disable Metrics/CyclomaticComplexity
def import_data
pool_attributes = {}.with_indifferent_access
Member


Idk if Sourcery knows about .with_indifferent_access, but it sure gets mad that the keys might not be strings..

Member Author


I can see that! 🤣

@jturel
Member Author

jturel commented Mar 20, 2026

@jeremylenz you'll definitely want this in place (db migrated + upgrade task run) for those expiring subscriptions you have in a few days.

One last (I think) scenario to consider here has to do with expired subscriptions on the Subscriptions page. Modifying their entitlements won't work, since the upstream pool and entitlement are likely gone by the time someone would try to do that. Need to take a look at that, but I think there's plenty here to test / review in the meantime if you don't want to wait.

@jturel jturel requested a review from jeremylenz March 31, 2026 19:44
Member

@jeremylenz jeremylenz left a comment


Okay, I've run the migration + upgrade task and it was fine. My subscription expires Friday at midnight, and just to be safe I've synced content and installed a package on my hosts. Let's see what happens!

[screenshot attached]

Comment thread on app/services/katello/candlepin/event_handler.rb:
rescue ActiveRecord::RecordInvalid
# if we hit this block it's likely that the pool's subscription, product are being created
# as a result of manifest import/refresh or custom product creation
Rails.logger.warn("Unable to import pool. It will likely be created by another process.")
Member


I sure hope it's created by another process now! 😄

@jturel jturel force-pushed the candlepin_import_events branch from f10d52a to 036b911 on April 2, 2026 at 20:51
@coderabbitai

coderabbitai Bot commented Apr 2, 2026

📝 Walkthrough

Removed the event-driven pool import triggered by pool.created messages, along with the related test and fixture code; refactored pool import into a single-pass, whitelist-driven implementation; added DB columns and a rake upgrade task to import pools in batch; and surfaced pool expiration in the API and UI to disable editing for expired pools.

Changes

  • Event handling & message consumer
    Files: app/lib/katello/resources/candlepin/upstream_consumer.rb, app/services/katello/candlepin/event_handler.rb, app/services/katello/candlepin/message_handler.rb, lib/katello/engine.rb
    Removed the content_access method from UpstreamConsumer, removed the subject-based pool.created import invocation, deleted import_pool on MessageHandler, and deregistered the ImportPool event from engine registration.
  • Removed event class
    Files: app/models/katello/events/import_pool.rb
    Deleted the Katello::Events::ImportPool event class and its EVENT_TYPE constant.
  • Pool import refactor
    Files: app/models/katello/glue/candlepin/pool.rb, app/models/katello/pool.rb
    Rewrote the import logic to use the IMPORT_ROOT_ATTRIBUTES and IMPORT_ATTRIBUTES whitelists, removed the lazy accessor and import_lazy_attributes, and simplified import_data to build pool_attributes in one pass and call update! + create_product_associations; added an expired? instance method on Pool.
  • Database schema & migration
    Files: db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb
    Added the columns arch, roles, usage, support_type, upstream_entitlement_id, and description to katello_pools; changed multi_entitlement to a boolean with default false (in the migration's up).
  • Batch import task & upgrade registration
    Files: lib/katello/tasks/upgrades/4.21/import_pools.rake, db/seeds.d/111-upgrade_tasks.rb
    Added a rake task katello:upgrades:4.21:import_pools that calls Katello::Pool.import_all and registered the task in the upgrade seeds.
  • API & UI changes (expired state)
    Files: app/views/katello/api/v2/subscriptions/base.json.rabl, webpack/components/pf3Table/formatters/EntitlementsInlineEditFormatter.js, webpack/scenes/Subscriptions/components/SubscriptionsTable/SubscriptionsTable.js
    Added the expired attribute to subscription JSON; disabled inline editing for expired entitlements and prevented editing of expired subscription rows with upstream pool IDs.
  • Tests & fixtures
    Files: test/fixtures/candlepin_messages/pool.created.json, test/services/candlepin/message_handler_test.rb, test/services/katello/candlepin/event_handler_test.rb, test/models/pool_test.rb, test/services/katello/event_monitor/poller_thread_test.rb
    Removed the pool.created fixture and related tests; updated the pool import tests and fixtures to validate the new import mapping; adjusted the PollerThread test to use generic event instance creation.

Sequence Diagram(s)

mermaid
sequenceDiagram
participant Candlepin
participant MessageHandler
participant EventQueue
participant ImportPoolEvent
participant PoolModel
participant DB

Note over Candlepin,MessageHandler: Old flow (event-driven)
Candlepin->>MessageHandler: publish pool.created
MessageHandler->>EventQueue: enqueue import_pool
EventQueue->>ImportPoolEvent: schedule/run event
ImportPoolEvent->>PoolModel: import_pool(pool_id)
PoolModel->>DB: update pool attributes

mermaid
sequenceDiagram
participant Admin
participant RakeTask
participant PoolModel
participant Candlepin
participant DB

Note over Admin,RakeTask: New flow (batch import)
Admin->>RakeTask: run katello:upgrades:4.21:import_pools
RakeTask->>PoolModel: Katello::Pool.import_all
PoolModel->>Candlepin: fetch pool backend data
PoolModel->>DB: update! pool attributes in one pass

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

🚥 Pre-merge checks | ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 0.00%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅: the title 'Refs #38953 - Remove import pool event consumption' directly and clearly summarizes the main change, the removal of event-driven pool import handling.



@jturel
Member Author

jturel commented Apr 2, 2026

Just a rebase to ensure the build is still good


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (1)
db/seeds.d/111-upgrade_tasks.rb (1)

14-14: Consider marking this upgrade task as long-running.

Given this performs a global pool import, adding :long_running => true would improve upgrade scheduling/timeout handling.

🔧 Suggested tweak
-    {:name => 'katello:upgrades:4.21:import_pools'},
+    {:name => 'katello:upgrades:4.21:import_pools', :long_running => true},

📥 Commits

Reviewing files that changed from the base of the PR and between cc28088 and 036b911.

📒 Files selected for processing (18)
  • app/lib/katello/resources/candlepin/upstream_consumer.rb
  • app/models/katello/events/import_pool.rb
  • app/models/katello/glue/candlepin/pool.rb
  • app/models/katello/pool.rb
  • app/services/katello/candlepin/event_handler.rb
  • app/services/katello/candlepin/message_handler.rb
  • app/views/katello/api/v2/subscriptions/base.json.rabl
  • db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb
  • db/seeds.d/111-upgrade_tasks.rb
  • lib/katello/engine.rb
  • lib/katello/tasks/upgrades/4.21/import_pools.rake
  • test/fixtures/candlepin_messages/pool.created.json
  • test/models/pool_test.rb
  • test/services/candlepin/message_handler_test.rb
  • test/services/katello/candlepin/event_handler_test.rb
  • test/services/katello/event_monitor/poller_thread_test.rb
  • webpack/components/pf3Table/formatters/EntitlementsInlineEditFormatter.js
  • webpack/scenes/Subscriptions/components/SubscriptionsTable/SubscriptionsTable.js
💤 Files with no reviewable changes (8)
  • app/services/katello/candlepin/message_handler.rb
  • test/fixtures/candlepin_messages/pool.created.json
  • app/lib/katello/resources/candlepin/upstream_consumer.rb
  • app/models/katello/events/import_pool.rb
  • app/services/katello/candlepin/event_handler.rb
  • test/services/candlepin/message_handler_test.rb
  • test/services/katello/candlepin/event_handler_test.rb
  • lib/katello/engine.rb

Comment on lines +77 to +107 in app/models/katello/glue/candlepin/pool.rb (diff reconstructed from the garbled review excerpt; removed lines are prefixed with -, added lines with +):

+        pool_attributes = {}
         pool_json = self.backend_data

-        product_attributes = pool_json["productAttributes"] + pool_json["attributes"]
-        product_attributes.map { |attr| pool_attributes[attr["name"].underscore.to_sym] = attr["value"] }
-
-        %w(accountNumber contractNumber quantity startDate endDate accountNumber consumed).each do |json_attribute|
-          pool_attributes[json_attribute.underscore] = pool_json[json_attribute]
-        end
-        pool_attributes[:pool_type] = pool_json["type"] if pool_json.key?("type")
-        pool_attributes[:upstream_pool_id] = pool_json["upstreamPoolId"] if pool_json.key?("upstreamPoolId")
-
-        if pool_attributes.key?(:multi_entitlement)
-          pool_attributes[:multi_entitlement] = pool_attributes[:multi_entitlement] == "yes"
-        end
-
-        if pool_attributes.key?(:virtual)
-          pool_attributes[:virt_only] = pool_attributes["virtual"] == 'true'
-        end
-
-        if pool_attributes.key?("requires_host")
-          pool_attributes[:hypervisor_id] = ::Katello::Host::SubscriptionFacet.find_by(:uuid => pool_attributes["requires_host"])
-                                                                              .try(:host_id)
-        end
-
-        if pool_attributes.key?(:unmapped_guests_only) && pool_attributes[:unmapped_guests_only] == 'true'
-          pool_attributes[:unmapped_guest] = true
-        end
+        IMPORT_ROOT_ATTRIBUTES.each do |name|
+          pool_attributes[name.underscore] = pool_json[name]
+        end
+
+        pool_attributes[:pool_type] = pool_json[:type]
+        pool_attributes[:stacking_id] = pool_json[:stackId]
+
+        combined_attributes = pool_json[:productAttributes] + pool_json[:attributes]
+        combined_attributes.each do |attr|
+          case attr[:name]
+          when 'multi-entitlement'
+            pool_attributes[:multi_entitlement] = attr[:value] == 'yes'
+          when 'requires_host'
+            pool_attributes[:hypervisor_id] = ::Katello::Host::SubscriptionFacet.find_by(uuid: attr[:value])&.host_id
+          when 'unmapped_guests_only'
+            pool_attributes[:unmapped_guest] = attr[:value] == 'true'
+          when 'virt_only'
+            pool_attributes[:virt_only] = attr[:value] == 'true'
+          when 'virt_limit'
+            pool_attributes[:virt_who] = attr[:value].to_i > 0
+          else
+            if IMPORT_ATTRIBUTES.include?(attr[:name])
+              pool_attributes[attr[:name]] = attr[:value]
+            end
+          end
+        end

-        pool_attributes[:virt_who] = (pool_attributes['virt_limit'].present? && pool_attributes['virt_limit'] != "0")
-
-        pool_attributes['stack_id'] = pool_json['stackId']
-        exceptions = pool_attributes.keys.map(&:to_sym) - self.attribute_names.map(&:to_sym)
-        self.update(pool_attributes.except!(*exceptions))
-        self.save!
-        self.create_product_associations
+        update!(pool_attributes)

@coderabbitai coderabbitai Bot Apr 2, 2026


⚠️ Potential issue | 🟠 Major

Reset optional cached fields before each re-import.

pool_attributes only gets keys for attributes that are present in the current Candlepin payload. If a later full reconcile drops requires_host, virt_limit, roles, etc., update! leaves the previous DB values behind, so the local cache can drift from Candlepin instead of reflecting the new state.

Proposed fix
       def import_data
-        pool_attributes = {}
+        pool_attributes = {
+          multi_entitlement: false,
+          hypervisor_id: nil,
+          unmapped_guest: false,
+          virt_only: false,
+          virt_who: false,
+          ram: nil,
+          arch: nil,
+          support_type: nil,
+          roles: nil,
+          usage: nil,
+          description: nil,
+        }
         pool_json = self.backend_data

Member Author


This is expected

Comment on lines +65 to +67
def expired?
end_date < Time.zone.now
end

@coderabbitai coderabbitai Bot Apr 2, 2026


⚠️ Potential issue | 🟠 Major

Guard expired? against nil end_date.

Line 66 can raise when end_date is null. Since this method is now serialized to API, one bad record can fail response generation.

🔧 Proposed fix
 def expired?
-  end_date < Time.zone.now
+  return false if end_date.blank?
+  end_date < Time.zone.now
 end

Member Author


If end_date is nil then let the error raise because there's a data consistency problem that needs to be known.



@jturel, that's a fair point — failing loudly on a nil end_date is a reasonable data-integrity stance, and silently returning false would mask a corrupted record. I'll defer to your judgment here.


✏️ Learnings added
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/pool.rb:65-67
Timestamp: 2026-04-02T22:11:04.461Z
Learning: In `app/models/katello/pool.rb` (Ruby on Rails, Katello engine), the `expired?` method on `Katello::Pool` intentionally does NOT guard against a nil `end_date`. A nil `end_date` is considered a data consistency problem that should surface as an exception rather than being silently ignored. Do not suggest adding a nil guard to this method.


Comment on lines +13 to +20
def down
remove_column :katello_pools, :arch
remove_column :katello_pools, :roles
remove_column :katello_pools, :usage
remove_column :katello_pools, :support_type
remove_column :katello_pools, :upstream_entitlement_id
remove_column :katello_pools, :description
end
@coderabbitai (Bot), Apr 2, 2026

⚠️ Potential issue | 🟠 Major

Make rollback fully reversible for multi_entitlement.

up changes the default for katello_pools.multi_entitlement (Line 10), but down never restores the prior default. A rollback will leave schema state altered.

🔧 Proposed fix
 def down
   remove_column :katello_pools, :arch
   remove_column :katello_pools, :roles
   remove_column :katello_pools, :usage
   remove_column :katello_pools, :support_type
   remove_column :katello_pools, :upstream_entitlement_id
   remove_column :katello_pools, :description
+  change_column_default :katello_pools, :multi_entitlement, from: false, to: nil
 end
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb` around lines
13 - 20, The migration’s down method currently removes several columns but does
not revert the default change applied to katello_pools.multi_entitlement in up;
update the down method to explicitly restore the previous default for the
multi_entitlement column (i.e., add a change_column_default / revert default
call for :katello_pools, :multi_entitlement to what it was before this migration
— for example revert from true back to the prior value, commonly false or nil)
so rolling back fully restores the prior schema state; locate the change in the
same migration file and modify the down method to include that revert alongside
the existing remove_column calls.
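The reviewer's reversibility concern can be sketched with a toy in-memory schema in plain Ruby. The hash-based `schema` and the lambdas are hypothetical stand-ins for the real migration (whose author notes below that it is not intended to be rolled back); column names mirror the actual migration:

```ruby
# Toy model of the migration's up/down: unless down also reverts the
# default change, a rollback leaves the schema in an altered state.
schema = { 'multi_entitlement' => { type: :boolean, default: nil } }

up = lambda do |s|
  %w[arch roles usage support_type upstream_entitlement_id description].each do |col|
    s[col] = { type: :string, default: nil }  # add_column
  end
  s['multi_entitlement'][:default] = false    # change_column default
end

down = lambda do |s|
  %w[arch roles usage support_type upstream_entitlement_id description].each do |col|
    s.delete(col)                             # remove_column
  end
  s['multi_entitlement'][:default] = nil      # the revert the bot suggested
end

before = Marshal.load(Marshal.dump(schema))   # deep copy of the prior state
up.call(schema)
down.call(schema)
puts(schema == before)  # true: with the revert, rollback restores prior state
```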

Member Author

This is fine - this won't be rolled back.


@jturel, understood — acknowledged and moving on!


✏️ Learnings added
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb:13-20
Timestamp: 2026-04-02T22:11:20.312Z
Learning: In `db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb`, the `down` method intentionally does not revert the `change_column :katello_pools, :multi_entitlement, :boolean, default: false` applied in `up`. This migration is not intended to be rolled back, so the incomplete `down` method is acceptable by design.



🧠 Learnings used
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/glue/candlepin/pool.rb:77-107
Timestamp: 2026-04-02T22:10:30.149Z
Learning: In `app/models/katello/glue/candlepin/pool.rb`, the `import_data` method in `Katello::Glue::Candlepin::Pool::InstanceMethods` intentionally does NOT reset optional cached fields (e.g., `hypervisor_id`, `virt_only`, `virt_who`, `roles`, `arch`, `support_type`, etc.) to nil/false before applying the current Candlepin payload. Fields absent from the payload are left as-is in the DB; this is expected behavior by design.

@jturel
Member Author

jturel commented Apr 3, 2026

Added testing steps on how to create an expired pool from an existing Red Hat pool.

@jeremylenz
Member

Okay, I still see the subscriptions but I cannot edit the entitlements by hovering over them, unlike all the others. I'm assuming that's the expected behavior? The subscription will be gone when I refresh my manifest, right?

@jeremylenz
Member

The rh_cloud failures should be fixed after a rebase: theforeman/foreman_rh_cloud#1193

@jturel jturel force-pushed the candlepin_import_events branch from 036b911 to 1b9561e Compare April 6, 2026 14:24
@jturel
Member Author

jturel commented Apr 6, 2026

Okay, I still see the subscriptions but I cannot edit the entitlements by hovering over them, unlike all the others. I'm assuming that's the expected behavior? The subscription will be gone when I refresh my manifest, right?

Right - the reason you can't edit them is that the upstream pool and entitlement are also expired/gone, so it would be a 404. Refreshing cleans it up - but don't take my word for it :)

@coderabbitai (Bot) left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@app/models/katello/glue/candlepin/pool.rb`:
- Around line 77-78: The code in Pool#import_all is re-querying Candlepin via
self.backend_data (which falls back to Pool.find) causing N+1 lookups and aborts
when a pool vanishes; change the assignment in
app/models/katello/glue/candlepin/pool.rb so pool_json is taken from the
already-preloaded payload (the per-item backend_data set by get_for_owner) and
do not call Pool.find or re-fetch; if backend_data is nil treat the pool as
missing/skip or destroy it (mirror the pattern shown in CandlepinObject where
item.backend_data = candlepin_records[item.cp_id] and item.import_data) and
remove any reliance on CandlepinPoolGone around this lookup.

📥 Commits

Reviewing files that changed from the base of the PR and between 036b911 and 1b9561e.

📒 Files selected for processing (18)
  • app/lib/katello/resources/candlepin/upstream_consumer.rb
  • app/models/katello/events/import_pool.rb
  • app/models/katello/glue/candlepin/pool.rb
  • app/models/katello/pool.rb
  • app/services/katello/candlepin/event_handler.rb
  • app/services/katello/candlepin/message_handler.rb
  • app/views/katello/api/v2/subscriptions/base.json.rabl
  • db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb
  • db/seeds.d/111-upgrade_tasks.rb
  • lib/katello/engine.rb
  • lib/katello/tasks/upgrades/4.21/import_pools.rake
  • test/fixtures/candlepin_messages/pool.created.json
  • test/models/pool_test.rb
  • test/services/candlepin/message_handler_test.rb
  • test/services/katello/candlepin/event_handler_test.rb
  • test/services/katello/event_monitor/poller_thread_test.rb
  • webpack/components/pf3Table/formatters/EntitlementsInlineEditFormatter.js
  • webpack/scenes/Subscriptions/components/SubscriptionsTable/SubscriptionsTable.js
💤 Files with no reviewable changes (8)
  • app/services/katello/candlepin/event_handler.rb
  • lib/katello/engine.rb
  • app/models/katello/events/import_pool.rb
  • test/fixtures/candlepin_messages/pool.created.json
  • app/services/katello/candlepin/message_handler.rb
  • test/services/candlepin/message_handler_test.rb
  • test/services/katello/candlepin/event_handler_test.rb
  • app/lib/katello/resources/candlepin/upstream_consumer.rb
✅ Files skipped from review due to trivial changes (3)
  • db/seeds.d/111-upgrade_tasks.rb
  • webpack/scenes/Subscriptions/components/SubscriptionsTable/SubscriptionsTable.js
  • lib/katello/tasks/upgrades/4.21/import_pools.rake
🚧 Files skipped from review as they are similar to previous changes (5)
  • app/views/katello/api/v2/subscriptions/base.json.rabl
  • app/models/katello/pool.rb
  • webpack/components/pf3Table/formatters/EntitlementsInlineEditFormatter.js
  • db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb
  • test/services/katello/event_monitor/poller_thread_test.rb

Comment on lines +77 to 78
pool_attributes = {}
pool_json = self.backend_data
@coderabbitai (Bot), Apr 6, 2026

⚠️ Potential issue | 🟠 Major

Avoid re-fetching pool JSON during batch import.

import_all already has each pool payload from get_for_owner, but self.backend_data here still falls back to Pool.find(cp_id) when it has not been preloaded. The updated tests stub both calls, so this extra lookup is live today. With the CandlepinPoolGone rescue removed, a pool that disappears between the bulk fetch and this per-pool fetch now aborts the org import/upgrade, and every full reconcile pays an N+1 Candlepin cost.

Suggested direction
# app/models/katello/glue/candlepin/candlepin_object.rb
objects.each do |item|
  exists_in_candlepin = candlepin_records.key?(item.cp_id)

  if exists_in_candlepin
    item.backend_data = candlepin_records[item.cp_id]
    item.import_data
  else
    item.destroy
  end
end
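For illustration, the update-or-destroy reconcile suggested above can be exercised in plain Ruby. `PoolRecord` and the in-memory `db` list are hypothetical stand-ins for ActiveRecord models; the bulk `candlepin_records` hash plays the role of the single `get_for_owner` payload:

```ruby
# Plain-Ruby sketch of the reconcile pattern: refresh pools from a preloaded
# bulk payload and destroy ones that vanished from Candlepin, with no
# per-pool re-fetch (avoiding the N+1 lookups described above).
PoolRecord = Struct.new(:cp_id, :backend_data) do
  def import_data
    # In Katello this copies fields from backend_data into the DB row.
    self.backend_data = backend_data.merge('imported' => true)
  end
end

db = [PoolRecord.new('pool-1'), PoolRecord.new('pool-2'), PoolRecord.new('pool-3')]

# Bulk payload from one /owners call; 'pool-2' has vanished upstream.
candlepin_records = {
  'pool-1' => { 'id' => 'pool-1', 'quantity' => 5 },
  'pool-3' => { 'id' => 'pool-3', 'quantity' => 1 },
}

db.reject! do |item|
  if candlepin_records.key?(item.cp_id)
    item.backend_data = candlepin_records[item.cp_id]
    item.import_data
    false  # keep: refreshed from the preloaded payload
  else
    true   # "destroy": gone from Candlepin
  end
end

puts db.map(&:cp_id).inspect  # ["pool-1", "pool-3"]
```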
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/models/katello/glue/candlepin/pool.rb` around lines 77 - 78, The code in
Pool#import_all is re-querying Candlepin via self.backend_data (which falls back
to Pool.find) causing N+1 lookups and aborts when a pool vanishes; change the
assignment in app/models/katello/glue/candlepin/pool.rb so pool_json is taken
from the already-preloaded payload (the per-item backend_data set by
get_for_owner) and do not call Pool.find or re-fetch; if backend_data is nil
treat the pool as missing/skip or destroy it (mirror the pattern shown in
CandlepinObject where item.backend_data = candlepin_records[item.cp_id] and
item.import_data) and remove any reliance on CandlepinPoolGone around this
lookup.

Member Author

I agree with this feedback, but that behavior wasn't changed as part of this PR and shouldn't be addressed here. Pools are in fact loaded from Candlepin twice during import_all: once via the /owners API and again individually.


@jturel, understood — thanks for the clarification. The double-fetch (once via the /owners bulk API and once individually per pool via backend_data) is pre-existing behavior and out of scope here. I'll stop pressing on this.

Would you like me to open a follow-up GitHub issue to track the N+1 Candlepin lookup optimization so it isn't lost?


🧠 Learnings used
Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/glue/candlepin/pool.rb:77-107
Timestamp: 2026-04-02T22:10:30.149Z
Learning: In `app/models/katello/glue/candlepin/pool.rb`, the `import_data` method in `Katello::Glue::Candlepin::Pool::InstanceMethods` intentionally does NOT reset optional cached fields (e.g., `hypervisor_id`, `virt_only`, `virt_who`, `roles`, `arch`, `support_type`, etc.) to nil/false before applying the current Candlepin payload. Fields absent from the payload are left as-is in the DB; this is expected behavior by design.

Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/glue/candlepin/pool.rb:84-87
Timestamp: 2026-04-02T22:10:17.185Z
Learning: In Katello (app/models/katello/glue/candlepin/pool.rb), `pool_json` (i.e., `self.backend_data`) in `import_data` is always a `HashWithIndifferentAccess` because it is sourced exclusively through `Katello::Resources::Candlepin::Pool.find`, which normalizes the Candlepin JSON response with `.with_indifferent_access`. Symbol-key reads on `pool_json` (e.g., `pool_json[:productAttributes]`, `pool_json[:attributes]`, `pool_json[:type]`, `pool_json[:stackId]`) are therefore safe.

Learnt from: jturel
Repo: Katello/katello PR: 11647
File: app/models/katello/pool.rb:65-67
Timestamp: 2026-04-02T22:11:04.461Z
Learning: In `app/models/katello/pool.rb` (Ruby on Rails, Katello engine), the `expired?` method on `Katello::Pool` intentionally does NOT guard against a nil `end_date`. A nil `end_date` is considered a data consistency problem that should surface as an exception rather than being silently ignored. Do not suggest adding a nil guard to this method.

Learnt from: jturel
Repo: Katello/katello PR: 11647
File: db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb:13-20
Timestamp: 2026-04-02T22:11:20.312Z
Learning: In `db/migrate/20260301205720_add_fields_to_katello_pool.katello.rb`, the `down` method intentionally does not revert the `change_column :katello_pools, :multi_entitlement, :boolean, default: false` applied in `up`. This migration is not intended to be rolled back, so the incomplete `down` method is acceptable by design.

@jeremylenz
Member

I also got these notifications even after the subs expired:

[screenshot: expiring-subscription notifications]

might be worth thinking about updating that.

@jturel
Member Author

jturel commented Apr 6, 2026

I also got these notifications even after the subs expired:
might be worth thinking about updating that.

I'm not so sure.

  • The notifications are driven by the pools returned here:

def expiring_subscriptions
  subscriptions.select(&:expiring_soon?)
end

  • Organization#expiring_subscriptions ultimately calls this:

def expiring_soon?
  days_until_expiration >= 0 &&
    days_until_expiration <= Setting[:expire_soon_days].to_i
end

Testing that against an expired pool:

[14] pry(main)> p.end_date
=> Thu, 02 Apr 2026 00:42:53.434344000 UTC +00:00
[15] pry(main)> p.expired?
=> true
[16] pry(main)> p.expiring_soon?
=> false

Let me know if I missed your point!
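The pry result above can be reproduced with a self-contained plain-Ruby sketch. `PoolStub` is a hypothetical stand-in, `EXPIRE_SOON_DAYS` replaces `Setting[:expire_soon_days]`, and plain `Time.now` replaces `Time.zone.now`; it shows why an already-expired pool fails the `>= 0` check and is never reported as "expiring soon":

```ruby
# Stand-in constant for Setting[:expire_soon_days].
EXPIRE_SOON_DAYS = 120

PoolStub = Struct.new(:end_date) do
  def days_until_expiration
    # Goes negative once the pool is past its end date.
    ((end_date - Time.now) / 86_400).to_i
  end

  def expired?
    end_date < Time.now
  end

  def expiring_soon?
    days_until_expiration >= 0 && days_until_expiration <= EXPIRE_SOON_DAYS
  end
end

expired_pool = PoolStub.new(Time.now - 4 * 86_400)   # ended 4 days ago
soon_pool    = PoolStub.new(Time.now + 30 * 86_400)  # ends in 30 days

puts expired_pool.expired?        # true
puts expired_pool.expiring_soon?  # false: days_until_expiration is -4
puts soon_pool.expiring_soon?     # true
```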

@jturel
Member Author

jturel commented Apr 6, 2026

Can you show which pools were included in that notification? Organization.find_by(label: 'Default_Organization').pools.select(&:expiring_soon?) Assuming you have a non-empty list ... hopefully

If the list is empty, compare the end dates of any expired pools vs the timestamps (from DB - not the UI) of that notification, and share here.

@jeremylenz
Member

Turns out my notification is for an unrelated subscription, so all is good!

@jeremylenz (Member) left a comment

ok, I'm good with this! Works as expected and the refresh cleaned up the expired subscription.

ACK 👍

@jturel
Member Author

jturel commented Apr 6, 2026

ok, I'm good with this! Works as expected and the refresh cleaned up the expired subscription.

ACK 👍

Thanks! Merge when ready then

@jeremylenz jeremylenz merged commit 2d9ea7f into Katello:master Apr 6, 2026
23 of 24 checks passed
@jturel jturel deleted the candlepin_import_events branch April 6, 2026 18:54