Continue with RG-L Fix in the HitReader and AHDCEngine #1207

Open
mathieuouillon wants to merge 6 commits into development from rgl-hit-fix
Conversation

@mathieuouillon
Collaborator

@mathieuouillon mathieuouillon commented Apr 11, 2026

Skip per-wire raw-hit cut lookups in simulation mode in HitReader.fetch_AHDCHits; they were being read and then discarded, and could fail when the sim CCDB run has no cut entries. The cut lookups now live inside the existing !sim branch alongside the ToT and ADC-gain corrections.

Fix a sticky-mode bug in AHDCEngine: once a single event exceeded MAX_HITS_FOR_AI, the instance modeTrackFinding field was overwritten to CV_Distance for the rest of the run. It now uses a per-event effectiveMode local.
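The sticky-mode fix can be sketched as below; this is a minimal stand-in, not the real AHDCEngine (the names modeTrackFinding, MAX_HITS_FOR_AI, and CV_Distance are taken from the PR text, everything else is assumed).

```java
// Sketch: decide the tracking mode per event through a local, so a single
// noisy event cannot permanently overwrite the configured instance field.
public class ModeDemo {
    enum Mode { AI, CV_Distance }

    static final int MAX_HITS_FOR_AI = 200;
    // Configured once at init; must never be mutated per event.
    static Mode modeTrackFinding = Mode.AI;

    // Per-event decision returned as a local value: a noisy event falls back
    // to CV_Distance for this event only, later events see AI again.
    static Mode effectiveMode(int nHits) {
        return (modeTrackFinding == Mode.AI && nHits > MAX_HITS_FOR_AI)
                ? Mode.CV_Distance
                : modeTrackFinding;
    }

    public static void main(String[] args) {
        System.out.println(effectiveMode(500)); // noisy event: falls back
        System.out.println(effectiveMode(50));  // next event: AI again
    }
}
```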

Register AHDC::interclusters and AHDC::docaclusters with registerOutputBank — both were being appended to the event without being declared.

Minor HitReader cleanup: drop unused instance fields for the calibration tables (they are passed as parameters), make T2Dfunction / fetch_AHDCHits / fetch_TrueAHDCHits private, collapse the DOCA branch to a ternary, and add Javadoc.

Drop the unused materialMap field and its MaterialMap/Material imports from AHDCEngine.

Rename all IndexedTable fields, parameters, and javadoc references in the ALERT engine suite to carry a "Table" suffix, making calibration-table variables easy to spot at a glance.

In simulation mode fetch_AHDCHits was still reading the per-wire
rawHitCuts table even though the pass/fail result was discarded,
which is wasted work and can fail when the sim CCDB run has no
cut entries. The cut lookups and pass check are now nested inside
the existing !sim branch alongside the ToT and ADC-gain corrections,
so sim events go straight from time calibration to DOCA.
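The control flow described above can be sketched as follows; the table lookups are reduced to a plain threshold and the method name is invented for illustration — only the nesting of the cut inside the !sim branch mirrors the actual change.

```java
// Sketch: in sim mode the raw-hit cut table is never consulted, so a sim
// CCDB run with no cut entries cannot fail, and sim events go straight
// from time calibration to DOCA.
public class SimBranchDemo {
    static boolean sim;

    // Returns true if the hit survives the calibration pipeline.
    static boolean processHit(double adc, double cutThreshold) {
        // time calibration would happen here, for both data and sim
        if (!sim) {
            // data only: raw-hit cuts plus ToT and ADC-gain corrections
            if (adc < cutThreshold) return false; // hit fails the cut
        }
        // DOCA computation follows for every surviving hit
        return true;
    }

    public static void main(String[] args) {
        sim = true;
        System.out.println(processHit(5, 100));  // sim: cut skipped
        sim = false;
        System.out.println(processHit(5, 100));  // data: cut applied
    }
}
```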

While in the file:
- Drop the rawHitCutsTable/timeOffsetsTable/... instance fields;
  the IndexedTables are passed through to fetch_AHDCHits and
  T2Dfunction as parameters, matching how they are used.
- Make T2Dfunction / fetch_AHDCHits / fetch_TrueAHDCHits private;
  nothing outside HitReader calls them.
- Collapse the DOCA branch to a single ternary.
- Add Javadoc on the class, constructor, calibration pipeline,
  T2D function, and the hit/true-hit accessors.
…p unused MaterialMap

- Add AHDC::interclusters and AHDC::docaclusters to registerOutputBank
  so framework bank management (clearing, schema lookup) sees them.
- Use a per-event effectiveMode local instead of overwriting the
  modeTrackFinding instance field when an event exceeds MAX_HITS_FOR_AI;
  previously a single noisy event forced CV_Distance for the rest of
  the run.
- Remove the unused materialMap field and its MaterialMap/Material
  imports; the Kalman filter no longer consumes it from AHDCEngine.
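The bank-registration point can be illustrated with a minimal stand-in; the bank names come from the PR, but the real registerOutputBank lives in ReconstructionEngine and this sketch only shows the idea of declaring every appended bank at init.

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch: every bank an engine appends to the event must be declared, so
// framework bank management (clearing, schema lookup) sees it.
public class BankRegistryDemo {
    final Set<String> outputBanks = new LinkedHashSet<>();

    void registerOutputBank(String... names) {
        for (String n : names) outputBanks.add(n);
    }

    // Called once at engine init.
    void init() {
        registerOutputBank("AHDC::hits", "AHDC::track");
        // the fix: these two were appended but never declared
        registerOutputBank("AHDC::interclusters", "AHDC::docaclusters");
    }

    public static void main(String[] args) {
        BankRegistryDemo e = new BankRegistryDemo();
        e.init();
        System.out.println(e.outputBanks.contains("AHDC::interclusters"));
    }
}
```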
Rename all IndexedTable fields, parameters, and javadoc references in the ALERT engine suite to carry a "Table" suffix, making calibration-table variables easy to spot at a glance. Touches AHDCEngine, ATOFEngine, ALERTEngine, HitReader, HitFinder, ATOFHit, and BarHit.
@baltzell
Collaborator

Thanks!

baltzell previously approved these changes Apr 13, 2026
@baltzell baltzell requested a review from ftouchte April 13, 2026 16:04
@mathieuouillon
Collaborator Author

New issue with the latest version of ALERTEngine: my previous fix only moved the problem. Now I have the following issue:

java.lang.IndexOutOfBoundsException: Index 2 out of bounds for length 2
        at java.base/jdk.internal.util.Preconditions.outOfBounds(Preconditions.java:100)
        at java.base/jdk.internal.util.Preconditions.outOfBoundsCheckIndex(Preconditions.java:106)
        at java.base/jdk.internal.util.Preconditions.checkIndex(Preconditions.java:302)
        at java.base/java.util.Objects.checkIndex(Objects.java:385)
        at java.base/java.util.ArrayList.get(ArrayList.java:427)
        at org.jlab.service.alert.ALERTEngine.processDataEvent(ALERTEngine.java:374)
        at org.jlab.clas.reco.ReconstructionEngine.filterEvent(ReconstructionEngine.java:368)
        at org.jlab.clas.reco.ReconstructionEngine.execute(ReconstructionEngine.java:407)
        at org.jlab.clara.sys.ServiceEngine.executeEngine(ServiceEngine.java:227)
        at org.jlab.clara.sys.ServiceEngine.execute(ServiceEngine.java:153)
        at org.jlab.clara.sys.Service.lambda$execute$2(Service.java:178)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:572)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:317)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)
        at java.base/java.lang.Thread.run(Thread.java:1583)

I will work on that.

The Kalman preprocessing loop read tracks back via AHDC_tracks.get(row), which breaks as soon as the empty-hit guard skips a row and desynchronises row from the list index. Build each Track through a local reference, initialise position/momentum/trackid, then append, so skipped rows never poison later iterations. Also log a warning on the skip branch so the upstream "AHDC::track row with no matching AHDC::hits" case is visible.
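The bug and fix can be sketched as below; Track is a minimal stand-in for the real ALERT class, and the hit lists are simplified to integer lists. The buggy pattern indexed the output list by the input row number, which drifts as soon as a row is skipped.

```java
import java.util.ArrayList;
import java.util.List;

public class TrackLoopDemo {
    record Track(int trackid) {}

    // Fixed pattern: build each Track through a local reference, initialise
    // it fully, then append, so skipped rows never poison later iterations.
    // (The buggy code instead did tracks.get(row) after a skip, which throws
    // IndexOutOfBoundsException once row outruns the list size.)
    static List<Track> build(List<List<Integer>> hitsPerRow) {
        List<Track> tracks = new ArrayList<>();
        for (int row = 0; row < hitsPerRow.size(); row++) {
            if (hitsPerRow.get(row).isEmpty()) {
                System.err.println("warning: AHDC::track row " + row
                        + " with no matching AHDC::hits, skipping");
                continue;
            }
            Track t = new Track(row); // initialise via the local...
            tracks.add(t);            // ...then append
        }
        return tracks;
    }

    public static void main(String[] args) {
        // row 1 has no hits; indexing by row would break on row 2
        List<List<Integer>> rows =
                List.of(List.of(1, 2), List.of(), List.of(3));
        System.out.println(build(rows).size()); // prints 2
    }
}
```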
@mathieuouillon
Collaborator Author

It should fix the issue, but I need to understand why a row in AHDC::track can ever have zero matching hits in AHDC::hits.

The AI candidate generator routinely emits overlapping TrackPredictions that share PreCluster (and therefore Hit) references. Accepting all predictions above threshold let later tracks silently steal earlier tracks' hits via in-place set_trackId() mutation, leaving orphan rows in AHDC::track with no matching rows in AHDC::hits, which in turn crashed the ALERTEngine Kalman loop with an IndexOutOfBoundsException inside Track(ArrayList<Hit>).

Sort predictions by score descending and greedily accept each one only if none of its PreClusters has already been claimed, enforcing one-hit-one-track.
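The greedy selection can be sketched as follows; Prediction and the integer PreCluster ids are simplified stand-ins for the real AI candidate types, not the actual API.

```java
import java.util.*;

public class GreedySelectDemo {
    record Prediction(double score, Set<Integer> preClusterIds) {}

    // Sort by score descending, then accept each prediction only if none of
    // its PreClusters has already been claimed: one-hit-one-track.
    static List<Prediction> accept(List<Prediction> predictions) {
        List<Prediction> sorted = new ArrayList<>(predictions);
        sorted.sort(Comparator.comparingDouble(Prediction::score).reversed());
        Set<Integer> claimed = new HashSet<>();
        List<Prediction> accepted = new ArrayList<>();
        for (Prediction p : sorted) {
            if (Collections.disjoint(claimed, p.preClusterIds())) {
                claimed.addAll(p.preClusterIds());
                accepted.add(p);
            }
        }
        return accepted;
    }

    public static void main(String[] args) {
        List<Prediction> preds = List.of(
                new Prediction(0.9, Set.of(1, 2)),
                new Prediction(0.8, Set.of(2, 3)), // overlaps winner: rejected
                new Prediction(0.7, Set.of(4)));
        System.out.println(accept(preds).size()); // prints 2
    }
}
```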
@mathieuouillon
Collaborator Author

OK, it is enough for one pull request, and I can process 35k events without any warning or error. It can be merged now.
