vf-tui: compare for all metrics #1117

Merged
snimu merged 5 commits into main from sebastian/vf-tui-compare-mode-2026-04-08
Apr 18, 2026

Conversation


snimu (Contributor) commented on Apr 8, 2026

Description

vf-tui is now always in group mode when we enter compare mode (previously the key sequence was v->g; now it's just v). There:

  • hitting enter on an arg will group by that arg's unique values, as before
  • but we can now also select the avg-reward column
  • hitting enter on it lets us select a metric other than the reward, and the table updates to show that metric's statistics (a sketch of this dispatch follows the list)
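
A minimal sketch of that dispatch, assuming a single cursor index that runs over the setting columns plus one trailing metric column; all names here are hypothetical stand-ins, not the identifiers in verifiers/scripts/tui.py:

```python
from typing import Callable


class CompareCursor:
    """Hypothetical model of the compare-mode cursor: indices
    0..len(settings)-1 are setting columns, the last index is the
    metric ("avg reward") column."""

    def __init__(
        self,
        setting_columns: list[str],
        toggle_grouping: Callable[[str], None],
        open_metric_selector: Callable[[], None],
    ) -> None:
        self.setting_columns = setting_columns
        self.col = 0
        self._toggle_grouping = toggle_grouping
        self._open_metric_selector = open_metric_selector

    def move(self, delta: int) -> None:
        # Left/right clamp the cursor to [0, len(settings)]; the cursor
        # is always active, with no separate grouping mode to enter.
        self.col = max(0, min(len(self.setting_columns), self.col + delta))

    def enter(self) -> None:
        if self.col < len(self.setting_columns):
            # Setting column: group rows by this arg's unique values.
            self._toggle_grouping(self.setting_columns[self.col])
        else:
            # Metric column: grouping an aggregate makes no sense, so
            # open the metric picker instead.
            self._open_metric_selector()
```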

Pressing v will immediately get us into group mode:

[Screenshot: compare screen opening directly in group mode]

Pressing enter on any of the arg columns still allows us to group by the unique values of that arg:

[Screenshot: table grouped by the selected arg's unique values]

But now, it's also possible to select the "avg reward" column (renamed from just "avg"):

[Screenshot: cursor on the "avg reward" column]

Pressing enter here doesn't group by the unique values (because that doesn't make any sense). Instead, it allows us to select a different metric:

[Screenshot: metric-selection dropdown]

Selecting one then shows that metric's results, just as the table previously showed the reward:

[Screenshot: table showing the selected metric's statistics]
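
To make the walkthrough concrete, here is a hedged sketch of the statistics the table computes once a metric is selected. The PR's actual extraction (`_extract_numeric_metric_values`) also reads `record["info"]["reward_signals"]`; this toy version only looks at `record["metrics"]`:

```python
import statistics


def metric_stats(records: list[dict], metric_key: str) -> dict[str, float] | None:
    # Collect numeric values for the chosen metric, skipping records
    # that don't carry it.
    values = [
        float(r["metrics"][metric_key])
        for r in records
        if isinstance(r.get("metrics", {}).get(metric_key), (int, float))
    ]
    if not values:
        return None
    return {"avg": statistics.fmean(values), "min": min(values), "max": max(values)}


records = [{"metrics": {"latency_s": 0.42}}, {"metrics": {"latency_s": 1.7}}]
print(metric_stats(records, "latency_s"))  # avg 1.06, min 0.42, max 1.7
```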

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Documentation update
  • Test improvement

Testing

  • All existing tests pass when running `uv run pytest` locally.
  • New tests have been added to cover the changes

Checklist

  • My code follows the style guidelines of this project as outlined in AGENTS.md
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • Any dependent changes have been merged and published

Note

Medium Risk
Adds new metric-selection and bucketing logic to the comparison TUI, changing how comparison tables are computed and navigated; main risk is UI/regression in table layout and metric aggregation/formatting across diverse value ranges.

Overview
Extends the run comparison TUI to compare any numeric metric, not just reward, via a new Select dropdown and a metric-aware avg/min/max + distribution “mix” display.

Refactors comparison navigation to a single cursor model (left/right/enter) that toggles grouping on setting columns or opens the metric selector when the metric column is selected, and updates copy/export to reflect the chosen metric.
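
The reward-specific "mix" can exploit reward's fixed 0-1 range, which is why the commits below swap its =0/=1 columns for min/max on other metrics. A hedged sketch of that reward-only idea, not the PR's actual code:

```python
def reward_mix(rewards: list[float], buckets: int = 5) -> tuple[float, float, list[int]]:
    # Reward is bounded in [0, 1], so the exact-0 and exact-1 fractions
    # are meaningful on their own; values strictly in between are
    # bucketed for the distribution bar.
    n = len(rewards)
    zeros = sum(r == 0.0 for r in rewards) / n
    ones = sum(r == 1.0 for r in rewards) / n
    counts = [0] * buckets
    for r in rewards:
        if 0.0 < r < 1.0:
            counts[min(int(r * buckets), buckets - 1)] += 1
    return zeros, ones, counts


print(reward_mix([0.0, 0.0, 0.3, 0.9, 1.0]))  # (0.4, 0.2, [0, 1, 0, 0, 1])
```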

Reviewed by Cursor Bugbot for commit f99c6d2. Bugbot is set up for automated code reviews on this repo.

snimu and others added 2 commits April 8, 2026 13:34
… bucketing

- Remove hidden `g` key requirement — cursor (←/→) is always active across
  setting columns + a metric column; Enter toggles grouping on settings or
  opens a metric picker on the avg column
- Add MetricSelectorScreen modal for choosing any numeric metric from results
- Store raw metric_values in RunOverviewStats for per-metric distribution
- Adaptive bucketing (_metric_bucket_counts) for non-reward metrics with
  red→green gradient; min/max columns replace =0/=1 for non-reward
- Use ratio=1 for setting columns so Rich handles layout; fixes column
  truncation at various terminal widths
- Use compact formatting (_format_compact_metric) for non-reward values

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
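
A hedged reimplementation of the adaptive-bucketing idea behind `_metric_bucket_counts`; the bucket count and the constant-metric edge case here are assumptions, not the PR's exact behavior:

```python
def metric_bucket_counts(values: list[float], buckets: int = 8) -> list[int]:
    # Arbitrary metrics have no fixed 0-1 range, so bucket over the
    # observed min..max; a red->green gradient can then color each
    # bucket by its position in that range.
    lo, hi = min(values), max(values)
    counts = [0] * buckets
    if hi == lo:
        counts[0] = len(values)  # constant metric: one bucket takes everything
        return counts
    width = (hi - lo) / buckets
    for v in values:
        counts[min(int((v - lo) / width), buckets - 1)] += 1
    return counts


print(metric_bucket_counts([1.0, 2.0, 2.5, 9.0], buckets=4))  # [3, 0, 0, 1]
```
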
- Remove MetricSelectorScreen modal in favor of a Select widget embedded
  in the compare screen panel
- Enter on the avg column opens the dropdown directly via action_show_overlay
- Focus returns to the screen after selection so Enter works normally on
  setting columns

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
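
A minimal sketch of that wiring with Textual's Select widget (action_show_overlay is the widget's own expand action, as the commit message notes); the app skeleton, widget IDs, and metric list are illustrative, not the PR's code:

```python
from textual.app import App, ComposeResult
from textual.widgets import DataTable, Select


class CompareSketch(App):
    def compose(self) -> ComposeResult:
        # The metric picker is embedded in the compare screen's panel
        # rather than living in a separate modal screen.
        yield Select(
            [(m, m) for m in ("reward", "accuracy", "latency_s")],
            prompt="metric",
            id="metric-select",
        )
        yield DataTable(id="compare-table")

    def key_enter(self) -> None:
        # When the cursor sits on the metric column, open the dropdown
        # directly instead of toggling grouping.
        self.query_one("#metric-select", Select).action_show_overlay()

    def on_select_changed(self, event: Select.Changed) -> None:
        # Rebuild the table for event.value here, then drop focus back
        # to the screen so Enter keeps working on setting columns.
        self.screen.set_focus(None)


if __name__ == "__main__":
    CompareSketch().run()
```
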
snimu and others added 2 commits April 8, 2026 14:12
Discard "reward" from metric_names before prepending it, since
_extract_numeric_metric_values can also extract a "reward" key from
record["metrics"] or record["info"]["reward_signals"].

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
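
The fix amounts to deduplicating before prepending; a tiny sketch of the idea (the function name is hypothetical):

```python
def ordered_metric_names(extracted: list[str]) -> list[str]:
    # Drop any extracted "reward" before prepending the canonical one,
    # so the metric list can never contain it twice.
    return ["reward"] + [n for n in extracted if n != "reward"]


print(ordered_metric_names(["accuracy", "reward", "latency_s"]))
# ['reward', 'accuracy', 'latency_s']
```
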
_reward_style uses hardcoded 0-1 thresholds that are meaningless for
arbitrary metrics. Now only applied when metric_key is "reward";
other metrics get plain bold styling.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
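
A hedged sketch of that guard; the 0.3/0.7 cutoffs below are illustrative placeholders, not the thresholds in `_reward_style`:

```python
def metric_style(metric_key: str, value: float) -> str:
    # 0-1 color thresholds only make sense for reward; every other
    # metric falls back to plain bold.
    if metric_key != "reward":
        return "bold"
    if value >= 0.7:
        return "bold green"
    if value >= 0.3:
        return "bold yellow"
    return "bold red"


print(metric_style("reward", 0.85), metric_style("latency_s", 0.85))
# bold green bold
```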

@cursor (bot) left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.


Reviewed by Cursor Bugbot for commit 88865e7.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
snimu merged commit 37a1b11 into main on Apr 18, 2026
6 checks passed
