fix(install): don't re-apply patch per peer variant in isolated install #28148
Conversation
When a patched dependency resolves to multiple isolated store variants (different peer hashes), the patch was being re-applied for each variant even when the package was already extracted and patched. Since the patched cache directory is shared across all peer variants, re-patching while another variant's hardlink task is running causes EPERM on Windows.

The patch is already applied during the initial extraction (in the PackageManagerTask callback). Skip the redundant re-application when the package is already in the cache (`missing_from_cache == false`).

Closes #28147

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
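The collision described above comes down to directory naming. The sketch below is a hypothetical TypeScript model of that naming scheme (the name formats and the `patchedCacheDir`/`storeDir` helpers are illustrative stand-ins, not Bun's actual implementation): the patched cache key omits the peer hash, so every peer variant of a patched package maps to one shared cache directory.

```typescript
// Illustrative model of the naming described in the commit message.
// Formats here are assumptions, not Bun's real on-disk layout.
function patchedCacheDir(pkg: string, version: string, patchHash: string): string {
  return `${pkg}@${version}_${patchHash}`; // no peer hash: shared across variants
}

function storeDir(pkg: string, version: string, peerHash: string): string {
  return `${pkg}@${version}+${peerHash}`; // peer hash: unique per variant
}

const variantA = storeDir("foo", "1.0.0", "aaaa");
const variantB = storeDir("foo", "1.0.0", "bbbb");
const cacheA = patchedCacheDir("foo", "1.0.0", "p123");
const cacheB = patchedCacheDir("foo", "1.0.0", "p123");

console.log(variantA !== variantB); // true: two distinct store variants
console.log(cacheA === cacheB);     // true: one shared patched cache directory
```

With one shared cache directory, variant B re-patching it while variant A's hardlink task is still reading from it is exactly the write/read race that surfaces as EPERM on Windows.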
Updated 2:28 AM PT - Mar 16th, 2026
🧪 To try this PR locally: `bunx bun-pr 28148` installs a local version of the PR as `bun-28148` (run it with `--bun`).
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
Walkthrough
Stop re-applying package patches during isolated installs; patches are applied during extraction, and entry tasks now start immediately. Added regression tests that verify a patched package is not re-patched per isolated store variant across reinstall/cache-hit paths.
🚥 Pre-merge checks: ✅ Passed checks (4 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/cli/install/isolated-install.test.ts`:
- Around line 658-673: The test's assertion is too weak: instead of only
checking storeDirs.length >= 1, require multiple variants by asserting
storeDirs.length >= 2 to ensure the multi-variant peer path is exercised, and
explicitly verify the patch by reading and asserting the patched package.json
contents for the installed one-optional-peer-dep both after the first install
and after the reinstall; update references to the storeDirs array, the
"one-optional-peer-dep@*" glob, packageDir, and the runBunInstall invocation to
locate the installed package.json(s) under the selected storeDir(s) and assert
the expected patched fields.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 23fa5618-ed97-4e3d-8272-832b317781dc
📒 Files selected for processing (2)
src/install/isolated_install.zig
test/cli/install/isolated-install.test.ts
- Use peer-deps@1.0.0 with provides-peer-deps-{1,2}-0-0 to create two
store variants with different peer hashes (no-deps@1.0.0 vs @2.0.0)
- Assert storeDirs.length >= 2 to ensure multi-variant path is exercised
- Verify the patch is applied in every store variant by reading index.js
- Verify both after first install and after cache-hit reinstall
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Use array join for patch to avoid template literal escaping issues
- Patch package.json instead of index.js (simpler, no backtick issues)
- Use direct spawn to capture stdout/stderr for better error diagnosis
- Verify patched package.json description field in each store variant

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
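The array-join technique mentioned in that commit can be sketched like this. The diff content below is illustrative, not the PR's actual patch file; the point is that building the string line by line sidesteps the escaping headaches of nesting a unified diff (which may itself contain backticks or `${`) inside a template literal.

```typescript
// Build a patch file's contents from an array of lines rather than a
// template literal, so nothing inside the diff needs escaping.
const patch = [
  "diff --git a/package.json b/package.json",
  "--- a/package.json",
  "+++ b/package.json",
  "@@ -1,3 +1,4 @@",
  " {",
  '   "name": "one-optional-peer-dep",',
  '+  "description": "patched!",',
  '   "version": "1.0.0"',
  " }",
].join("\n");

console.log(patch.split("\n").length); // 9 lines, no escaping needed
```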
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/cli/install/isolated-install.test.ts`:
- Around line 673-689: The test currently performs brittle substring checks
against stdout/stderr ("panic:"/"error:") after spawning the process (proc) and
reading stdout/stderr/exitCode; remove those substring assertions and replace
them with a single robust error check: assert that proc's exitCode (exitCode) is
0 and that stderr is empty (or only matches an allowed whitelist if there are
expected warnings). Update the assertions around stdout, stderr, and exitCode
used in this test (the spawn call, new Response(proc.stdout).text(), new
Response(proc.stderr).text(), and proc.exited) to reflect this: drop the
expect(stderr).not.toContain(...) and expect(stdout).not.toContain(... ) checks
and instead assert exitCode === 0 and expect(stderr).toBe("") (or
toMatch(allowedStderrPattern) if needed).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 6d1a30eb-6aa6-46e5-8926-83f48ccfbbf9
📒 Files selected for processing (1)
test/cli/install/isolated-install.test.ts
The same race condition exists in onPackageExtracted (Installer.zig), where the patch is re-applied for each entry after extraction completes. The PackageManagerTask callback already applied the patch during extraction, so re-applying per entry is both redundant and racy. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/install/isolated_install/Installer.zig (1)
60-78: 🧹 Nitpick | 🔵 Trivial
Remove unused `applyPackagePatch` method. This method has no callers in the codebase and is dead code. Removing it will eliminate maintenance burden and reduce confusion about its purpose.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@src/install/isolated_install/Installer.zig`:
- Around line 60-78: The method Installer.applyPackagePatch is dead code with no
callers; remove the entire public function definition (the applyPackagePatch
implementation that creates a PatchTask, calls apply(), and checks the callback
logger) from the Installer.zig file to eliminate the unused API surface; ensure
you also remove any related imports or symbols that become unused as a result
(e.g., references to PatchInfo.Patch, install.PatchTask usage, and any local
variables only used by this method) and run a build to confirm no references
remain.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 566959fd-5d34-44b4-b0b2-b8fe3807aa41
📒 Files selected for processing (1)
src/install/isolated_install/Installer.zig
Replace manual spawn with substring checks with runBunInstall, which already performs proper error/panic/exit-code assertions. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The previous commit incorrectly removed patch application entirely from onPackageExtracted, breaking isolated install with patches (the PackageManagerTask callback doesn't always apply the patch). Restore patch application but do it only ONCE before the entry loop starts, rather than per entry. This preserves correctness while still fixing the race condition: re-patching per entry could mutate the shared cache directory while a previously started hardlink task reads from it. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previous feedback addressed — patch is now applied once in both cache-hit and cache-miss paths, and the test properly creates multiple peer variants with content verification.
Extended reasoning...
Overview
This PR fixes a race condition in the isolated package installer where patched dependencies were redundantly re-patched per peer variant. It touches three files: isolated_install.zig (cache-hit path), Installer.zig (cache-miss/download path), and the test file isolated-install.test.ts.
Security risks
No security implications — the changes affect local file installation logic only, removing redundant filesystem operations rather than adding new ones.
Level of scrutiny
My previous review identified two issues: (1) the download-path onPackageExtracted still re-applied patches per entry, and (2) the regression test only created one peer variant. Both have been addressed in subsequent commits. The onPackageExtracted function now applies the patch once using the first entry before starting hardlink tasks, and the test uses provides-peer-deps-1-0-0/provides-peer-deps-2-0-0 to create genuine multiple peer variants with >= 2 assertion and content verification.
Other factors
The CI failures are in unrelated vendor code (libuv, tinycc warnings) and build infrastructure, not in this PR's changes. The error propagation change in Installer.zig (marking all entries as failed if patch fails) is actually more correct than the previous per-entry approach. CodeRabbit's earlier suggestions have also been addressed.
Use `bun patch` + `bun patch --commit` to create the patch file, avoiding manual unified diff format issues that caused test failures. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The lockfile already exists after the first install + patch commit, so subsequent reinstalls don't print "Saved lockfile". Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
```diff
 if (log.hasErrors()) {
     // monotonic is okay because we haven't started the task yet (it isn't running
     // on another thread)
-    entry_steps[entry_id.get()].store(.done, .monotonic);
-    this.onTaskFail(entry_id, .{ .patching = log });
-    continue;
+    patch_failed = true;
+    for (removed.value.items) |install_ctx| {
+        const entry_id = install_ctx.isolated_package_install_context;
+        entry_steps[entry_id.get()].store(.done, .monotonic);
+        this.onTaskFail(entry_id, .{ .patching = log });
+    }
 }
```
🟡 When a patch fails in onPackageExtracted, onTaskFail(entry_id, .{ .patching = log }) is called once per batched entry (lines 81-84), printing the identical "failed to patch package" error message and full patch log N times since all entries share the same pkg_id. Each call also redundantly invokes resumeUnblockedTasks() which iterates all entries. Consider printing the error once for the first entry, then silently marking remaining entries as failed.
Extended reasoning...
What the bug is
In the new onPackageExtracted code at lines 79-85, when patch application fails (log.hasErrors()), the error-reporting loop calls this.onTaskFail(entry_id, .{ .patching = log }) for every entry in the batch. Since all entries are batched by name+version (no peer hash in the task key), they all resolve to the same pkg_id, producing identical pkg_name and pkg_res values.
How it manifests
Looking at onTaskFail (lines 152-157), the `.patching` branch prints:
- "failed to patch package: {name}@{version}" via `Output.errGeneric`
- The full patch log via `patch_log.print(Output.errorWriter())`

Since pkg_name and pkg_res are derived from pkg_id (lines 136-137), and all entries share the same pkg_id, the user sees N identical copies of the error message plus N copies of the patch log details.
Step-by-step proof
1. Package `foo@1.0.0` has a patch and resolves to 3 peer variants (entries A, B, C), batched into one download task.
2. Download completes; `onPackageExtracted` applies the patch once using the first entry (line 78).
3. The patch fails - `log.hasErrors()` is true (line 79).
4. The loop at lines 81-84 iterates entries A, B, C, calling `onTaskFail` for each.
5. Each `onTaskFail` call resolves `pkg_id` → same `pkg_name` and `pkg_res` → prints identical error.
6. Each `onTaskFail` also calls `resumeUnblockedTasks()` (line 200), which iterates ALL store entries looking for blocked tasks - this runs 3 times unnecessarily.
Impact
This is a minor UX issue - users see duplicate error output when a patch fails on a package with multiple peer variants. With fail_early enabled (line 193), Global.exit(1) terminates after the first error, so only one message appears. But without fail_early, N duplicate errors are printed. The redundant resumeUnblockedTasks() calls are also wasteful but not functionally harmful.
Suggested fix
Print the error once for the first entry, then silently mark remaining entries as failed:
```zig
if (log.hasErrors()) {
    patch_failed = true;
    // Report the error once using the first entry
    const first_id = removed.value.items[0].isolated_package_install_context;
    entry_steps[first_id.get()].store(.done, .monotonic);
    this.onTaskFail(first_id, .{ .patching = log });
    // Silently fail remaining entries
    for (removed.value.items[1..]) |install_ctx| {
        const entry_id = install_ctx.isolated_package_install_context;
        entry_steps[entry_id.get()].store(.done, .monotonic);
        this.summary.fail += 1;
        this.decrementPendingTasks();
    }
}
```

Note: the per-entry store path cleanup in onTaskFail (lines 182-189) uses entry_id-specific paths, so that cleanup is still needed for each entry. The fix should preserve per-entry cleanup while deduplicating the error output.
Summary
When `missing_from_cache == false`, the patch was already applied during extraction — skip the redundant re-application.

Root Cause
The patched cache directory is named without a peer hash (`pkg@version_patchhash`), while store directories include the peer hash (`pkg@version+peerhash`). This means multiple store variants share one patched cache directory.

The code at `isolated_install.zig:994` was re-applying `applyPackagePatch()` per store entry even when the package was already cached (`missing_from_cache == false`). Since `startTask()` immediately schedules hardlinking on the thread pool, a second variant re-patching the shared cache directory while the first is hardlinking from it produces `ACCESS_DENIED` → `EPERM` on Windows.

The fix removes the redundant patch re-application. The patch is already applied once during initial extraction in the `PackageManagerTask` callback (`PackageManagerTask.zig:77`).

Test plan
- `test/cli/install/isolated-install.test.ts` — "patched package with multiple peer variants should not re-apply patch (Windows isolated install can re-apply patched dependency per peer variant and fail with EPERM during hardlinking #28147)"
- `bun install` with patched + peer-varied packages

Closes #28147
🤖 Generated with Claude Code