Conversation
- Create a new workspace package, packages/@anthropic-ai/model-provider
- Define the ModelProviderHooks interface (dependency injection: analytics, cost, logging, etc.)
- Define the ClientFactories interface (Anthropic/OpenAI/Gemini/Grok client factories)
- Move in the core types: the Message hierarchy, NonNullableUsage, EMPTY_USAGE, SystemPrompt, and the error constants
- Change src/types/message.ts and related files in the main project to re-exports, preserving backward compatibility

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Move in the OpenAI message conversion (convertMessages), tool conversion (convertTools), and stream adaptation (streamAdapter)
- Move in the OpenAI and Grok model mappings (resolveOpenAIModel, resolveGrokModel)
- Change the main-project files to thin re-export proxies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Move in the Gemini type definitions, message conversion, tool conversion, stream adaptation, and model mapping
- Change the files under the main project's gemini/ directory to thin re-export proxies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Move in errorUtils such as formatAPIError and extractConnectionErrorDetails
- Migrate 10 consumer files to import directly from @anthropic-ai/model-provider
- Update emptyUsage, sdkUtilityTypes, and systemPromptType to re-export proxies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
This reverts commit e458d63.
* fix: Fix Bun polyfill issue
* fix: Complete the type fixes
* feat: Unify the type files across all packages
* fix: Fix build issues
Co-authored-by: chengzifeng <chengzifeng@meituan.com>
Convert Anthropic-format tool definitions to the Langfuse-compatible OpenAI format and pass them in the generation's input as a { messages, tools } structure, so the complete tool definitions can be inspected in the Langfuse UI.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: Adapt to the Zed ACP protocol
* docs: Improve the ACP documentation
📝 Walkthrough

Centralizes provider logic into a new package.

Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/services/langfuse/tracing.ts (1)
80-96: ⚠️ Potential issue | 🟡 Minor
Fix empty array truthiness check for tools parameter.

In JavaScript, an empty array `[]` is truthy, so the condition `params.tools ? ... : ...` will incorrectly wrap the input as `{ messages, tools: [] }` when `convertToolsToLangfuse` returns an empty array. This changes the Langfuse recorded input shape inconsistently compared to requests without tools.

Suggested fix

```diff
 export function recordLLMObservation(
   rootSpan: LangfuseSpan | null,
   params: {
@@
 ): void {
   if (!rootSpan || !isLangfuseEnabled()) return

   try {
     const genName = PROVIDER_GENERATION_NAMES[params.provider] ?? `Chat${params.provider}`
+    const hasTools = Array.isArray(params.tools) ? params.tools.length > 0 : Boolean(params.tools)

     // Use the global startObservation directly instead of rootSpan.startObservation().
@@
       {
         model: params.model,
-        input: params.tools
+        input: hasTools
           ? { messages: params.input, tools: params.tools }
           : params.input,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/langfuse/tracing.ts` around lines 80 - 96, The conditional that builds the Langfuse generation input uses params.tools truthiness, but an empty array is truthy and will incorrectly wrap input as { messages, tools: [] }; update the check where the input is constructed (inside the startObservation call in this tracing module using genName and params) to only include tools when they are a non-empty array (e.g., Array.isArray(params.tools) && params.tools.length > 0) or otherwise use params.input directly so empty tool arrays do not change the recorded input shape.
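The truthiness trap this finding describes is easy to reproduce in isolation. A minimal sketch, with a local `tools` variable standing in for `params.tools` (none of the real tracing code is used):

```typescript
// Minimal sketch of the pitfall: `tools` stands in for params.tools.
const tools: unknown[] = []

// An empty array is truthy, so the naive ternary takes the "has tools" branch:
const naive = tools ? 'wrapped' : 'plain'

// A length-aware guard distinguishes [] from a real tool list:
const hasTools = Array.isArray(tools) && tools.length > 0
const guarded = hasTools ? 'wrapped' : 'plain'

console.log(naive, guarded) // wrapped plain
```

The same `Array.isArray(x) && x.length > 0` guard is what the suggested `hasTools` constant encodes.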
🟠 Major comments (19)
src/utils/__tests__/earlyInput.test.ts-72-91 (1)
72-91: ⚠️ Potential issue | 🟠 Major
Regression tests are non-functional (asserting constants, not behavior).

Both cases pass even if filtering breaks, because they only compare local string literals and never invoke the code path under test.

Safer interim change (avoid false confidence)

```diff
-  test('DA1 response sequence pattern is documented (CSI ? ... c)', () => {
-    // \x1b[?64;1;2;4;6;17;18;21;22c — previously leaked as "?64;1;2;4;6;17;18;21;22c"
-    // After fix: CSI sequences are fully consumed, nothing leaks
-    // We document the expected clean output here
-    const leakedBefore = '?64;1;2;4;6;17;18;21;22c'
-    const cleanAfter = ''
-    // The fix ensures processChunk produces cleanAfter, not leakedBefore
-    // (verified manually; this test documents the contract)
-    expect(leakedBefore).not.toBe(cleanAfter) // sanity: they differ
-    expect(cleanAfter).toBe('') // after fix: nothing leaks
-  })
+  test.todo('DA1 response sequence is fully filtered in stdin → processChunk path')

-  test('XTVERSION DCS sequence pattern is documented (ESC P ... ESC \\)', () => {
-    // \x1bP>|iTerm2 3.6.4\x1b\\ — previously leaked as ">|iTerm2 3.6.4"
-    // After fix: DCS sequences are fully consumed via ST terminator
-    const leakedBefore = '>|iTerm2 3.6.4'
-    const cleanAfter = ''
-    expect(leakedBefore).not.toBe(cleanAfter)
-    expect(cleanAfter).toBe('')
-  })
+  test.todo('XTVERSION DCS sequence is fully filtered in stdin → processChunk path')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/__tests__/earlyInput.test.ts` around lines 72 - 91, The tests currently assert only local constants instead of exercising the real filter logic; update both tests to call the actual function under test (e.g., processChunk) with the raw escape sequences and assert its return/side-effect produces the empty string. Specifically, in the "DA1 response sequence pattern..." test call processChunk with "\x1b[?64;1;2;4;6;17;18;21;22c" and expect the result to be "" (and/or verify any buffer state is cleared), and in the "XTVERSION DCS sequence pattern..." test call processChunk (or the DCS-handling entrypoint) with "\x1bP>|iTerm2 3.6.4\x1b\\" and expect "" as the filtered output; ensure you import/require processChunk (or the correct exported function) and reset any parser state between assertions so tests are deterministic.
src/commands/poor/__tests__/poorMode.test.ts-28-53 (1)
28-53: ⚠️ Potential issue | 🟠 Major
The “first call reads settings” path is not actually tested.

Line 45 and Line 51 call `setPoorMode(...)` before `isPoorModeActive()`, so these cases validate setter effects, not initial-read behavior. Also, `freshModule()` (Line 29) is empty, so singleton reset is currently unimplemented.

Suggested test adjustment

```diff
-/** Reset module-level singleton between tests by re-importing a fresh copy. */
-async function freshModule() {
-  // Bun caches modules; we manipulate the exported functions directly since
-  // the singleton `poorModeActive` is reset to null only on first import.
-  // Instead we test the observable behaviour through set/get pairs.
-}
+// Prefer an explicit test reset hook exported from poorMode.ts, e.g.:
+// export function __resetPoorModeForTest() { poorModeActive = null }

 describe('isPoorModeActive — reads from settings on first call', () => {
   beforeEach(() => {
     lastUpdate = null
+    mockSettings = {}
+    // __resetPoorModeForTest()
   })

   test('returns false when settings has no poorMode key', () => {
     mockSettings = {}
-    // Force re-read by setting internal state via setPoorMode then checking
-    setPoorMode(false)
     expect(isPoorModeActive()).toBe(false)
   })

   test('returns true when settings.poorMode === true', () => {
     mockSettings = { poorMode: true }
-    setPoorMode(true)
     expect(isPoorModeActive()).toBe(true)
   })
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/commands/poor/__tests__/poorMode.test.ts` around lines 28 - 53, freshModule() is currently a no-op so tests never exercise the "first-call reads settings" path; implement freshModule to reset the module-level singleton (clear module cache or re-require the module) and reset internal state (e.g., set lastUpdate = null and poorModeActive = null if present) so a fresh import will read mockSettings on first isPoorModeActive() call, then update the tests to call freshModule() before invoking isPoorModeActive() (remove the prior setPoorMode(...) calls used to force state) so the tests validate initial-read behavior; reference freshModule, isPoorModeActive, setPoorMode, mockSettings and lastUpdate/poorModeActive in the test file when making changes.
packages/remote-control-server/src/transport/sse-writer.ts-133-146 (1)
133-146: ⚠️ Potential issue | 🟠 Major
Preserve string-form user messages when normalizing worker payloads.

If a user event comes through as `{ message: "..." }`, this branch rewrites it to `{ message: { content: "" } }` because the string `message` case is never used as a fallback. That drops real user text from the SSE stream.

🔧 Proposed fix

```diff
 if (event.type === "user") {
   const message = payload.message;
   if (!message || typeof message !== "object" || !("content" in message)) {
     const content =
       typeof normalized?.content === "string"
         ? normalized.content
+        : typeof normalized?.message === "string"
+          ? normalized.message
         : typeof payload.content === "string"
           ? payload.content
           : typeof event.payload === "string"
             ? event.payload
             : "";
     payload.content = content;
     payload.message = { content };
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/transport/sse-writer.ts` around lines 133 - 146, In the SSE normalization branch handling event.type === "user" (in sse-writer.ts), the current fallback logic ignores when payload.message is a string and replaces it with an empty content object; update the fallback so a string message is preserved: if payload.message is a string use that string as content before checking normalized.content, payload.content, or event.payload, and then set payload.content and payload.message = { content } only after deriving the correct string content; modify references in this block (payload, message, normalized, event.payload) accordingly so real user text is not dropped.
packages/remote-control-server/web/render.js-163-173 (1)
163-173: ⚠️ Potential issue | 🟠 Major
Don't drop user text when the same event also contains embedded `tool_result` blocks.

Both branches bail out as soon as `renderEmbeddedToolResultBlocks(payload)` returns anything, so mixed-content user messages render only the tool result cards and lose their text. In the live path that also skips the loading state for a real user prompt.

Suggested fix

```diff
   case "user": {
     const toolResultEls = renderEmbeddedToolResultBlocks(payload);
-    if (toolResultEls.length > 0) {
-      els.push(...toolResultEls);
-      break;
-    }
-    if (!shouldRenderUserEvent(payload, direction, false)) return;
-    els.push(renderUserMessage(payload, direction));
-    needLoading = true;
+    const text = extractText(payload).trim();
+    if (text) {
+      if (!shouldRenderUserEvent(payload, direction, false)) return;
+      els.push(renderUserMessage(payload, direction));
+      needLoading = true;
+    }
+    if (toolResultEls.length > 0) els.push(...toolResultEls);
   }
   break;
```

Apply the same pattern in the replay branch so text and tool results can both render.

Also applies to: 234-244
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/web/render.js` around lines 163 - 173, The current case "user" branch drops user text whenever renderEmbeddedToolResultBlocks(payload) returns non-empty; update the logic so that both embedded tool result elements and the user message can be pushed to histEls: call renderEmbeddedToolResultBlocks(payload) and always push its results (if any) into histEls, then independently check shouldRenderUserEvent(payload, direction, true) and push renderUserMessage(payload, direction) if true; apply the same change to the replay branch (the similar code around the other user-case at the second location) so mixed-content messages render both tool_result blocks and the user text.
packages/remote-control-server/src/services/session.ts-17-17 (1)
17-17: ⚠️ Potential issue | 🟠 Major
Clean up the event bus for `inactive` sessions too.

`inactive` is now classified as a closed status, but only `archiveSession()` removes the event bus. If a session transitions to `"inactive"` through `updateSessionStatus()`, the bus stays registered and existing stream subscribers can remain attached to a session that the rest of this service already treats as closed.

Suggested fix

```diff
 export function updateSessionStatus(sessionId: string, status: string) {
   storeUpdateSession(sessionId, { status });

   const bus = getAllEventBuses().get(sessionId);
-  if (!bus) return;
-
-  bus.publish({
-    id: uuid(),
-    sessionId,
-    type: "session_status",
-    payload: { status },
-    direction: "inbound",
-  });
+  if (bus) {
+    bus.publish({
+      id: uuid(),
+      sessionId,
+      type: "session_status",
+      payload: { status },
+      direction: "inbound",
+    });
+  }
+  if (isSessionClosedStatus(status)) {
+    removeEventBus(sessionId);
+  }
 }

 export function archiveSession(sessionId: string) {
   updateSessionStatus(sessionId, "archived");
-  removeEventBus(sessionId);
 }
```

Also applies to: 78-80, 130-150
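The shape of the fix, in miniature: route every status change through one place so any closed status releases the session's resources. This is a hedged sketch with assumed names (`CLOSED_SESSION_STATUSES` and `isSessionClosedStatus` come from this review; the `eventBuses` map is a stand-in for the real registry):

```typescript
// Stand-in registry; the real service keeps event buses with subscribers.
const CLOSED_SESSION_STATUSES = new Set(['archived', 'inactive'])
const eventBuses = new Map<string, { subscribers: number }>()

function isSessionClosedStatus(status: string): boolean {
  return CLOSED_SESSION_STATUSES.has(status)
}

function updateSessionStatus(sessionId: string, status: string): void {
  // publish/store updates elided; the key point is the shared cleanup path
  if (isSessionClosedStatus(status)) {
    eventBuses.delete(sessionId)
  }
}

eventBuses.set('s1', { subscribers: 2 })
updateSessionStatus('s1', 'inactive')
console.log(eventBuses.has('s1')) // false
```

Because `archiveSession` delegates to `updateSessionStatus("archived")`, the same path covers both closed transitions.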
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/remote-control-server/src/services/session.ts` at line 17, The event bus is only removed in archiveSession() so sessions marked "inactive" (now in CLOSED_SESSION_STATUSES) keep their event bus; factor the event-bus cleanup used in archiveSession() into a reusable helper (e.g., cleanupEventBusForSession or removeEventBus) and call it from updateSessionStatus() whenever the new status is a member of CLOSED_SESSION_STATUSES (and keep archiveSession() calling it too), so any transition to a closed status (including "inactive") unregisters the event bus and detaches stream subscribers.
build.ts-93-110 (1)
93-110: ⚠️ Potential issue | 🟠 Major
Reset the regex state before scanning each file.

`BUN_DESTRUCTURE` is global, so `.test(content)` advances `lastIndex`. After the first match, later files can be skipped and left unpatched, which makes the Node-compat pass flaky.

Suggested fix

```diff
 const BUN_DESTRUCTURE = /var \{([^}]+)\} = globalThis\.Bun;?/g
 const BUN_DESTRUCTURE_SAFE = 'var {$1} = typeof globalThis.Bun !== "undefined" ? globalThis.Bun : {};'

 for (const file of files) {
   if (!file.endsWith('.js')) continue
   const filePath = join(outdir, file)
   const content = await readFile(filePath, 'utf-8')
-  if (BUN_DESTRUCTURE.test(content)) {
+  BUN_DESTRUCTURE.lastIndex = 0
+  const patchedContent = content.replace(BUN_DESTRUCTURE, BUN_DESTRUCTURE_SAFE)
+  if (patchedContent !== content) {
     await writeFile(
       filePath,
-      content.replace(BUN_DESTRUCTURE, BUN_DESTRUCTURE_SAFE),
+      patchedContent,
     )
     bunPatched++
   }
 }
-BUN_DESTRUCTURE.lastIndex = 0
```

Based on learnings, build output must be compatible with both Bun and Node.js.
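The underlying hazard is standard JavaScript behavior, shown here with a throwaway pattern (nothing from build.ts is used): a global regex carries `lastIndex` across `.test()` calls, so a second call on a fresh string can miss a match at the start.

```typescript
// A /g regex is stateful: .test() resumes from lastIndex.
const re = /foo/g

const first = re.test('foo bar')  // true; lastIndex advances past the match
const second = re.test('foo baz') // false: the search resumes at index 3

re.lastIndex = 0                  // reset before reusing the regex
const third = re.test('foo baz')  // true again

console.log(first, second, third) // true false true
```

Using `content.replace(...)` and comparing the result, as the fix does, sidesteps the statefulness entirely since `replace` always scans the whole string.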
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@build.ts` around lines 93 - 110, The global RegExp BUN_DESTRUCTURE retains state between .test calls causing some files to be skipped; fix by resetting or re-instantiating the regex before testing each file in the files loop (e.g., recreate BUN_DESTRUCTURE or set BUN_DESTRUCTURE.lastIndex = 0 at the top of each iteration) so that the test against content always starts fresh before calling writeFile and incrementing bunPatched; ensure this change references BUN_DESTRUCTURE, the for (const file of files) loop, readFile, writeFile, and bunPatched so all JS outputs are reliably patched for Node compatibility.
scripts/vite-plugin-feature-flags.ts-43-47 (1)
43-47: ⚠️ Potential issue | 🟠 Major
Only treat `FEATURE_*` variables set to `1` as enabled.

Right now any present variable enables the flag, so `FEATURE_ACP=0` and `FEATURE_ACP=false` still turn ACP on. That breaks the documented override contract.

Suggested fix

```diff
 export function getEnabledFeatures(): Set<string> {
-  const envFeatures = Object.keys(process.env)
-    .filter((k) => k.startsWith("FEATURE_"))
-    .map((k) => k.replace("FEATURE_", ""));
+  const envFeatures = Object.entries(process.env)
+    .filter(([k, v]) => k.startsWith("FEATURE_") && v === "1")
+    .map(([k]) => k.replace("FEATURE_", ""));
   return new Set([...DEFAULT_BUILD_FEATURES, ...envFeatures]);
 }
```

Based on learnings: Enable feature flags at runtime using environment variables: `FEATURE_<FLAG_NAME>=1`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/vite-plugin-feature-flags.ts` around lines 43 - 47, The current getEnabledFeatures treats any FEATURE_* env var as enabled; change it to only enable flags whose process.env value equals "1". In getEnabledFeatures(), iterate Object.entries(process.env) or filter Object.keys by checking process.env[k] === "1" (instead of only startsWith), extract the flag name via k.replace("FEATURE_", ""), and then return new Set([...DEFAULT_BUILD_FEATURES, ...envFeatures]) so only FEATURE_<NAME>=1 adds the feature while keeping DEFAULT_BUILD_FEATURES untouched.
packages/@anthropic-ai/model-provider/src/providers/grok/modelMapping.ts-34-40 (1)
34-40: ⚠️ Potential issue | 🟠 Major
Validate `GROK_MODEL_MAP` values and honor exact model keys first.

`JSON.parse()` is asserted straight to `Record<string, string>`, so non-string values can pass through unchecked, and the resolver only reads `userMap[family]`. That means `GROK_MODEL_MAP='{"claude-opus-4-6":"grok-x"}'` is ignored even though this option is documented as a model map.

Suggested fix

```diff
 function getUserModelMap(): Record<string, string> | null {
   const raw = process.env.GROK_MODEL_MAP
   if (!raw) return null
   try {
-    const parsed = JSON.parse(raw)
+    const parsed = JSON.parse(raw) as Record<string, unknown>
     if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
-      return parsed as Record<string, string>
+      return Object.fromEntries(
+        Object.entries(parsed).filter(([, value]) => typeof value === 'string'),
+      ) as Record<string, string>
     }
   } catch {
     // ignore invalid JSON
   }
   return null
 }
@@
   const userMap = getUserModelMap()
-  if (userMap && family && userMap[family]) {
+  if (userMap?.[cleanModel]) {
+    return userMap[cleanModel]
+  }
+  if (userMap && family && userMap[family]) {
     return userMap[family]
   }
```

As per coding guidelines: For unknown structure objects, use `Record<string, unknown>` instead of `any` type.

Also applies to: 59-61
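The validation half of the fix can be exercised on its own. A hedged sketch of the pattern (parse into an unknown-valued record, then keep only string-valued entries); `parseModelMap` is an illustrative name, not the package's API:

```typescript
// Parse a user-supplied JSON map, discarding non-string values instead of
// letting them through an unchecked `as Record<string, string>` assertion.
function parseModelMap(raw: string): Record<string, string> | null {
  try {
    const parsed: unknown = JSON.parse(raw)
    if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
      const entries = Object.entries(parsed as Record<string, unknown>).filter(
        (entry): entry is [string, string] => typeof entry[1] === 'string',
      )
      return Object.fromEntries(entries)
    }
  } catch {
    // invalid JSON falls through to null
  }
  return null
}

const map = parseModelMap('{"claude-opus-4-6":"grok-x","n":42}')
console.log(map) // only the string-valued entry survives
console.log(parseModelMap('[1,2]'), parseModelMap('not json')) // null null
```

With the map validated, the exact-key lookup (`userMap?.[cleanModel]`) can then run before any family-based fallback.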
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/@anthropic-ai/model-provider/src/providers/grok/modelMapping.ts` around lines 34 - 40, The GROK_MODEL_MAP parsing currently casts JSON.parse() to Record<string,string> and doesn't validate values or exact model keys; update getUserModelMap to parse into Record<string,unknown>, iterate entries and build a new Record<string,string> that only includes keys whose values are typeof 'string' (discard non-string entries), and return null if no valid mappings; also ensure the resolver logic that looks up userMap[family] first is preserved so exact model keys (e.g., "claude-opus-4-6") are honored before any family-based lookup—apply the same input validation/filtering approach to the similar code around the other mapping code referenced (lines ~59-61).
scripts/vite-plugin-feature-flags.ts-7-38 (1)
7-38: ⚠️ Potential issue | 🟠 Major
Source the default feature list from the canonical build config.

`build:vite` now carries its own allowlist, so it can drift from `build.ts` and produce a different binary for the same commit. This list should come from the same source of truth instead of being duplicated here.

Based on learnings: Feature flags are disabled by default. Dev mode enables all features by default; build mode enables 19 specific features (listed in build.ts). Centrally manage build defines (constants, version numbers) in `scripts/defines.ts`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/vite-plugin-feature-flags.ts` around lines 7 - 38, The hardcoded DEFAULT_BUILD_FEATURES array in vite-plugin-feature-flags.ts must be removed and the canonical feature-list imported from the single source of truth used by build.ts (e.g., the exported build feature constant in scripts/defines.ts or the build.ts module). Replace DEFAULT_BUILD_FEATURES with the imported symbol (e.g., BUILD_FEATURES or DEFAULT_FEATURE_FLAGS) and ensure the plugin still applies the same dev vs build semantics (dev: enable all; build: use the imported allowlist of 19 features) so the Vite build and the primary build pipeline use the identical feature set.
src/services/acp/entry.ts-41-62 (1)
41-62: ⚠️ Potential issue | 🟠 Major
Guard shutdown before the first agent instance exists.

If stdin closes or a signal arrives before the connection callback assigns `agent`, `shutdown()` throws on `agent.sessions` and the cleanup path fails.

Suggested fix

```diff
-let agent!: AcpAgent
+let agent: AcpAgent | undefined
@@
 async function shutdown(): Promise<void> {
   // Clean up all active sessions
-  for (const [sessionId] of agent.sessions) {
+  if (!agent) {
+    process.exit(0)
+  }
+
+  for (const [sessionId] of agent.sessions) {
     try {
       await agent.unstable_closeSession({ sessionId })
     } catch {
       // Best-effort cleanup
     }
```
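The race in miniature: `shutdown` may run before the connection callback assigns the agent. A simplified sketch (names mirror the diff, but the `Agent` type and bodies are placeholders, not the real entry.ts):

```typescript
// `agent` starts undefined, exactly like the `let agent: AcpAgent | undefined`
// declaration in the suggested fix.
type Agent = { sessions: Map<string, unknown> }
let agent: Agent | undefined

const closed: string[] = []
function shutdown(): void {
  if (!agent) return // nothing to clean up yet (the real fix exits here)
  for (const [sessionId] of agent.sessions) {
    closed.push(sessionId)
  }
}

shutdown() // before initialization: no throw, no cleanup
agent = { sessions: new Map([['s1', {}]]) }
shutdown()
console.log(closed) // ['s1']
```

The definite-assignment assertion (`let agent!: AcpAgent`) is what hid this: it tells the compiler the variable is always set, so no guard was required at the crash site.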
Verify each finding against the current code and only fix it if needed. In `@src/services/acp/entry.ts` around lines 41 - 62, The shutdown function can run before the AgentSideConnection callback assigns the agent, causing a crash when accessing agent.sessions; modify shutdown to guard for an uninitialized agent by checking that the local variable agent is truthy before iterating its sessions (or return early if unset), and only call agent.unstable_closeSession for each session when agent is defined; reference the agent variable, the shutdown() function, the agent.sessions iteration, and unstable_closeSession when making the change.
src/services/acp/entry.ts-30-51 (1)
30-51: ⚠️ Potential issue | 🟠 Major
Redirect `console.*` before any ACP startup work runs.

`enableConfigs()`, `applySafeConfigEnvironmentVariables()`, or `AgentSideConnection` setup can log before the redirect happens. Any such output lands on stdout and corrupts the NDJSON ACP stream.

Suggested fix

```diff
 export async function runAcpAgent(): Promise<void> {
+  // stdout is reserved for ACP messages
+  console.log = console.error
+  console.info = console.error
+  console.warn = console.error
+  console.debug = console.error
+
   enableConfigs()
@@
-  // stdout is used for ACP messages — redirect console to stderr
-  console.log = console.error
-  console.info = console.error
-  console.warn = console.error
-  console.debug = console.error
-
   async function shutdown(): Promise<void> {
```
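The ordering argument can be demonstrated without touching the real process streams. A sketch with a console-like object whose sinks are arrays (purely illustrative; `enableConfigs` here is a stand-in for the startup work named above):

```typescript
// Two sinks stand in for stdout (protocol channel) and stderr (diagnostics).
const stdout: string[] = []
const stderr: string[] = []
const con = {
  log: (msg: string) => stdout.push(msg),
  error: (msg: string) => stderr.push(msg),
}

// The fix: alias log → error FIRST, so later startup chatter cannot reach
// the protocol channel.
con.log = con.error

// Startup work that happens to log (stand-in for enableConfigs() etc.).
con.log('enableConfigs: loaded')

console.log(stdout.length, stderr) // 0 [ 'enableConfigs: loaded' ]
```

Had the startup call run before the aliasing, the line would have landed in `stdout` and, in the real agent, corrupted the NDJSON stream.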
Verify each finding against the current code and only fix it if needed. In `@src/services/acp/entry.ts` around lines 30 - 51, Move the console redirection to the very start of runAcpAgent so no logging from enableConfigs, applySafeConfigEnvironmentVariables, createAcpStream or AgentSideConnection can write to stdout and corrupt the NDJSON ACP stream; specifically, set console.log/console.info/console.warn/console.debug to console.error before calling enableConfigs(), applySafeConfigEnvironmentVariables(), createAcpStream() or instantiating AgentSideConnection/AcpAgent so stdout remains reserved for ACP messages.
scripts/vite-plugin-feature-flags.ts-104-110 (1)
104-110: ⚠️ Potential issue | 🟠 Major
Symbol.dispose is dropped when transpiling `using` to `const`.

Replacing `using` declarations with `const` removes automatic disposal semantics entirely. While `slowLoggingExternal` currently returns a no-op disposable with an empty `[Symbol.dispose]()` method, this safety relies on `SLOW_OPERATION_LOGGING` remaining disabled. If the flag is ever enabled or the behavior of `slowLogging` changes to return a real disposable, the transpiled code will silently fail to invoke cleanup logic that the source code expects. To maintain correct semantics, either keep the `using` statement or add a comment explaining why this specific case is safe to transpile unconditionally, and document the assumption being made.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/vite-plugin-feature-flags.ts` around lines 104 - 110, The transpilation that replaces "using _" via transformed.replace removes disposal semantics and can drop calls to [Symbol.dispose] when SLOW_OPERATION_LOGGING or slowLoggingExternal change; update the transformation in vite-plugin-feature-flags.ts so it preserves disposal semantics instead of unconditionally swapping to "const": when you detect the "using _" pattern, emit code that still binds the disposable (the "_" symbol) and ensures disposal in a finally block (checking for _[Symbol.dispose] or a dispose method) or else leave the original using statement and add a comment asserting safety; reference the transformed string replacement logic that matches /\busing\s+(_\w*)\s*=/, the "using _" detection, the transformed variable name "$1", and the feature flag SLOW_OPERATION_LOGGING/slowLoggingExternal when making the change.
scripts/vite-plugin-feature-flags.ts-50-52 (1)
50-52: ⚠️ Potential issue | 🟠 Major
Avoid stub-based `feature()` resolution in Vite; use Bun's compiler-resolved approach instead.

The regex + stub pattern bypasses Bun's intended mechanism. According to Bun 1.2 documentation, `import { feature } from "bun:bundle"` is compiler-resolved during bundling (calls are replaced with boolean literals via static analysis), and custom runtime stubs are unsupported: at runtime without Bun bundling, no `bun:bundle` module exists. Even though this plugin pre-transforms feature calls before bundling, the fallback stub (line 84) means any unmatched `feature(...)` call (non-string-literal arguments, template literals, variables) silently returns `false` instead of failing visibly. Non-string-literal usage already violates feature flag guidelines, but this approach masks errors rather than surfacing them. Align with Bun's standard pattern and let the bundler handle resolution.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/vite-plugin-feature-flags.ts` around lines 50 - 52, The plugin currently uses FEATURE_CALL_RE to pre-transform feature('FLAG') calls and provides a runtime stub that returns false for unmatched or non-string-literal calls; remove that stub-based fallback and stop masking invalid usage—only transform calls that match FEATURE_CALL_RE (string-literal args) and leave all other feature(...) calls untouched so Bun's compiler can resolve them, and emit a build-time error or warning when you detect non-string-literal feature(...) usages (so authors fix them instead of silently getting false). Ensure references to FEATURE_CALL_RE and the runtime stub removal are updated and that import { feature } from "bun:bundle" is not emulated by the plugin.
packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts-239-271 (1)
239-271: ⚠️ Potential issue | 🟠 Major
Always emit a terminal `message_delta`/`message_stop`.

Right now the tail of the generator is gated on seeing `choice.finish_reason`. If an OpenAI-compatible endpoint closes the stream without that final field, callers get content blocks but never a terminal event or final usage payload.

💡 Minimal fix

```diff
-    if (pendingFinishReason !== null) {
+    if (started) {
+      const effectiveFinishReason = pendingFinishReason ?? 'stop'
       const stopReason =
-        pendingFinishReason === 'length'
+        effectiveFinishReason === 'length'
           ? 'max_tokens'
-          : pendingHasToolCalls
+          : pendingHasToolCalls || toolBlocks.size > 0
             ? 'tool_use'
-            : mapFinishReason(pendingFinishReason)
+            : mapFinishReason(effectiveFinishReason)

       yield {
         type: 'message_delta',
```

Also applies to: 282-308
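The invariant being requested is: one terminal event per stream, even when the upstream omits `finish_reason`. A simplified stand-in generator (not the real adapter; `Chunk` and the event strings are assumptions for illustration):

```typescript
// Toy stream adapter: yields deltas, then ALWAYS yields one terminal event,
// defaulting the finish reason when the upstream never provided one.
type Chunk = { text?: string; finishReason?: string }

function* adapt(chunks: Chunk[]): Generator<string> {
  let finish: string | null = null
  for (const chunk of chunks) {
    if (chunk.text) yield `delta:${chunk.text}`
    if (chunk.finishReason) finish = chunk.finishReason
  }
  // Terminal event is unconditional; `?? 'stop'` mirrors the suggested default.
  yield `stop:${finish ?? 'stop'}`
}

const withoutReason = [...adapt([{ text: 'hi' }])]
const withReason = [...adapt([{ text: 'hi', finishReason: 'length' }])]
console.log(withoutReason) // ['delta:hi', 'stop:stop']
console.log(withReason)    // ['delta:hi', 'stop:length']
```

Moving the gate from "saw a finish reason" to "emitted any content" (`started` in the diff) is what makes the terminal event unconditional.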
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts` around lines 239 - 271, The generator currently only emits the terminal message_delta/message_stop when choice.finish_reason is present, which can leave callers without a final event if the upstream stream closes early; update the end-of-stream handling in openaiStreamAdapter.ts (around the logic that sets pendingFinishReason/pendingHasToolCalls and the later 282-308 block) to always emit a terminal event and final usage payload even when choice.finish_reason is undefined: detect stream termination (e.g., when the loop exits or when no finish_reason is provided), set a default finish reason or leave it null but still yield a 'message_delta'/'message_stop' (and the corresponding usage/stop event), and ensure openBlockIndices/toolBlocks are closed and cleared before yielding so consumers always receive a single final terminal event.
packages/@anthropic-ai/model-provider/src/providers/gemini/streamAdapter.ts-197-206 (1)
197-206: ⚠️ Potential issue | 🟠 Major
Return the final prompt token count in `message_delta`.

`usageMetadata` can arrive after `message_start`, but the terminal payload only includes `output_tokens`. In that case callers never see the real `input_tokens`, so accounting/cost code will stay at 0 for prompt usage.

💡 Suggested fix

```diff
   usage: {
+    input_tokens: inputTokens,
     output_tokens: outputTokens,
+    cache_creation_input_tokens: 0,
+    cache_read_input_tokens: 0,
   },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/@anthropic-ai/model-provider/src/providers/gemini/streamAdapter.ts` around lines 197 - 206, The final emitted message_delta currently only includes stop_reason and output_tokens, which loses prompt token accounting if usageMetadata arrived after message_start; update the terminal yield in the streamAdapter (the block that checks stopped and calls mapGeminiFinishReason) to also include the final prompt token count by reading usageMetadata.input_tokens (or the tracked inputTokens variable) and adding it to delta (e.g., delta.input_tokens) so callers receive both input and output token counts even when usageMetadata arrives late.
src/services/acp/__tests__/agent.test.ts-87-89 (1)
87-89: ⚠️ Potential issue | 🟠 Major
Change mock specifier from `.ts` to `.js`.

The import in `src/services/acp/agent.ts` uses `'../../utils/model/modelOptions.js'` (line 70), but the mock registers `'../../../utils/model/modelOptions.ts'` (line 87). Since `bun:test` `mock.module()` matches the literal import string exactly, this `.ts` mock does not intercept the `.js` import, causing the test to execute the real module instead.

Change

```typescript
mock.module('../../../utils/model/modelOptions.js', () => ({
  getModelOptions: mock(() => []),
}))
```
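The failure mode in miniature: a registry keyed by the literal specifier string only intercepts exact matches, so a `.ts` key never catches a `.js` import. A plain `Map` stands in for `mock.module` here (nothing from bun:test is used):

```typescript
// Mock registry keyed by the exact specifier string.
const mocks = new Map<string, () => string>()
mocks.set('./modelOptions.ts', () => 'mocked')

const resolve = (specifier: string): string =>
  mocks.get(specifier)?.() ?? 'real module'

// The code under test imports the '.js' specifier, so the '.ts' key misses:
const beforeFix = resolve('./modelOptions.js')

// Registering under the specifier the code actually imports fixes it:
mocks.set('./modelOptions.js', () => 'mocked')
const afterFix = resolve('./modelOptions.js')

console.log(beforeFix, afterFix) // real module mocked
```

The same exact-string matching is why the review insists the mock specifier mirror the import in agent.ts character for character.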
Verify each finding against the current code and only fix it if needed. In `@src/services/acp/__tests__/agent.test.ts` around lines 87 - 89, The test mock for the model options is registered with the wrong file extension so it doesn't match the real import; update the mock.module call in the test (the mock.module(...) invocation in src/services/acp/__tests__/agent.test.ts) to use the same literal specifier the code imports ('../../utils/model/modelOptions.js' as used by src/services/acp/agent.ts) — i.e., change the specifier from '../../../utils/model/modelOptions.ts' to '../../../utils/model/modelOptions.js' so bun:test's mock.module() will intercept the import and return the mocked getModelOptions.
packages/@anthropic-ai/model-provider/src/errorUtils.ts-233-237 (1)
233-237: ⚠️ Potential issue | 🟠 Major
Don't fall back to raw HTML after sanitization.

`sanitizeMessageHTML()` intentionally returns an empty string for HTML without a `<title>`, but this branch then returns the original HTML page. That means proxy/Cloudflare blobs still leak into the UI even after the sanitization pass.

💡 Suggested fix

```diff
   const sanitizedMessage = sanitizeAPIError(error)
-  return sanitizedMessage !== error.message && sanitizedMessage.length > 0
-    ? sanitizedMessage
-    : error.message
+  if (sanitizedMessage !== error.message) {
+    return sanitizedMessage.length > 0
+      ? sanitizedMessage
+      : 'API returned an HTML error page'
+  }
+  return error.message
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/@anthropic-ai/model-provider/src/errorUtils.ts` around lines 233-237, the current return expression falls back to the raw error.message when sanitizeAPIError returns an empty string (which is intentional for HTML without a `<title>`), leaking HTML into the UI; change the return logic in errorUtils.ts so that if sanitizeAPIError(error) returns a different value than error.message you return that sanitized value (even if it's an empty string) instead of falling back to the original HTML — i.e., use the sanitizedMessage !== error.message check alone (return sanitizedMessage when different, otherwise return error.message) referencing the sanitizeAPIError call and the sanitizedMessage variable.

packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts-69-77 (1)

69-77: ⚠️ Potential issue | 🟠 Major

**Remove `as any` assertions for OpenAI compatibility fields.**

Lines 73 and 107 use `as any` to access provider-specific extensions (`prompt_tokens_details.cached_tokens` and `reasoning_content`) that aren't typed in the OpenAI SDK. Per strict mode enforcement, introduce a local compatibility interface or use `Record<string, unknown>` with a type guard instead:

```ts
// Option 1: Local interface for OpenAI extension fields
interface OpenAICompat {
  prompt_tokens_details?: { cached_tokens?: number }
}
const details = (chunk.usage as OpenAICompat).prompt_tokens_details

// Option 2: Type guard with Record
const details = (chunk.usage as unknown as Record<string, unknown>).prompt_tokens_details
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts` around lines 69-77, the code is using unsafe `as any` casts on chunk.usage to access provider-specific fields (prompt_tokens_details.cached_tokens and reasoning_content); replace these with a typed compatibility approach: define a local interface (e.g., OpenAICompat with prompt_tokens_details?: { cached_tokens?: number } and reasoning_content?: string) and cast chunk.usage to that interface where you read those fields in openaiStreamAdapter.ts (references: chunk.usage, prompt_tokens_details, cached_tokens, reasoning_content), or alternatively cast to Record<string, unknown> and use a type guard to safely read and narrow those properties before assigning cachedReadTokens and reasoning content.

packages/@anthropic-ai/model-provider/src/shared/openaiConvertMessages.ts-65-74 (1)

65-74: ⚠️ Potential issue | 🟠 Major

**The turn-boundary scan still uses `any`.**

This branch is deciding whether old reasoning gets stripped, so it should stay inside the same strict narrowing rules as the rest of the converter. Please narrow `b` with `Record<string, unknown>` or a local block guard instead of `any`.

💡 Narrow without `any`:

```diff
-const startsNewUserTurn = typeof content === 'string'
-  ? content.length > 0
-  : Array.isArray(content) && content.some(
-      (b: any) =>
-        typeof b === 'string' ||
-        (b &&
-          typeof b === 'object' &&
-          'type' in b &&
-          b.type !== 'tool_result'),
-    )
+const startsNewUserTurn = typeof content === 'string'
+  ? content.length > 0
+  : Array.isArray(content) &&
+    content.some(block => {
+      if (typeof block === 'string') return true
+      if (!block || typeof block !== 'object') return false
+      const narrowed = block as Record<string, unknown>
+      return narrowed.type !== 'tool_result'
+    })
```

As per coding guidelines: `**/*.{ts,tsx}`: TypeScript strict mode is enforced — `bunx tsc --noEmit` must pass with zero errors. No `as any` allowed in production code, and for unknown structure objects use `Record<string, unknown>` instead of `any`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/@anthropic-ai/model-provider/src/shared/openaiConvertMessages.ts` around lines 65-74, the turn-boundary predicate for startsNewUserTurn is using an unsafe any for the array element `b`; change that to a stricter type (e.g. `Record<string, unknown>`) and add a local null/object guard before checking the `'type'` property so TypeScript narrows correctly. Concretely, in the Array.isArray(content) branch update the callback to accept `b: unknown`, first check `typeof b === 'string'` then `typeof b === 'object' && b !== null` and finally assert `('type' in b)` with `b as Record<string, unknown>` when comparing `b.type !== 'tool_result'` so no `any` is used and tsc strict mode passes for the `startsNewUserTurn` expression.
🟡 Minor comments (12)
src/utils/__tests__/imageResizer.test.ts-8-8 (1)
8-8: ⚠️ Potential issue | 🟡 Minor

**Use `src/` alias import instead of relative path.**

Line 8 should use the repository `src/...` alias for utility imports to match TS path conventions.

Suggested patch:

```diff
-import { detectImageFormatFromBase64, detectImageFormatFromBuffer } from '../imageResizer.js'
+import { detectImageFormatFromBase64, detectImageFormatFromBuffer } from 'src/utils/imageResizer.js'
```

As per coding guidelines: `**/*.{ts,tsx}`: Import utility functions from the `src/` path alias.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/utils/__tests__/imageResizer.test.ts` at line 8, replace the relative import in the test that pulls detectImageFormatFromBase64 and detectImageFormatFromBuffer from '../imageResizer.js' with the repository path-alias import (e.g., 'src/utils/imageResizer') so the test uses the TS path alias; update the import statement for the symbols detectImageFormatFromBase64 and detectImageFormatFromBuffer accordingly to follow the project's src/ alias convention.

src/keybindings/__tests__/confirmation-keybindings.test.ts-43-43 (1)
43-43: ⚠️ Potential issue | 🟡 Minor

**Use English-only suite description to match test guideline.**

The `describe(...)` title includes Chinese text; please keep test descriptions fully English for consistency with repo standards.

Suggested update:

```diff
-describe('Confirmation context — n/y keys removed (fix: 修复 n 快捷键导致关闭的问题)', () => {
+describe('Confirmation context — n/y keys removed (fix for accidental dismissal)', () => {
```

As per coding guidelines: `**/__tests__/**/*.test.ts`: Use English for test descriptions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/keybindings/__tests__/confirmation-keybindings.test.ts` at line 43, update the test suite title used in the describe(...) call in confirmation-keybindings.test.ts to an English-only string to comply with the repo's test guideline; locate the describe('Confirmation context — n/y keys removed (fix: 修复 n 快捷键导致关闭的问题)', ...) invocation and replace the Chinese characters and any non-English punctuation so the entire suite description is English-only while preserving its intent.

src/utils/__tests__/earlyInput.test.ts-22-26 (1)
22-26: ⚠️ Potential issue | 🟡 Minor

**Use `src/` alias import in tests for utilities.**

The relative import should follow the repo alias convention for `ts`/`tsx` files.

Suggested change:

```diff
 import {
   seedEarlyInput,
   consumeEarlyInput,
   hasEarlyInput,
-} from '../earlyInput.js'
+} from 'src/utils/earlyInput'
```

As per coding guidelines: `**/*.{ts,tsx}`: Import utility functions from the `src/` path alias. Use imports like `import { ... } from 'src/utils/...'` which are resolved via tsconfig.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/utils/__tests__/earlyInput.test.ts` around lines 22-26, update the test import to use the repo tsconfig alias: replace the relative import of seedEarlyInput, consumeEarlyInput, and hasEarlyInput with the aliased module import from 'src/utils/earlyInput' so the test follows the project's src/ path convention.

docs/features/remote-control-self-hosting.md-143-145 (1)
143-145: ⚠️ Potential issue | 🟡 Minor

**Add language identifiers to the new fenced blocks.**

These two new fences are currently flagged by markdownlint (MD040). Adding a simple `text` language keeps the docs clean and avoids lint noise.

📝 Proposed fix:

````diff
-```
+```text
 https://rcs.example.com/code?bridge=<environmentId>
````

````diff
-```
+```text
 https://rcs.example.com/code/session_
````

Also applies to: 149-151
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `docs/features/remote-control-self-hosting.md` around lines 143-145, add the language identifier "text" to the new fenced code blocks that contain the example URLs (the fences showing "https://rcs.example.com/code?bridge=<environmentId>" and "https://rcs.example.com/code/session_") so they become ```text ... ``` to satisfy markdownlint MD040 and avoid lint noise.

packages/remote-control-server/src/store.ts-203-205 (1)
203-205: ⚠️ Potential issue | 🟡 Minor

**Clean up owner bindings when deleting a session.**

This function now removes `sessionWorkers`, but it still leaves the per-session entry in `sessionOwners`. On a long-lived process, deleted sessions will keep orphaned ownership sets around indefinitely.

🧹 Proposed fix:

```diff
 export function storeDeleteSession(id: string): boolean {
   sessionWorkers.delete(id);
+  sessionOwners.delete(id);
   return sessions.delete(id);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/remote-control-server/src/store.ts` around lines 203-205, storeDeleteSession currently removes sessionWorkers and sessions but leaves the per-session entry in sessionOwners, causing orphaned ownership sets; update the storeDeleteSession function to also remove the sessionOwners entry for the given id (preferably clear the Set if present and then call sessionOwners.delete(id)) so both the ownership mapping and any held references are cleaned up when deleting a session.

packages/remote-control-server/src/__tests__/routes.test.ts-1274-1284 (1)
1274-1284: ⚠️ Potential issue | 🟡 Minor

**Read until the `client_event` frame instead of assuming it's the next chunk.**

Lines 1274-1284 currently assume the post-publish `reader.read()` returns the event frame, but this stream already emits keepalives and can legally send another one first. That makes the test intermittently fail even when the endpoint is correct.

Proposed test hardening:

```diff
-  const secondChunk = await reader.read();
-  const frame = new TextDecoder().decode(secondChunk.value!);
-  expect(frame).toContain("event: client_event");
-  expect(frame).toContain("\"payload\":{\"type\":\"user\",\"content\":\"hello\",\"message\":{\"content\":\"hello\"}}");
+  const deadline = Date.now() + 2000;
+  let frame = "";
+  while (Date.now() < deadline) {
+    const chunk = await reader.read();
+    frame = new TextDecoder().decode(chunk.value!);
+    if (frame.includes("event: client_event")) break;
+  }
+  expect(frame).toContain("event: client_event");
+  expect(frame).toContain("\"payload\":{\"type\":\"user\",\"content\":\"hello\",\"message\":{\"content\":\"hello\"}}");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/remote-control-server/src/__tests__/routes.test.ts` around lines 1274-1284, the test assumes the event is the next chunk after publishSessionEvent; instead change the read logic to loop calling reader.read() (replacing secondChunk) decoding each chunk into frame and continue until the decoded frame includes "event: client_event" (then assert payload and break), and finally call reader.cancel(); reference the reader.read() usage, publishSessionEvent(id, ...), and the variables firstChunk/keepalive/frame so the loop consumes intervening keepalive frames reliably.

src/services/acp/permissions.ts-171-183 (1)
171-183: ⚠️ Potential issue | 🟡 Minor

**Missing error handling in handleExitPlanMode.**

Unlike the main permission flow (lines 90-131) which wraps `conn.requestPermission` in a try/catch, `handleExitPlanMode` does not handle errors from the permission request. If the connection fails, this will throw and propagate up instead of returning a deny decision.

🛡️ Proposed fix to add error handling:

```diff
-  const response = await conn.requestPermission({
-    sessionId,
-    toolCall,
-    options,
-  })
+  let response
+  try {
+    response = await conn.requestPermission({
+      sessionId,
+      toolCall,
+      options,
+    })
+  } catch {
+    return {
+      behavior: 'deny',
+      message: 'Permission request failed',
+      decisionReason: { type: 'mode', mode: 'plan' },
+    }
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/services/acp/permissions.ts` around lines 171-183, the handleExitPlanMode path calls conn.requestPermission (observed around the response/outcome check) without a try/catch, so connection errors will throw instead of returning a deny decision; wrap the call to conn.requestPermission in a try/catch inside handleExitPlanMode and on any error return the same shaped deny response used in the main flow (behavior: 'deny', decisionReason with type 'mode' and mode 'default', and a message conveying tool use was aborted or the error), ensuring you still handle the existing response.outcome.outcome === 'cancelled' branch after the try block.

src/services/api/claude.ts-2919-2919 (1)
2919-2919: ⚠️ Potential issue | 🟡 Minor

**Use full API tool set in Langfuse observation.**

Line 2919 only logs `toolSchemas`, but the request can include `extraToolSchemas` via `allTools`. This can make traces incomplete when advisor/server tools are appended.

Suggested fix:

```diff
-  tools: convertToolsToLangfuse(toolSchemas as unknown[]),
+  tools: convertToolsToLangfuse(allTools as unknown[]),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/services/api/claude.ts` at line 2919, the Langfuse observation is only logging toolSchemas via convertToolsToLangfuse(toolSchemas as unknown[]) but may miss extraToolSchemas provided through allTools; update the observation to use the full combined tool set (e.g., merge toolSchemas and extraToolSchemas/allTools) and pass that combined array into convertToolsToLangfuse so traces include advisor/server-appended tools — modify the call site where convertToolsToLangfuse is invoked (referencing convertToolsToLangfuse, toolSchemas, extraToolSchemas/allTools) to construct and supply the complete tools list.

docs/features/acp-zed.md-67-67 (1)
67-67: ⚠️ Potential issue | 🟡 Minor

**Section numbering inconsistency: missing section 3.2.**
Section numbering jumps from 3.1 to 3.3, skipping 3.2. Either add the missing section or renumber 3.3 → 3.2 and subsequent sections.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `docs/features/acp-zed.md` at line 67, the document's section numbering skips 3.2 (it shows 3.1 then 3.3 API 认证配置); update the headings to restore consistent numbering by either inserting the missing 3.2 content if it was omitted or renumbering "3.3 API 认证配置" (and all subsequent headings and any internal cross-references) to "3.2" (and adjust later numbers accordingly); search for the heading text "3.3 API 认证配置" and any references to "3.3" or later section numbers and update them to the corrected numbering to keep the whole document consistent.

vite.config.ts-91-94 (1)
91-94: ⚠️ Potential issue | 🟡 Minor

**Remove trailing slash from `src/` alias key to align with standard Vite configuration patterns.**

The alias is currently configured as `"src/"` with a trailing slash. While this works functionally, the standard Vite convention (and how tsconfig defines paths) is to use `"src"` without a trailing slash in the alias key:

♻️ Standard alias pattern:

```diff
 alias: {
   // src/* path alias (mirrors tsconfig paths)
-  "src/": resolve(projectRoot, "src/"),
+  src: resolve(projectRoot, "src"),
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `vite.config.ts` around lines 91-94, the alias key currently uses "src/" with a trailing slash; update the alias object so the key is "src" (no trailing slash) while keeping the resolved path value (resolve(projectRoot, "src/")) unchanged — i.e., change the alias entry in the alias map (the "alias" object in vite config) from "src/" to "src" to match standard Vite/tsconfig conventions.

src/services/acp/agent.ts-661-670 (1)
661-670: ⚠️ Potential issue | 🟡 Minor

**Inconsistency between `validModes` and `availableModes`.**

`validModes` includes `'bypassPermissions'` but the `availableModes` array in `createSession` (lines 513-519) doesn't include this mode. This allows setting a mode via `applySessionMode` that isn't advertised to clients.

Consider either adding `'bypassPermissions'` to `availableModes` or removing it from `validModes` for consistency.

Option A: Remove from `validModes`:

```diff
 private applySessionMode(sessionId: string, modeId: string): void {
-  const validModes = ['auto', 'default', 'acceptEdits', 'bypassPermissions', 'dontAsk', 'plan']
+  const validModes = ['auto', 'default', 'acceptEdits', 'dontAsk', 'plan']
   if (!validModes.includes(modeId)) {
     throw new Error(`Invalid mode: ${modeId}`)
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/services/acp/agent.ts` around lines 661-670, the applySessionMode function permits 'bypassPermissions' (validModes) but createSession's availableModes array doesn't advertise it, causing inconsistency; update createSession (the availableModes array) to include 'bypassPermissions' so clients see all modes allowed by applySessionMode, or alternatively remove 'bypassPermissions' from validModes in applySessionMode and any related logic; locate createSession and applySessionMode to make the corresponding addition/removal so both arrays match.

src/services/acp/agent.ts-316-325 (1)
316-325: ⚠️ Potential issue | 🟡 Minor

**Missing `await` on `teardownSession` may cause race conditions.**

`teardownSession` is an async method that calls `cancel()` internally. Calling it without `await` means the error is thrown before the session cleanup completes, potentially leaving state inconsistent if the caller retries immediately.

Proposed fix:

```diff
 if (
   err instanceof Error &&
   (err.message.includes('terminated') ||
     err.message.includes('process exited'))
 ) {
-  this.teardownSession(params.sessionId)
+  await this.teardownSession(params.sessionId)
   throw new Error(
     'The Claude Agent process exited unexpectedly. Please start a new session.',
   )
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/acp/agent.ts` around lines 316 - 325, The teardownSession call in the error-handling branch is asynchronous and currently invoked without awaiting, which can cause race conditions because teardownSession internally calls cancel(); change the handler to await this.teardownSession(params.sessionId) before throwing the new Error so cleanup completes first; locate the block checking err instanceof Error and err.message.includes(...) in the agent (function/method containing teardownSession usage) and add await to the teardownSession invocation and ensure the enclosing function is async or already supports awaiting.
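The race described in this finding can be reproduced in isolation. The sketch below uses generic stand-in names (`teardown`, `withoutAwait`, `withAwait`), not the agent's actual API:

```typescript
// A fire-and-forget call to an async teardown lets the caller observe
// state before cleanup has completed; awaiting it removes the race.
async function teardown(state: { cleanedUp: boolean }): Promise<void> {
  await Promise.resolve() // stand-in for awaiting cancel() and other cleanup
  state.cleanedUp = true
}

async function withoutAwait(): Promise<boolean> {
  const state = { cleanedUp: false }
  void teardown(state) // not awaited: teardown is still suspended here
  return state.cleanedUp // false: observed before cleanup finished
}

async function withAwait(): Promise<boolean> {
  const state = { cleanedUp: false }
  await teardown(state) // cleanup guaranteed complete at this point
  return state.cleanedUp // true
}
```

In the agent code the same principle applies: awaiting `teardownSession` before throwing guarantees the session state is consistent if the caller retries immediately.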
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1e8831d8-63c4-4fc6-9711-764926637ed5
⛔ Files ignored due to path filters (2)
- `bun.lock` is excluded by `!**/*.lock`
- `contributors.svg` is excluded by `!**/*.svg`
📒 Files selected for processing (135)
- .github/workflows/ci.yml
- CLAUDE.md
- README.md
- build.ts
- docs/diagrams/agent-loop-simple.mmd
- docs/diagrams/agent-loop.mmd
- docs/features/acp-zed.md
- docs/features/chrome-use-mcp.md
- docs/features/remote-control-self-hosting.md
- package.json
- packages/@ant/claude-for-chrome-mcp/tsconfig.json
- packages/@ant/computer-use-input/src/backends/darwin.ts
- packages/@ant/computer-use-input/tsconfig.json
- packages/@ant/computer-use-mcp/tsconfig.json
- packages/@ant/computer-use-swift/src/backends/darwin.ts
- packages/@ant/computer-use-swift/src/backends/linux.ts
- packages/@ant/computer-use-swift/src/types.ts
- packages/@ant/computer-use-swift/tsconfig.json
- packages/@ant/ink/tsconfig.json
- packages/@anthropic-ai/model-provider/package.json
- packages/@anthropic-ai/model-provider/src/client/index.ts
- packages/@anthropic-ai/model-provider/src/client/types.ts
- packages/@anthropic-ai/model-provider/src/errorUtils.ts
- packages/@anthropic-ai/model-provider/src/hooks/index.ts
- packages/@anthropic-ai/model-provider/src/hooks/types.ts
- packages/@anthropic-ai/model-provider/src/index.ts
- packages/@anthropic-ai/model-provider/src/providers/gemini/convertMessages.ts
- packages/@anthropic-ai/model-provider/src/providers/gemini/convertTools.ts
- packages/@anthropic-ai/model-provider/src/providers/gemini/modelMapping.ts
- packages/@anthropic-ai/model-provider/src/providers/gemini/streamAdapter.ts
- packages/@anthropic-ai/model-provider/src/providers/gemini/types.ts
- packages/@anthropic-ai/model-provider/src/providers/grok/modelMapping.ts
- packages/@anthropic-ai/model-provider/src/providers/openai/modelMapping.ts
- packages/@anthropic-ai/model-provider/src/shared/openaiConvertMessages.ts
- packages/@anthropic-ai/model-provider/src/shared/openaiConvertTools.ts
- packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts
- packages/@anthropic-ai/model-provider/src/types/errors.ts
- packages/@anthropic-ai/model-provider/src/types/index.ts
- packages/@anthropic-ai/model-provider/src/types/message.ts
- packages/@anthropic-ai/model-provider/src/types/systemPrompt.ts
- packages/@anthropic-ai/model-provider/src/types/usage.ts
- packages/@anthropic-ai/model-provider/tsconfig.json
- packages/agent-tools/src/__tests__/compat.test.ts
- packages/agent-tools/tsconfig.json
- packages/audio-capture-napi/tsconfig.json
- packages/builtin-tools/src/tools/AgentTool/__tests__/agentDisplay.test.ts
- packages/builtin-tools/tsconfig.json
- packages/color-diff-napi/src/__tests__/color-diff.test.ts
- packages/color-diff-napi/tsconfig.json
- packages/image-processor-napi/tsconfig.json
- packages/mcp-client/src/__tests__/InProcessTransport.test.ts
- packages/mcp-client/src/__tests__/discovery.test.ts
- packages/mcp-client/src/__tests__/manager.test.ts
- packages/mcp-client/tsconfig.json
- packages/modifiers-napi/tsconfig.json
- packages/remote-control-server/src/__tests__/disconnect-monitor.test.ts
- packages/remote-control-server/src/__tests__/routes.test.ts
- packages/remote-control-server/src/__tests__/services.test.ts
- packages/remote-control-server/src/__tests__/ws-handler.test.ts
- packages/remote-control-server/src/routes/v1/environments.ts
- packages/remote-control-server/src/routes/v1/environments.work.ts
- packages/remote-control-server/src/routes/v1/session-ingress.ts
- packages/remote-control-server/src/routes/v1/sessions.ts
- packages/remote-control-server/src/routes/v2/code-sessions.ts
- packages/remote-control-server/src/routes/v2/worker-events-stream.ts
- packages/remote-control-server/src/routes/v2/worker-events.ts
- packages/remote-control-server/src/routes/v2/worker.ts
- packages/remote-control-server/src/routes/web/auth.ts
- packages/remote-control-server/src/routes/web/control.ts
- packages/remote-control-server/src/routes/web/sessions.ts
- packages/remote-control-server/src/services/disconnect-monitor.ts
- packages/remote-control-server/src/services/session.ts
- packages/remote-control-server/src/services/transport.ts
- packages/remote-control-server/src/store.ts
- packages/remote-control-server/src/transport/sse-writer.ts
- packages/remote-control-server/src/transport/ws-handler.ts
- packages/remote-control-server/tsconfig.json
- packages/remote-control-server/web/app.js
- packages/remote-control-server/web/components.css
- packages/remote-control-server/web/index.html
- packages/remote-control-server/web/render.js
- packages/remote-control-server/web/utils.js
- packages/url-handler-napi/tsconfig.json
- scripts/dev.ts
- scripts/post-build.ts
- scripts/vite-plugin-feature-flags.ts
- scripts/vite-plugin-import-meta-require.ts
- src/QueryEngine.ts
- src/bridge/bridgeMessaging.ts
- src/cli/handlers/auth.ts
- src/cli/print.ts
- src/commands/poor/__tests__/poorMode.test.ts
- src/components/ConsoleOAuthFlow.tsx
- src/components/messages/SystemAPIErrorMessage.tsx
- src/entrypoints/cli.tsx
- src/entrypoints/sdk/sdkUtilityTypes.ts
- src/keybindings/__tests__/confirmation-keybindings.test.ts
- src/keybindings/defaultBindings.ts
- src/services/acp/__tests__/agent.test.ts
- src/services/acp/__tests__/bridge.test.ts
- src/services/acp/__tests__/permissions.test.ts
- src/services/acp/agent.ts
- src/services/acp/bridge.ts
- src/services/acp/entry.ts
- src/services/acp/permissions.ts
- src/services/acp/utils.ts
- src/services/api/claude.ts
- src/services/api/emptyUsage.ts
- src/services/api/errorUtils.ts
- src/services/api/gemini/convertMessages.ts
- src/services/api/gemini/convertTools.ts
- src/services/api/gemini/modelMapping.ts
- src/services/api/gemini/streamAdapter.ts
- src/services/api/gemini/types.ts
- src/services/api/grok/modelMapping.ts
- src/services/api/openai/__tests__/convertMessages.test.ts
- src/services/api/openai/__tests__/queryModelOpenAI.test.ts
- src/services/api/openai/convertMessages.ts
- src/services/api/openai/convertTools.ts
- src/services/api/openai/index.ts
- src/services/api/openai/modelMapping.ts
- src/services/api/openai/streamAdapter.ts
- src/services/langfuse/__tests__/langfuse.test.ts
- src/services/langfuse/convert.ts
- src/services/langfuse/tracing.ts
- src/types/message.ts
- src/utils/__tests__/bunHashPolyfill.test.ts
- src/utils/__tests__/earlyInput.test.ts
- src/utils/__tests__/imageResizer.test.ts
- src/utils/forkedAgent.ts
- src/utils/sideQuestion.ts
- src/utils/systemPromptType.ts
- tsconfig.base.json
- tsconfig.json
- vite.config.ts
💤 Files with no reviewable changes (1)
- src/keybindings/defaultBindings.ts
```ts
import { execFile, execFileSync } from 'child_process'
import { promisify } from 'util'
import type { FrontmostAppInfo, InputBackend } from '../types.js'

const execFileAsync = promisify(execFile)
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify runtime guidance and this file's process-exec strategy.
set -euo pipefail
# 1) Locate Bun runtime guidance in repo docs.
fd -i 'CLAUDE.md' -x rg -n -C2 "Runtime must be Bun|Bun\\.build|bun:bundle|feature\\(" {}
# 2) Inspect current backend implementation for Node process APIs.
rg -n -C2 "from 'child_process'|from \"child_process\"|execFileAsync|execFileSync\\(" packages/@ant/computer-use-input/src/backends/darwin.ts

Repository: claude-code-best/claude-code
Length of output: 3970
🏁 Script executed:
# Check if this is a separate package with its own configuration
cat packages/@ant/computer-use-input/package.json | head -30
# Check the directory structure and purpose
ls -la packages/@ant/computer-use-input/ | head -20
# Search for any documentation about this package
find packages/@ant/computer-use-input -maxdepth 2 -name "README*" -o -name "*.md" | xargs cat 2>/dev/null | head -100

Repository: claude-code-best/claude-code
Length of output: 503
🏁 Script executed:
# Find where `@ant/computer-use-input` is imported
rg -n "@ant/computer-use-input" --type ts --type tsx
# Check if there's a workspace configuration or root tsconfig
cat package.json | head -50

Repository: claude-code-best/claude-code
Length of output: 1459
🏁 Script executed:
# Search for imports of the package without type restrictions
rg "@ant/computer-use-input" -A 2
# Also check if there's a references to darwin.ts anywhere
rg "darwin\.ts|backends/darwin" -n

Repository: claude-code-best/claude-code
Length of output: 9342
🏁 Script executed:
# Check how darwin backend is conditionally loaded or gated
rg -n "require.*darwin|import.*darwin|isSupported" packages/@ant/computer-use-input/src/
# Check the index.ts to see platform dispatch logic
cat packages/@ant/computer-use-input/src/index.ts

Repository: claude-code-best/claude-code
Length of output: 2780
child_process and util imports prevent darwin backend from loading in Bun runtime.
Lines 8–9 import Node.js-only modules at the top level of darwin.ts. When Bun runtime executes require('./backends/darwin.js') in index.ts, these imports fail because Bun lacks child_process and util modules. The try-catch in loadBackend() masks this silently, leaving isSupported = false on macOS — breaking computer-use functionality.
Fix: Dynamically import child_process and util only when needed (lazy-load inside functions), or use Bun's native process APIs (Bun.spawn()) instead of Node.js execFile/execFileSync.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `packages/@ant/computer-use-input/src/backends/darwin.ts` around lines 8-13,
Top-level imports of Node-only modules (execFile, execFileSync from
child_process and promisify from util which creates execFileAsync) prevent the
darwin backend from loading in Bun; remove those top-level imports and lazy-load
the required APIs inside the functions that actually run commands (or replace
with Bun.spawn usage). Specifically, eliminate the top-level
execFile/execFileSync/promisify/execFileAsync symbols and instead require/import
child_process and util (or call Bun.spawn) within the command-running functions
in packages/@ant/computer-use-input/src/backends/darwin.ts so the module can be
loaded under Bun and only uses Node APIs when actually invoked.
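The lazy-load direction the prompt describes can be sketched as follows. This is a hypothetical shape, not the backend's actual code: `ExecFn`, `getExecFileAsync`, and `runOsascript` are illustrative names, and the real fix might equally use `Bun.spawn`:

```typescript
// Defer Node-only module loading to first use so that merely importing
// this file never touches child_process/util at module scope.
type ExecFn = (
  cmd: string,
  args: readonly string[],
) => Promise<{ stdout: string; stderr: string }>

let cached: ExecFn | undefined

async function getExecFileAsync(): Promise<ExecFn> {
  if (!cached) {
    // Imported only when a command actually needs to run.
    const { execFile } = await import('node:child_process')
    const { promisify } = await import('node:util')
    cached = promisify(execFile) as unknown as ExecFn
  }
  return cached
}

async function runOsascript(script: string): Promise<string> {
  const exec = await getExecFileAsync()
  const { stdout } = await exec('osascript', ['-e', script])
  return stdout.trim()
}
```

With this shape, loading the module never evaluates the Node imports, so a `loadBackend()` try/catch can succeed on any runtime and the process APIs are only resolved when a macOS command is actually invoked.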
```diff
@@ -0,0 +1,7 @@
+{
+  "extends": "../../tsconfig.json",
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "All tsconfig.json files:"
fd -a '^tsconfig\.json$'
echo
python - <<'PY'
import json
from pathlib import Path
p = Path('packages/@anthropic-ai/model-provider/tsconfig.json')
cfg = json.loads(p.read_text())
ext = cfg.get("extends")
resolved = (p.parent / ext).resolve()
print("extends =", ext)
print("resolved =", resolved)
print("exists =", resolved.exists())
PY

Repository: claude-code-best/claude-code
Length of output: 1140
Fix the extends path: target file does not exist.
The `extends` path `../../tsconfig.json` resolves to `packages/tsconfig.json`, which does not exist in the repository. The repository root `tsconfig.json` is at the root level. Change line 2 to:

```json
"extends": "../../../tsconfig.json",
```

This will correctly reference the base configuration at the repository root.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `packages/@anthropic-ai/model-provider/tsconfig.json` at line 2, update the
"extends" value in the package's tsconfig.json so it points to the repository
root base config instead of a non-existent packages/tsconfig.json; locate the
"extends" property in packages/@anthropic-ai/model-provider/tsconfig.json and
change its path to reference the root tsconfig.json (i.e., adjust the extends
string upward one extra directory level) so the package inherits the correct
base configuration.
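The resolution rule behind this fix is that `extends` is resolved relative to the directory containing the extending config. A quick sketch of that rule (the paths and the `resolveExtends` helper are illustrative, not part of the repo):

```typescript
import * as path from 'node:path'

// "extends" in a tsconfig resolves relative to the directory of the
// config file that declares it (posix paths used for determinism).
function resolveExtends(configDir: string, extendsPath: string): string {
  return path.posix.resolve(configDir, extendsPath)
}
```

From `packages/@anthropic-ai/model-provider`, `../../tsconfig.json` lands at `packages/tsconfig.json`, while `../../../tsconfig.json` reaches the repository root.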
🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)
29-32: **Harden Codecov step by pinning the action SHA and making LCOV input explicit.**

Using a mutable tag (`@v5`) and implicit file discovery is less reproducible. Pin to a full-length commit SHA and pass the LCOV file directly. Since the build generates coverage via `bun test --coverage --coverage-reporter=lcov`, explicitly specify the output file.

Proposed change:

```diff
 - name: Upload coverage to Codecov
-  uses: codecov/codecov-action@v5
+  uses: codecov/codecov-action@<full-length-commit-sha-for-v5>
   with:
     token: ${{ secrets.CODECOV_TOKEN }}
+    files: ./coverage/lcov.info
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci.yml around lines 29 - 32, The CI step "Upload coverage to Codecov" currently uses the mutable tag codecov/codecov-action@v5 and relies on implicit discovery; replace the mutable tag with the action's full commit SHA (pin to a specific full-length SHA for codecov/codecov-action) and add an explicit LCOV input (e.g., files: ./coverage/lcov.info) alongside the token to ensure deterministic behavior given your bun test --coverage --coverage-reporter=lcov output; update the step that references uses: codecov/codecov-action@v5 and the with: block to include the pinned SHA and the explicit LCOV file path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 29-32: The CI step "Upload coverage to Codecov" currently uses the
mutable tag codecov/codecov-action@v5 and relies on implicit discovery; replace
the mutable tag with the action's full commit SHA (pin to a specific full-length
SHA for codecov/codecov-action) and add an explicit LCOV input (e.g., files:
./coverage/lcov.info) alongside the token to ensure deterministic behavior given
your bun test --coverage --coverage-reporter=lcov output; update the step that
references uses: codecov/codecov-action@v5 and the with: block to include the
pinned SHA and the explicit LCOV file path.
Summary by CodeRabbit
New Features
Chores
Tests