
feat: refactor provider hierarchy #286

Open
claude-code-best wants to merge 25 commits into main from refactor/provider

Conversation

Owner

claude-code-best commented Apr 16, 2026

Summary by CodeRabbit

  • New Features

    • Added agent-loop diagrams to docs.
    • Query engine now exposes abort/reset signal controls.
    • Introduced a model-provider layer enabling additional model backends and improved API error/SSL hinting.
  • Chores

    • Bumped package version to 1.4.0 and expanded workspace/build scripts (Vite/Bun/typecheck).
  • Tests

    • Added tests for poor mode, keybindings, early input, image detection, and related utilities.

claude-code-best and others added 21 commits April 13, 2026 23:10
- Create the workspace package packages/@anthropic-ai/model-provider
- Define the ModelProviderHooks interface (dependency injection: analytics, cost, logging, etc.)
- Define the ClientFactories interface (Anthropic/OpenAI/Gemini/Grok client factories)
- Move in the core types: the Message family, NonNullableUsage, EMPTY_USAGE, SystemPrompt, and the error constants
- Change src/types/message.ts and similar files in the main project to re-exports, preserving backward compatibility

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Move in the OpenAI message conversion (convertMessages), tool conversion (convertTools), and stream adaptation (streamAdapter)
- Move in the OpenAI and Grok model mappings (resolveOpenAIModel, resolveGrokModel)
- Change the corresponding main-project files into thin re-export proxies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Move in the Gemini type definitions, message conversion, tool conversion, stream adaptation, and model mapping
- Change the files under the main project's gemini/ directory into thin re-export proxies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Move in errorUtils such as formatAPIError and extractConnectionErrorDetails
- Migrate 10 consumer files to import directly from @anthropic-ai/model-provider
- Update emptyUsage, sdkUtilityTypes, and systemPromptType to re-export proxies

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix: fix the Bun polyfill issue

* fix: complete the type fixes

* feat: unify the type files across all packages

* fix: fix the build issues
Co-authored-by: chengzifeng <chengzifeng@meituan.com>
Convert Anthropic-format tool definitions into the Langfuse-compatible OpenAI format and pass them into the generation's input as a { messages, tools } structure, so the full tool definitions are visible in the Langfuse UI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* feat: adapt to the Zed ACP protocol

* docs: flesh out the ACP documentation
mintlify bot commented Apr 16, 2026

Preview deployment for your docs. Learn more about Mintlify Previews.

Project ccb-863780bf: 🟢 Ready, updated Apr 16, 2026, 12:51 PM (UTC)


Contributor

coderabbitai bot commented Apr 16, 2026

📝 Walkthrough

Centralizes provider logic into a new package @anthropic-ai/model-provider, migrating OpenAI/Gemini/Grok conversions, stream adapters, types, and error/usage utilities to that package and replacing many local implementations with re-exports; adds package/TS config and tests, and two agent-loop diagrams.

Changes

  • New provider package (core & exports)
    Files: packages/@anthropic-ai/model-provider/package.json, packages/@anthropic-ai/model-provider/tsconfig.json, packages/@anthropic-ai/model-provider/src/index.ts, packages/@anthropic-ai/model-provider/src/client/*, packages/@anthropic-ai/model-provider/src/hooks/*
    Adds a package that exposes client-factory registration, hooks registration, and a central export surface for provider utilities/types.

  • Provider: error, types, usage, and hooks
    Files: packages/@anthropic-ai/model-provider/src/errorUtils.ts, packages/@anthropic-ai/model-provider/src/types/*, packages/@anthropic-ai/model-provider/src/types/index.ts, packages/@anthropic-ai/model-provider/src/types/usage.ts
    Implements API/error formatting, SSL hinting, message/usage/systemPrompt types, and the EMPTY_USAGE constant.

  • Provider: OpenAI/Gemini/Grok adapters & stream adapters
    Files: packages/@anthropic-ai/model-provider/src/shared/openai*, packages/@anthropic-ai/model-provider/src/providers/gemini/*, packages/@anthropic-ai/model-provider/src/providers/grok/*, packages/@anthropic-ai/model-provider/src/providers/openai/*
    Adds message/tool conversion, JSON-schema sanitization, model mapping, and stream adaptation for OpenAI, Gemini, and Grok.

  • Replaced local implementations with re-exports
    Files: src/services/api/openai/..., src/services/api/gemini/..., src/services/api/grok/modelMapping.ts, src/services/api/gemini/types.ts, src/services/api/openai/streamAdapter.ts
    Deletes large local implementations and replaces them with re-exports from @anthropic-ai/model-provider, keeping the external API names while moving the logic into the package.

  • Import migrations across the codebase
    Files: src/services/api/errorUtils.ts, src/services/api/emptyUsage.ts, src/entrypoints/sdk/sdkUtilityTypes.ts, src/cli/handlers/auth.ts, src/cli/print.ts, src/components/*, src/utils/*, src/QueryEngine.ts, src/types/message.ts
    Updates imports to consume types/values/functions (formatAPIError, getSSLErrorHint, EMPTY_USAGE, NonNullableUsage, SystemPrompt, message types) from the new provider package; adds the QueryEngine methods resetAbortController and getAbortSignal.

  • Package & TypeScript configuration
    Files: package.json, tsconfig.base.json, tsconfig.json, packages/tsconfig.json, packages/*/tsconfig.json (new/updated)
    Adds root/package TS configs, narrows types in the root tsconfig, extends the yarn workspace, and adds build/typecheck scripts and dependencies.

  • Tests added/updated
    Files: src/services/api/openai/__tests__/convertMessages.test.ts, src/commands/poor/__tests__/poorMode.test.ts, src/keybindings/__tests__/confirmation-keybindings.test.ts, src/utils/__tests__/*, packages/builtin-tools/.../agentDisplay.test.ts
    Adds test suites for message-conversion ordering, poor mode settings, keybindings, polyfills, early input, and image-format detection; expands one existing test mock.

  • Docs: agent-loop diagrams
    Files: docs/diagrams/agent-loop.mmd, docs/diagrams/agent-loop-simple.mmd
    Adds two Mermaid diagrams modeling the agent-loop control flow (detailed and simplified).

  • CI update
    Files: .github/workflows/ci.yml
    Enables test-coverage reporting and uploads lcov to Codecov.

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

  • #284 — Touches src/QueryEngine.ts (adds the reset/get abort-controller methods) and overlaps the same QueryEngine changes in this PR.
  • #199 — Modifies OpenAI message conversion behavior; overlaps the OpenAI convertMessages migration to the new provider package.
  • #125 — Adds/edits Gemini adapter and types; overlaps the Gemini provider logic now moved into @anthropic-ai/model-provider.

Poem

🐇 I hopped through modules, gathered each strand,
Moved streams and types into one tidy land.
From errors to models and messages bright,
I stitched them together by moonbeams at night.
A cheery rabbit—package bundled and light!

🚥 Pre-merge checks: ✅ 3 passed

  • Description check — ✅ Passed. Check skipped: CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed. The title '重构供应商层次' (refactor provider hierarchy) accurately describes the main objective of the PR: extracting provider-layer logic into a new @anthropic-ai/model-provider package and refactoring imports across the codebase.
  • Docstring coverage — ✅ Passed. No functions found in the changed files to evaluate; docstring coverage check skipped.



Comment thread: packages/remote-control-server/web/app.js (Fixed)

Contributor

coderabbitai bot left a comment


Actionable comments posted: 2

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/services/langfuse/tracing.ts (1)

80-96: ⚠️ Potential issue | 🟡 Minor

Fix empty array truthiness check for tools parameter.

In JavaScript, an empty array [] is truthy, so the condition params.tools ? ... : ... will incorrectly wrap the input as { messages, tools: [] } when convertToolsToLangfuse returns an empty array. This changes the Langfuse recorded input shape inconsistently compared to requests without tools.

Suggested fix
 export function recordLLMObservation(
   rootSpan: LangfuseSpan | null,
   params: {
@@
 ): void {
   if (!rootSpan || !isLangfuseEnabled()) return
   try {
     const genName = PROVIDER_GENERATION_NAMES[params.provider] ?? `Chat${params.provider}`
+    const hasTools = Array.isArray(params.tools) ? params.tools.length > 0 : Boolean(params.tools)
 
     // Use the global startObservation directly instead of rootSpan.startObservation().
@@
       {
         model: params.model,
-        input: params.tools
+        input: hasTools
           ? { messages: params.input, tools: params.tools }
           : params.input,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/langfuse/tracing.ts` around lines 80 - 96, The conditional that
builds the Langfuse generation input uses params.tools truthiness, but an empty
array is truthy and will incorrectly wrap input as { messages, tools: [] };
update the check where the input is constructed (inside the startObservation
call in this tracing module using genName and params) to only include tools when
they are a non-empty array (e.g., Array.isArray(params.tools) &&
params.tools.length > 0) or otherwise use params.input directly so empty tool
arrays do not change the recorded input shape.
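The truthiness pitfall above is easy to reproduce in isolation. A minimal sketch (buildInput and its arguments are illustrative names, not the actual tracing code):

```javascript
// Demonstrates why `tools ? ... : ...` misbehaves for empty arrays:
// in JavaScript, [] is truthy, so the buggy branch wraps the input
// even when there are no tools.
function buildInput(messages, tools) {
  // Buggy form: produces { messages, tools: [] } for an empty array.
  const buggy = tools ? { messages, tools } : messages;
  // Fixed form: only wrap when tools is a non-empty array.
  const hasTools = Array.isArray(tools) && tools.length > 0;
  const fixed = hasTools ? { messages, tools } : messages;
  return { buggy, fixed };
}

const { buggy, fixed } = buildInput(['hi'], []);
console.log(buggy); // wrapped shape with an empty tools array
console.log(fixed); // plain messages, shape unchanged
```

Calling buildInput(['hi'], []) shows the buggy branch wrapping the input as { messages, tools: [] } while the fixed branch leaves the messages untouched.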
🟠 Major comments (19)
src/utils/__tests__/earlyInput.test.ts-72-91 (1)

72-91: ⚠️ Potential issue | 🟠 Major

Regression tests are non-functional (asserting constants, not behavior).

Both cases pass even if filtering breaks, because they only compare local string literals and never invoke the code path under test.

Safer interim change (avoid false confidence)
-  test('DA1 response sequence pattern is documented (CSI ? ... c)', () => {
-    // \x1b[?64;1;2;4;6;17;18;21;22c — previously leaked as "?64;1;2;4;6;17;18;21;22c"
-    // After fix: CSI sequences are fully consumed, nothing leaks
-    // We document the expected clean output here
-    const leakedBefore = '?64;1;2;4;6;17;18;21;22c'
-    const cleanAfter = ''
-    // The fix ensures processChunk produces cleanAfter, not leakedBefore
-    // (verified manually; this test documents the contract)
-    expect(leakedBefore).not.toBe(cleanAfter) // sanity: they differ
-    expect(cleanAfter).toBe('') // after fix: nothing leaks
-  })
+  test.todo('DA1 response sequence is fully filtered in stdin → processChunk path')

-  test('XTVERSION DCS sequence pattern is documented (ESC P ... ESC \\)', () => {
-    // \x1bP>|iTerm2 3.6.4\x1b\\ — previously leaked as ">|iTerm2 3.6.4"
-    // After fix: DCS sequences are fully consumed via ST terminator
-    const leakedBefore = '>|iTerm2 3.6.4'
-    const cleanAfter = ''
-    expect(leakedBefore).not.toBe(cleanAfter)
-    expect(cleanAfter).toBe('')
-  })
+  test.todo('XTVERSION DCS sequence is fully filtered in stdin → processChunk path')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/__tests__/earlyInput.test.ts` around lines 72 - 91, The tests
currently assert only local constants instead of exercising the real filter
logic; update both tests to call the actual function under test (e.g.,
processChunk) with the raw escape sequences and assert its return/side-effect
produces the empty string. Specifically, in the "DA1 response sequence
pattern..." test call processChunk with "\x1b[?64;1;2;4;6;17;18;21;22c" and
expect the result to be "" (and/or verify any buffer state is cleared), and in
the "XTVERSION DCS sequence pattern..." test call processChunk (or the
DCS-handling entrypoint) with "\x1bP>|iTerm2 3.6.4\x1b\\" and expect "" as the
filtered output; ensure you import/require processChunk (or the correct exported
function) and reset any parser state between assertions so tests are
deterministic.
src/commands/poor/__tests__/poorMode.test.ts-28-53 (1)

28-53: ⚠️ Potential issue | 🟠 Major

The “first call reads settings” path is not actually tested.

Line 45 and Line 51 call setPoorMode(...) before isPoorModeActive(), so these cases validate setter effects, not initial-read behavior. Also, freshModule() (Line 29) is empty, so singleton reset is currently unimplemented.

Suggested test adjustment
-/** Reset module-level singleton between tests by re-importing a fresh copy. */
-async function freshModule() {
-  // Bun caches modules; we manipulate the exported functions directly since
-  // the singleton `poorModeActive` is reset to null only on first import.
-  // Instead we test the observable behaviour through set/get pairs.
-}
+// Prefer an explicit test reset hook exported from poorMode.ts, e.g.:
+// export function __resetPoorModeForTest() { poorModeActive = null }

 describe('isPoorModeActive — reads from settings on first call', () => {
   beforeEach(() => {
     lastUpdate = null
+    mockSettings = {}
+    // __resetPoorModeForTest()
   })

   test('returns false when settings has no poorMode key', () => {
     mockSettings = {}
-    // Force re-read by setting internal state via setPoorMode then checking
-    setPoorMode(false)
     expect(isPoorModeActive()).toBe(false)
   })

   test('returns true when settings.poorMode === true', () => {
     mockSettings = { poorMode: true }
-    setPoorMode(true)
     expect(isPoorModeActive()).toBe(true)
   })
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/commands/poor/__tests__/poorMode.test.ts` around lines 28 - 53,
freshModule() is currently a no-op so tests never exercise the "first-call reads
settings" path; implement freshModule to reset the module-level singleton (clear
module cache or re-require the module) and reset internal state (e.g., set
lastUpdate = null and poorModeActive = null if present) so a fresh import will
read mockSettings on first isPoorModeActive() call, then update the tests to
call freshModule() before invoking isPoorModeActive() (remove the prior
setPoorMode(...) calls used to force state) so the tests validate initial-read
behavior; refer to freshModule, isPoorModeActive, setPoorMode, mockSettings and
lastUpdate/poorModeActive in the test file when making changes.
packages/remote-control-server/src/transport/sse-writer.ts-133-146 (1)

133-146: ⚠️ Potential issue | 🟠 Major

Preserve string-form user messages when normalizing worker payloads.

If a user event comes through as { message: "..." }, this branch rewrites it to { message: { content: "" } } because the string message case is never used as a fallback. That drops real user text from the SSE stream.

🔧 Proposed fix
   if (event.type === "user") {
     const message = payload.message;
     if (!message || typeof message !== "object" || !("content" in message)) {
       const content =
         typeof normalized?.content === "string"
           ? normalized.content
+          : typeof normalized?.message === "string"
+            ? normalized.message
           : typeof payload.content === "string"
             ? payload.content
           : typeof event.payload === "string"
             ? event.payload
             : "";
       payload.content = content;
       payload.message = { content };
     }
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/remote-control-server/src/transport/sse-writer.ts` around lines 133
- 146, In the SSE normalization branch handling event.type === "user" (in
sse-writer.ts), the current fallback logic ignores when payload.message is a
string and replaces it with an empty content object; update the fallback so a
string message is preserved: if payload.message is a string use that string as
content before checking normalized.content, payload.content, or event.payload,
and then set payload.content and payload.message = { content } only after
deriving the correct string content; modify references in this block (payload,
message, normalized, event.payload) accordingly so real user text is not
dropped.
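The fallback chain recommended above can be sketched standalone (normalizeUserPayload is a hypothetical helper; the real sse-writer logic also consults the normalized object and event.payload):

```javascript
// Preserve a string-form user message instead of overwriting it with "".
function normalizeUserPayload(payload) {
  const msg = payload.message;
  // Already in the canonical { message: { content } } shape: leave it alone.
  if (msg && typeof msg === 'object' && 'content' in msg) return payload;
  // Prefer a string-valued `message` before other fallbacks.
  const content =
    typeof msg === 'string' ? msg
    : typeof payload.content === 'string' ? payload.content
    : '';
  return { ...payload, content, message: { content } };
}
```

With this ordering, { message: "hello" } normalizes to { message: { content: "hello" } } rather than an empty content object.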
packages/remote-control-server/web/render.js-163-173 (1)

163-173: ⚠️ Potential issue | 🟠 Major

Don't drop user text when the same event also contains embedded tool_result blocks.

Both branches bail out as soon as renderEmbeddedToolResultBlocks(payload) returns anything, so mixed-content user messages render only the tool result cards and lose their text. In the live path that also skips the loading state for a real user prompt.

Suggested fix
     case "user":
       {
         const toolResultEls = renderEmbeddedToolResultBlocks(payload);
-        if (toolResultEls.length > 0) {
-          els.push(...toolResultEls);
-          break;
-        }
-        if (!shouldRenderUserEvent(payload, direction, false)) return;
-        els.push(renderUserMessage(payload, direction));
-        needLoading = true;
+        const text = extractText(payload).trim();
+        if (text) {
+          if (!shouldRenderUserEvent(payload, direction, false)) return;
+          els.push(renderUserMessage(payload, direction));
+          needLoading = true;
+        }
+        if (toolResultEls.length > 0) els.push(...toolResultEls);
       }
       break;

Apply the same pattern in the replay branch so text and tool results can both render.

Also applies to: 234-244

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/remote-control-server/web/render.js` around lines 163 - 173, The
current case "user" branch drops user text whenever
renderEmbeddedToolResultBlocks(payload) returns non-empty; update the logic so
that both embedded tool result elements and the user message can be pushed to
histEls: call renderEmbeddedToolResultBlocks(payload) and always push its
results (if any) into histEls, then independently check
shouldRenderUserEvent(payload, direction, true) and push
renderUserMessage(payload, direction) if true; apply the same change to the
replay branch (the similar code around the other user-case at the second
location) so mixed-content messages render both tool_result blocks and the user
text.
packages/remote-control-server/src/services/session.ts-17-17 (1)

17-17: ⚠️ Potential issue | 🟠 Major

Clean up the event bus for inactive sessions too.

inactive is now classified as a closed status, but only archiveSession() removes the event bus. If a session transitions to "inactive" through updateSessionStatus(), the bus stays registered and existing stream subscribers can remain attached to a session that the rest of this service already treats as closed.

Suggested fix
 export function updateSessionStatus(sessionId: string, status: string) {
   storeUpdateSession(sessionId, { status });
   const bus = getAllEventBuses().get(sessionId);
-  if (!bus) return;
-
-  bus.publish({
-    id: uuid(),
-    sessionId,
-    type: "session_status",
-    payload: { status },
-    direction: "inbound",
-  });
+  if (bus) {
+    bus.publish({
+      id: uuid(),
+      sessionId,
+      type: "session_status",
+      payload: { status },
+      direction: "inbound",
+    });
+  }
+  if (isSessionClosedStatus(status)) {
+    removeEventBus(sessionId);
+  }
 }
 
 export function archiveSession(sessionId: string) {
   updateSessionStatus(sessionId, "archived");
-  removeEventBus(sessionId);
 }

Also applies to: 78-80, 130-150

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/remote-control-server/src/services/session.ts` at line 17, The event
bus is only removed in archiveSession() so sessions marked "inactive" (now in
CLOSED_SESSION_STATUSES) keep their event bus; factor the event-bus cleanup used
in archiveSession() into a reusable helper (e.g., cleanupEventBusForSession or
removeEventBus) and call it from updateSessionStatus() whenever the new status
is a member of CLOSED_SESSION_STATUSES (and keep archiveSession() calling it
too), so any transition to a closed status (including "inactive") unregisters
the event bus and detaches stream subscribers.
build.ts-93-110 (1)

93-110: ⚠️ Potential issue | 🟠 Major

Reset the regex state before scanning each file.

BUN_DESTRUCTURE is global, so .test(content) advances lastIndex. After the first match, later files can be skipped and left unpatched, which makes the Node-compat pass flaky.

Suggested fix
 const BUN_DESTRUCTURE = /var \{([^}]+)\} = globalThis\.Bun;?/g
 const BUN_DESTRUCTURE_SAFE = 'var {$1} = typeof globalThis.Bun !== "undefined" ? globalThis.Bun : {};'
 for (const file of files) {
   if (!file.endsWith('.js')) continue
   const filePath = join(outdir, file)
   const content = await readFile(filePath, 'utf-8')
-  if (BUN_DESTRUCTURE.test(content)) {
+  BUN_DESTRUCTURE.lastIndex = 0
+  const patchedContent = content.replace(BUN_DESTRUCTURE, BUN_DESTRUCTURE_SAFE)
+  if (patchedContent !== content) {
     await writeFile(
       filePath,
-      content.replace(BUN_DESTRUCTURE, BUN_DESTRUCTURE_SAFE),
+      patchedContent,
     )
     bunPatched++
   }
 }
-BUN_DESTRUCTURE.lastIndex = 0

Based on learnings, build output must be compatible with both Bun and Node.js.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@build.ts` around lines 93 - 110, The global RegExp BUN_DESTRUCTURE retains
state between .test calls causing some files to be skipped; fix by resetting or
re-instantiating the regex before testing each file in the files loop (e.g.,
recreate BUN_DESTRUCTURE or set BUN_DESTRUCTURE.lastIndex = 0 at the top of each
iteration) so that the test against content always starts fresh before calling
writeFile and incrementing bunPatched; ensure this change references
BUN_DESTRUCTURE, the for (const file of files) loop, readFile, writeFile, and
bunPatched so all JS outputs are reliably patched for Node compatibility.
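The statefulness described here is a general property of /g regexes: .test() and .exec() resume from lastIndex. A minimal demonstration using the same pattern:

```javascript
// A global regex carries match position across calls via lastIndex.
const BUN_DESTRUCTURE = /var \{([^}]+)\} = globalThis\.Bun;?/g;
const src = 'var {write} = globalThis.Bun;';

const first = BUN_DESTRUCTURE.test(src);  // match found, lastIndex advances
const second = BUN_DESTRUCTURE.test(src); // same input now "fails"
BUN_DESTRUCTURE.lastIndex = 0;            // explicit reset restores determinism
const third = BUN_DESTRUCTURE.test(src);
```

Note that String.prototype.replace with a global regex always scans from index 0 (it resets lastIndex itself), which is why the replace-then-compare form in the suggested fix is immune to this bug.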
scripts/vite-plugin-feature-flags.ts-43-47 (1)

43-47: ⚠️ Potential issue | 🟠 Major

Only treat FEATURE_* variables set to 1 as enabled.

Right now any present variable enables the flag, so FEATURE_ACP=0 and FEATURE_ACP=false still turn ACP on. That breaks the documented override contract.

Suggested fix
 export function getEnabledFeatures(): Set<string> {
-  const envFeatures = Object.keys(process.env)
-    .filter((k) => k.startsWith("FEATURE_"))
-    .map((k) => k.replace("FEATURE_", ""));
+  const envFeatures = Object.entries(process.env)
+    .filter(([k, v]) => k.startsWith("FEATURE_") && v === "1")
+    .map(([k]) => k.replace("FEATURE_", ""));
   return new Set([...DEFAULT_BUILD_FEATURES, ...envFeatures]);
 }

Based on learnings: Enable feature flags at runtime using environment variables: FEATURE_<FLAG_NAME>=1.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/vite-plugin-feature-flags.ts` around lines 43 - 47, The current
getEnabledFeatures treats any FEATURE_* env var as enabled; change it to only
enable flags whose process.env value equals "1". In getEnabledFeatures(),
iterate Object.entries(process.env) or filter Object.keys by checking
process.env[k] === "1" (instead of only startsWith), extract the flag name via
k.replace("FEATURE_", ""), and then return new Set([...DEFAULT_BUILD_FEATURES,
...envFeatures]) so only FEATURE_<NAME>=1 adds the feature while keeping
DEFAULT_BUILD_FEATURES untouched.
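A self-contained sketch of the stricter contract (the env object is passed explicitly for testability; DEFAULT_BUILD_FEATURES here is a placeholder, not the real build allowlist):

```javascript
const DEFAULT_BUILD_FEATURES = ['CORE']; // placeholder for the canonical list

// Only FEATURE_<NAME>=1 enables a flag; FEATURE_X=0 or =false stays off.
function getEnabledFeatures(env) {
  const envFeatures = Object.entries(env)
    .filter(([k, v]) => k.startsWith('FEATURE_') && v === '1')
    .map(([k]) => k.slice('FEATURE_'.length)); // strip the prefix
  return new Set([...DEFAULT_BUILD_FEATURES, ...envFeatures]);
}
```

Filtering on the value (not just the key) is the whole fix: presence alone no longer flips a flag on.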
packages/@anthropic-ai/model-provider/src/providers/grok/modelMapping.ts-34-40 (1)

34-40: ⚠️ Potential issue | 🟠 Major

Validate GROK_MODEL_MAP values and honor exact model keys first.

JSON.parse() is asserted straight to Record<string, string>, so non-string values can pass through unchecked, and the resolver only reads userMap[family]. That means GROK_MODEL_MAP='{"claude-opus-4-6":"grok-x"}' is ignored even though this option is documented as a model map.

Suggested fix
 function getUserModelMap(): Record<string, string> | null {
   const raw = process.env.GROK_MODEL_MAP
   if (!raw) return null
   try {
-    const parsed = JSON.parse(raw)
+    const parsed = JSON.parse(raw) as Record<string, unknown>
     if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
-      return parsed as Record<string, string>
+      return Object.fromEntries(
+        Object.entries(parsed).filter(([, value]) => typeof value === 'string'),
+      ) as Record<string, string>
     }
   } catch {
     // ignore invalid JSON
   }
   return null
 }
@@
   const userMap = getUserModelMap()
-  if (userMap && family && userMap[family]) {
+  if (userMap?.[cleanModel]) {
+    return userMap[cleanModel]
+  }
+  if (userMap && family && userMap[family]) {
     return userMap[family]
   }

As per coding guidelines: For unknown structure objects, use Record<string, unknown> instead of any type.

Also applies to: 59-61

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@anthropic-ai/model-provider/src/providers/grok/modelMapping.ts`
around lines 34 - 40, The GROK_MODEL_MAP parsing currently casts JSON.parse() to
Record<string,string> and doesn't validate values or exact model keys; update
getUserModelMap to parse into Record<string,unknown>, iterate entries and build
a new Record<string,string> that only includes keys whose values are typeof
'string' (discard non-string entries), and return null if no valid mappings;
also ensure the resolver logic that looks up userMap[family] first is preserved
so exact model keys (e.g., "claude-opus-4-6") are honored before any
family-based lookup—apply the same input validation/filtering approach to the
similar code around the other mapping code referenced (lines ~59-61).
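The validation step can be sketched as follows (parseModelMap is an illustrative name; the real code reads process.env.GROK_MODEL_MAP):

```javascript
// Parse a JSON model map, keeping only entries whose values are strings.
function parseModelMap(raw) {
  if (!raw) return null;
  try {
    const parsed = JSON.parse(raw);
    if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
      const entries = Object.entries(parsed)
        .filter(([, v]) => typeof v === 'string'); // drop non-string values
      if (entries.length > 0) return Object.fromEntries(entries);
    }
  } catch {
    // ignore invalid JSON
  }
  return null;
}

// Exact model key first, then the family key, per the suggested fix.
function resolveMapped(map, model, family) {
  if (!map) return undefined;
  return map[model] ?? (family ? map[family] : undefined);
}
```

With this shape, GROK_MODEL_MAP='{"claude-opus-4-6":"grok-x"}' resolves via the exact key even when no family entry exists.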
scripts/vite-plugin-feature-flags.ts-7-38 (1)

7-38: ⚠️ Potential issue | 🟠 Major

Source the default feature list from the canonical build config.

build:vite now carries its own allowlist, so it can drift from build.ts and produce a different binary for the same commit. This list should come from the same source of truth instead of being duplicated here.

Based on learnings: Feature flags are disabled by default. Dev mode enables all features by default; build mode enables 19 specific features (listed in build.ts). Centrally manage build defines (constants, version numbers) in scripts/defines.ts.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/vite-plugin-feature-flags.ts` around lines 7 - 38, The hardcoded
DEFAULT_BUILD_FEATURES array in vite-plugin-feature-flags.ts must be removed and
the canonical feature-list imported from the single source of truth used by
build.ts (e.g., the exported build feature constant in scripts/defines.ts or the
build.ts module). Replace DEFAULT_BUILD_FEATURES with the imported symbol (e.g.,
BUILD_FEATURES or DEFAULT_FEATURE_FLAGS) and ensure the plugin still applies the
same dev vs build semantics (dev: enable all; build: use the imported allowlist
of 19 features) so the Vite build and the primary build pipeline use the
identical feature set.
src/services/acp/entry.ts-41-62 (1)

41-62: ⚠️ Potential issue | 🟠 Major

Guard shutdown before the first agent instance exists.

If stdin closes or a signal arrives before the connection callback assigns agent, shutdown() throws on agent.sessions and the cleanup path fails.

Suggested fix
-  let agent!: AcpAgent
+  let agent: AcpAgent | undefined
@@
   async function shutdown(): Promise<void> {
     // Clean up all active sessions
-    for (const [sessionId] of agent.sessions) {
+    if (!agent) {
+      process.exit(0)
+    }
+
+    for (const [sessionId] of agent.sessions) {
       try {
         await agent.unstable_closeSession({ sessionId })
       } catch {
         // Best-effort cleanup
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/acp/entry.ts` around lines 41 - 62, The shutdown function can
run before the AgentSideConnection callback assigns the agent, causing a crash
when accessing agent.sessions; modify shutdown to guard for an uninitialized
agent by checking that the local variable agent is truthy before iterating its
sessions (or return early if unset), and only call agent.unstable_closeSession
for each session when agent is defined; reference the agent variable, the
shutdown() function, the agent.sessions iteration, and unstable_closeSession
when making the change.
src/services/acp/entry.ts-30-51 (1)

30-51: ⚠️ Potential issue | 🟠 Major

Redirect console.* before any ACP startup work runs.

enableConfigs(), applySafeConfigEnvironmentVariables(), or AgentSideConnection setup can log before the redirect happens. Any such output lands on stdout and corrupts the NDJSON ACP stream.

Suggested fix
 export async function runAcpAgent(): Promise<void> {
+  // stdout is reserved for ACP messages
+  console.log = console.error
+  console.info = console.error
+  console.warn = console.error
+  console.debug = console.error
+
   enableConfigs()
@@
-  // stdout is used for ACP messages — redirect console to stderr
-  console.log = console.error
-  console.info = console.error
-  console.warn = console.error
-  console.debug = console.error
-
   async function shutdown(): Promise<void> {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/acp/entry.ts` around lines 30 - 51, Move the console redirection
to the very start of runAcpAgent so no logging from enableConfigs,
applySafeConfigEnvironmentVariables, createAcpStream or AgentSideConnection can
write to stdout and corrupt the NDJSON ACP stream; specifically, set
console.log/console.info/console.warn/console.debug to console.error before
calling enableConfigs(), applySafeConfigEnvironmentVariables(),
createAcpStream() or instantiating AgentSideConnection/AcpAgent so stdout
remains reserved for ACP messages.
scripts/vite-plugin-feature-flags.ts-104-110 (1)

104-110: ⚠️ Potential issue | 🟠 Major

Symbol.dispose is dropped when transpiling using to const.

Replacing using declarations with const removes automatic disposal semantics entirely. While slowLoggingExternal currently returns a no-op disposable with an empty [Symbol.dispose]() method, this safety relies on SLOW_OPERATION_LOGGING remaining disabled. If the flag is ever enabled or the behavior of slowLogging changes to return a real disposable, the transpiled code will silently fail to invoke cleanup logic that the source code expects. To maintain correct semantics, either keep the using statement or add a comment explaining why this specific case is safe to transpile unconditionally, and document the assumption being made.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/vite-plugin-feature-flags.ts` around lines 104 - 110, The
transpilation that replaces "using _" via transformed.replace removes disposal
semantics and can drop calls to [Symbol.dispose] when SLOW_OPERATION_LOGGING or
slowLoggingExternal change; update the transformation in
vite-plugin-feature-flags.ts so it preserves disposal semantics instead of
unconditionally swapping to "const": when you detect the "using _" pattern, emit
code that still binds the disposable (the "_" symbol) and ensures disposal in a
finally block (checking for _[Symbol.dispose] or a dispose method) or else leave
the original using statement and add a comment asserting safety; reference the
transformed string replacement logic that matches /\busing\s+(_\w*)\s*=/, the
"using _" detection, the transformed variable name "$1", and the feature flag
SLOW_OPERATION_LOGGING/slowLoggingExternal when making the change.
scripts/vite-plugin-feature-flags.ts-50-52 (1)

50-52: ⚠️ Potential issue | 🟠 Major

Avoid stub-based feature() resolution in Vite—use Bun's compiler-resolved approach instead.

The regex + stub pattern bypasses Bun's intended mechanism. According to Bun 1.2 documentation, import { feature } from "bun:bundle" is compiler-resolved during bundling (calls are replaced with boolean literals via static analysis), and custom runtime stubs are unsupported—at runtime without Bun bundling, no bun:bundle module exists. Even though this plugin pre-transforms feature calls before bundling, the fallback stub (line 84) means any unmatched feature(...) call (non-string-literal arguments, template literals, variables) silently returns false instead of failing visibly. Non-string-literal usage already violates feature flag guidelines, but this approach masks errors rather than surfacing them. Align with Bun's standard pattern and let the bundler handle resolution.
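A minimal sketch of surfacing non-literal usage at build time, assuming the plugin transforms module source as a string; `transformFeatureFlags` and its regex are illustrative, not the plugin's actual API:

```typescript
// Matches only feature('FLAG') / feature("FLAG") with a string-literal name.
const FEATURE_CALL_RE = /\bfeature\(\s*(['"])([A-Z0-9_]+)\1\s*\)/g

// Replace string-literal feature() calls with boolean literals, then fail
// the build if any feature(...) call with a non-literal argument remains,
// instead of letting a runtime stub silently return false.
function transformFeatureFlags(
  source: string,
  flags: Record<string, boolean>,
  id: string,
): string {
  const transformed = source.replace(
    FEATURE_CALL_RE,
    (_match, _quote, name: string) => String(flags[name] ?? false),
  )
  if (/\bfeature\(/.test(transformed)) {
    throw new Error(
      `${id}: feature() must be called with a string literal so the bundler can resolve it at build time`,
    )
  }
  return transformed
}
```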

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/vite-plugin-feature-flags.ts` around lines 50 - 52, The plugin
currently uses FEATURE_CALL_RE to pre-transform feature('FLAG') calls and
provides a runtime stub that returns false for unmatched or non-string-literal
calls; remove that stub-based fallback and stop masking invalid usage—only
transform calls that match FEATURE_CALL_RE (string-literal args) and leave all
other feature(...) calls untouched so Bun's compiler can resolve them, and emit
a build-time error or warning when you detect non-string-literal feature(...)
usages (so authors fix them instead of silently getting false). Ensure
references to FEATURE_CALL_RE and the runtime stub removal are updated and that
import { feature } from "bun:bundle" is not emulated by the plugin.
packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts-239-271 (1)

239-271: ⚠️ Potential issue | 🟠 Major

Always emit a terminal message_delta/message_stop.

Right now the tail of the generator is gated on seeing choice.finish_reason. If an OpenAI-compatible endpoint closes the stream without that final field, callers get content blocks but never a terminal event or final usage payload.

💡 Minimal fix
-  if (pendingFinishReason !== null) {
+  if (started) {
+    const effectiveFinishReason = pendingFinishReason ?? 'stop'
     const stopReason =
-      pendingFinishReason === 'length'
+      effectiveFinishReason === 'length'
         ? 'max_tokens'
-        : pendingHasToolCalls
+        : pendingHasToolCalls || toolBlocks.size > 0
           ? 'tool_use'
-          : mapFinishReason(pendingFinishReason)
+          : mapFinishReason(effectiveFinishReason)

     yield {
       type: 'message_delta',

Also applies to: 282-308

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts`
around lines 239 - 271, The generator currently only emits the terminal
message_delta/message_stop when choice.finish_reason is present, which can leave
callers without a final event if the upstream stream closes early; update the
end-of-stream handling in openaiStreamAdapter.ts (around the logic that sets
pendingFinishReason/pendingHasToolCalls and the later 282-308 block) to always
emit a terminal event and final usage payload even when choice.finish_reason is
undefined: detect stream termination (e.g., when the loop exits or when no
finish_reason is provided), set a default finish reason or leave it null but
still yield a 'message_delta'/'message_stop' (and the corresponding usage/stop
event), and ensure openBlockIndices/toolBlocks are closed and cleared before
yielding so consumers always receive a single final terminal event.
packages/@anthropic-ai/model-provider/src/providers/gemini/streamAdapter.ts-197-206 (1)

197-206: ⚠️ Potential issue | 🟠 Major

Return the final prompt token count in message_delta.

usageMetadata can arrive after message_start, but the terminal payload only includes output_tokens. In that case callers never see the real input_tokens, so accounting/cost code will stay at 0 for prompt usage.

💡 Suggested fix
       usage: {
+        input_tokens: inputTokens,
         output_tokens: outputTokens,
+        cache_creation_input_tokens: 0,
+        cache_read_input_tokens: 0,
       },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@anthropic-ai/model-provider/src/providers/gemini/streamAdapter.ts`
around lines 197 - 206, The final emitted message_delta currently only includes
stop_reason and output_tokens, which loses prompt token accounting if
usageMetadata arrived after message_start; update the terminal yield in the
streamAdapter (the block that checks stopped and calls mapGeminiFinishReason) to
also include the final prompt token count by reading usageMetadata.input_tokens
(or the tracked inputTokens variable) and adding it to delta (e.g.,
delta.input_tokens) so callers receive both input and output token counts even
when usageMetadata arrives late.
src/services/acp/__tests__/agent.test.ts-87-89 (1)

87-89: ⚠️ Potential issue | 🟠 Major

Change mock specifier from .ts to .js.

The import in src/services/acp/agent.ts uses '../../utils/model/modelOptions.js' (line 70), but the mock registers '../../../utils/model/modelOptions.ts' (line 87). Since bun:test mock.module() matches the literal import string exactly, this .ts mock does not intercept the .js import, causing the test to execute the real module instead.

Change
mock.module('../../../utils/model/modelOptions.js', () => ({
  getModelOptions: mock(() => []),
}))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/acp/__tests__/agent.test.ts` around lines 87 - 89, The test mock
for the model options is registered with the wrong file extension so it doesn't
match the real import; update the mock.module call in the test (the
mock.module(...) invocation in src/services/acp/__tests__/agent.test.ts) to use
the same literal specifier the code imports ('../../utils/model/modelOptions.js'
as used by src/services/acp/agent.ts) — i.e., change the specifier from
'../../../utils/model/modelOptions.ts' to '../../../utils/model/modelOptions.js'
so bun:test's mock.module() will intercept the import and return the mocked
getModelOptions.
packages/@anthropic-ai/model-provider/src/errorUtils.ts-233-237 (1)

233-237: ⚠️ Potential issue | 🟠 Major

Don't fall back to raw HTML after sanitization.

sanitizeMessageHTML() intentionally returns an empty string for HTML without a <title>, but this branch then returns the original HTML page. That means proxy/Cloudflare blobs still leak into the UI even after the sanitization pass.

💡 Suggested fix
   const sanitizedMessage = sanitizeAPIError(error)
-  return sanitizedMessage !== error.message && sanitizedMessage.length > 0
-    ? sanitizedMessage
-    : error.message
+  if (sanitizedMessage !== error.message) {
+    return sanitizedMessage.length > 0
+      ? sanitizedMessage
+      : 'API returned an HTML error page'
+  }
+  return error.message
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@anthropic-ai/model-provider/src/errorUtils.ts` around lines 233 -
237, The current return expression falls back to the raw error.message when
sanitizeAPIError returns an empty string (which is intentional for HTML without
a <title>), leaking HTML into the UI; change the return logic in errorUtils.ts
so that if sanitizeAPIError(error) returns a different value than error.message
you return that sanitized value (even if it's an empty string) instead of
falling back to the original HTML—i.e., use the sanitizedMessage !==
error.message check alone (return sanitizedMessage when different, otherwise
return error.message) referencing the sanitizeAPIError call and the
sanitizedMessage variable.
packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts-69-77 (1)

69-77: ⚠️ Potential issue | 🟠 Major

Remove as any assertions for OpenAI compatibility fields.

Lines 73 and 107 use as any to access provider-specific extensions (prompt_tokens_details.cached_tokens and reasoning_content) that aren't typed in the OpenAI SDK. Per strict mode enforcement, introduce a local compatibility interface or use Record<string, unknown> with a type guard instead:

// Option 1: Local interface for OpenAI extension fields
interface OpenAICompat {
  prompt_tokens_details?: { cached_tokens?: number }
}
const details = (chunk.usage as OpenAICompat).prompt_tokens_details

// Option 2: Type guard with Record
const details = (chunk.usage as unknown as Record<string, unknown>).prompt_tokens_details
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts`
around lines 69 - 77, The code is using unsafe `as any` casts on chunk.usage to
access provider-specific fields (prompt_tokens_details.cached_tokens and
reasoning_content); replace these with a typed compatibility approach: define a
local interface (e.g., OpenAICompat with prompt_tokens_details?: {
cached_tokens?: number } and reasoning_content?: string) and cast chunk.usage to
that interface where you read those fields in openaiStreamAdapter.ts
(references: chunk.usage, prompt_tokens_details, cached_tokens,
reasoning_content), or alternatively cast to Record<string, unknown> and use a
type guard to safely read and narrow those properties before assigning
cachedReadTokens and reasoning content.
packages/@anthropic-ai/model-provider/src/shared/openaiConvertMessages.ts-65-74 (1)

65-74: ⚠️ Potential issue | 🟠 Major

The turn-boundary scan still uses any.

This branch is deciding whether old reasoning gets stripped, so it should stay inside the same strict narrowing rules as the rest of the converter. Please narrow b with Record<string, unknown> or a local block guard instead of any.

💡 Narrow without `any`
-        const startsNewUserTurn = typeof content === 'string'
-          ? content.length > 0
-          : Array.isArray(content) && content.some(
-              (b: any) =>
-                typeof b === 'string' ||
-                (b &&
-                  typeof b === 'object' &&
-                  'type' in b &&
-                  b.type !== 'tool_result'),
-            )
+        const startsNewUserTurn = typeof content === 'string'
+          ? content.length > 0
+          : Array.isArray(content) &&
+            content.some(block => {
+              if (typeof block === 'string') return true
+              if (!block || typeof block !== 'object') return false
+              const narrowed = block as Record<string, unknown>
+              return narrowed.type !== 'tool_result'
+            })

As per coding guidelines **/*.{ts,tsx}: TypeScript strict mode is enforced — bunx tsc --noEmit must pass with zero errors. No as any allowed in production code, and for unknown structure objects use Record<string, unknown> instead of any type.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/@anthropic-ai/model-provider/src/shared/openaiConvertMessages.ts`
around lines 65 - 74, The turn-boundary predicate for startsNewUserTurn is using
an unsafe any for the array element `b`; change that to a stricter type (e.g.
`Record<string, unknown>`) and add a local null/object guard before checking the
`'type'` property so TypeScript narrows correctly. Concretely, in the
Array.isArray(content) branch update the callback to accept `b: unknown`, first
check `typeof b === 'string'` then `typeof b === 'object' && b !== null` and
finally assert `('type' in b)` with `b as Record<string, unknown>` when
comparing `b.type !== 'tool_result'` so no `any` is used and tsc strict mode
passes for the `startsNewUserTurn` expression.
🟡 Minor comments (12)
src/utils/__tests__/imageResizer.test.ts-8-8 (1)

8-8: ⚠️ Potential issue | 🟡 Minor

Use src/ alias import instead of relative path.

Line 8 should use the repository src/... alias for utility imports to match TS path conventions.

Suggested patch
-import { detectImageFormatFromBase64, detectImageFormatFromBuffer } from '../imageResizer.js'
+import { detectImageFormatFromBase64, detectImageFormatFromBuffer } from 'src/utils/imageResizer.js'

As per coding guidelines, "**/*.{ts,tsx}: Import utility functions from the src/ path alias."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/__tests__/imageResizer.test.ts` at line 8, Replace the relative
import in the test that pulls detectImageFormatFromBase64 and
detectImageFormatFromBuffer from '../imageResizer.js' with the repository
path-alias import (e.g., 'src/utils/imageResizer') so the test uses the TS path
alias; update the import statement for the symbols detectImageFormatFromBase64
and detectImageFormatFromBuffer accordingly to follow the project's src/ alias
convention.
src/keybindings/__tests__/confirmation-keybindings.test.ts-43-43 (1)

43-43: ⚠️ Potential issue | 🟡 Minor

Use English-only suite description to match test guideline.

The describe(...) title includes Chinese text; please keep test descriptions fully English for consistency with repo standards.

Suggested update
-describe('Confirmation context — n/y keys removed (fix: 修复 n 快捷键导致关闭的问题)', () => {
+describe('Confirmation context — n/y keys removed (fix for accidental dismissal)', () => {

As per coding guidelines: "**/__tests__/**/*.test.ts: Use English for test descriptions."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/keybindings/__tests__/confirmation-keybindings.test.ts` at line 43,
Update the test suite title used in the describe(...) call in
confirmation-keybindings.test.ts to an English-only string to comply with the
repo's test-guideline; locate the describe('Confirmation context — n/y keys
removed (fix: 修复 n 快捷键导致关闭的问题)', ...) invocation and replace the Chinese
characters and any non-English punctuation so the entire suite description is
English-only while preserving its intent.
src/utils/__tests__/earlyInput.test.ts-22-26 (1)

22-26: ⚠️ Potential issue | 🟡 Minor

Use src/ alias import in tests for utilities.

The relative import should follow the repo alias convention for ts/tsx files.

Suggested change
 import {
   seedEarlyInput,
   consumeEarlyInput,
   hasEarlyInput,
-} from '../earlyInput.js'
+} from 'src/utils/earlyInput'

As per coding guidelines **/*.{ts,tsx}: Import utility functions from the src/ path alias. Use imports like import { ... } from 'src/utils/...' which are resolved via tsconfig.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/utils/__tests__/earlyInput.test.ts` around lines 22 - 26, Update the test
import to use the repo tsconfig alias: replace the relative import of
seedEarlyInput, consumeEarlyInput, and hasEarlyInput with the aliased module
import from 'src/utils/earlyInput' so the test follows the project's src/ path
convention.
docs/features/remote-control-self-hosting.md-143-145 (1)

143-145: ⚠️ Potential issue | 🟡 Minor

Add language identifiers to the new fenced blocks.

These two new fences are currently flagged by markdownlint (MD040). Adding a plain `text` language identifier keeps the docs clean and avoids lint noise.

📝 Proposed fix
-```
+```text
 https://rcs.example.com/code?bridge=<environmentId>

-```
+```text
 https://rcs.example.com/code/session_<id>

Also applies to: 149-151

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/features/remote-control-self-hosting.md` around lines 143 - 145, Add the
language identifier "text" to the new fenced code blocks that contain the
example URLs (the fences showing
"https://rcs.example.com/code?bridge=<environmentId>" and
"https://rcs.example.com/code/session_<id>") so they become ```text ... ``` to
satisfy markdownlint MD040 and avoid lint noise.
packages/remote-control-server/src/store.ts-203-205 (1)

203-205: ⚠️ Potential issue | 🟡 Minor

Clean up owner bindings when deleting a session.

This function now removes sessionWorkers, but it still leaves the per-session entry in sessionOwners. On a long-lived process, deleted sessions will keep orphaned ownership sets around indefinitely.

🧹 Proposed fix
 export function storeDeleteSession(id: string): boolean {
   sessionWorkers.delete(id);
+  sessionOwners.delete(id);
   return sessions.delete(id);
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/remote-control-server/src/store.ts` around lines 203 - 205,
storeDeleteSession currently removes sessionWorkers and sessions but leaves the
per-session entry in sessionOwners, causing orphaned ownership sets; update the
storeDeleteSession function to also remove the sessionOwners entry for the given
id (preferably clear the Set if present and then call sessionOwners.delete(id))
so both the ownership mapping and any held references are cleaned up when
deleting a session.
packages/remote-control-server/src/__tests__/routes.test.ts-1274-1284 (1)

1274-1284: ⚠️ Potential issue | 🟡 Minor

Read until the client_event frame instead of assuming it's the next chunk.

Lines 1274-1284 currently assume the post-publish reader.read() returns the event frame, but this stream already emits keepalives and can legally send another one first. That makes the test intermittently fail even when the endpoint is correct.

Proposed test hardening
-    const secondChunk = await reader.read();
-    const frame = new TextDecoder().decode(secondChunk.value!);
-    expect(frame).toContain("event: client_event");
-    expect(frame).toContain("\"payload\":{\"type\":\"user\",\"content\":\"hello\",\"message\":{\"content\":\"hello\"}}");
+    const deadline = Date.now() + 2000;
+    let frame = "";
+    while (Date.now() < deadline) {
+      const chunk = await reader.read();
+      frame = new TextDecoder().decode(chunk.value!);
+      if (frame.includes("event: client_event")) break;
+    }
+    expect(frame).toContain("event: client_event");
+    expect(frame).toContain("\"payload\":{\"type\":\"user\",\"content\":\"hello\",\"message\":{\"content\":\"hello\"}}");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/remote-control-server/src/__tests__/routes.test.ts` around lines
1274 - 1284, The test assumes the event is the next chunk after
publishSessionEvent; instead change the read logic to loop calling reader.read()
(replacing secondChunk) decoding each chunk into frame and continue until the
decoded frame includes "event: client_event" (then assert payload and break),
and finally call reader.cancel(); reference the reader.read() usage,
publishSessionEvent(id, ...), and the variables firstChunk/keepalive/frame so
the loop consumes intervening keepalive frames reliably.
src/services/acp/permissions.ts-171-183 (1)

171-183: ⚠️ Potential issue | 🟡 Minor

Missing error handling in handleExitPlanMode.

Unlike the main permission flow (lines 90-131), which wraps conn.requestPermission in a try/catch, handleExitPlanMode does not handle errors from the permission request. If the connection fails, this will throw and propagate up instead of returning a deny decision.

🛡️ Proposed fix to add error handling
-  const response = await conn.requestPermission({
-    sessionId,
-    toolCall,
-    options,
-  })
+  let response
+  try {
+    response = await conn.requestPermission({
+      sessionId,
+      toolCall,
+      options,
+    })
+  } catch {
+    return {
+      behavior: 'deny',
+      message: 'Permission request failed',
+      decisionReason: { type: 'mode', mode: 'plan' },
+    }
+  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/acp/permissions.ts` around lines 171 - 183, The
handleExitPlanMode path calls conn.requestPermission (observed around the
response/outcome check) without a try/catch, so connection errors will throw
instead of returning a deny decision; wrap the call to conn.requestPermission in
a try/catch inside handleExitPlanMode and on any error return the same shaped
deny response used in the main flow (behavior: 'deny', decisionReason with type
'mode' and mode 'default', and a message conveying tool use was aborted or the
error), ensuring you still handle the existing response.outcome.outcome ===
'cancelled' branch after the try block.
src/services/api/claude.ts-2919-2919 (1)

2919-2919: ⚠️ Potential issue | 🟡 Minor

Use full API tool set in Langfuse observation.

Line 2919 only logs toolSchemas, but the request can include extraToolSchemas via allTools. This can make traces incomplete when advisor/server tools are appended.

Suggested fix
-    tools: convertToolsToLangfuse(toolSchemas as unknown[]),
+    tools: convertToolsToLangfuse(allTools as unknown[]),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/claude.ts` at line 2919, The Langfuse observation is only
logging toolSchemas via convertToolsToLangfuse(toolSchemas as unknown[]) but may
miss extraToolSchemas provided through allTools; update the observation to use
the full combined tool set (e.g., merge toolSchemas and
extraToolSchemas/allTools) and pass that combined array into
convertToolsToLangfuse so traces include advisor/server-appended tools — modify
the call site where convertToolsToLangfuse is invoked (referencing
convertToolsToLangfuse, toolSchemas, extraToolSchemas/allTools) to construct and
supply the complete tools list.
docs/features/acp-zed.md-67-67 (1)

67-67: ⚠️ Potential issue | 🟡 Minor

Section numbering inconsistency: missing section 3.2.

Section numbering jumps from 3.1 to 3.3, skipping 3.2. Either add the missing section or renumber 3.3 → 3.2 and subsequent sections.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/features/acp-zed.md` at line 67, The document's section numbering skips
3.2 (it shows 3.1 then 3.3 API 认证配置); update the headings to restore consistent
numbering by either inserting the missing 3.2 content if it was omitted or
renumbering "3.3 API 认证配置" (and all subsequent headings and any internal
cross-references) to "3.2" (and adjust later numbers accordingly); search for
the heading text "3.3 API 认证配置" and any references to "3.3" or later section
numbers and update them to the corrected numbering to keep the whole document
consistent.
vite.config.ts-91-94 (1)

91-94: ⚠️ Potential issue | 🟡 Minor

Remove trailing slash from src/ alias key to align with standard Vite configuration patterns.

The alias is currently configured as "src/" with a trailing slash. While this works functionally, the standard Vite convention (and how tsconfig defines paths) is to use "src" without a trailing slash in the alias key:

♻️ Standard alias pattern
   alias: {
     // src/* path alias (mirrors tsconfig paths)
-    "src/": resolve(projectRoot, "src/"),
+    src: resolve(projectRoot, "src"),
   },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@vite.config.ts` around lines 91 - 94, The alias key currently uses "src/"
with a trailing slash; update the alias object so the key is "src" (no trailing
slash) while keeping the resolved path value (resolve(projectRoot, "src/"))
unchanged—i.e., change the alias entry in the alias map (the "alias" object in
vite config) from "src/" to "src" to match standard Vite/tsconfig conventions.
src/services/acp/agent.ts-661-670 (1)

661-670: ⚠️ Potential issue | 🟡 Minor

Inconsistency between validModes and availableModes.

validModes includes 'bypassPermissions' but the availableModes array in createSession (lines 513-519) doesn't include this mode. This allows setting a mode via applySessionMode that isn't advertised to clients.

Consider either adding 'bypassPermissions' to availableModes or removing it from validModes for consistency.

Option A: Remove from validModes
   private applySessionMode(sessionId: string, modeId: string): void {
-    const validModes = ['auto', 'default', 'acceptEdits', 'bypassPermissions', 'dontAsk', 'plan']
+    const validModes = ['auto', 'default', 'acceptEdits', 'dontAsk', 'plan']
     if (!validModes.includes(modeId)) {
       throw new Error(`Invalid mode: ${modeId}`)
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/acp/agent.ts` around lines 661 - 670, The applySessionMode
function permits 'bypassPermissions' (validModes) but createSession's
availableModes array doesn't advertise it, causing inconsistency; update
createSession (the availableModes array) to include 'bypassPermissions' so
clients see all modes allowed by applySessionMode, or alternatively remove
'bypassPermissions' from validModes in applySessionMode and any related logic;
locate createSession and applySessionMode to make the corresponding
addition/removal so both arrays match.
src/services/acp/agent.ts-316-325 (1)

316-325: ⚠️ Potential issue | 🟡 Minor

Missing await on teardownSession may cause race conditions.

teardownSession is an async method that calls cancel() internally. Calling it without await means the error is thrown before the session cleanup completes, potentially leaving state inconsistent if the caller retries immediately.

Proposed fix
       if (
         err instanceof Error &&
         (err.message.includes('terminated') ||
           err.message.includes('process exited'))
       ) {
-        this.teardownSession(params.sessionId)
+        await this.teardownSession(params.sessionId)
         throw new Error(
           'The Claude Agent process exited unexpectedly. Please start a new session.',
         )
       }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/acp/agent.ts` around lines 316 - 325, The teardownSession call
in the error-handling branch is asynchronous and currently invoked without
awaiting, which can cause race conditions because teardownSession internally
calls cancel(); change the handler to await
this.teardownSession(params.sessionId) before throwing the new Error so cleanup
completes first; locate the block checking err instanceof Error and
err.message.includes(...) in the agent (function/method containing
teardownSession usage) and add await to the teardownSession invocation and
ensure the enclosing function is async or already supports awaiting.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1e8831d8-63c4-4fc6-9711-764926637ed5

📥 Commits

Reviewing files that changed from the base of the PR and between a02dc0b and 54e33b3.

⛔ Files ignored due to path filters (2)
  • bun.lock is excluded by !**/*.lock
  • contributors.svg is excluded by !**/*.svg
📒 Files selected for processing (135)
  • .github/workflows/ci.yml
  • CLAUDE.md
  • README.md
  • build.ts
  • docs/diagrams/agent-loop-simple.mmd
  • docs/diagrams/agent-loop.mmd
  • docs/features/acp-zed.md
  • docs/features/chrome-use-mcp.md
  • docs/features/remote-control-self-hosting.md
  • package.json
  • packages/@ant/claude-for-chrome-mcp/tsconfig.json
  • packages/@ant/computer-use-input/src/backends/darwin.ts
  • packages/@ant/computer-use-input/tsconfig.json
  • packages/@ant/computer-use-mcp/tsconfig.json
  • packages/@ant/computer-use-swift/src/backends/darwin.ts
  • packages/@ant/computer-use-swift/src/backends/linux.ts
  • packages/@ant/computer-use-swift/src/types.ts
  • packages/@ant/computer-use-swift/tsconfig.json
  • packages/@ant/ink/tsconfig.json
  • packages/@anthropic-ai/model-provider/package.json
  • packages/@anthropic-ai/model-provider/src/client/index.ts
  • packages/@anthropic-ai/model-provider/src/client/types.ts
  • packages/@anthropic-ai/model-provider/src/errorUtils.ts
  • packages/@anthropic-ai/model-provider/src/hooks/index.ts
  • packages/@anthropic-ai/model-provider/src/hooks/types.ts
  • packages/@anthropic-ai/model-provider/src/index.ts
  • packages/@anthropic-ai/model-provider/src/providers/gemini/convertMessages.ts
  • packages/@anthropic-ai/model-provider/src/providers/gemini/convertTools.ts
  • packages/@anthropic-ai/model-provider/src/providers/gemini/modelMapping.ts
  • packages/@anthropic-ai/model-provider/src/providers/gemini/streamAdapter.ts
  • packages/@anthropic-ai/model-provider/src/providers/gemini/types.ts
  • packages/@anthropic-ai/model-provider/src/providers/grok/modelMapping.ts
  • packages/@anthropic-ai/model-provider/src/providers/openai/modelMapping.ts
  • packages/@anthropic-ai/model-provider/src/shared/openaiConvertMessages.ts
  • packages/@anthropic-ai/model-provider/src/shared/openaiConvertTools.ts
  • packages/@anthropic-ai/model-provider/src/shared/openaiStreamAdapter.ts
  • packages/@anthropic-ai/model-provider/src/types/errors.ts
  • packages/@anthropic-ai/model-provider/src/types/index.ts
  • packages/@anthropic-ai/model-provider/src/types/message.ts
  • packages/@anthropic-ai/model-provider/src/types/systemPrompt.ts
  • packages/@anthropic-ai/model-provider/src/types/usage.ts
  • packages/@anthropic-ai/model-provider/tsconfig.json
  • packages/agent-tools/src/__tests__/compat.test.ts
  • packages/agent-tools/tsconfig.json
  • packages/audio-capture-napi/tsconfig.json
  • packages/builtin-tools/src/tools/AgentTool/__tests__/agentDisplay.test.ts
  • packages/builtin-tools/tsconfig.json
  • packages/color-diff-napi/src/__tests__/color-diff.test.ts
  • packages/color-diff-napi/tsconfig.json
  • packages/image-processor-napi/tsconfig.json
  • packages/mcp-client/src/__tests__/InProcessTransport.test.ts
  • packages/mcp-client/src/__tests__/discovery.test.ts
  • packages/mcp-client/src/__tests__/manager.test.ts
  • packages/mcp-client/tsconfig.json
  • packages/modifiers-napi/tsconfig.json
  • packages/remote-control-server/src/__tests__/disconnect-monitor.test.ts
  • packages/remote-control-server/src/__tests__/routes.test.ts
  • packages/remote-control-server/src/__tests__/services.test.ts
  • packages/remote-control-server/src/__tests__/ws-handler.test.ts
  • packages/remote-control-server/src/routes/v1/environments.ts
  • packages/remote-control-server/src/routes/v1/environments.work.ts
  • packages/remote-control-server/src/routes/v1/session-ingress.ts
  • packages/remote-control-server/src/routes/v1/sessions.ts
  • packages/remote-control-server/src/routes/v2/code-sessions.ts
  • packages/remote-control-server/src/routes/v2/worker-events-stream.ts
  • packages/remote-control-server/src/routes/v2/worker-events.ts
  • packages/remote-control-server/src/routes/v2/worker.ts
  • packages/remote-control-server/src/routes/web/auth.ts
  • packages/remote-control-server/src/routes/web/control.ts
  • packages/remote-control-server/src/routes/web/sessions.ts
  • packages/remote-control-server/src/services/disconnect-monitor.ts
  • packages/remote-control-server/src/services/session.ts
  • packages/remote-control-server/src/services/transport.ts
  • packages/remote-control-server/src/store.ts
  • packages/remote-control-server/src/transport/sse-writer.ts
  • packages/remote-control-server/src/transport/ws-handler.ts
  • packages/remote-control-server/tsconfig.json
  • packages/remote-control-server/web/app.js
  • packages/remote-control-server/web/components.css
  • packages/remote-control-server/web/index.html
  • packages/remote-control-server/web/render.js
  • packages/remote-control-server/web/utils.js
  • packages/url-handler-napi/tsconfig.json
  • scripts/dev.ts
  • scripts/post-build.ts
  • scripts/vite-plugin-feature-flags.ts
  • scripts/vite-plugin-import-meta-require.ts
  • src/QueryEngine.ts
  • src/bridge/bridgeMessaging.ts
  • src/cli/handlers/auth.ts
  • src/cli/print.ts
  • src/commands/poor/__tests__/poorMode.test.ts
  • src/components/ConsoleOAuthFlow.tsx
  • src/components/messages/SystemAPIErrorMessage.tsx
  • src/entrypoints/cli.tsx
  • src/entrypoints/sdk/sdkUtilityTypes.ts
  • src/keybindings/__tests__/confirmation-keybindings.test.ts
  • src/keybindings/defaultBindings.ts
  • src/services/acp/__tests__/agent.test.ts
  • src/services/acp/__tests__/bridge.test.ts
  • src/services/acp/__tests__/permissions.test.ts
  • src/services/acp/agent.ts
  • src/services/acp/bridge.ts
  • src/services/acp/entry.ts
  • src/services/acp/permissions.ts
  • src/services/acp/utils.ts
  • src/services/api/claude.ts
  • src/services/api/emptyUsage.ts
  • src/services/api/errorUtils.ts
  • src/services/api/gemini/convertMessages.ts
  • src/services/api/gemini/convertTools.ts
  • src/services/api/gemini/modelMapping.ts
  • src/services/api/gemini/streamAdapter.ts
  • src/services/api/gemini/types.ts
  • src/services/api/grok/modelMapping.ts
  • src/services/api/openai/__tests__/convertMessages.test.ts
  • src/services/api/openai/__tests__/queryModelOpenAI.test.ts
  • src/services/api/openai/convertMessages.ts
  • src/services/api/openai/convertTools.ts
  • src/services/api/openai/index.ts
  • src/services/api/openai/modelMapping.ts
  • src/services/api/openai/streamAdapter.ts
  • src/services/langfuse/__tests__/langfuse.test.ts
  • src/services/langfuse/convert.ts
  • src/services/langfuse/tracing.ts
  • src/types/message.ts
  • src/utils/__tests__/bunHashPolyfill.test.ts
  • src/utils/__tests__/earlyInput.test.ts
  • src/utils/__tests__/imageResizer.test.ts
  • src/utils/forkedAgent.ts
  • src/utils/sideQuestion.ts
  • src/utils/systemPromptType.ts
  • tsconfig.base.json
  • tsconfig.json
  • vite.config.ts
💤 Files with no reviewable changes (1)
  • src/keybindings/defaultBindings.ts

Comment on lines +8 to +13
import { execFile, execFileSync } from 'child_process'
import { promisify } from 'util'
import type { FrontmostAppInfo, InputBackend } from '../types.js'

const execFileAsync = promisify(execFile)


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify runtime guidance and this file's process-exec strategy.

set -euo pipefail

# 1) Locate Bun runtime guidance in repo docs.
fd -i 'CLAUDE.md' -x rg -n -C2 "Runtime must be Bun|Bun\\.build|bun:bundle|feature\\(" {}

# 2) Inspect current backend implementation for Node process APIs.
rg -n -C2 "from 'child_process'|from \"child_process\"|execFileAsync|execFileSync\\(" packages/@ant/computer-use-input/src/backends/darwin.ts

Repository: claude-code-best/claude-code

Length of output: 3970


🏁 Script executed:

# Check if this is a separate package with its own configuration
cat packages/@ant/computer-use-input/package.json | head -30

# Check the directory structure and purpose
ls -la packages/@ant/computer-use-input/ | head -20

# Search for any documentation about this package
find packages/@ant/computer-use-input -maxdepth 2 -name "README*" -o -name "*.md" | xargs cat 2>/dev/null | head -100

Repository: claude-code-best/claude-code

Length of output: 503


🏁 Script executed:

# Find where `@ant/computer-use-input` is imported
rg -n "@ant/computer-use-input" --type ts --type tsx

# Check if there's a workspace configuration or root tsconfig
cat package.json | head -50

Repository: claude-code-best/claude-code

Length of output: 1459


🏁 Script executed:

# Search for imports of the package without type restrictions
rg "@ant/computer-use-input" -A 2

# Also check if there's a references to darwin.ts anywhere
rg "darwin\.ts|backends/darwin" -n

Repository: claude-code-best/claude-code

Length of output: 9342


🏁 Script executed:

# Check how darwin backend is conditionally loaded or gated
rg -n "require.*darwin|import.*darwin|isSupported" packages/@ant/computer-use-input/src/

# Check the index.ts to see platform dispatch logic
cat packages/@ant/computer-use-input/src/index.ts

Repository: claude-code-best/claude-code

Length of output: 2780


Top-level child_process and util imports can prevent the darwin backend from loading under the Bun runtime.

Lines 8–9 import Node.js-only modules at the top level of darwin.ts. When the Bun runtime executes require('./backends/darwin.js') in index.ts, these imports fail if Bun's Node-compat shims for child_process and util are unavailable, as in this project's bundled runtime. The try-catch in loadBackend() masks the failure silently, leaving isSupported = false on macOS and breaking computer-use functionality.

Fix: lazy-load child_process and util only where needed (dynamic import inside the functions that actually run commands), or use Bun's native Bun.spawn() instead of Node's execFile/execFileSync.
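The lazy-load variant suggested in the fix above can be sketched like this (a hypothetical helper, not the actual darwin.ts code; a real command runner would also add timeouts and stderr handling):

```typescript
// Sketch of the lazy-load approach: child_process and util are resolved
// only when a command actually runs, so merely loading this module
// (e.g. via require('./backends/darwin.js')) never touches Node APIs.
export async function runCommand(cmd: string, args: string[]): Promise<string> {
  // Dynamic import defers module resolution until first invocation.
  const { execFile } = await import('child_process')
  const { promisify } = await import('util')
  const execFileAsync = promisify(execFile)
  const { stdout } = await execFileAsync(cmd, args)
  return stdout.toString()
}
```

Type-only imports (import type { ... } from 'child_process') remain safe at the top level, since they are erased at compile time. The Bun.spawn() alternative avoids Node APIs entirely but makes the backend Bun-specific.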

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In packages/@ant/computer-use-input/src/backends/darwin.ts around lines 8–13:
top-level imports of Node-only modules (execFile, execFileSync from
child_process and promisify from util which creates execFileAsync) prevent the
darwin backend from loading in Bun; remove those top-level imports and lazy-load
the required APIs inside the functions that actually run commands (or replace
with Bun.spawn usage). Specifically, eliminate the top-level
execFile/execFileSync/promisify/execFileAsync symbols and instead require/import
child_process and util (or call Bun.spawn) within the command-running functions
in packages/@ant/computer-use-input/src/backends/darwin.ts so the module can be
loaded under Bun and only uses Node APIs when actually invoked.

@@ -0,0 +1,7 @@
{
"extends": "../../tsconfig.json",

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "All tsconfig.json files:"
fd -a '^tsconfig\.json$'

echo
python - <<'PY'
import json
from pathlib import Path

p = Path('packages/@anthropic-ai/model-provider/tsconfig.json')
cfg = json.loads(p.read_text())
ext = cfg.get("extends")
resolved = (p.parent / ext).resolve()
print("extends =", ext)
print("resolved =", resolved)
print("exists =", resolved.exists())
PY

Repository: claude-code-best/claude-code

Length of output: 1140


Fix the extends path: the target file does not exist.

The extends path ../../tsconfig.json resolves to packages/tsconfig.json, which does not exist in the repository. The base configuration lives at the repository root (tsconfig.json), so change line 2 to:

"extends": "../../../tsconfig.json",

This will correctly reference the base configuration at the repository root.
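Applied to the two lines visible in the hunk above, the start of the corrected file would read as follows (any remaining fields of the real file stay unchanged). From packages/@anthropic-ai/model-provider, ../../../ climbs three levels: model-provider → @anthropic-ai → packages → repository root.

```json
{
  "extends": "../../../tsconfig.json"
}
```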

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In packages/@anthropic-ai/model-provider/tsconfig.json at line 2: update the
"extends" value in the package's tsconfig.json so it points to the repository
root base config instead of a non-existent packages/tsconfig.json; locate the
"extends" property in packages/@anthropic-ai/model-provider/tsconfig.json and
change its path to reference the root tsconfig.json (i.e., adjust the extends
string upward one extra directory level) so the package inherits the correct
base configuration.

@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)

29-32: Harden the Codecov step by pinning the action to a commit SHA and making the LCOV input explicit.

Using a mutable tag (@v5) and implicit file discovery is less reproducible. Pin to a full-length commit SHA and pass the LCOV file directly. Since the build generates coverage via bun test --coverage --coverage-reporter=lcov, explicitly specify the output file.

Proposed change
       - name: Upload coverage to Codecov
-        uses: codecov/codecov-action@v5
+        uses: codecov/codecov-action@<full-length-commit-sha-for-v5>
         with:
           token: ${{ secrets.CODECOV_TOKEN }}
+          files: ./coverage/lcov.info
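One way to obtain the SHA to pin (the tag lookup itself needs network access; the value below is a placeholder, not the real v5 commit):

```shell
# List the remote tags; for annotated tags the "^{}" line is the peeled
# commit SHA, which is the value to pin:
#   git ls-remote --tags https://github.com/codecov/codecov-action | grep 'v5'
# Sanity-check the candidate before editing the workflow:
sha="0123456789abcdef0123456789abcdef01234567"  # placeholder, NOT the real SHA
if printf '%s\n' "$sha" | grep -Eq '^[0-9a-f]{40}$'; then
  echo "full-length SHA: safe to pin"
fi
```

Pinning by SHA guards against the tag being moved; a comment like `# v5` next to the SHA keeps the step readable.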
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In .github/workflows/ci.yml around lines 29–32: the CI step "Upload coverage
to Codecov" currently uses the mutable tag codecov/codecov-action@v5 and relies
on implicit discovery; replace the mutable tag with the action's full commit SHA
(pin to a specific full-length SHA for codecov/codecov-action) and add an
explicit LCOV input (e.g., files: ./coverage/lcov.info) alongside the token to
ensure deterministic behavior given your bun test --coverage
--coverage-reporter=lcov output; update the step that references uses:
codecov/codecov-action@v5 and the with: block to include the pinned SHA and the
explicit LCOV file path.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 473444ab-a36c-48bf-b38b-3225bdcca9ad

📥 Commits

Reviewing files that changed from the base of the PR and between 05f9260 and 8cb6c9c.

📒 Files selected for processing (1)
  • .github/workflows/ci.yml

