feat: integrate 5 feature branches + daemon/job command hierarchy + cross-platform background engine + TypeScript error fixes #258

amDosion wants to merge 1 commit into claude-code-best:main
Conversation

Caution: Review failed. The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration: Configuration used: defaults. Review profile: CHILL. Plan: Pro. Run ID:
📒 Files selected for processing (123)
📝 Walkthrough

Adds background-session/daemon rework, a workspace autonomy subsystem (authority/runs/flows/persistence), job/templates on-disk jobs, assistant (KAIROS) activation and session discovery/UI, pipe mute/relay controls, language prefs, many CLI/REPL command implementations, tool bridge delivery, extensive tests and docs, and build/dev feature-default updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant CLI as CLI / User
    participant Daemon as Daemon Supervisor
    participant BgEngine as BgEngine (tmux/detached)
    participant SessionReg as Session Registry
    participant Logs as Log File
    CLI->>Daemon: claude daemon bg <args>
    Daemon->>BgEngine: selectEngine().start(opts)
    alt tmux available
        BgEngine->>BgEngine: TmuxEngine.start() (tmux new-session -d ...)
    else detached fallback
        BgEngine->>BgEngine: DetachedEngine.start() (spawn detached child, redirect logs)
    end
    BgEngine->>SessionReg: persist SessionEntry (engine, logPath, tmuxSessionName)
    BgEngine-->>Daemon: BgStartResult
    Daemon-->>CLI: Background session started message
    CLI->>Daemon: claude daemon attach <sessionId>
    Daemon->>SessionReg: findSession(sessionId)
    SessionReg-->>Daemon: SessionEntry {engine, logPath, tmuxSessionName}
    alt engine == 'tmux'
        Daemon->>BgEngine: TmuxEngine.attach(session)
        BgEngine->>CLI: tmux attach-session (inherited stdio)
    else engine == 'detached'
        Daemon->>BgEngine: DetachedEngine.attach(session)
        BgEngine->>Logs: tailLog(logPath)
        Logs-->>CLI: stream appended content
    end
```
```mermaid
sequenceDiagram
    participant Scheduler as Scheduler / Cron
    participant AutAuth as autonomyAuthority
    participant AutRuns as autonomyRuns
    participant AutFlows as autonomyFlows
    participant Queue as Command Queue
    Scheduler->>AutAuth: prepareAutonomyTurnPrompt(basePrompt, proactive-tick)
    AutAuth-->>Scheduler: prepared prompt + due heartbeat tasks
    Scheduler->>AutRuns: createProactiveAutonomyCommands(prepared)
    AutRuns->>AutRuns: create tick run(s) (runId)
    AutRuns->>AutFlows: startManagedAutonomyFlow() for due tasks
    AutFlows->>AutRuns: queueManagedAutonomyFlowStepRun()
    AutRuns-->>Scheduler: queued commands (tick + step commands)
    Scheduler->>Queue: enqueue commands
    Queue->>AutRuns: markAutonomyRunRunning(runId) (on start)
    Queue->>AutRuns: finalizeAutonomyRunCompleted/Failed(runId) (on finish)
    AutRuns->>AutFlows: advance/sync flow step status and possibly queue next step
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

Suggested labels

✨ Finishing Touches: 🧪 Generate unit tests (beta)
b5592aa to 559e65f (force-push)
Actionable comments posted: 20
Note: Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.

Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
src/utils/handlePromptSubmit.ts (1)
469-626: ⚠️ Potential issue | 🟠 Major

Don't retroactively fail runs that already completed.

The new inner `catch` now covers both turn execution and `finalizeAutonomyRunCompleted()` / follow-up enqueue failures. If one completion step throws after an earlier `runId` already finished, this path will still call `finalizeAutonomyRunFailed()` for every collected run and rethrow, which can corrupt run state and make a successful turn surface as failed.

💡 One way to avoid double-finalizing completed runs:

```diff
  const autonomyRunIds = new Set<string>()
+ const completedAutonomyRunIds = new Set<string>()
  try {
    await runWithWorkload(turnWorkload, async () => {
      // ...
    })
    for (const runId of autonomyRunIds) {
      const nextCommands = await finalizeAutonomyRunCompleted({
        runId,
        priority: 'later',
        workload: turnWorkload,
      })
+     completedAutonomyRunIds.add(runId)
      for (const nextCommand of nextCommands) {
        enqueue(nextCommand)
      }
    }
  } catch (error) {
    for (const runId of autonomyRunIds) {
+     if (completedAutonomyRunIds.has(runId)) continue
      await finalizeAutonomyRunFailed({
        runId,
        error: String(error),
      })
    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/utils/handlePromptSubmit.ts` around lines 469-626: the catch currently treats every autonomy run in autonomyRunIds as failed even if finalizeAutonomyRunCompleted(...) succeeded; declare a new Set completedRunIds (alongside autonomyRunIds), and inside the loop that calls finalizeAutonomyRunCompleted({ runId, ... }) add runId to completedRunIds only after that call completes successfully; in the catch block, only call finalizeAutonomyRunFailed(...) for runIds in autonomyRunIds that are not in completedRunIds (i.e., iterate autonomyRunIds and skip those present in completedRunIds) so completed runs are not retroactively marked failed.

src/cli/print.ts (1)
2169-2293: ⚠️ Potential issue | 🔴 Critical

Only finalize autonomy runs as completed after a non-error result.

This path only treats thrown exceptions as failures. `ask()` also completes normally with `result.is_error === true`, and those cases currently fall through to `finalizeAutonomyRunCompleted()`, which can mark a failed step as successful and enqueue follow-up managed work.

🚨 Suggested success/failure split:

```diff
+ let autonomyError: string | null = null
  try {
    await runWithWorkload(cmd.workload ?? options.workload, async () => {
      for await (const message of ask({
        commands: uniqBy(
          [...currentCommands, ...appState.mcp.commands],
          'name',
        ),
        prompt: input,
        promptUuid: cmd.uuid,
        isMeta: cmd.isMeta,
        cwd: cwd(),
        tools: allTools,
        verbose: options.verbose,
        mcpClients: allMcpClients,
        thinkingConfig: options.thinkingConfig,
        maxTurns: options.maxTurns,
        maxBudgetUsd: options.maxBudgetUsd,
        taskBudget: options.taskBudget,
        canUseTool,
        userSpecifiedModel: activeUserSpecifiedModel,
        fallbackModel: options.fallbackModel,
        jsonSchema: getInitJsonSchema() ?? options.jsonSchema,
        mutableMessages,
        getReadFileCache: () =>
          pendingSeeds.size === 0
            ? readFileState
            : mergeFileStateCaches(readFileState, pendingSeeds),
        setReadFileCache: cache => {
          readFileState = cache
          for (const [path, seed] of pendingSeeds.entries()) {
            const existing = readFileState.get(path)
            if (!existing || seed.timestamp > existing.timestamp) {
              readFileState.set(path, seed)
            }
          }
          pendingSeeds.clear()
        },
        customSystemPrompt: options.systemPrompt,
        appendSystemPrompt: options.appendSystemPrompt,
        getAppState,
        setAppState,
        abortController,
        replayUserMessages: options.replayUserMessages,
        includePartialMessages: options.includePartialMessages,
        handleElicitation: (serverName, params, elicitSignal) =>
          structuredIO.handleElicitation(
            serverName,
            params.message,
            undefined,
            elicitSignal,
            params.mode,
            params.url,
            'elicitationId' in params ? params.elicitationId : undefined,
          ),
        agents: currentAgents,
        orphanedPermission: cmd.orphanedPermission,
        setSDKStatus: status => {
          output.enqueue({
            type: 'system',
            subtype: 'status',
            status: status as 'compacting' | null,
            session_id: getSessionId(),
            uuid: randomUUID(),
          })
        },
      })) {
+       if (message.type === 'result' && message.is_error) {
+         autonomyError =
+           message.errors?.join('; ') ?? message.subtype
+       }
        // existing message handling...
      }
    })
-   for (const runId of autonomyRunIds) {
-     const nextCommands = await finalizeAutonomyRunCompleted({
-       runId,
-       currentDir: cwd(),
-       priority: 'later',
-       workload: cmd.workload ?? options.workload,
-     })
-     for (const nextCommand of nextCommands) {
-       enqueue({
-         ...nextCommand,
-         uuid: randomUUID(),
-       })
-     }
-   }
+   if (autonomyError) {
+     for (const runId of autonomyRunIds) {
+       await finalizeAutonomyRunFailed({
+         runId,
+         error: autonomyError,
+       })
+     }
+   } else {
+     for (const runId of autonomyRunIds) {
+       const nextCommands = await finalizeAutonomyRunCompleted({
+         runId,
+         currentDir: cwd(),
+         priority: 'later',
+         workload: cmd.workload ?? options.workload,
+       })
+       for (const nextCommand of nextCommands) {
+         enqueue({
+           ...nextCommand,
+           uuid: randomUUID(),
+         })
+       }
+     }
+   }
  } catch (error) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `src/cli/print.ts` around lines 2169-2293: the loop currently treats only thrown exceptions as failures but ignores ask() returning a terminal result with result.is_error === true; update the ask() handling to track terminal result success and only call finalizeAutonomyRunCompleted(runId...) when the run actually completed successfully. Specifically, inside the runWithWorkload/ask() loop (where message.type === 'result') set a boolean (e.g., autonomyRunSucceeded) to true only when message.type === 'result' && !message.is_error, and set it false when message.type === 'result' && message.is_error; after exiting the try block, iterate autonomyRunIds and call finalizeAutonomyRunCompleted only for runs with autonomyRunSucceeded === true, and call finalizeAutonomyRunFailed (passing the error/string from the result) for runs that ended with result.is_error === true; preserve the existing catch behavior that calls finalizeAutonomyRunFailed on thrown exceptions. Ensure you reference the existing symbols ask, runWithWorkload, message.type === 'result', message.is_error, finalizeAutonomyRunCompleted, and finalizeAutonomyRunFailed when making the change.

src/screens/REPL.tsx (1)
3723-3725: ⚠️ Potential issue | 🟠 Major

`shouldStorePlanForVerification` is always false.

Line 3724 passes `undefined` to `isEnvTruthy`, so this guard can never succeed and the new `pendingPlanVerification` branch at Lines 3748-3756 is dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/screens/REPL.tsx` around lines 3723 - 3725, The condition for shouldStorePlanForVerification is always false because isEnvTruthy is called with undefined; update the guard in the shouldStorePlanForVerification assignment so isEnvTruthy checks a real environment variable or value (e.g., pass a specific env var like process.env.STORE_PLAN_FOR_VERIFICATION or the boolean flag you intend) instead of undefined, ensuring the branch that sets pendingPlanVerification (and the pendingPlanVerification logic) can execute when initialMsg.message.planContent and process.env.USER_TYPE === 'ant' are true.
🟡 Minor comments (23)
src/cli/rollback.ts-33-45 (1)

33-45: ⚠️ Potential issue | 🟡 Minor

Return a non-zero exit code for invalid CLI usage.

When `target` is missing, the command prints usage and returns with exit code 0. This makes automation treat invalid invocation as success.

Suggested fix:

```diff
  if (!target) {
-   console.log(
+   console.error(
      'Usage: claude rollback [target]\n\n' +
        'Options:\n' +
        '  -l, --list    List recent published versions\n' +
        '  --dry-run     Show what would be installed\n' +
        '  --safe        Roll back to server-pinned safe version\n\n' +
        'Examples:\n' +
        '  claude rollback 2.1.880\n' +
        '  claude rollback --list\n' +
        '  claude rollback --safe',
    )
+   process.exitCode = 1
    return
  }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/cli/rollback.ts` around lines 33-45: the CLI currently prints usage when `target` is missing but returns with exit code 0; update the missing-target branch (the `if (!target)` block that prints the usage string) to exit with a non-zero status by calling process.exit(1) (or by setting process.exitCode = 1 before returning) instead of the plain return, so automation sees the invocation as a failure.

tsconfig.json-24-25 (1)
24-25: ⚠️ Potential issue | 🟡 Minor

Add `packages/agent-tools/src` to the TypeScript `include` glob in tsconfig.json.

The path aliases for `@claude-code-best/agent-tools` are defined (lines 17-21), but source files in `packages/agent-tools/src` are not included in the TypeScript compilation. This directory contains source files (`index.ts`, `registry.ts`, `types.ts`) that need to be compiled. Add `"packages/agent-tools/src/**/*.ts"` and `"packages/agent-tools/src/**/*.tsx"` to the `include` array, and add `"packages/agent-tools/src/__tests__"` to the `exclude` array to match the pattern used for `mcp-client`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `tsconfig.json` around lines 24-25: tsconfig.json's "include" array is missing the agent-tools source globs; add "packages/agent-tools/src/**/*.ts" and "packages/agent-tools/src/**/*.tsx" to the existing include entries so the `@claude-code-best/agent-tools` paths (defined in the paths section) are compiled, and add "packages/agent-tools/src/__tests__" to the "exclude" array to mirror the mcp-client test exclusion; update the include/exclude arrays accordingly to ensure index.ts, registry.ts, and types.ts are picked up.

src/utils/autonomyPersistence.ts-19-22 (1)
19-22: ⚠️ Potential issue | 🟡 Minor

Compare against the queued promise when cleaning up.

Line 44 can never match because Lines 19-22 store `previous.then(() => current)`, not `current`. That leaves one resolved `persistenceLocks` entry per `rootDir` around for the lifetime of the process.

🩹 Proposed fix:

```diff
  let release!: () => void
  const current = new Promise<void>(resolve => {
    release = resolve
  })
- persistenceLocks.set(
+ const queued = previous.then(() => current)
+ persistenceLocks.set(
    key,
-   previous.then(() => current),
+   queued,
  )
@@
- if (persistenceLocks.get(key) === current) {
+ if (persistenceLocks.get(key) === queued) {
    persistenceLocks.delete(key)
  }
```

Also applies to: 44-45

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/autonomyPersistence.ts` around lines 19-22: the stored promise in persistenceLocks is previous.then(() => current) but the cleanup logic compares against current, so the equality never matches and entries persist; fix by creating a single variable (e.g., const queued = previous.then(() => current)), use persistenceLocks.set(key, queued), and in the cleanup compare persistenceLocks.get(key) === queued (or remove using the same queued reference) instead of comparing to current; ensure the same queued variable is used in both the set and the later conditional that removes the Map entry.

packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts-74-90 (1)
74-90: ⚠️ Potential issue | 🟡 Minor

Validate the path contract before reporting success.

This branch only checks `stat()`. Relative paths still get resolved against the current working directory, and unreadable regular files can still fall through to `sent: true` when bridge mode is off. Reject non-absolute paths and confirm `R_OK` before treating the file as deliverable.

🛡️ Proposed fix:

```diff
  async call(input: SendUserFileInput, context) {
    const { file_path } = input
-   const { stat } = await import('fs/promises')
+   const { access, stat } = await import('fs/promises')
+   const { constants } = await import('fs')
+   const { isAbsolute } = await import('path')

    // Verify file exists and is readable
    let fileSize: number
    try {
+     if (!isAbsolute(file_path)) {
+       return {
+         data: { sent: false, file_path, error: 'Path must be absolute.' },
+       }
+     }
      const fileStat = await stat(file_path)
      if (!fileStat.isFile()) {
        return {
          data: { sent: false, file_path, error: 'Path is not a file.' },
        }
      }
+     await access(file_path, constants.R_OK)
      fileSize = fileStat.size
    } catch {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts` around lines 74-90: the current pre-send check in SendUserFileTool (using stat(file_path)) only verifies existence and file-ness but allows relative paths and unreadable files to be treated as sent; update the validation to first ensure file_path is absolute (use path.isAbsolute) and then call access(file_path, fs.constants.R_OK) to verify read permission before proceeding, returning the same shape { sent: false, file_path, error: ... } for non-absolute paths or non-readable files; keep the existing stat-based isFile and size extraction (fileStat.size) after these checks so unreadable or relative paths never fall through to sent: true.

docs/task/task-015-job-command-hierarchy.md-31-38 (1)
31-38: ⚠️ Potential issue | 🟡 Minor

`templateJobs.ts` is listed as both unchanged and modified.

Line 35 says `src/cli/handlers/templateJobs.ts` is "不动" (unchanged), but Lines 70-99 add a new `status` branch and handler in that same file. Please reconcile the file list with the implementation plan.

Also applies to: 70-99

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `docs/task/task-015-job-command-hierarchy.md` around lines 31-38: the documentation claims src/cli/handlers/templateJobs.ts is unchanged while the diff actually adds a new "status" branch and handler in that file; update the doc to reflect the implementation plan by removing the "不动" (unchanged) label for templateJobs.ts and explicitly listing the new status branch/handler (reference the new "status" command/handler added in templateJobs.ts and the lines that register/handle the status branch) so the file list matches the code changes, or alternatively revert the added status handler if the doc should remain authoritative; ensure templateJobs.ts and the new status handler are consistently described.

docs/task/task-004-assistant-session-attach.md-6-6 (1)
6-6: ⚠️ Potential issue | 🟡 Minor

Update the Phase 4C status and module path here.

Line 6 still marks 4C as TODO, and Lines 21 and 67 reference `AssistantSessionChooser.ts`, but this PR already adds `src/assistant/AssistantSessionChooser.tsx`. Keeping the task doc in sync will avoid follow-up work targeting the wrong phase/file.

Also applies to: 19-23, 61-68

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `docs/task/task-004-assistant-session-attach.md` at line 6: update the task doc to mark Phase 4C as DONE instead of TODO and change all references to the module path/name from AssistantSessionChooser.ts to the new component path src/assistant/AssistantSessionChooser.tsx; search for and replace occurrences around the noted areas (lines referencing Phase statuses and the module at ~Lines 19-23, 61-68) so the document correctly reflects Phase 4A-4D statuses and the new file location.

src/jobs/templates.ts-81-87 (1)
81-87: ⚠️ Potential issue | 🟡 Minor

Use a type guard to narrow `frontmatter.description` instead of casting to string.

The `description` property is typed as `string | null`, so the `as string` cast bypasses TypeScript's type checking. Use a `typeof` guard instead to safely narrow the type and add runtime validation.

Proposed fix:

```diff
  const description =
-   (frontmatter.description as string) ||
+   typeof frontmatter.description === 'string' &&
+   frontmatter.description.trim().length > 0
+     ? frontmatter.description
+     : content
        .split('\n')
        .find(l => l.trim().length > 0)
        ?.trim() || 'No description'
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/jobs/templates.ts` around lines 81-87: replace the unsafe cast on frontmatter.description with a runtime type guard; check typeof frontmatter.description === 'string' (or non-empty string) before using it to set the description variable in the const description assignment; if the guard fails, fall back to the first non-empty content line or 'No description' as already done. Update the expression around frontmatter.description in the description initializer (the const description binding) so TypeScript understands the narrowed type and the runtime ensures null/invalid values are not used.

src/jobs/state.ts-58-61 (1)
58-61: ⚠️ Potential issue | 🟡 Minor

Reject malformed `state.json` shapes.

`JSON.parse()` can return any truthy value here. If `state.json` ever becomes `"oops"` or `[]`, Line 88 in `appendJobReply()` will either throw or silently persist nonsense instead of treating the job as unreadable. Return `null` unless the parsed value actually matches `JobState`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/jobs/state.ts` around lines 58-61: the readJobState function currently returns whatever JSON.parse yields, which can be non-object/malformed; update readJobState to validate the parsed value against the expected JobState shape and return null for any invalid or non-object results (also keep the existing try/catch for parse errors). Concretely, after parsing the file in readJobState, run a small runtime type guard that checks required JobState properties (e.g., presence and types of fields used later in appendJobReply) and only return the parsed value if it passes; otherwise return null so appendJobReply and other callers treat the job as unreadable.

docs/features/daemon-restructure-design.md-235-246 (1)
235-246: ⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced code blocks to satisfy markdown lint.

The file list code fence is unlabeled, triggering MD040.

🛠️ Suggested fix:

````diff
-```
+```text
 src/cli/bg/engine.ts                        BgEngine 接口定义
 src/cli/bg/engines/tmux.ts                  TmuxEngine (从 bg.ts 提取)
 src/cli/bg/engines/detached.ts              DetachedEngine (新实现)
 src/cli/bg/engines/index.ts                 引擎选择 + re-export
 src/cli/bg/tail.ts                          跨平台日志 tail (用于 detached attach)
 src/commands/daemon/index.ts                /daemon REPL 斜杠命令注册
 src/commands/daemon/daemon.tsx              /daemon 子命令路由 + status UI
 src/commands/job/index.ts                   /job REPL 斜杠命令注册
 src/commands/job/job.tsx                    /job 子命令路由 + UI
 docs/features/daemon-restructure-design.md  本设计文档
````

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `docs/features/daemon-restructure-design.md` around lines 235-246: the code block listing files is missing a language identifier and triggers MD040; update the unlabeled fenced block to include a language tag (e.g., add "text" after the opening backticks) and re-run linting to confirm the MD040 warning is resolved.

src/proactive/__tests__/state.baseline.test.ts-29-29 (1)

29-29: ⚠️ Potential issue | 🟡 Minor

Use `describe("functionName")` here as well.

`'proactive state baseline'` is descriptive, but it diverges from the repo test naming convention and is harder to grep with the other state suites. As per coding guidelines, "Use English naming for test cases: `describe("functionName")` + `test("behavior description")`."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/proactive/__tests__/state.baseline.test.ts` at line 29: replace the free-form suite title with the function-style name; change the describe call from 'proactive state baseline' to describe("proactiveStateBaseline", ...) so the test suite follows the repo convention describe("functionName") + test("behavior description"); update any child test descriptions if needed so behavior text lives only under test(...) blocks.

src/commands/__tests__/proactive.baseline.test.ts-13-13 (1)

13-13: ⚠️ Potential issue | 🟡 Minor

Use the command/function name in `describe()`.

`'/proactive baseline'` doesn't match the repo's `describe("functionName")` convention, which makes this suite harder to grep beside the other command tests. As per coding guidelines, "Use English naming for test cases: `describe("functionName")` + `test("behavior description")`."

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/commands/__tests__/proactive.baseline.test.ts` at line 13: the test suite describe header should use the function name rather than the CLI path; replace describe('/proactive baseline', ...) with describe('proactiveBaseline', ...) (or the actual exported function name if different) so it follows the repo convention describe("functionName") and is easy to grep; update any nested test descriptions only if they reference the old string.

docs/task/task-003-templates-job-mvp.md-6-7 (1)

6-7: ⚠️ Potential issue | 🟡 Minor

Documentation inconsistency: status vs. validation checklist.

The status is marked as `DONE` but the validation checklist (lines 63-66) has unchecked items. Consider updating the checklist to `[x]` if these have been verified, or clarifying the status if validation is pending.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `docs/task/task-003-templates-job-mvp.md` around lines 6-7: the document currently shows "状态: DONE" (Status: DONE) while the validation checklist section still contains unchecked items; update the two places to be consistent by either marking the checklist items as completed (change the relevant checklist boxes to [x]) if verification is done, or change the status to one that reflects pending validation (e.g., "IN REVIEW" or "PENDING VALIDATION"); locate the status line and the validation checklist section (the checklist lines near the bottom) and make the corresponding update so both reflect the same completion state.

src/cli/handlers/templateJobs.ts-138-138 (1)

138-138: ⚠️ Potential issue | 🟡 Minor

Inconsistent command name in usage message.

Same issue: should be `claude job reply` to match the command hierarchy.

📝 Proposed fix:

```diff
- console.error('Usage: claude reply <job-id> <text>')
+ console.error('Usage: claude job reply <job-id> <text>')
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/cli/handlers/templateJobs.ts` at line 138: update the usage message string so it matches the actual CLI command hierarchy; replace the incorrect 'Usage: claude reply <job-id> <text>' message with 'Usage: claude job reply <job-id> <text>' (locate the console.error call that prints the usage). Ensure any other nearby help/usage strings in the same function or handler are consistent with the 'claude job reply' command name.

src/cli/handlers/templateJobs.ts-95-95 (1)

95-95: ⚠️ Potential issue | 🟡 Minor

Inconsistent command name in usage message.

The usage shows `claude new <template>` but per the PR objectives and Line 42, the correct command is `claude job new <template>`.

📝 Proposed fix:

```diff
- console.error('Usage: claude new <template> [args...]')
+ console.error('Usage: claude job new <template> [args...]')
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/cli/handlers/templateJobs.ts` at line 95: update the inconsistent usage string emitted by the template jobs handler; replace the console.error call that currently prints "Usage: claude new <template> [args...]" with the correct command "Usage: claude job new <template> [args...]" so the CLI help matches the implemented command.

src/cli/bg/tail.ts-56-59 (1)

56-59: ⚠️ Potential issue | 🟡 Minor

Potential file descriptor leak on read error.

If `readSync` throws after `openSync` succeeds, `closeSync` is never called, leaking the file descriptor.

🔧 Proposed fix:

```diff
- const fd = openSync(logPath, 'r')
- const buf = Buffer.alloc(stat.size - position)
- readSync(fd, buf, 0, buf.length, position)
- closeSync(fd)
+ const fd = openSync(logPath, 'r')
+ try {
+   const buf = Buffer.alloc(stat.size - position)
+   readSync(fd, buf, 0, buf.length, position)
+   process.stdout.write(buf)
+   position = stat.size
+ } finally {
+   closeSync(fd)
+ }
```

Note: This also requires removing lines 61-62 from the outer scope.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/cli/bg/tail.ts` around lines 56-59: wrap the openSync/readSync/closeSync sequence in a try/finally so the file descriptor (fd) is always closed even if readSync throws; call openSync to assign fd, perform readSync inside the try block and call closeSync(fd) in the finally block; also remove the duplicate outer-scope closeSync lines mentioned in the comment so you don't close twice.

src/hooks/useScheduledTasks.ts-115-121 (1)

115-121: ⚠️ Potential issue | 🟡 Minor

Use a type guard to narrow `command.value` before passing to `injectUserMessageToTeammate`.

Line 117 asserts `command.value as string`, but `QueuedCommand.value` is typed as `string | Array<ContentBlockParam>`. Direct type assertion without narrowing violates the guideline to use type guards for union types. Add a type guard to ensure `command.value` is a string before invoking the function.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/hooks/useScheduledTasks.ts` around lines 115-121: the code is unsafely asserting command.value as a string when QueuedCommand.value can be string | Array<ContentBlockParam>; add a type guard before calling injectUserMessageToTeammate: check typeof command.value === 'string' (or use a helper isStringValue) and only pass command.value to injectUserMessageToTeammate when that guard succeeds, otherwise handle the non-string branch (e.g., skip, convert, or process the Array<ContentBlockParam>) so injectUserMessageToTeammate always receives a real string.

docs/test-plans/openclaw-autonomy-baseline.md-84-87 (1)

84-87: ⚠️ Potential issue | 🟡 Minor

Test file count mismatch.

The "Test Files Added In This Round" section (lines 34-38) lists five test files, but the acceptance criteria states "The four new test files pass."

Suggested fix:

```diff
 This baseline round is complete when:
-1. The four new test files pass.
+1. The five new test files pass.
 2. No production source files are modified.
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `docs/test-plans/openclaw-autonomy-baseline.md` around lines 84-87: the acceptance criteria currently says "The four new test files pass" but the "Test Files Added In This Round" section lists five files; update the document so the counts match by either changing the acceptance criterion to "five new test files pass" or removing one file from the "Test Files Added In This Round" list; adjust the phrasing in the "This baseline round is complete when:" block and ensure the section and the acceptance criterion reference the same number and file set.

src/utils/taskSummary.ts-51-70 (1)

51-70: ⚠️ Potential issue | 🟡 Minor

Potential type mismatch: `status` can be `'working'` but is cast to `'busy' | 'idle'`.

Line 51 initializes `status = 'working'`, but line 70 casts it as `'busy' | 'idle'`. If the last block is not a `tool_use`, the status remains `'working'`, which doesn't match the expected union type. Either change the default to `'idle'` (if that's the intended fallback), or add `'working'` to the accepted status type in `updateSessionActivity`.

Suggested fix if 'idle' is the intended default:

```diff
- let status = 'working'
+ let status: 'busy' | 'idle' = 'idle'
  let waitingFor: string | undefined

  if (lastAssistant?.message?.content) {
@@ -67,7 +67,7 @@
  // Fire-and-forget update to session registry
  void updateSessionActivity({
-   status: status as 'busy' | 'idle',
+   status,
    waitingFor,
  }).catch(err => {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/taskSummary.ts` around lines 51-70: the variable status is initialized to 'working' but later cast as 'busy' | 'idle' when calling updateSessionActivity, causing a type mismatch; change the default initialization of status to 'idle' (replace let status = 'working' with let status = 'idle') or otherwise map 'working' to 'idle' before the cast so that the value passed to updateSessionActivity conforms to the 'busy' | 'idle' union; update references in the same block (the status variable used with lastAssistant/message/content and the call to updateSessionActivity) accordingly.

src/utils/__tests__/cronTasks.baseline.test.ts-194-202 (1)

194-202: ⚠️ Potential issue | 🟡 Minor

This assertion doesn't verify the behavior the test name promises.

Lines 199-201 only prove that `jittered` is non-null and `>= fromMs`; an implementation returning a later-but-wrong timestamp would still pass even though the test says the value should equal the exact next fire time.

Suggested assertion:

```diff
- expect(jittered!).toBeGreaterThanOrEqual(fromMs)
+ expect(jittered).toBe(exact)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/utils/__tests__/cronTasks.baseline.test.ts` around lines 194-202: the test name promises the jittered value equals the exact next fire time but current assertions only check non-null and >= fromMs; update the assertions to validate equality with nextCronRunMs by asserting jittered equals exact (after confirming both are non-null), i.e., use the nextCronRunMs('0 0 29 2 *', fromMs) result stored in exact and assert oneShotJitteredNextCronRunMs('0 0 29 2 *', fromMs, '89abcdef') strictly equals that exact value to guarantee the jittered helper returns the precise next fire time when no alternative exists.

src/screens/REPL.tsx-3364-3375 (1)

3364-3375: ⚠️ Potential issue | 🟡 Minor

Replace `Record<string, any>` with `Record<string, unknown>` and add proper type narrowing.

Line 3366 violates the guideline to avoid `as any` in production code by using `Record<string, any>` despite the eslint-disable comment. Additionally, lines 3371-3374 rely on a type assertion (`as typeof prev.companionReaction`) rather than a proper type guard to narrow the `unknown` callback parameter before writing to state. Use type narrowing (e.g., `typeof reaction === 'string'`) or adjust the observer callback signature in the injected function to provide safe typing without relying on unchecked casts.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/screens/REPL.tsx` around lines 3364-3375: replace the unsafe any usage and unchecked cast by changing the local variable type to Record<string, unknown> for (globalThis).fireCompanionObserver (keep name _fireCompanionObserver) and update the callback to narrow the unknown reaction before using it: inside the reaction handler for _fireCompanionObserver(messagesRef.current, reaction => ...), check the runtime type (e.g., typeof reaction === 'string' or other expected type) or use a type-guard function, and only call setAppState(prev => ...) with the narrowed/typed reaction (compare and assign to prev.companionReaction) after the guard; ensure no "as any" or "as typeof prev.companionReaction" casts remain.

src/commands/assistant/assistant.tsx-61-91 (1)

61-91: ⚠️ Potential issue | 🟡 Minor

Race condition between daemon start and success callback.

The `setTimeout` on Lines 85-87 reports success after 1500ms regardless of whether the daemon actually started successfully.
If `child.on('error')` fires after the timeout, the error handler won't prevent `onInstalled` from being called. Consider tracking whether an error occurred before calling `onInstalled`: <details> <summary>🐛 Proposed fix to prevent race condition</summary> ```diff function startDaemon(): void { if (starting) return; setStarting(true); const dir = defaultDir || resolve('.'); + let errorOccurred = false; try { const execArgs = [...process.execArgv, process.argv[1]!, 'daemon', 'start', `--dir=${dir}`]; const child = spawn(process.execPath, execArgs, { cwd: dir, stdio: 'ignore', detached: true, }); child.unref(); child.on('error', err => { + errorOccurred = true; onError(`Failed to start daemon: ${err.message}`); }); // Give the daemon a moment to initialize, then report success. // The daemon still needs several more seconds to register the bridge // and create a CCR session — main.tsx will tell the user to reconnect. setTimeout(() => { + if (!errorOccurred) { onInstalled(dir); + } }, 1500); } catch (err) { onError(`Failed to start daemon: ${err instanceof Error ? err.message : String(err)}`); } } ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. 
In `@src/commands/assistant/assistant.tsx` around lines 61 - 91, The startDaemon function currently calls onInstalled after a fixed 1500ms timeout regardless of child process errors, causing a race where child.on('error') may fire after onInstalled; update startDaemon to track child startup outcome by adding a local flag (e.g., didError) or use child.once('error')/child.once('spawn') so that child.on('error') sets didError = true and invokes onError, and the timeout handler checks this flag (or only calls onInstalled from the spawn/ready event) before calling onInstalled; also ensure any listeners are once-only or removed after use to avoid duplicate callbacks (references: startDaemon, child.on('error'), setTimeout, onInstalled, onError). ``` </details> </blockquote></details> <details> <summary>src/commands/assistant/assistant.tsx-68-68 (1)</summary><blockquote> `68-68`: _⚠️ Potential issue_ | _🟡 Minor_ **Non-null assertion on `process.argv[1]` could fail in edge cases.** While `process.argv[1]` typically contains the script path, it could be undefined in unusual execution contexts. Consider adding a guard or fallback. <details> <summary>🛡️ Proposed defensive check</summary> ```diff + const scriptPath = process.argv[1]; + if (!scriptPath) { + onError('Unable to determine script path for daemon start'); + return; + } - const execArgs = [...process.execArgv, process.argv[1]!, 'daemon', 'start', `--dir=${dir}`]; + const execArgs = [...process.execArgv, scriptPath, 'daemon', 'start', `--dir=${dir}`]; ``` </details> <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@src/commands/assistant/assistant.tsx` at line 68, The current execArgs construction uses a non-null assertion on process.argv[1] which can be undefined; update the execArgs assignment in assistant.tsx to guard this value and provide a safe fallback (e.g., process.argv[1] ?? process.argv[0] ?? 
a resolved cwd/path) or throw a clear error if absent, ensuring you adjust imports if you choose to use path.resolve; replace the `process.argv[1]!` usage so execArgs is built from a guaranteed string. ``` </details> </blockquote></details> <details> <summary>src/cli/handlers/ant.ts-198-215 (1)</summary><blockquote> `198-215`: _⚠️ Potential issue_ | _🟡 Minor_ **`--output` is a no-op right now.** Both branches regenerate the cache and print the same message, but `opts.output` is never used. `completion --output <file>` will silently ignore the requested destination. <details> <summary>🤖 Prompt for AI Agents</summary> ``` Verify each finding against the current code and only fix it if needed. In `@src/cli/handlers/ant.ts` around lines 198 - 215, The completionHandler currently ignores opts.output; both branches call regenerateCompletionCache() and only log, so --output is a no-op. Fix by using opts.output: when opts.output is provided, obtain the generated completion data (either by calling regenerateCompletionCache() and capturing its return value or by calling regenerateCompletionCache(opts.output) if you extend the function) and write that data to the requested file path (use fs/promises.writeFile or similar) before logging; otherwise keep the existing behavior of regenerating and printing to stdout. Update the completionHandler function and, if needed, adjust regenerateCompletionCache to return the completion content so opts.output can be written to disk. ``` </details> </blockquote></details> </blockquote></details> --- <details> <summary>ℹ️ Review info</summary> <details> <summary>⚙️ Run configuration</summary> **Configuration used**: defaults **Review profile**: CHILL **Plan**: Pro **Run ID**: `1a404be6-708a-4205-a72c-996bdf8136b0` </details> <details> <summary>📥 Commits</summary> Reviewing files that changed from the base of the PR and between dad3ad2b8dd2c5ed941777b5d05c00676c930bce and b5592aa2ad42a20a5464e6942db13bfce59b2ebb. 
</details> <details> <summary>📒 Files selected for processing (109)</summary> * `.gitignore` * `02-kairos (1).md` * `build.ts` * `docs/features/daemon-restructure-design.md` * `docs/features/stub-recovery-design-1-4.md` * `docs/task/task-001-daemon-status-stop.md` * `docs/task/task-002-bg-sessions-ps-logs-kill.md` * `docs/task/task-003-templates-job-mvp.md` * `docs/task/task-004-assistant-session-attach.md` * `docs/task/task-013-bg-engine-abstraction.md` * `docs/task/task-014-daemon-command-hierarchy.md` * `docs/task/task-015-job-command-hierarchy.md` * `docs/task/task-016-backward-compat-tests.md` * `docs/test-plans/openclaw-autonomy-baseline.md` * `packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts` * `packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts` * `scripts/dev.ts` * `src/__tests__/context.baseline.test.ts` * `src/assistant/AssistantSessionChooser.ts` * `src/assistant/AssistantSessionChooser.tsx` * `src/assistant/gate.ts` * `src/assistant/index.ts` * `src/assistant/sessionDiscovery.ts` * `src/cli/bg.ts` * `src/cli/bg/__tests__/detached.test.ts` * `src/cli/bg/__tests__/engine.test.ts` * `src/cli/bg/__tests__/tail.test.ts` * `src/cli/bg/engine.ts` * `src/cli/bg/engines/detached.ts` * `src/cli/bg/engines/index.ts` * `src/cli/bg/engines/tmux.ts` * `src/cli/bg/tail.ts` * `src/cli/handlers/ant.ts` * `src/cli/handlers/templateJobs.ts` * `src/cli/print.ts` * `src/cli/rollback.ts` * `src/cli/up.ts` * `src/commands.ts` * `src/commands/__tests__/autonomy.test.ts` * `src/commands/__tests__/proactive.baseline.test.ts` * `src/commands/assistant/assistant.ts` * `src/commands/assistant/assistant.tsx` * `src/commands/assistant/gate.ts` * `src/commands/autonomy.ts` * `src/commands/daemon/__tests__/daemon.test.ts` * `src/commands/daemon/daemon.tsx` * `src/commands/daemon/index.ts` * `src/commands/init.ts` * `src/commands/job/__tests__/job.test.ts` * `src/commands/job/index.ts` * `src/commands/job/job.tsx` * 
`src/commands/lang/index.ts` * `src/commands/lang/lang.ts` * `src/commands/send/send.ts` * `src/commands/torch.ts` * `src/daemon/__tests__/daemonMain.test.ts` * `src/daemon/__tests__/state.test.ts` * `src/daemon/main.ts` * `src/daemon/state.ts` * `src/entrypoints/cli.tsx` * `src/hooks/useAwaySummary.ts` * `src/hooks/useMasterMonitor.ts` * `src/hooks/usePipeIpc.ts` * `src/hooks/usePipeMuteSync.ts` * `src/hooks/usePipePermissionForward.ts` * `src/hooks/usePipeRelay.ts` * `src/hooks/useScheduledTasks.ts` * `src/jobs/__tests__/classifier.test.ts` * `src/jobs/__tests__/state.test.ts` * `src/jobs/__tests__/templates.test.ts` * `src/jobs/classifier.ts` * `src/jobs/state.ts` * `src/jobs/templates.ts` * `src/main.tsx` * `src/proactive/__tests__/state.baseline.test.ts` * `src/proactive/useProactive.ts` * `src/screens/REPL.tsx` * `src/services/analytics/growthbook.ts` * `src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts` * `src/services/api/openai/__tests__/queryModelOpenAI.test.ts` * `src/services/api/openai/__tests__/streamAdapter.test.ts` * `src/services/awaySummary.ts` * `src/services/langfuse/__tests__/langfuse.isolated.ts` * `src/services/langfuse/__tests__/langfuse.test.ts` * `src/tasks/InProcessTeammateTask/InProcessTeammateTask.tsx` * `src/tasks/InProcessTeammateTask/types.ts` * `src/types/textInputTypes.ts` * `src/utils/__tests__/autonomyAuthority.test.ts` * `src/utils/__tests__/autonomyRuns.test.ts` * `src/utils/__tests__/cronScheduler.baseline.test.ts` * `src/utils/__tests__/cronTasks.baseline.test.ts` * `src/utils/__tests__/language.test.ts` * `src/utils/__tests__/pipeMuteState.test.ts` * `src/utils/__tests__/taskSummary.test.ts` * `src/utils/autonomyAuthority.ts` * `src/utils/autonomyFlows.ts` * `src/utils/autonomyPersistence.ts` * `src/utils/autonomyRuns.ts` * `src/utils/config.ts` * `src/utils/handlePromptSubmit.ts` * `src/utils/language.ts` * `src/utils/pipeMuteState.ts` * `src/utils/pipePermissionRelay.ts` * `src/utils/pipeTransport.ts` * 
`src/utils/swarm/inProcessRunner.ts` * `src/utils/swarm/spawnInProcess.ts` * `src/utils/taskSummary.ts` * `tests/mocks/file-system.ts` * `tsconfig.json` </details> <details> <summary>💤 Files with no reviewable changes (3)</summary> * src/hooks/useAwaySummary.ts * src/assistant/AssistantSessionChooser.ts * src/commands/assistant/assistant.ts </details> </details> <!-- This is an auto-generated comment by CodeRabbit for review status -->
```
const response = await axios.post(
  `${baseUrl}/v1/sessions/${sessionId}/events`,
  {
    events: [
      {
        type: 'push_notification',
        title: input.title,
        body: input.body,
        priority: input.priority ?? 'normal',
      },
    ],
  },
  {
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
      'anthropic-version': '2023-06-01',
    },
    timeout: 10_000,
    validateStatus: (s: number) => s < 500,
  },
)
if (response.status >= 200 && response.status < 300) {
  logForDebugging(`[PushNotification] delivered via bridge session=${sessionId}`)
  return { data: { sent: true } }
}
logForDebugging(`[PushNotification] bridge delivery failed: status=${response.status}`)
}
} catch (e) {
  logForDebugging(`[PushNotification] bridge delivery error: ${e}`)
}
}
}

// Fallback: no bridge available, push was not delivered to a remote device.
logForDebugging(`[PushNotification] no bridge available, not delivered: ${input.title}`)
return { data: { sent: false, error: 'No Remote Control bridge configured. Notification not delivered.' } }
```
**Return a bridge-failure error that matches the actual failure.**
By Line 121 or Line 124, the bridge is already configured and the send attempt has failed. Falling through to Line 131 still reports a configuration problem, which is false for expired tokens, rejected events, or transient transport errors.
Proposed fix
```diff
   if (response.status >= 200 && response.status < 300) {
     logForDebugging(`[PushNotification] delivered via bridge session=${sessionId}`)
     return { data: { sent: true } }
   }
   logForDebugging(`[PushNotification] bridge delivery failed: status=${response.status}`)
+  return {
+    data: {
+      sent: false,
+      error: `Remote Control bridge rejected the notification (status ${response.status}).`,
+    },
+  }
 }
 } catch (e) {
   logForDebugging(`[PushNotification] bridge delivery error: ${e}`)
+  return {
+    data: {
+      sent: false,
+      error: 'Remote Control bridge request failed. Notification not delivered.',
+    },
+  }
 }
 }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts`
around lines 95 - 131, The current fallback always returns a "No Remote Control
bridge configured" error even when a bridge existed but delivery failed; update
the logic around the axios response handling in PushNotificationTool (check
response.status and the catch block where response errors are logged via
logForDebugging) to return a failure result that reflects the actual bridge
error: when response exists and status >=300 return { data: { sent: false,
error: `Bridge delivery failed: status=${response.status}` } }, and when an
exception is caught (variable e in the catch) return { data: { sent: false,
error: `Bridge delivery error: ${e.message ?? String(e)}` } } instead of falling
through to the "No Remote Control bridge configured" message; keep references to
sessionId and token in the logs for context but ensure the returned error
accurately describes the bridge failure.
```
logForDebugging(`[PushNotification] delivered via bridge session=${sessionId}`)
return { data: { sent: true } }
}
logForDebugging(`[PushNotification] bridge delivery failed: status=${response.status}`)
}
} catch (e) {
  logForDebugging(`[PushNotification] bridge delivery error: ${e}`)
}
```
**Redact user/session data from these debug logs.**
Line 118 logs the session ID, Line 124 logs the raw transport error, and Line 130 logs the notification title. Those values can contain user or session metadata, so the debug trail should keep only non-sensitive status information.
Also applies to: 130-130
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts`
around lines 118 - 125, The debug logs in PushNotificationTool currently emit
sensitive values (sessionId, raw transport error, notification title) via
logForDebugging; update the log statements in the Send/bridge delivery code
paths (the logForDebugging calls around the bridge response handling and the
catch block, and the log that prints the notification title) to redact or
replace sensitive fields (e.g., use a fixed token like "REDACTED" or a
non-reversible hash) instead of printing sessionId or title, and log only
non-sensitive status info and a sanitized error indicator (e.g., error type or
"transport error occurred" without stack/PII). Locate these calls by searching
for logForDebugging within the PushNotificationTool class (bridge delivery
success/failure logging, catch(e) block, and the notification title log) and
change the messages to omit or mask raw user/session data while preserving
status context.
```
useKeybindings(
  {
    'select:next': () => setFocusIndex(i => (i + 1) % sessions.length),
    'select:previous': () => setFocusIndex(i => (i - 1 + sessions.length) % sessions.length),
    'select:accept': () => onSelect(sessions[focusIndex]!.id),
```
**Guard the chooser against an empty sessions array.**
If `sessions.length` is 0, Lines 29-30 produce `NaN` and Line 31 throws on Enter because `sessions[focusIndex]` is undefined. The component should no-op or render an empty state instead of crashing.
Proposed fix
```diff
+  const hasSessions = sessions.length > 0;
   useKeybindings(
     {
-      'select:next': () => setFocusIndex(i => (i + 1) % sessions.length),
-      'select:previous': () => setFocusIndex(i => (i - 1 + sessions.length) % sessions.length),
-      'select:accept': () => onSelect(sessions[focusIndex]!.id),
+      'select:next': () => {
+        if (hasSessions) setFocusIndex(i => (i + 1) % sessions.length);
+      },
+      'select:previous': () => {
+        if (hasSessions) setFocusIndex(i => (i - 1 + sessions.length) % sessions.length);
+      },
+      'select:accept': () => {
+        const session = sessions[focusIndex];
+        if (session) onSelect(session.id);
+      },
     },
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/assistant/AssistantSessionChooser.tsx` around lines 27 - 31, The
keybinding handlers using useKeybindings should guard against an empty sessions
array: check sessions.length > 0 before calling setFocusIndex or onSelect so
arithmetic like (i + 1) % sessions.length and accessing sessions[focusIndex] are
not executed when sessions is empty; update the 'select:next' and
'select:previous' callbacks to no-op when sessions.length === 0 and change
'select:accept' to first verify sessions[focusIndex] exists (or sessions.length
> 0) before calling onSelect, and consider rendering an empty-state UI when
sessions is empty instead of allowing the handlers to run.
```
if (!target) {
  // Find bg sessions (tmux or detached)
  const bgSessions = sessions.filter(
    s => s.tmuxSessionName || s.engine === 'detached',
  )
  if (bgSessions.length === 0) {
    console.log(
      'No background sessions to attach to. Start one with `claude daemon bg`.',
    )
    return
  }
  if (bgSessions.length === 1) {
    target = bgSessions[0]!.sessionId
  } else {
    console.log('Multiple background sessions. Specify one:')
    for (const s of bgSessions) {
      const label = s.name ? `${s.name} (${s.sessionId})` : s.sessionId
      const engineType = resolveSessionEngine(s)
      console.log(`  ${label} PID=${s.pid} engine=${engineType}`)
    }
    return
  }
}
```
**Use `resolveSessionEngine()` in the no-target attach prefilter.**
The current `bgSessions` filter only keeps entries with `tmuxSessionName` or `engine === 'detached'`. A legacy detached record with neither field still resolves as `'detached'` via `resolveSessionEngine()`, but it never makes it into this candidate list, so `claude daemon attach` can incorrectly report that there are no background sessions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cli/bg.ts` around lines 165 - 187, The prefilter that builds bgSessions
currently only checks s.tmuxSessionName or s.engine === 'detached', which misses
legacy detached records that resolve to 'detached' via resolveSessionEngine(s);
update the bgSessions filter to include sessions where resolveSessionEngine(s)
=== 'detached' (i.e. keep s.tmuxSessionName || s.engine === 'detached' ||
resolveSessionEngine(s) === 'detached') so legacy detached entries are
considered when choosing target; reference the sessions array, bgSessions
filter, resolveSessionEngine(s), and the target assignment logic to locate and
change the condition.
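As a sketch, the broadened prefilter described above could look like the following. The `SessionEntry` shape and the `resolveSessionEngine` fallback are assumptions reconstructed from the review context, not the actual definitions in `src/cli/bg.ts`:

```typescript
// Assumed minimal shape of a registry entry (reconstructed from the review).
type SessionEntry = {
  sessionId: string
  tmuxSessionName?: string
  engine?: 'tmux' | 'detached'
}

// Legacy records may carry neither field; treat those as detached.
function resolveSessionEngine(s: SessionEntry): 'tmux' | 'detached' {
  if (s.engine) return s.engine
  return s.tmuxSessionName ? 'tmux' : 'detached'
}

// Broadened filter: a legacy record with no engine and no tmux name
// now resolves to 'detached' and stays in the candidate list.
function filterBgSessions(sessions: SessionEntry[]): SessionEntry[] {
  return sessions.filter(
    s =>
      Boolean(s.tmuxSessionName) ||
      s.engine === 'detached' ||
      resolveSessionEngine(s) === 'detached',
  )
}
```

With this condition, a bare `{ sessionId: 'legacy' }` record is attachable again instead of silently disappearing from the `claude daemon attach` candidate list.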
```
async start(opts: BgStartOptions): Promise<BgStartResult> {
  const entrypoint = process.argv[1]!
  const cmd = quote([process.execPath, entrypoint, ...opts.args])

  const tmuxEnv: Record<string, string | undefined> = {
    ...opts.env,
    CLAUDE_CODE_SESSION_KIND: 'bg',
    CLAUDE_CODE_SESSION_NAME: opts.sessionName,
    CLAUDE_CODE_SESSION_LOG: opts.logPath,
    CLAUDE_CODE_TMUX_SESSION: opts.sessionName,
  }

  const result = spawnSync(
    'tmux',
    ['new-session', '-d', '-s', opts.sessionName, cmd],
    { stdio: 'inherit', env: tmuxEnv },
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's locate and examine the BgStartOptions interface definition
fd -t f "\.ts$" | xargs rg -l "BgStartOptions" | head -20
```

Repository: claude-code-best/claude-code
Length of output: 178

🏁 Script executed:

```shell
# Also examine the complete start() method in tmux.ts
cat -n src/cli/bg/engines/tmux.ts
```

Repository: claude-code-best/claude-code
Length of output: 2795

🏁 Script executed:

```shell
# Search for any usage of cwd in the tmux.ts file
rg "cwd" src/cli/bg/engines/tmux.ts -B 2 -A 2
```

Repository: claude-code-best/claude-code
Length of output: 54

🏁 Script executed:

```shell
# Check the BgStartOptions interface definition
cat -n src/cli/bg/engine.ts | head -100
```

Repository: claude-code-best/claude-code
Length of output: 1435

🏁 Script executed:

```shell
# Verify tmux documentation for the -c flag
# Search the codebase for other usages of tmux -c to see if it's used elsewhere
rg "\-c" --type ts -B 3 -A 3 | grep -i tmux -A 5 -B 5
```

Repository: claude-code-best/claude-code
Length of output: 22375
**Pass the requested working directory to tmux.**
`BgStartOptions.cwd` is never used in the `start()` method. Without `tmux new-session -c`, the background session starts in the tmux server's default directory instead of the requested project directory. This breaks consistency with the `DetachedEngine`, which respects the working directory.
📁 Proposed fix
```diff
   const result = spawnSync(
     'tmux',
-    ['new-session', '-d', '-s', opts.sessionName, cmd],
+    ['new-session', '-d', '-s', opts.sessionName, '-c', opts.cwd, cmd],
     { stdio: 'inherit', env: tmuxEnv },
   )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cli/bg/engines/tmux.ts` around lines 14 - 29, The start() method ignores
BgStartOptions.cwd so tmux starts in the server default directory; update the
spawnSync tmux arguments in start (function name: start, class/file: tmux.ts) to
include the working-directory flag '-c' with opts.cwd (falling back to
process.cwd() if undefined) before the command, e.g. insert '-c', opts.cwd into
the args array passed to spawnSync so the new-session uses the requested project
directory.
```
const heartbeatTaskLastRunByKey = new Map<string, number>()

function isPathWithinRoot(rootDir: string, currentDir: string): boolean {
  const delta = relative(rootDir, currentDir)
  return delta === '' || (!delta.startsWith('..') && !isAbsolute(delta))
```
**Heartbeat due-state shouldn't live only in process memory.**
`heartbeatTaskLastRunByKey` is reset on every restart and isn't shared across processes. That makes 30m / 7d tasks immediately due again after a daemon restart or from a second instance against the same repo, so the interval contract does not hold. This state needs to be persisted or derived from the persisted run/flow history.
Also applies to: 358-369, 371-373
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/autonomyAuthority.ts` around lines 69 - 73, The in-memory Map
heartbeatTaskLastRunByKey is volatile and causes heartbeat intervals to reset
across restarts or different processes; replace or augment its usage by
persisting the last-run timestamp (e.g., in your existing run/flow history store
or a lightweight persisted key-value table) and/or derive the last-run time by
querying persisted runs/flows when determining due-state; update code paths that
read/write heartbeatTaskLastRunByKey so they first consult the persistent store
(falling back to the Map as an in-memory cache) and write back updates to the
persistent store whenever a heartbeat is recorded, ensuring consistency across
restarts and multiple instances.
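A minimal sketch of the persisted-store-plus-cache pattern the prompt describes; the file path, JSON shape, and function names are assumptions, not the project's schema:

```typescript
import { existsSync, readFileSync, writeFileSync } from 'node:fs'

// Hypothetical sketch: heartbeat last-run timestamps persisted in a small JSON
// key-value file, with the in-memory Map demoted to a cache.
const cache = new Map<string, number>()

function lastRunMs(storePath: string, key: string): number | undefined {
  const cached = cache.get(key)
  if (cached !== undefined) return cached
  if (!existsSync(storePath)) return undefined
  const store = JSON.parse(readFileSync(storePath, 'utf8')) as Record<string, number>
  const value = store[key]
  if (value !== undefined) cache.set(key, value)
  return value
}

function recordRun(storePath: string, key: string, nowMs: number): void {
  const store = existsSync(storePath)
    ? (JSON.parse(readFileSync(storePath, 'utf8')) as Record<string, number>)
    : {}
  store[key] = nowMs
  writeFileSync(storePath, JSON.stringify(store))
  cache.set(key, nowMs)
}
```

With this shape, a due-check consults `lastRunMs()` first, so a daemon restart or a second process against the same repo sees the persisted timestamp instead of treating every 30m/7d task as immediately due.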
```ts
  if (
    record.parentFlowId &&
    record.flowStepId &&
    record.parentFlowSyncMode === 'managed'
  ) {
    const stepIndex =
      (
        await getAutonomyFlowById(record.parentFlowId, rootDir)
      )?.stateJson?.steps.findIndex(
        step => step.stepId === record.flowStepId,
      ) ?? 0
    await queueManagedAutonomyFlowStepRun({
      flowId: record.parentFlowId,
      stepId: record.flowStepId,
      stepIndex: stepIndex >= 0 ? stepIndex : 0,
      runId: record.runId,
      rootDir,
      nowMs: record.createdAt,
    })
```
Don't remap an unknown managed step to index 0.
If the flow revision changed or flowStepId is stale, findIndex() returns -1 and this code still queues step 0. That corrupts the flow/run linkage by marking the wrong step queued.
Possible guard

```diff
-    const stepIndex =
-      (
-        await getAutonomyFlowById(record.parentFlowId, rootDir)
-      )?.stateJson?.steps.findIndex(
-        step => step.stepId === record.flowStepId,
-      ) ?? 0
+    const stepIndex = (
+      await getAutonomyFlowById(record.parentFlowId, rootDir)
+    )?.stateJson?.steps.findIndex(
+      step => step.stepId === record.flowStepId,
+    )
+    if (stepIndex == null || stepIndex < 0) {
+      return record
+    }
     await queueManagedAutonomyFlowStepRun({
       flowId: record.parentFlowId,
       stepId: record.flowStepId,
-      stepIndex: stepIndex >= 0 ? stepIndex : 0,
+      stepIndex,
       runId: record.runId,
       rootDir,
       nowMs: record.createdAt,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/autonomyRuns.ts` around lines 241 - 259, The code currently maps a
missing managed step (findIndex() === -1) to index 0 which can corrupt flow/run
linkage; update the logic in the block that calls getAutonomyFlowById(...) and
queueManagedAutonomyFlowStepRun(...) to detect when stepIndex is -1 (i.e., the
step with stepId === record.flowStepId was not found) and skip/abort queuing
(and ideally log a warning/error including record.parentFlowId and
record.flowStepId) instead of passing 0; adjust the use of the stepIndex
variable from the current stepIndex >= 0 ? stepIndex : 0 behavior to a guard
that only calls queueManagedAutonomyFlowStepRun when stepIndex >= 0.
```ts
    for (const runId of pendingAutonomyRunIds) {
      void markAutonomyRunFailed(
        runId,
        `Teammate ${agentId ?? taskId} was stopped before it could consume the queued autonomy prompt.`,
      )
    }
```
Handle rejected autonomy-run updates to avoid unhandled promise rejections.
markAutonomyRunFailed() is async, but these calls are fire-and-forget without .catch(...). If persistence fails, this can surface as unhandled rejection and lose failure-state bookkeeping.
💡 Suggested hardening

```diff
-    for (const runId of pendingAutonomyRunIds) {
-      void markAutonomyRunFailed(
-        runId,
-        `Teammate ${agentId ?? taskId} was stopped before it could consume the queued autonomy prompt.`,
-      )
-    }
+    for (const runId of new Set(pendingAutonomyRunIds)) {
+      void markAutonomyRunFailed(
+        runId,
+        `Teammate ${agentId ?? taskId} was stopped before it could consume the queued autonomy prompt.`,
+      ).catch(err => {
+        logForDebugging(
+          `[killInProcessTeammate] Failed to mark autonomy run ${runId} as failed: ${
+            err instanceof Error ? err.message : String(err)
+          }`,
+        )
+      })
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/swarm/spawnInProcess.ts` around lines 314 - 319, The loop currently
calls the async markAutonomyRunFailed(...) for each runId fire-and-forget which
can cause unhandled promise rejections; change this so the promises are
observed—either map pendingAutonomyRunIds to the markAutonomyRunFailed(...)
calls and await Promise.allSettled(...) (and log any failures with the runId),
or attach a .catch(...) to each markAutonomyRunFailed(...) that records the
error (including runId and agentId/taskId) so persistence failures are not
unhandled; update the code around the pendingAutonomyRunIds loop (and references
to agentId/taskId) to use one of these patterns.
Actionable comments posted: 16
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts (1)
27-27: ⚠️ Potential issue | 🟠 Major

Type mismatch: `PushOutput` does not include an `error` property.

`PushOutput` is defined as `{ sent: boolean }` (Line 27), but Line 131 returns `{ sent: false, error: '...' }`. This may cause TypeScript strict mode errors.

Proposed fix: update the `PushOutput` type

```diff
-type PushOutput = { sent: boolean }
+type PushOutput = { sent: boolean; error?: string }
```

Also applies to: 131-131
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts` at line 27, The PushOutput type is missing an error field but is used to return { sent: false, error: '...' }; update the PushOutput type to include an optional or nullable error property (e.g., error?: string or error: string | null) so the return value in the function that constructs PushOutput (where { sent: false, error: '...' } is returned) matches the type; ensure any other returns of PushOutput in the file (search for PushOutput and the function that returns it) are updated to adhere to the new error field contract.

src/utils/swarm/inProcessRunner.ts (1)
1297-1300: ⚠️ Potential issue | 🟠 Major

Aborting the teammate can leave the current autonomy run stuck in `running`.

This `break` exits the loop before the `workWasAborted`/`currentAutonomyRunId` finalization block runs, and the clean exit after the loop never clears or fails `currentAutonomyRunId`. Killing a teammate mid-turn can therefore leak an unfinished autonomy run.

Suggested direction

```diff
-        if (abortController.signal.aborted) {
-          break
-        }
+        if (abortController.signal.aborted) {
+          if (currentAutonomyRunId) {
+            await markAutonomyRunFailed(
+              currentAutonomyRunId,
+              ERROR_MESSAGE_USER_ABORT,
+            )
+            currentAutonomyRunId = undefined
+          }
+          break
+        }
```

If you want to cover aborts that happen after `currentAutonomyRunId` is set but before the next loop iteration starts, add the same guard once more immediately after the `while` loop.

Also applies to: 1440-1484
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/swarm/inProcessRunner.ts` around lines 1297 - 1300, The loop can exit early on abort (using abortController.signal.aborted) before the finalization that clears workWasAborted/currentAutonomyRunId runs, leaking an unfinished autonomy run; add the same abort guard immediately after the while loop (the one currently inside the loop) so that if abortController.signal.aborted became true between iterations you still run the finalization/cleanup for workWasAborted and clear currentAutonomyRunId (references: abortController.signal.aborted, workWasAborted, currentAutonomyRunId in inProcessRunner).

src/utils/handlePromptSubmit.ts (1)
469-486: ⚠️ Potential issue | 🟠 Major

Collect all dequeued autonomy run IDs before processing the batch.

`autonomyRunIds` is only populated once each command is reached. If an earlier `processUserInput()` throws, any later autonomy command in `queuedCommands` has already been dequeued for this turn but never reaches the catch path, so its run can stay unfinalized.

Suggested direction

```diff
-  const autonomyRunIds = new Set<string>()
+  const autonomyRunIds = new Set(
+    commands.flatMap(cmd => (cmd.autonomy?.runId ? [cmd.autonomy.runId] : [])),
+  )
   try {
     await runWithWorkload(turnWorkload, async () => {
       for (let i = 0; i < commands.length; i++) {
         const cmd = commands[i]!
         const isFirst = i === 0
         if (cmd.autonomy?.runId) {
-          autonomyRunIds.add(cmd.autonomy.runId)
           await markAutonomyRunRunning(cmd.autonomy.runId)
         }
```

Also applies to: 619-626
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/handlePromptSubmit.ts` around lines 469 - 486, The code only adds cmd.autonomy.runId to autonomyRunIds as each command is processed, so if an earlier processUserInput throws some later autonomy commands were dequeued but never recorded; before starting the processing loop inside runWithWorkload (and similarly for the other occurrence around markAutonomyRunRunning at lines ~619-626), pre-scan the queued commands array (commands or queuedCommands) and add every cmd.autonomy.runId to the autonomyRunIds Set so all dequeued autonomy runs are tracked; keep the existing per-command call to markAutonomyRunRunning when processing each command, but ensure the upfront collection happens before any awaits or execution that could throw.

src/screens/REPL.tsx (1)
4825-4854: ⚠️ Potential issue | 🟠 Major

Remove the type cast to preserve support for structured content payloads.

`handleIncomingPrompt` accepts `QueuedCommand` where `value` can be `string | Array<ContentBlockParam>`, but line 4851 casts it back to `string`. This breaks structured content payloads when they reach this code path. Since `createUserMessage` already expects `content: string | ContentBlockParam[]`, the cast is unnecessary and incorrect.

Fix

```diff
 const userMessage = createUserMessage({
-  content: queuedCommand.value as string,
+  content: queuedCommand.value,
   isMeta: queuedCommand.isMeta ? true : undefined,
   origin: queuedCommand.origin,
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/screens/REPL.tsx` around lines 4825 - 4854, The code casts queuedCommand.value to string when calling createUserMessage, which strips support for structured payloads; update the call that constructs userMessage (the createUserMessage invocation in the anonymous handler / handleIncomingPrompt flow) to pass queuedCommand.value directly (type: string | ContentBlockParam[]) instead of using "as string", so createUserMessage receives the original typed content and preserves structured content handling.
♻️ Duplicate comments (8)
packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts (2)
117-125: ⚠️ Potential issue | 🟠 Major

Return an error that reflects the actual bridge failure instead of falling through.

When bridge delivery fails (non-2xx status at Line 121) or throws an exception (Line 124), the code falls through to Line 131, which returns "No Remote Control bridge configured." This is misleading: the bridge was configured, but delivery failed.

Proposed fix

```diff
       if (response.status >= 200 && response.status < 300) {
         logForDebugging(`[PushNotification] delivered via bridge`)
         return { data: { sent: true } }
       }
       logForDebugging(`[PushNotification] bridge delivery failed: status=${response.status}`)
+      return {
+        data: {
+          sent: false,
+          error: `Bridge delivery failed (status ${response.status}).`,
+        },
+      }
     }
   } catch (e) {
-    logForDebugging(`[PushNotification] bridge delivery error: ${e}`)
+    logForDebugging(`[PushNotification] bridge delivery error`)
+    return {
+      data: {
+        sent: false,
+        error: 'Bridge request failed. Notification not delivered.',
+      },
+    }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts` around lines 117 - 125, The bridge delivery branch currently logs failures/exceptions and then falls through to return "No Remote Control bridge configured"; update the delivery logic in the PushNotificationTool method handling bridge sends so that on a non-2xx response (check response.status) you return an error result indicating bridge delivery failed (include status and sessionId) and on catch(e) you return an error result containing the exception message (and sessionId) instead of falling through; use the existing logForDebugging, sessionId, and response/status symbols to build the error payload so callers receive a clear bridge failure response rather than the misleading "No Remote Control bridge configured."
118-118: ⚠️ Potential issue | 🟡 Minor

Redact session ID from debug log.

Line 118 logs the `sessionId`, which is user/session metadata. Consider logging only a truncated hash or omitting it entirely to avoid leaking sensitive identifiers in debug output.

Also applies to Line 124 (raw error `e`) and Line 130 (notification title).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts` at line 118, Replace raw sensitive values in debug logs: change the logForDebugging call that prints sessionId to instead log a redacted value (e.g., truncated hash or masked string) by hashing or masking sessionId before logging; likewise do not log the raw error object `e`—log a sanitized message or include only e.message and a correlation id; and avoid printing the full notification title (e.g., `notification.title`) in debug output—log a sanitized/truncated version or a placeholder. Update the logging calls in PushNotificationTool (references: logForDebugging, sessionId, variable `e`, and `notification.title`) to perform masking/truncation or replace with non-sensitive tokens.

src/main.tsx (1)
1805-1807: ⚠️ Potential issue | 🟠 Major

`settings.assistant` is still ignored by this gate.

The surrounding block says trusted `.claude/settings.json` should enable assistant mode, but this predicate still only honors the forced latch and `--assistant`. A repo with `assistant: true` never reaches the KAIROS activation path here.

Suggested fix

```diff
   if (
     feature("KAIROS") &&
     assistantModule &&
     (assistantModule.isAssistantForced() ||
-      (options as Record<string, unknown>).assistant === true) &&
+      getInitialSettings().assistant === true ||
+      (options as Record<string, unknown>).assistant === true) &&
     !(options as { agentId?: unknown }).agentId &&
     kairosGate
   ) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main.tsx` around lines 1805 - 1807, The gate that decides KAIROS activation currently only checks assistantModule.isAssistantForced() and the CLI flag (options.assistant) but ignores the trusted settings file; modify the predicate to also honor the trusted settings' assistant flag (e.g., include settings.assistant === true) so the condition reads roughly: assistantModule && (assistantModule.isAssistantForced() || (options as Record<string, unknown>).assistant === true || settings?.assistant === true); update the code paths that reference assistantModule/isAssistantForced and options to include this settings check so repos with .claude/settings.json enabling assistant mode trigger the KAIROS path.

src/cli/print.ts (1)
2114-2116: ⚠️ Potential issue | 🟠 Major

Don't let different autonomy runs share one `ask()` turn.

This still collects multiple `autonomy.runId` values from a merged batch, but `canBatchWith()` does not gate on `command.autonomy?.runId`. Two autonomy prompts with the same `workload`/`isMeta` can still be merged and then marked running/completed from a single model response.

🧩 Suggested guard in `canBatchWith()`

```diff
 export function canBatchWith(
   head: QueuedCommand,
   next: QueuedCommand | undefined,
 ): boolean {
   return (
     next !== undefined &&
     next.mode === 'prompt' &&
     next.workload === head.workload &&
-    next.isMeta === head.isMeta
+    next.isMeta === head.isMeta &&
+    next.autonomy?.runId === head.autonomy?.runId
   )
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/print.ts` around lines 2114 - 2116, The batcher is allowing items from different autonomy runs to be merged so a single ask() response can be attributed to multiple runIds; update the batching logic so canBatchWith() also requires command.autonomy?.runId to match (treat undefined vs undefined as match) and ensure any grouping that builds autonomyRunIds from batch items only combines items whose autonomy.runId are identical (do not merge items with different runIds); update canBatchWith() and the batch-building path that produces autonomyRunIds to enforce this guard so each ask() turn is isolated per autonomy.runId.

src/daemon/main.ts (1)
246-253: ⚠️ Potential issue | 🟠 Major

Guard the state file and only remove it after workers exit.

A second `daemon start` can still clobber a live supervisor because `writeDaemonState()` is unconditional, and Line 261 clears the state before the worker-drain at Line 287 finishes. That makes `status`/`stop` report "stopped" while the old supervisor is still alive.

🛡️ Suggested fix

```diff
   const workers: WorkerState[] = [
     {
       kind: 'remoteControl',
       process: null,
       backoffMs: BACKOFF_INITIAL_MS,
       failureCount: 0,
       parked: false,
       lastStartTime: 0,
     },
   ]

+  const existing = queryDaemonStatus()
+  if (existing.status === 'running') {
+    console.error(`daemon already running (PID: ${existing.state!.pid})`)
+    process.exitCode = 1
+    return
+  }
+
   // Write daemon state file so other CLI processes can query/stop us
   writeDaemonState({
     pid: process.pid,
     cwd: dir,
     startedAt: new Date().toISOString(),
@@
   const shutdown = () => {
     console.log('[daemon] supervisor shutting down...')
     controller.abort()
-    removeDaemonState()
     for (const w of workers) {
       if (w.process && !w.process.killed) {
         w.process.kill('SIGTERM')
       }
     }
@@
   await Promise.all(
     workers
       .filter(w => w.process && !w.process.killed)
       .map(
         w =>
@@
       ),
   )
+  removeDaemonState()
   console.log('[daemon] supervisor stopped')
```

Also applies to: 258-261, 287-310
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/daemon/main.ts` around lines 246 - 253, The daemon state file is written unconditionally by writeDaemonState and currently cleared before worker shutdown completes, letting a concurrent `daemon start` clobber a running supervisor; fix by first checking for an existing state (read existing state file and validate PID is not active) before calling writeDaemonState to avoid overwriting a live daemon, and move the code that clears/removes the state file so it only executes after workers have fully drained (after the worker-drain/shutdown path completes), ensuring cleanup happens in the shutdown/finally path (use the same removeDaemonState or writeDaemonState(... lastStatus: 'stopped') call post-drain). Ensure these checks reference writeDaemonState, the workers array and the shutdown/drain routine so the state is only removed once workers exit.

src/screens/REPL.tsx (1)
4856-4883: ⚠️ Potential issue | 🟠 Major

The autonomy-run persistence calls are still detached from error handling.

`markAutonomyRunRunning`, `finalizeAutonomyRunCompleted`, and `finalizeAutonomyRunFailed` are still launched with `void`, so any rejection bypasses this chain's `catch`. That can leave unhandled rejections and silently drop `nextCommands` after a successful `onQuery`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/screens/REPL.tsx` around lines 4856 - 4883, The detached autonomy-run calls (markAutonomyRunRunning, finalizeAutonomyRunCompleted, finalizeAutonomyRunFailed) are started with `void` so their rejections bypass the outer `catch`; remove `void` and properly await or return these promises so errors propagate into the surrounding chain. Specifically, call markAutonomyRunRunning without `void` and await it (or return it) before calling onQuery; in the onQuery `.then` handler return the Promise from finalizeAutonomyRunCompleted so its rejection is catchable and then enqueue nextCommands inside its resolved handler; in the `.catch` handler return or await finalizeAutonomyRunFailed so its errors are also handled; keep references to onQuery, enqueue, getCwd, and the three finalize/mark functions to locate edits. Ensure all started promises are awaited/returned rather than using `void`.src/utils/autonomyRuns.ts (2)
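The awaited-chain shape the prompt asks for can be sketched with injected dependencies; `runAutonomyTurn` and its parameter names are illustrative stand-ins, not the REPL's real signatures:

```typescript
// Sketch: every persistence promise is returned inside the chain, so a
// rejection from any of them reaches the trailing catch instead of being
// detached with `void`.
function runAutonomyTurn(deps: {
  markRunning: () => Promise<void>
  onQuery: () => Promise<void>
  finalizeCompleted: () => Promise<void>
  finalizeFailed: (err: unknown) => Promise<void>
}): Promise<void> {
  return deps
    .markRunning()
    .then(() => deps.onQuery())
    .then(() => deps.finalizeCompleted())
    .catch(err => deps.finalizeFailed(err))
}
```

Because each step returns its promise, a failure in `finalizeCompleted` still flows into the `catch` handler rather than becoming an unhandled rejection.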
src/utils/autonomyRuns.ts (2)

241-259: ⚠️ Potential issue | 🟠 Major

Don't remap an unknown managed step to index 0.

If the flow revision changed or `flowStepId` is stale, `findIndex()` returns `-1` and this still queues step `0`. That corrupts the flow/run linkage by attaching the new run to the wrong step.

🐛 Possible fix

```diff
   const stepIndex = (
     await getAutonomyFlowById(record.parentFlowId, rootDir)
   )?.stateJson?.steps.findIndex(
     step => step.stepId === record.flowStepId,
-  ) ?? 0
+  )
+  if (stepIndex == null || stepIndex < 0) {
+    return record
+  }
   await queueManagedAutonomyFlowStepRun({
     flowId: record.parentFlowId,
     stepId: record.flowStepId,
-    stepIndex: stepIndex >= 0 ? stepIndex : 0,
+    stepIndex,
     runId: record.runId,
     rootDir,
     nowMs: record.createdAt,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/autonomyRuns.ts` around lines 241 - 259, The code currently maps a missing step (findIndex === -1) to index 0 which can attach runs to the wrong step; in the block handling record.parentFlowId/record.flowStepId where you call getAutonomyFlowById and compute stepIndex, change the logic to detect stepIndex === -1 and do not call queueManagedAutonomyFlowStepRun in that case (instead log/warn or return), only queue when stepIndex >= 0; reference the variables and functions record.parentFlowId, record.flowStepId, parentFlowSyncMode === 'managed', getAutonomyFlowById(...).stateJson.steps.findIndex(...), stepIndex, and queueManagedAutonomyFlowStepRun to locate and update the code.
716-744: ⚠️ Potential issue | 🟠 Major

Don't consume every due HEARTBEAT task before the loop decides which ones can run.

The initial `commitAutonomyQueuedPrompt({ prepared, ... })` commits `prepared.dueHeartbeatTasks` up front. Any task skipped later by `shouldCreate()` or by `startManagedAutonomyFlowFromHeartbeatTask()` returning `null` is still marked consumed, so it disappears until the next interval.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/autonomyRuns.ts` around lines 716 - 744, The code currently calls commitAutonomyQueuedPrompt(...) immediately which marks all prepared.dueHeartbeatTasks as consumed even if later skipped by params.shouldCreate() or startManagedAutonomyFlowFromHeartbeatTask() returning null; defer or restrict the commit so only tasks actually started or intentionally consumed are committed. Concretely, replace the immediate await commitAutonomyQueuedPrompt({ prepared, ... }) call with logic that (a) iterates prepared.dueHeartbeatTasks first, checks params.shouldCreate() and calls startManagedAutonomyFlowFromHeartbeatTask(...) to collect the actual flow commands and any tasks you intend to mark consumed, then (b) call commitAutonomyQueuedPrompt(...) with the reduced set (or after the loop) so only the tasks that produced flows or were explicitly consumed are committed; reference commitAutonomyQueuedPrompt, prepared.dueHeartbeatTasks, params.shouldCreate, startManagedAutonomyFlowFromHeartbeatTask, and the commands array when making the change.
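The two-phase commit the prompt outlines might look like this sketch; `commitOnlyStarted` and its callbacks are hypothetical stand-ins for `startManagedAutonomyFlowFromHeartbeatTask` and `commitAutonomyQueuedPrompt`:

```typescript
// Sketch: start each due heartbeat task first, then commit only the tasks that
// actually produced a flow, so skipped tasks stay due for the next tick.
type HeartbeatTask = { key: string }

async function commitOnlyStarted(
  due: HeartbeatTask[],
  start: (t: HeartbeatTask) => Promise<string | null>,
  commit: (consumed: HeartbeatTask[]) => Promise<void>,
): Promise<string[]> {
  const consumed: HeartbeatTask[] = []
  const flowIds: string[] = []
  for (const task of due) {
    const flowId = await start(task)
    if (flowId != null) {
      consumed.push(task)
      flowIds.push(flowId)
    }
  }
  await commit(consumed)
  return flowIds
}
```

A task whose `start` returns `null` is never passed to `commit`, so it is not marked consumed and remains eligible on the next interval.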
🟡 Minor comments (5)
src/services/langfuse/__tests__/langfuse.isolated.ts-231-234 (1)
231-234: ⚠️ Potential issue | 🟡 Minor

`shutdownLangfuse` test name and assertions are inconsistent.

Line 231 says it verifies `forceFlush`/`shutdown`, but Line 233 only checks resolution. Add explicit mock-call assertions (after initialization) so this test can catch regressions.

Assertion-focused fix

```diff
 describe('shutdownLangfuse', () => {
   test('calls forceFlush and shutdown on processor', async () => {
-    const { shutdownLangfuse } = await import('../client.js')
-    await expect(shutdownLangfuse()).resolves.toBeUndefined()
+    process.env.LANGFUSE_PUBLIC_KEY = 'pk-test'
+    process.env.LANGFUSE_SECRET_KEY = 'sk-test'
+    const { initLangfuse, shutdownLangfuse } = await import('../client.js')
+    expect(initLangfuse()).toBe(true)
+    await expect(shutdownLangfuse()).resolves.toBeUndefined()
+    expect(mockForceFlush).toHaveBeenCalled()
+    expect(mockShutdown).toHaveBeenCalled()
   })
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/langfuse/__tests__/langfuse.isolated.ts` around lines 231 - 234, The test description says it should verify forceFlush and shutdown on the processor but currently only checks that shutdownLangfuse() resolves; update the test for shutdownLangfuse to import or access the mocked processor instance (the mock that exposes forceFlush and shutdown), call any necessary initialization so the processor is registered, invoke await shutdownLangfuse(), and then add explicit assertions that the processor.forceFlush and processor.shutdown mock functions were called (e.g., expect(processor.forceFlush).toHaveBeenCalled() and expect(processor.shutdown).toHaveBeenCalled()) to match the test name and catch regressions.

docs/test-plans/openclaw-autonomy-baseline.md-11-11 (1)
11-11: ⚠️ Potential issue | 🟡 Minor

Fix typo in repository name.

"Claude-code-bast" should be "Claude-code-best".

```diff
-Establish a stable baseline around the parts of `Claude-code-bast` that later autonomy work is most likely to touch:
+Establish a stable baseline around the parts of `Claude-code-best` that later autonomy work is most likely to touch:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/test-plans/openclaw-autonomy-baseline.md` at line 11, Fix the typo in the documentation string by replacing the incorrect repository name "Claude-code-bast" with the correct name "Claude-code-best" wherever it appears (the phrase "Claude-code-bast" in the line starting with "Establish a stable baseline..."). Make sure the corrected string matches the exact repository name "Claude-code-best".

docs/task/task-013-bg-engine-abstraction.md-135-169 (1)
135-169: ⚠️ Potential issue | 🟡 Minor

The task spec still documents superseded daemon commands.

This doc says to use `claude daemon bg`/`claude daemon attach`, but the PR objective describes the bg-session flow as part of the new `/daemon` hierarchy. Please update the examples and checklist so the task spec matches the shipped command surface.

Also applies to: 187-194
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/task/task-013-bg-engine-abstraction.md` around lines 135 - 169, Update the docs to reflect the new /daemon subcommand hierarchy: replace references to the old `claude daemon bg`/`claude daemon attach` with the new daemon flow (invoke the bg flow via the daemon start subcommand used by handleBgStart and show reconnect via the daemon attach subcommand), update the example output to match the messages printed by handleBgStart (sessionName, engineUsed, logPath and reconnect hint), and update the checklist and any prose that described engine selection to note that attachHandler chooses the engine based on session.engine (using TmuxEngine when session.engine === 'tmux' and tmuxSessionName, otherwise DetachedEngine).

docs/features/daemon-restructure-design.md-58-71 (1)
58-71: ⚠️ Potential issue | 🟡 Minor

Add language identifiers to these fenced blocks.

Markdownlint is already flagging MD040 here, so these unlabeled fences will keep the doc noisy or failing until each block is tagged (`text`, `ts`, etc.).

Also applies to: 112-122, 126-129, 235-246, 250-257, 261-268, 293-298
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/features/daemon-restructure-design.md` around lines 58 - 71, The fenced code blocks that show the CLI/REPL usage (the block starting with "claude daemon <subcommand> ← CLI 入口 (cli.tsx 快速路径)" and other unlabeled blocks that list subcommands such as "status start stop bg attach logs kill") are missing language identifiers and triggering MD040; update each unlabeled triple-backtick fence to include an appropriate language tag (e.g., text, bash, sh, or console) so the markdown linter stops flagging them and the snippets render correctly.docs/features/daemon-restructure-design.md-73-96 (1)
docs/features/daemon-restructure-design.md-73-96 (1)

73-96: ⚠️ Potential issue | 🟡 Minor

Align this CLI snippet with the shipped feature gate.

The example still routes `claude daemon ...` only when `feature('DAEMON')` is enabled, but `src/entrypoints/cli.tsx` now enables that fast-path for `DAEMON || BG_SESSIONS`. As written, the design doc misstates BG_SESSIONS-only behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/features/daemon-restructure-design.md` around lines 73 - 96, Update the CLI snippet so the fast-path routing uses the same feature gate as the shipped code: change the initial condition to check feature('DAEMON') || feature('BG_SESSIONS') and keep the same handling for subcommands (daemonMain, bg.handleBgStart, bg[`${sub}Handler`]) so BG_SESSIONS-only installs still get the unified "daemon" entry; also adjust or remove the subsequent "向后兼容 (deprecated)" block so it no longer contradicts the new gate and reflects that BG_SESSIONS-enabled deployments will already be routed via the unified handler.
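The combined gate this comment describes can be sketched as a small predicate; the flag names mirror the review text, but `shouldUseDaemonFastPath` and its shape are illustrative, not the real `cli.tsx`:

```typescript
// Illustrative sketch of the shipped gate: route `claude daemon ...` through
// the unified entry when either feature flag is enabled.
type FeatureFlag = 'DAEMON' | 'BG_SESSIONS'

function shouldUseDaemonFastPath(
  feature: (flag: FeatureFlag) => boolean,
  argv: string[],
): boolean {
  return argv[0] === 'daemon' && (feature('DAEMON') || feature('BG_SESSIONS'))
}
```

Under this shape, a BG_SESSIONS-only build still reaches the unified `daemon` handler, which is the behavior the design doc should document.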
🧹 Nitpick comments (14)
src/services/langfuse/__tests__/langfuse.isolated.ts (1)
634-654: This "query reuse" test is tautological and not exercising production logic.

From Line 648 onward it only asserts local assignments (`ownsTrace`, `langfuseTrace`), so it cannot catch regressions in actual query trace-reuse behavior.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/langfuse/__tests__/langfuse.isolated.ts` around lines 634 - 654, The test currently only asserts local variables (ownsTrace/langfuseTrace) and never exercises the real query logic; update it to import and call the actual query function (the module that decides trace reuse) instead of just creating a subTrace, pass subTrace as the langfuseTrace argument, and assert the production code reuses it: spy/mock the createSubagentTrace export from ../tracing.js to verify it is NOT called when a langfuseTrace is provided, and assert the trace object used/returned by the query function is strictly equal to subTrace; keep the test name 'query reuses passed langfuseTrace instead of creating new one' and references to createSubagentTrace, ownsTrace/langfuseTrace only as needed for the assertion.

src/services/api/openai/__tests__/queryModelOpenAI.test.ts (1)
4-39: Extract the isolated-subprocess runner into a shared test helper.

This block is effectively duplicated in `src/services/langfuse/__tests__/langfuse.test.ts` (same spawn/capture/throw pattern). Consolidating it will reduce maintenance drift when invocation or diagnostics need updates.

Refactor sketch

```diff
-const isolatedPath = fileURLToPath(
-  new URL('./queryModelOpenAI.isolated.ts', import.meta.url),
-)
+import { runIsolatedTestFile } from 'src/__tests__/utils/runIsolatedTestFile.js'
+
+const isolatedPath = fileURLToPath(
+  new URL('./queryModelOpenAI.isolated.ts', import.meta.url),
+)
 ...
-  const proc = Bun.spawn({
-    cmd: [process.execPath, 'test', isolatedPath],
-    cwd: process.cwd(),
-    stdout: 'pipe',
-    stderr: 'pipe',
-    env: process.env,
-  })
-
-  const [stdout, stderr, exitCode] = await Promise.all([
-    new Response(proc.stdout).text(),
-    new Response(proc.stderr).text(),
-    proc.exited,
-  ])
-
-  if (exitCode !== 0) {
-    throw new Error(
-      [
-        `isolated queryModelOpenAI test failed with exit code ${exitCode}`,
-        '',
-        'STDOUT:',
-        stdout,
-        '',
-        'STDERR:',
-        stderr,
-      ].join('\n'),
-    )
-  }
-
-  expect(exitCode).toBe(0)
+  await expect(
+    runIsolatedTestFile({
+      isolatedPath,
+      label: 'queryModelOpenAI',
+    }),
+  ).resolves.toBeUndefined()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/api/openai/__tests__/queryModelOpenAI.test.ts` around lines 4 - 39, This test duplicates an isolated-subprocess spawn/capture/throw pattern; extract it into a shared helper (e.g., runIsolatedSubprocess) that encapsulates Bun.spawn invocation (using process.execPath and given isolatedPath), collects stdout/stderr via new Response(proc.stdout).text() and new Response(proc.stderr).text(), awaits proc.exited for exitCode, and returns { stdout, stderr, exitCode }; then replace the inline block in queryModelOpenAI.test.ts (the isolatedPath/Bun.spawn block) and the matching block in the other test with a call to this helper and reuse the same non-zero-exit error construction and expect(exitCode).toBe(0).

docs/features/stub-recovery-design-1-4.md (5)
33-51: Add error handling for corrupted or manually-edited state files.

The MVP design describes reading and writing the daemon state file but doesn't address what happens if the file is corrupted, manually edited, or contains invalid JSON. This could lead to confusing error messages or unexpected behavior.
Consider adding to the design:
- Schema validation when reading the state file
- Graceful degradation if parsing fails (treat as "stopped" state)
- Optional `--force` flag for the `stop` command to handle edge cases
- Clear user-facing error messages for corrupted state
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/features/stub-recovery-design-1-4.md` around lines 33 - 51, The design omits handling for corrupted or invalid daemon state files (e.g., ~/.claude/daemon/remote-control.json); update the start/status/stop flows to validate and handle bad state: add JSON/schema validation when reading the state file (used by the status command and stop flow), treat parse/validation failures as a safe "stopped" or "stale" state with a clear user-facing error message, add an optional --force flag for the stop command to allow forcible cleanup when the PID or state is inconsistent, and ensure automatic cleanup of invalid state files (or prompt the user) to avoid crashes; reference the state file, and the start/status/stop behaviors in your changes.
138-142: Specify the Windows tmux degradation strategy.

The design mentions that "Windows 下 tmux 路径需要明确降级策略" (the Windows tmux path needs an explicit degradation strategy) but doesn't specify what that strategy should be. Since this is identified as a risk, the design document should outline at least a preliminary approach.
Consider adding a brief degradation strategy, such as:
- Detect tmux availability at runtime
- Fall back to detached process mode if tmux is unavailable on Windows
- Store session metadata differently when tmux is not used
- Document limitations of non-tmux mode (e.g., no `attach` support)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/features/stub-recovery-design-1-4.md` around lines 138 - 142, The design doc lists a risk "Windows 下 tmux 路径需要明确降级策略" but provides no strategy; update the "风险" section to specify a Windows tmux degradation plan: state runtime detection of tmux availability, a fallback to detached/background process mode for commands like attach / --bg when tmux is missing, an alternative approach for storing session metadata when tmux isn't used, and note limitations (e.g., attach not supported in non-tmux mode); reference the "attach" / "--bg" behavior and "tmux" metadata handling so readers can find and review the changes.
242-249: Add security and authorization considerations for session attachment.

The Phase 4A design allows attaching to a session via `claude assistant <sessionId>` but doesn't address security concerns such as:
- Can any user attach to any session, or should there be ownership/permission checks?
- What if the session contains sensitive data?
- Should there be audit logging when someone attaches to a session?
Consider adding a security section to the design that addresses:
- Session ownership validation (only creator can attach, or token-based access?)
- Read-only vs. interactive attachment modes
- Audit trail for attachment events
- Handling of sessions in shared/multi-user environments
This is especially important if sessions might contain credentials, API keys, or confidential information.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/features/stub-recovery-design-1-4.md` around lines 242 - 249, Add a Security & Authorization section to the Phase 4A design (around the "claude assistant <sessionId>" behavior) that defines: session ownership and permission checks (e.g., only creator or holders of a session token may attach), token-based and role-based access options, read-only vs interactive attachment modes, audit logging for attach/detach/access events, and explicit handling rules for sessions containing secrets or credentials (redaction, credential expiry, or requiring re-authentication); reference the "claude assistant <sessionId>" command and the existing attach branch/flow so implementers know where to enforce these checks and logs.
168-185: Define job state lifecycle and concurrency handling.

The MVP design describes creating jobs and updating `state.json` but doesn't specify:
- What are the valid job states? (e.g., pending, running, completed, failed, cancelled?)
- How are state transitions validated?
- What happens if multiple processes attempt to update the same job simultaneously?
Consider adding to the design:
- Explicit state machine diagram or list of valid states
- File-locking strategy for `state.json` updates
- Retry/conflict resolution strategy
- Whether jobs should be append-only or allow mutations
This will prevent race conditions and ensure consistent job state management.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/features/stub-recovery-design-1-4.md` around lines 168 - 185, Add an explicit job state lifecycle and concurrency section to the MVP doc: list valid states (e.g., pending, running, completed, failed, cancelled), define allowed state transitions and where transition validation occurs (when writing state.json in the job directory created by new), specify a file-locking strategy for state.json updates (e.g., advisory lock per ~/.claude/jobs/<job-id>/state.json or atomic write+rename) and a retry/conflict-resolution policy (retry with backoff, fail-fast on repeated conflicts, and merge rules), and state whether replies are append-only (replies.jsonl) or allow mutations to input.txt/state.json and how concurrent reply <job-id> <text> commands should be serialized; reference the new, reply, and list flows so reviewers can see where to enforce these rules.
1-310: Consider adding test coverage and documentation update requirements.

The design document provides manual verification steps for each feature but doesn't specify:
- Automated test requirements: Should each MVP include unit tests? Integration tests? What's the coverage target?
- Documentation updates: Which user-facing docs need updates? (CLI help, README, migration guides?)
- Backwards compatibility: How to handle users upgrading from versions with stub commands?
Consider adding to each section:
- Test coverage: Required unit/integration tests for the MVP
- Documentation plan: Which docs to update (e.g., CLI help text, feature docs)
- Migration notes: Breaking changes or behavior differences users should know about
- Observability: Logging/metrics to add for debugging and monitoring
This ensures a complete feature delivery beyond just the code implementation.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/features/stub-recovery-design-1-4.md` around lines 1 - 310, Add explicit "Tests, Documentation, Migration & Observability" requirements into this design doc for each feature (1. `claude daemon status`/`stop`, 2. `BG_SESSIONS`, 3. `TEMPLATES`, 4. `assistant [sessionId]`): for each feature add short bullets specifying required unit/integration tests and coverage targets (e.g., unit tests for state handling in daemon `start/status/stop`, integration tests for `ps/logs/kill` in `BG_SESSIONS`, FS lifecycle tests for `TEMPLATES` job creation/reply, and attach/edge-case tests for `assistant`), list which user-facing docs to update (CLI help text, README, feature doc), note migration/backwards-compatibility considerations (stale state file handling, behavior changes), and add minimal observability requirements (log messages/metrics to emit for `state.ts` updates, session registry events in `concurrentSessions`, job state writes in `jobs/state.ts`, and attach attempts in `assistant/sessionDiscovery`).

src/proactive/__tests__/state.baseline.test.ts (1)
16-16: Add `.js` extension to module import for ESM consistency.

The import `from '../index'` works in Bun but lacks the `.js` extension. For ESM module consistency across the codebase, consider adding the extension.

```diff
-} from '../index'
+} from '../index.js'
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/proactive/__tests__/state.baseline.test.ts` at line 16, Update the ESM import to include the .js extension for consistency: change the import that currently reads from '../index' in the test file (where symbols are imported from '../index') to reference '../index.js' so ESM loaders (non-Bun) resolve the module consistently.

src/utils/autonomyPersistence.ts (1)
40-40: Consider logging unlock failures for debugging.

Silently swallowing unlock errors (`catch(() => {})`) may hide issues during debugging. A non-blocking log could help diagnose stale lock problems.

Proposed enhancement:
```diff
-  await unlock().catch(() => {})
+  await unlock().catch((e) => {
+    // Log but don't throw - lock may already be released
+    console.debug?.('[autonomyPersistence] unlock failed:', e)
+  })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/autonomyPersistence.ts` at line 40, The catch on await unlock() currently swallows all errors; update the call in autonomyPersistence.ts to log failures non-blockingly instead of ignoring them—e.g., replace await unlock().catch(() => {}) with a pattern that catches the error and emits a warning (using the existing logger or console.warn) including context like "unlock failed" and the error, so unlock() failures are recorded for debugging while not rethrowing.

src/assistant/index.ts (1)
45-54: Synchronous file I/O is acceptable here, but consider async for future scale.
`readFileSync` is fine for startup config loading, but if this function is called on hot paths or multiple times, an async version with caching may be preferable.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/assistant/index.ts` around lines 45 - 54, The current getAssistantSystemPromptAddendum uses synchronous readFileSync which is fine at startup but should be converted to an async, cached implementation for hot paths: change getAssistantSystemPromptAddendum to an async function (or add getAssistantSystemPromptAddendumAsync) that uses fs.promises.readFile(join(getClaudeConfigHomeDir(), 'agents', 'assistant.md'), 'utf-8'), store the result in a module-level cache variable (e.g. cachedAssistantAddendum) so subsequent calls return the cached string, and on any read error return '' (or resolve to ''). Update callers to await the async function as needed.

src/daemon/state.ts (2)
45-51: Consider validating parsed JSON structure.
`JSON.parse(raw) as DaemonStateData` trusts the file content without validation. Malformed or tampered state files could cause runtime errors in callers expecting valid `DaemonStateData` properties.

Proposed validation:
```diff
 export function readDaemonState(
   name = 'remote-control',
 ): DaemonStateData | null {
   const filePath = getDaemonStateFilePath(name)
   try {
     const raw = readFileSync(filePath, 'utf-8')
-    return JSON.parse(raw) as DaemonStateData
+    const parsed = JSON.parse(raw)
+    // Basic shape validation
+    if (typeof parsed?.pid !== 'number' || typeof parsed?.cwd !== 'string') {
+      return null
+    }
+    return parsed as DaemonStateData
   } catch {
     return null
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/daemon/state.ts` around lines 45 - 51, The code unconditionally casts JSON.parse(raw) to DaemonStateData which can hide malformed or tampered state files; replace that with runtime validation: after parsing (using JSON.parse(raw)), validate the resulting object with a dedicated type guard or schema validator (e.g., isDaemonStateData(obj) or a zod/schema check) that confirms required properties and their types for DaemonStateData, and only return the object when validation passes; if validation fails, return null (or log and return null) instead of casting, and keep references to JSON.parse, readFileSync and the DaemonStateData type to locate and update the parsing logic.
145-156: Verify process termination after SIGKILL.

After sending SIGKILL and waiting 500ms, the function assumes the process is dead and returns `true`. On heavily loaded systems, the kernel may not have reaped the process yet. Consider adding a final liveness check.

Proposed enhancement:
```diff
   // Brief wait for SIGKILL to take effect
   await new Promise(resolve => setTimeout(resolve, 500))
+  // Final verification - if still alive after SIGKILL, something is very wrong
+  if (isProcessAlive(pid)) {
+    console.error(`[daemon] Process ${pid} survived SIGKILL`)
+  }
+
   removeDaemonState(name)
   return true
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/daemon/state.ts` around lines 145 - 156, After sending SIGKILL with process.kill(pid, 'SIGKILL') and awaiting 500ms, add a final liveness check before calling removeDaemonState(name) and returning true: implement a short retry loop (e.g., up to a configurable timeout like another 500–1000ms with small sleeps) that uses a non-destructive check (process.kill(pid, 0) in a try/catch or an OS-specific check) to confirm the PID is gone; only call removeDaemonState(name) and return true if the process is no longer alive, otherwise return false (or surface an error) after the timeout. Ensure the new check is placed after the existing 500ms wait and references the same pid and removeDaemonState(name).

src/daemon/__tests__/state.test.ts (1)
170-184: PID 999999 assumption may rarely fail on some systems.

The test assumes PID 999999 is dead, but Linux allows PIDs up to `pid_max` (default 32768, max 4194304). While unlikely, this could theoretically collide with a real process on systems with high PID values.

More robust approach:
```diff
   test('returns stale when PID is dead and cleans up', () => {
+    // Use a very high PID unlikely to exist; /proc/sys/kernel/pid_max is typically ≤4194304
     writeDaemonState(
       {
-        pid: 999999,
+        pid: 2147483647, // Max 32-bit signed int, exceeds typical pid_max
         cwd: '/',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/daemon/__tests__/state.test.ts` around lines 170 - 184, The test's hardcoded PID 999999 can collide on some systems; instead spawn a short-lived child process, capture its pid, terminate and await its exit, then use that pid when calling writeDaemonState so queryDaemonStatus sees a truly dead PID; update the test to create the child (via child_process.spawn), grab child.pid, call child.kill() and await the 'exit' event before calling writeDaemonState('stale-test'), then assert queryDaemonStatus('stale-test') returns 'stale' and the state file is removed (references: writeDaemonState, queryDaemonStatus, getDaemonStateFilePath).

src/hooks/usePipeMuteSync.ts (1)
91-103: Move `onAbort()` calls outside the state updater.

The `onAbort()` calls inside the `setToolUseConfirmQueue` updater function create a side effect in what should be a pure function. React invokes state updaters twice in StrictMode during development to detect impurities; this would cause the same prompts to be aborted twice.

Extract the abort logic before calling `setState`.

Suggested refactor (assuming the current queue value is in scope as `toolUseConfirmQueue`; a state setter returns `void`, so the abort targets must be computed from the current value rather than from the updater's return):

```typescript
// Compute side-effect targets from the current queue value
const toAbort = toolUseConfirmQueue.filter(
  (item: Record<string, unknown>) => item.pipeName === name,
)

setToolUseConfirmQueue((queue: Record<string, unknown>[]) =>
  queue.filter((item: Record<string, unknown>) => item.pipeName !== name),
)

// Execute side effects after scheduling the state update
for (const item of toAbort) {
  try {
    ;(item.onAbort as (() => void) | undefined)?.()
  } catch {
    // onAbort may throw if client disconnected — safe to ignore
  }
}
```
Verify each finding against the current code and only fix it if needed. In `@src/hooks/usePipeMuteSync.ts` around lines 91 - 103, The abort callback invocations (item.onAbort) are side effects and must be moved out of the setToolUseConfirmQueue updater; instead, determine the items to abort by filtering the current queue for item.pipeName === name (store that result in a local toAbort variable), then call setToolUseConfirmQueue to remove those items (return queue.filter(item => item.pipeName !== name)), and only after the state update iterate over toAbort and call each (item.onAbort) inside try/catch. Ensure you remove the onAbort invocation from inside the updater and reference setToolUseConfirmQueue, onAbort, pipeName, and name when making the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 19b70662-762c-46bc-be1f-561358fdbf41
📒 Files selected for processing (109)
- .gitignore
- 02-kairos (1).md
- build.ts
- docs/features/daemon-restructure-design.md
- docs/features/stub-recovery-design-1-4.md
- docs/task/task-001-daemon-status-stop.md
- docs/task/task-002-bg-sessions-ps-logs-kill.md
- docs/task/task-003-templates-job-mvp.md
- docs/task/task-004-assistant-session-attach.md
- docs/task/task-013-bg-engine-abstraction.md
- docs/task/task-014-daemon-command-hierarchy.md
- docs/task/task-015-job-command-hierarchy.md
- docs/task/task-016-backward-compat-tests.md
- docs/test-plans/openclaw-autonomy-baseline.md
- packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts
- packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts
- scripts/dev.ts
- src/__tests__/context.baseline.test.ts
- src/assistant/AssistantSessionChooser.ts
- src/assistant/AssistantSessionChooser.tsx
- src/assistant/gate.ts
- src/assistant/index.ts
- src/assistant/sessionDiscovery.ts
- src/cli/bg.ts
- src/cli/bg/__tests__/detached.test.ts
- src/cli/bg/__tests__/engine.test.ts
- src/cli/bg/__tests__/tail.test.ts
- src/cli/bg/engine.ts
- src/cli/bg/engines/detached.ts
- src/cli/bg/engines/index.ts
- src/cli/bg/engines/tmux.ts
- src/cli/bg/tail.ts
- src/cli/handlers/ant.ts
- src/cli/handlers/templateJobs.ts
- src/cli/print.ts
- src/cli/rollback.ts
- src/cli/up.ts
- src/commands.ts
- src/commands/__tests__/autonomy.test.ts
- src/commands/__tests__/proactive.baseline.test.ts
- src/commands/assistant/assistant.ts
- src/commands/assistant/assistant.tsx
- src/commands/assistant/gate.ts
- src/commands/autonomy.ts
- src/commands/daemon/__tests__/daemon.test.ts
- src/commands/daemon/daemon.tsx
- src/commands/daemon/index.ts
- src/commands/init.ts
- src/commands/job/__tests__/job.test.ts
- src/commands/job/index.ts
- src/commands/job/job.tsx
- src/commands/lang/index.ts
- src/commands/lang/lang.ts
- src/commands/send/send.ts
- src/commands/torch.ts
- src/daemon/__tests__/daemonMain.test.ts
- src/daemon/__tests__/state.test.ts
- src/daemon/main.ts
- src/daemon/state.ts
- src/entrypoints/cli.tsx
- src/hooks/useAwaySummary.ts
- src/hooks/useMasterMonitor.ts
- src/hooks/usePipeIpc.ts
- src/hooks/usePipeMuteSync.ts
- src/hooks/usePipePermissionForward.ts
- src/hooks/usePipeRelay.ts
- src/hooks/useScheduledTasks.ts
- src/jobs/__tests__/classifier.test.ts
- src/jobs/__tests__/state.test.ts
- src/jobs/__tests__/templates.test.ts
- src/jobs/classifier.ts
- src/jobs/state.ts
- src/jobs/templates.ts
- src/main.tsx
- src/proactive/__tests__/state.baseline.test.ts
- src/proactive/useProactive.ts
- src/screens/REPL.tsx
- src/services/analytics/growthbook.ts
- src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts
- src/services/api/openai/__tests__/queryModelOpenAI.test.ts
- src/services/api/openai/__tests__/streamAdapter.test.ts
- src/services/awaySummary.ts
- src/services/langfuse/__tests__/langfuse.isolated.ts
- src/services/langfuse/__tests__/langfuse.test.ts
- src/tasks/InProcessTeammateTask/InProcessTeammateTask.tsx
- src/tasks/InProcessTeammateTask/types.ts
- src/types/textInputTypes.ts
- src/utils/__tests__/autonomyAuthority.test.ts
- src/utils/__tests__/autonomyRuns.test.ts
- src/utils/__tests__/cronScheduler.baseline.test.ts
- src/utils/__tests__/cronTasks.baseline.test.ts
- src/utils/__tests__/language.test.ts
- src/utils/__tests__/pipeMuteState.test.ts
- src/utils/__tests__/taskSummary.test.ts
- src/utils/autonomyAuthority.ts
- src/utils/autonomyFlows.ts
- src/utils/autonomyPersistence.ts
- src/utils/autonomyRuns.ts
- src/utils/config.ts
- src/utils/handlePromptSubmit.ts
- src/utils/language.ts
- src/utils/pipeMuteState.ts
- src/utils/pipePermissionRelay.ts
- src/utils/pipeTransport.ts
- src/utils/swarm/inProcessRunner.ts
- src/utils/swarm/spawnInProcess.ts
- src/utils/taskSummary.ts
- tests/mocks/file-system.ts
- tsconfig.json
💤 Files with no reviewable changes (3)
- src/assistant/AssistantSessionChooser.ts
- src/hooks/useAwaySummary.ts
- src/commands/assistant/assistant.ts
✅ Files skipped from review due to trivial changes (29)
- .gitignore
- src/cli/bg/__tests__/tail.test.ts
- src/cli/bg/__tests__/detached.test.ts
- src/services/awaySummary.ts
- src/commands/job/__tests__/job.test.ts
- src/hooks/usePipePermissionForward.ts
- docs/task/task-001-daemon-status-stop.md
- src/commands/daemon/__tests__/daemon.test.ts
- src/commands/init.ts
- src/commands/send/send.ts
- docs/task/task-016-backward-compat-tests.md
- tsconfig.json
- src/hooks/usePipeIpc.ts
- docs/task/task-004-assistant-session-attach.md
- docs/task/task-002-bg-sessions-ps-logs-kill.md
- src/utils/__tests__/cronScheduler.baseline.test.ts
- src/utils/__tests__/pipeMuteState.test.ts
- src/commands/job/index.ts
- docs/task/task-003-templates-job-mvp.md
- src/cli/bg/engine.ts
- src/utils/__tests__/taskSummary.test.ts
- src/commands/__tests__/proactive.baseline.test.ts
- src/utils/__tests__/cronTasks.baseline.test.ts
- src/utils/language.ts
- docs/task/task-014-daemon-command-hierarchy.md
- docs/task/task-015-job-command-hierarchy.md
- src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts
- src/utils/__tests__/autonomyRuns.test.ts
- src/cli/bg.ts
🚧 Files skipped from review as they are similar to previous changes (44)
- scripts/dev.ts
- build.ts
- src/assistant/gate.ts
- src/cli/bg/__tests__/engine.test.ts
- src/commands/assistant/gate.ts
- src/commands/daemon/index.ts
- src/commands/lang/index.ts
- src/assistant/sessionDiscovery.ts
- tests/mocks/file-system.ts
- src/services/api/openai/__tests__/streamAdapter.test.ts
- src/utils/config.ts
- src/cli/bg/engines/index.ts
- src/cli/up.ts
- src/tasks/InProcessTeammateTask/types.ts
- src/commands/daemon/daemon.tsx
- src/commands/torch.ts
- src/commands/job/job.tsx
- src/jobs/classifier.ts
- src/cli/bg/engines/detached.ts
- src/utils/swarm/spawnInProcess.ts
- src/cli/bg/tail.ts
- src/commands/__tests__/autonomy.test.ts
- src/assistant/AssistantSessionChooser.tsx
- src/daemon/__tests__/daemonMain.test.ts
- src/hooks/useMasterMonitor.ts
- src/jobs/__tests__/classifier.test.ts
- src/utils/taskSummary.ts
- packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts
- src/jobs/templates.ts
- src/cli/bg/engines/tmux.ts
- src/jobs/__tests__/state.test.ts
- src/__tests__/context.baseline.test.ts
- 02-kairos (1).md
- src/utils/pipeMuteState.ts
- src/hooks/useScheduledTasks.ts
- src/jobs/state.ts
- src/types/textInputTypes.ts
- src/cli/rollback.ts
- src/proactive/useProactive.ts
- src/tasks/InProcessTeammateTask/InProcessTeammateTask.tsx
- src/utils/autonomyAuthority.ts
- src/hooks/usePipeRelay.ts
- src/utils/pipeTransport.ts
- src/utils/pipePermissionRelay.ts
> [src/daemon/main.ts](</e:/Source_code/Claude-code-bast/src/daemon/main.ts:1>)
> [src/daemon/workerRegistry.ts](</e:/Source_code/Claude-code-bast/src/daemon/workerRegistry.ts:1>)
> - `status` / `stop` are currently placeholder output only:
>   [src/daemon/main.ts](</e:/Source_code/Claude-code-bast/src/daemon/main.ts:49>)
> - `/remote-control-server` has its own in-command UI state, but it only tracks the current process's `daemonProcess`, which makes it unsuitable as a basis for cross-process CLI management:
>   [src/commands/remoteControlServer/remoteControlServer.tsx](</e:/Source_code/Claude-code-bast/src/commands/remoteControlServer/remoteControlServer.tsx:32>)
🛠️ Refactor suggestion | 🟠 Major
Use repository-relative paths instead of absolute Windows paths.
The file path references use absolute Windows paths (e.g., e:/Source_code/Claude-code-bast/...), which are not portable and will break for other developers or CI environments.
📝 Recommended fix: use repository-relative paths
Replace absolute paths with repository-relative references:
```diff
-- [src/daemon/main.ts](</e:/Source_code/Claude-code-bast/src/daemon/main.ts:1>)
-- [src/daemon/workerRegistry.ts](</e:/Source_code/Claude-code-bast/src/daemon/workerRegistry.ts:1>)
+- [src/daemon/main.ts](../../src/daemon/main.ts)
+- [src/daemon/workerRegistry.ts](../../src/daemon/workerRegistry.ts)
```

Apply the same pattern to all other file references throughout the document.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/features/stub-recovery-design-1-4.md` around lines 19 - 24, Replace the
absolute Windows file references (strings like
"e:/Source_code/Claude-code-bast/…") in the document with repository-relative
paths (e.g., "src/daemon/main.ts", "src/daemon/workerRegistry.ts",
"src/commands/remoteControlServer/remoteControlServer.tsx") so links are
portable; search the doc for any occurrences of the absolute-root pattern and
update them to use relative references or markdown-style repository-relative
links consistently.
```ts
import { getRecentActivity } from '../../utils/logoV2Utils.js'
import type { LogOption } from '../../types/logs.js'

const DEFAULT_LIST = 'default'
```
The fallback task list name is wrong.
Line 12 defaults every task command here to "default", but the CLI help in src/main.tsx documents "tasklist" as the default list. Without -l, these handlers will read and write a different directory than the rest of the task flow.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cli/handlers/ant.ts` at line 12, The DEFAULT_LIST constant in
src/cli/handlers/ant.ts is set to 'default' but should match the CLI help's
documented fallback 'tasklist'; update the DEFAULT_LIST value from 'default' to
'tasklist' so all task command handlers (the constant DEFAULT_LIST used in the
ant handlers) read/write the same directory as the rest of the task flow and
ensure no other places rely on the old string.
```ts
export async function errorHandler(num: number | undefined): Promise<void> {
  // Error log viewing — shows recent session errors
  const logs = await getRecentActivity()
  const count = num ?? 5

  console.log(`Last ${count} sessions:`)
  for (let i = 0; i < Math.min(count, logs.length); i++) {
    const log = logs[i]!
    const date = log.modified
      ? new Date(log.modified).toLocaleString()
      : 'unknown'
    console.log(`  ${i}: ${log.sessionId} (${date})`)
  }
```
claude error isn't reading error logs.
This handler only lists recent sessions from getRecentActivity(). It never loads any error-log source, so the command output doesn't match its contract.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cli/handlers/ant.ts` around lines 147 - 159, The errorHandler currently
only lists recent sessions via getRecentActivity() but never reads any error
logs; update errorHandler to actually load and print error entries: keep using
getRecentActivity() to list sessions but for each sessionId call the appropriate
log-loading helper (e.g., readErrorLog(sessionId) or
getErrorLogsForSession(sessionId) — create one if it doesn't exist) to fetch the
error records and output their timestamps and messages (falling back to
log.modified === undefined => 'unknown' as before); ensure you respect the count
parameter, handle missing logs gracefully, and include the error details (not
just sessionId) in the console output.
```ts
export async function exportHandler(
  source: string,
  outputFile: string,
): Promise<void> {
  const { writeFile, readFile } = await import('fs/promises')
  const logs = await getRecentActivity()

  // Try as index first
  const idx = parseInt(source, 10)
  let log: LogOption | undefined
  if (Number.isFinite(idx) && idx >= 0 && idx < logs.length) {
    log = logs[idx]
  } else {
    log = logs.find(l => l.sessionId === source)
  }

  if (!log) {
    // Try as file path
    try {
      const content = await readFile(source, 'utf-8')
      await writeFile(outputFile, content, 'utf-8')
      console.log(`Exported ${source} → ${outputFile}`)
      return
    } catch {
      console.error(`Source not found: ${source}`)
      process.exitCode = 1
      return
    }
  }

  await writeFile(outputFile, JSON.stringify(log, null, 2), 'utf-8')
  console.log(`Exported session ${log.sessionId} → ${outputFile}`)
```
export currently emits metadata JSON, not a text conversation export.
When source matches a session, Lines 192-193 write JSON.stringify(log, null, 2). That serializes the lightweight LogOption, not the conversation transcript the command description/examples promise.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cli/handlers/ant.ts` around lines 162 - 193, The exportHandler currently
writes JSON.stringify(log) (LogOption) which emits metadata instead of the
conversation transcript; change the handler so when a session match is found
(exportHandler, using logs from getRecentActivity and LogOption), it loads the
full session transcript instead of serializing the lightweight LogOption — e.g.,
if LogOption contains a file/path property, read that file with readFile;
otherwise call the existing session retrieval function (e.g.,
getConversation/getLogBySessionId or similar) to fetch the transcript text, then
write that text with writeFile and keep the console message `Exported session
${log.sessionId} → ${outputFile}`; also handle read/write errors and set
process.exitCode=1 on failure.
```ts
export async function completionHandler(
  shell: string,
  opts: { output?: string },
  _program: Command,
): Promise<void> {
  const { regenerateCompletionCache } = await import(
    '../../utils/completionCache.js'
  )

  if (opts.output) {
    // Generate and write to file
    await regenerateCompletionCache()
    console.log(`Completion cache regenerated for ${shell}.`)
  } else {
    // Regenerate and output to stdout
    await regenerateCompletionCache()
    console.log(`Completion cache regenerated for ${shell}.`)
  }
```
The completion command never produces completion output.
Both branches just regenerate the cache and print a status message. shell, opts.output, and _program are otherwise unused, so claude completion <shell> cannot write a script to stdout or a file as advertised.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/cli/handlers/ant.ts` around lines 198 - 215, The completionHandler
currently only calls regenerateCompletionCache and prints a status message, so
invoking completionHandler(shell, opts, _program) never emits a completion
script; update completionHandler to (1) call await regenerateCompletionCache(),
(2) generate the completion script for the requested shell using the _program
object (use the CLI library method on _program that returns the completion
script string for the given shell), and (3) if opts.output is set write that
script to the file path in opts.output, otherwise write the script to stdout;
keep the existing status logging but ensure shell, opts.output, and _program are
actually used to produce the script.
```ts
// LOCAL_GATE_DEFAULTS take priority over remote values and disk cache.
// In fork/self-hosted deployments, the GrowthBook server may push false
// for gates we intentionally enable. Local defaults represent the
// project's intentional configuration and override everything except
// env/config overrides (which are explicit user intent).
const localDefault = getLocalGateDefault(feature)
if (localDefault !== undefined) {
  return localDefault as T
}
```
Preserve exposure tracking before local-default short-circuit.
At Line 839, the early return now bypasses Lines 844-849, so locally-defaulted gates never enter pendingExposures; experiment exposures for those gates won’t be logged after init.
💡 Suggested fix
```diff
-  const localDefault = getLocalGateDefault(feature)
-  if (localDefault !== undefined) {
-    return localDefault as T
-  }
-
   // Log experiment exposure if data is available, otherwise defer until after init
   if (experimentDataByFeature.has(feature)) {
     logExposureForFeature(feature)
   } else {
     pendingExposures.add(feature)
   }
+
+  const localDefault = getLocalGateDefault(feature)
+  if (localDefault !== undefined) {
+    return localDefault as T
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/analytics/growthbook.ts` around lines 834 - 842, The early
return when getLocalGateDefault(feature) !== undefined skips the
exposure-tracking logic that adds entries to pendingExposures; before returning
the local default you must run the same exposure capture used for remote/disk
values so experiment exposures are logged after init—i.e., when localDefault is
defined, invoke the existing pendingExposures-addition/recording logic (the same
code path used for non-local results) for the given feature and context, then
return localDefault as T.
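The ordering the suggested fix establishes reduces to the following self-contained sketch. `pendingExposures`, `getLocalGateDefault`, and the gate-lookup shape mirror names from the finding; the rest of the GrowthBook machinery is stubbed out for illustration.

```typescript
// Exposure bookkeeping must run before the local-default early return,
// otherwise locally-defaulted gates never enter the deferred-exposure set.
const pendingExposures = new Set<string>()
const localGateDefaults: Record<string, boolean> = { myGate: true }

function getLocalGateDefault(feature: string): boolean | undefined {
  return localGateDefaults[feature]
}

function getGateValue(feature: string, remoteValue: boolean): boolean {
  // Record the exposure unconditionally (deferred until after init).
  pendingExposures.add(feature)

  const localDefault = getLocalGateDefault(feature)
  if (localDefault !== undefined) {
    return localDefault // short-circuit happens only after bookkeeping
  }
  return remoteValue
}
```

The early return still wins for the gate's value; only the side effect moves ahead of it.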
```ts
test('returns true when keys are configured', async () => {
  process.env.LANGFUSE_PUBLIC_KEY = 'pk-test'
  process.env.LANGFUSE_SECRET_KEY = 'sk-test'
  const { isLangfuseEnabled } = await import('../client.js')
  expect(isLangfuseEnabled()).toBe(true)
})
```
initLangfuse success-path test is asserting the wrong function.
On Line 217-Line 219, this test verifies isLangfuseEnabled() rather than initLangfuse(), so initLangfuse’s enabled-path behavior is not actually covered.
Minimal fix
```diff
 test('returns true when keys are configured', async () => {
   process.env.LANGFUSE_PUBLIC_KEY = 'pk-test'
   process.env.LANGFUSE_SECRET_KEY = 'sk-test'
-  const { isLangfuseEnabled } = await import('../client.js')
-  expect(isLangfuseEnabled()).toBe(true)
+  const { initLangfuse } = await import('../client.js')
+  expect(initLangfuse()).toBe(true)
 })
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
test('returns true when keys are configured', async () => {
  process.env.LANGFUSE_PUBLIC_KEY = 'pk-test'
  process.env.LANGFUSE_SECRET_KEY = 'sk-test'
  const { initLangfuse } = await import('../client.js')
  expect(initLangfuse()).toBe(true)
})
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/langfuse/__tests__/langfuse.isolated.ts` around lines 214 - 219,
The test currently imports and asserts isLangfuseEnabled() instead of exercising
the init path; update the test to import initLangfuse from ../client.js, call
await initLangfuse() after setting process.env.LANGFUSE_PUBLIC_KEY and
LANGFUSE_SECRET_KEY, and assert the result is truthy (or true) to cover the
enabled success-path for initLangfuse; keep the existing env setup and remove or
replace the isLangfuseEnabled() assertion.
```ts
export async function listAutonomyFlows(
  rootDir: string = getProjectRoot(),
): Promise<AutonomyFlowRecord[]> {
  try {
    const raw = (await getFsImplementation().readFile(
      resolveAutonomyFlowsPath(rootDir),
      {
        encoding: 'utf-8',
      },
    )) as string
    const parsed = JSON.parse(raw) as Partial<AutonomyFlowsFile>
    if (!Array.isArray(parsed.flows)) {
      return []
    }
    return parsed.flows
      .filter((flow): flow is AutonomyFlowRecord => {
        return Boolean(
          flow &&
            typeof flow.flowId === 'string' &&
            typeof flow.flowKey === 'string' &&
            typeof flow.trigger === 'string' &&
            typeof flow.status === 'string' &&
            typeof flow.rootDir === 'string' &&
            typeof flow.createdAt === 'number' &&
            typeof flow.updatedAt === 'number',
        )
      })
      .map(normalizeFlowRecord)
      .sort((left, right) => right.updatedAt - left.updatedAt)
  } catch {
    return []
  }
```
Don't collapse read/parse failures into an empty flow store.
This helper feeds the start/update paths that write flows.json back out. If the file is malformed or temporarily unreadable, returning [] here causes the next write to overwrite the entire store with only the new snapshot.
🛡️ Possible fix
```diff
-  } catch {
-    return []
+  } catch (error) {
+    if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
+      return []
+    }
+    throw error
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/autonomyFlows.ts` around lines 328 - 359, The current
listAutonomyFlows function swallows all read/parse errors and returns an empty
array, which can cause writes to overwrite the entire flows store; change the
error handling in listAutonomyFlows so that read/JSON parse failures are not
collapsed into [] — instead surface the error (rethrow or throw a descriptive
error) so callers can handle it; locate listAutonomyFlows and update its catch
block (and any use of getFsImplementation(), resolveAutonomyFlowsPath,
normalizeFlowRecord if needed) to propagate the exception rather than returning
an empty list.
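The narrower error handling suggested above boils down to this pattern: only a missing file maps to an empty store, while every other read or parse failure propagates so a later write cannot clobber the real file. The helper below is an illustrative stand-in, not the module's actual API; the file read is injected so the logic is self-contained.

```typescript
// Sketch: distinguish "store does not exist yet" from "store is unreadable".
function readFlowsOrEmpty(read: () => string): unknown[] {
  let raw: string
  try {
    raw = read()
  } catch (error) {
    if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
      return [] // no flows file yet — starting empty is safe
    }
    throw error // an unreadable store must not be silently replaced
  }
  // JSON.parse failures propagate too: a corrupt store is not an empty one.
  const parsed = JSON.parse(raw) as { flows?: unknown[] }
  return Array.isArray(parsed.flows) ? parsed.flows : []
}
```

A caller that writes the store back out can then treat a thrown error as "do not write", instead of persisting an accidentally emptied snapshot.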
```ts
commitPreparedAutonomyTurn(params.prepared)
const value = params.prepared.prompt
const run = await createAutonomyRun({
  trigger: params.prepared.trigger,
  prompt: value,
  rootDir,
  currentDir,
  sourceId: params.sourceId,
  sourceLabel: params.sourceLabel,
  flow: params.flow,
})
```
Persist the run before committing the prepared autonomy turn.
commitPreparedAutonomyTurn() runs before createAutonomyRun(). If the run write fails, the prepared turn has already been consumed, so the work is lost without any queued command to recover.
🧯 Safer ordering
```diff
-  commitPreparedAutonomyTurn(params.prepared)
   const value = params.prepared.prompt
   const run = await createAutonomyRun({
     trigger: params.prepared.trigger,
     prompt: value,
     rootDir,
     currentDir,
     sourceId: params.sourceId,
     sourceLabel: params.sourceLabel,
     flow: params.flow,
   })
+  commitPreparedAutonomyTurn(params.prepared)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const value = params.prepared.prompt
const run = await createAutonomyRun({
  trigger: params.prepared.trigger,
  prompt: value,
  rootDir,
  currentDir,
  sourceId: params.sourceId,
  sourceLabel: params.sourceLabel,
  flow: params.flow,
})
commitPreparedAutonomyTurn(params.prepared)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/autonomyRuns.ts` around lines 659 - 669, The current code calls
commitPreparedAutonomyTurn(...) before persisting the run with
createAutonomyRun(...), which can lose the prepared turn if the run write fails;
change the ordering so you first call createAutonomyRun({ trigger:
params.prepared.trigger, prompt: params.prepared.prompt, rootDir, currentDir,
sourceId: params.sourceId, sourceLabel: params.sourceLabel, flow: params.flow })
and only after that succeeds call commitPreparedAutonomyTurn(params.prepared);
additionally wrap createAutonomyRun in error handling (try/catch or promise
rejection handling) so that on failure you do not call
commitPreparedAutonomyTurn and you surface/log the error accordingly.
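The safer ordering amounts to a write-ahead pattern: persist first, consume second. The sketch below uses hypothetical stand-ins (`Prepared`, `startRun`, the injected `persistRun`) rather than the real module API, but it shows why the prepared turn survives a failed write.

```typescript
// Persist first, consume second: if the write fails, the prepared turn
// stays uncommitted and can be retried or re-queued by the caller.
type Prepared = { prompt: string; committed: boolean }

async function startRun(
  prepared: Prepared,
  persistRun: (prompt: string) => Promise<string>,
): Promise<string> {
  const runId = await persistRun(prepared.prompt) // may throw — nothing consumed yet
  prepared.committed = true // commit only after the run record exists
  return runId
}
```

With the original ordering, a throwing `persistRun` would leave `committed` already true and the prompt gone; here the failure simply surfaces to the caller.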
Force-pushed from 559e65f to 1acabda
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts (1)
27-27: ⚠️ Potential issue | 🟡 Minor
Return type mismatch: `error` property not declared in `PushOutput`.
`PushOutput` is typed as `{ sent: boolean }`, but line 131 returns an object with an additional `error` property. This creates a type inconsistency. Update the type definition to include the optional error field.

```diff
-type PushOutput = { sent: boolean }
+type PushOutput = { sent: boolean; error?: string }
```

Also applies to: 131-131
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts` at line 27, The PushOutput type currently declares only { sent: boolean } but code (e.g., the return in PushNotificationTool where an object with an additional error property is returned) includes an error field; update the PushOutput type to include an optional error property (e.g., error?: string) so returns from the send/execute method in PushNotificationTool conform to the type, and adjust any other return sites to either populate or omit the optional error field accordingly.

packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts (1)
21-21: ⚠️ Potential issue | 🟡 Minor
Return type mismatch: `size`, `file_uuid`, and `error` properties not declared in `SendUserFileOutput`.
The type definition includes only `{ sent: boolean; file_path: string }`, but the actual return object includes `size`, `file_uuid`, and `error` fields. Update the interface to reflect the complete shape.

```diff
-type SendUserFileOutput = { sent: boolean; file_path: string }
+interface SendUserFileOutput {
+  sent: boolean
+  file_path: string
+  size?: number
+  file_uuid?: string
+  error?: string
+}
```

Also applies to: 112-118
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts` at line 21, Update the SendUserFileOutput type to match the actual returned object shape: include optional or required properties for size (number), file_uuid (string), and error (string | null) alongside sent (boolean) and file_path (string); locate the SendUserFileOutput type declaration and the SendUserFileTool/send method that returns this object and ensure the return signature and any places that construct the return value are updated to use the amended SendUserFileOutput type so TypeScript types align with the actual returned fields.

src/utils/swarm/inProcessRunner.ts (2)
1297-1300: ⚠️ Potential issue | 🟠 Major
Fail the in-flight autonomy run on lifecycle abort.
This `break` skips the later complete/fail bookkeeping, so killing the teammate during an autonomy turn leaves `currentAutonomyRunId` stuck in `running`. Mark it failed before exiting the loop.

🛠️ Minimal fix

```diff
 // Check if lifecycle aborted during agent run (kills whole teammate)
 if (abortController.signal.aborted) {
+  if (currentAutonomyRunId) {
+    await markAutonomyRunFailed(
+      currentAutonomyRunId,
+      ERROR_MESSAGE_USER_ABORT,
+    )
+    currentAutonomyRunId = undefined
+  }
   break
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/swarm/inProcessRunner.ts` around lines 1297 - 1300, The loop currently breaks on lifecycle abort (checking abortController.signal.aborted) without performing completion/failure bookkeeping, leaving currentAutonomyRunId stuck as "running"; update the abort path in the inProcessRunner loop to call the same failure/completion handler used elsewhere (mark the run failed) before breaking out—use the existing currentAutonomyRunId and the run-failure helper (the function or method that normally sets status to failed/completed in the autonomy bookkeeping) so the run is explicitly marked failed when abortController.signal.aborted is true, then exit the loop.
719-748: ⚠️ Potential issue | 🟠 Major
Don't dequeue the pending prompt before the run transitions to `running`.
The message is removed from `pendingUserMessages` before `markAutonomyRunRunning()` succeeds. If that await throws, the prompt is gone and `currentAutonomyRunId` is never set, so the autonomy run is orphaned. Move the dequeue after a successful transition, or requeue on failure.

🛠️ Safer ordering

```diff
 const pending = task.pendingUserMessages[0]! // Safe: checked length > 0
-// Pop the message from the queue
-setAppState(prev => {
-  const prevTask = prev.tasks[taskId]
-  if (!prevTask || prevTask.type !== 'in_process_teammate') {
-    return prev
-  }
-  return {
-    ...prev,
-    tasks: {
-      ...prev.tasks,
-      [taskId]: {
-        ...prevTask,
-        pendingUserMessages: prevTask.pendingUserMessages.slice(1),
-      },
-    },
-  }
-})
-logForDebugging(
-  `[inProcessRunner] ${identity.agentName} found pending user message (poll #${pollCount})`,
-)
 if (pending.autonomyRunId) {
   await markAutonomyRunRunning(pending.autonomyRunId)
 }
+setAppState(prev => {
+  const prevTask = prev.tasks[taskId]
+  if (!prevTask || prevTask.type !== 'in_process_teammate') {
+    return prev
+  }
+  return {
+    ...prev,
+    tasks: {
+      ...prev.tasks,
+      [taskId]: {
+        ...prevTask,
+        pendingUserMessages: prevTask.pendingUserMessages.slice(1),
+      },
+    },
+  }
+})
+logForDebugging(
+  `[inProcessRunner] ${identity.agentName} found pending user message (poll #${pollCount})`,
+)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/swarm/inProcessRunner.ts` around lines 719 - 748, The pending user message is dequeued from task.pendingUserMessages via setAppState before markAutonomyRunRunning(pending.autonomyRunId) completes, which can orphan an autonomy run if that await throws; change the flow in inProcessRunner so you first call await markAutonomyRunRunning(...) (and handle/throw on error) and only after it succeeds update state to remove the head of pendingUserMessages for the taskId, or if you prefer keep the current order then requeue the message inside a catch of markAutonomyRunRunning; ensure you still return the same { type: 'new_message', message: ..., autonomyRunId: ..., from: 'user' } after successful transition and reference the same symbols (pendingUserMessages, setAppState, markAutonomyRunRunning, taskId) when making the change.
♻️ Duplicate comments (11)
packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts (2)
117-125: ⚠️ Potential issue | 🟠 Major
Bridge failure still falls through to misleading "No Remote Control bridge configured" error.
When the bridge request returns a non-2xx status (line 121) or throws an exception (line 124), the code logs the failure but falls through to line 131, returning a message that implies no bridge was configured—which is false when the bridge existed but delivery failed. Return distinct error payloads for these cases.

Proposed fix

```diff
 if (response.status >= 200 && response.status < 300) {
-  logForDebugging(`[PushNotification] delivered via bridge session=${sessionId}`)
+  logForDebugging('[PushNotification] delivered via bridge')
   return { data: { sent: true } }
 }
-logForDebugging(`[PushNotification] bridge delivery failed: status=${response.status}`)
+logForDebugging('[PushNotification] bridge delivery failed')
+return {
+  data: {
+    sent: false,
+    error: `Bridge delivery failed (status ${response.status}).`,
+  },
+}
 } catch (e) {
-  logForDebugging(`[PushNotification] bridge delivery error: ${e}`)
+  logForDebugging('[PushNotification] bridge delivery error')
+  return {
+    data: {
+      sent: false,
+      error: 'Bridge request failed. Notification not delivered.',
+    },
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts` around lines 117 - 125, The bridge delivery branch in PushNotificationTool currently logs failures but falls through to the generic "No Remote Control bridge configured" response; update the send-via-bridge logic (around logForDebugging, response, sessionId handling in PushNotificationTool) so that a non-2xx response returns a distinct error payload (e.g., { data: { sent: false, error: 'bridge_delivery_failed', status: response.status } }) and exceptions caught in the catch block return a distinct payload (e.g., { data: { sent: false, error: 'bridge_delivery_error', message: e?.message || String(e) } }) instead of falling through to the "no bridge configured" message, ensuring the function returns immediately after these bridge-failure responses.
118-118: ⚠️ Potential issue | 🟠 Major
Debug logs expose potentially sensitive data.
Line 118 logs `sessionId`, line 124 logs the raw exception which may contain request/response details, and line 130 logs the notification `title`. These could leak user or session metadata. Redact sensitive values from debug output.

Also applies to: 124-124, 130-130
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts` at line 118, The debug logs in PushNotificationTool expose sensitive values (sessionId, raw caught error, and notification title) — update the logForDebugging calls inside PushNotificationTool (where sessionId, the caught error, and title are currently logged) to log sanitized values: redact or truncate sessionId (e.g., show only last 4 chars or hashed), replace the raw error with a sanitized summary that strips request/response/PII fields (use a helper like sanitizeError or maskSensitive), and mask the notification title (e.g., replace content with “[REDACTED]” or a safe truncated version); add or reuse small helpers (sanitizeText/sanitizeError/maskSessionId) and call them before passing data to logForDebugging so no raw sensitive data is emitted.

src/main.tsx (1)
1805-1807: ⚠️ Potential issue | 🟠 Major
Settings-based assistant activation is still skipped.
This predicate still ignores `getInitialSettings().assistant`, so a trusted repo with `assistant: true` in settings will never enter the assistant-mode init path unless the hidden `--assistant` flag is present. That breaks the documented settings-driven activation flow.

Suggested fix

```diff
 if (
   feature("KAIROS") &&
   assistantModule &&
   (assistantModule.isAssistantForced() ||
+    getInitialSettings().assistant === true ||
     (options as Record<string, unknown>).assistant === true) &&
   !(options as { agentId?: unknown }).agentId &&
   kairosGate
 ) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main.tsx` around lines 1805 - 1807, The assistant-mode activation predicate currently checks assistantModule.isAssistantForced() and (options as Record<string, unknown>).assistant but omits the persisted setting from getInitialSettings(); update the condition to also consult getInitialSettings().assistant (e.g., const initial = getInitialSettings(); ...) so the combined boolean becomes assistantModule && (assistantModule.isAssistantForced() || (options as Record<string, unknown>).assistant === true || initial?.assistant === true); ensure you call getInitialSettings() only where available and handle possible undefined return safely.src/cli/handlers/ant.ts (4)
198-215: ⚠️ Potential issue | 🟠 Major
The completion command still doesn't emit a completion script.
Both branches only regenerate the cache and print a message; `shell`, `opts.output`, and `_program` never contribute to producing stdout or file output.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/handlers/ant.ts` around lines 198 - 215, The completionHandler currently only calls regenerateCompletionCache and prints a message but never generates or writes a shell completion script using the provided shell, opts.output, or _program; update completionHandler to (1) use _program (the Command instance) to generate the completion script for the requested shell, (2) if opts.output is set write the generated script to that file path, otherwise write the script to stdout, and (3) still call regenerateCompletionCache() as needed; reference the completionHandler function, the regenerateCompletionCache import, the shell parameter, opts.output, and the _program Command to locate where to generate and route the completion script.
12-12: ⚠️ Potential issue | 🟠 Major
The fallback task list is still wrong.
`DEFAULT_LIST` is `"default"` here, so every task command without `--list` is routed to a different directory than the documented `"tasklist"` flow.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/handlers/ant.ts` at line 12, DEFAULT_LIST is set to "default" but the documented and intended fallback for task commands is "tasklist"; change the DEFAULT_LIST constant from 'default' to 'tasklist' so any command that omits --list routes to the correct directory, and verify the code paths that reference DEFAULT_LIST (the task command handling/dispatch logic) still behave correctly and tests/fixtures expecting the "tasklist" flow are updated or passing.
162-193: ⚠️ Potential issue | 🟠 Major
Session export still writes metadata JSON instead of the conversation.
When `source` matches a session, Line 192 serializes the `LogOption` summary object rather than loading and exporting the actual transcript/content.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/handlers/ant.ts` around lines 162 - 193, exportHandler currently writes the LogOption summary (JSON.stringify(log)) when a session match is found; instead, load and export the actual conversation/transcript for that session. Replace the final writeFile(JSON.stringify(log,...)) step with logic that retrieves the session transcript by sessionId (for example call a session loader like getSession/getTranscript/getSessionById or read the session file referenced on LogOption if it has a path property), then write the transcript content to outputFile using writeFile and keep the same console message using log.sessionId; ensure any async loader errors are caught and set process.exitCode = 1 as done for the file-path branch.
147-159: ⚠️ Potential issue | 🟠 Major
`errorHandler` still never loads error records.
This handler only re-lists recent sessions. It doesn't read any error-log source, so users still don't see the actual failures the command claims to show.

🤖 Prompt for AI Agents
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/handlers/ant.ts` around lines 147 - 159, The current errorHandler only calls getRecentActivity() and re-lists sessions but never fetches any error records; update errorHandler to actually load and display error details by replacing or extending the getRecentActivity() call with a call to the real error-source (e.g. await getErrorRecords() or await getSessionErrors(sessionId) — implement that helper if it doesn't exist), then iterate the returned error items instead of just sessions and print useful fields such as error.message, error.stack (or error.details) and timestamps (using log.modified as fallback), and handle the case of no errors found by printing a friendly message.

src/cli/bg/engines/tmux.ts (1)
26-29: ⚠️ Potential issue | 🟠 Major
Pass `opts.cwd` through to `tmux new-session`.
The tmux branch still ignores the requested working directory, so bg sessions start in the tmux server's default directory instead of the target project.

📁 Suggested change

```diff
 const result = spawnSync(
   'tmux',
-  ['new-session', '-d', '-s', opts.sessionName, cmd],
+  ['new-session', '-d', '-s', opts.sessionName, '-c', opts.cwd, cmd],
   { stdio: 'inherit', env: tmuxEnv },
 )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/bg/engines/tmux.ts` around lines 26 - 29, The tmux launch currently calls spawnSync('tmux', ['new-session', '-d', '-s', opts.sessionName, cmd], ...) but does not pass the requested working directory, so sessions start in the tmux server default; update the spawnSync invocation to forward opts.cwd by adding a cwd property to the options object (i.e., include cwd: opts.cwd alongside env: tmuxEnv) where the spawnSync is called (the tmux new-session code path in src/cli/bg/engines/tmux.ts) so the new tmux session starts in the intended directory.src/daemon/main.ts (1)
246-261: ⚠️ Potential issue | 🟠 Major
State is still published and cleared at the wrong points in the supervisor lifecycle.
`writeDaemonState()` still runs before proving there isn't already a live supervisor, and `removeDaemonState()` still runs before the worker-drain wait. That leaves `start` vulnerable to clobbering a live PID and lets `status`/`stop` report "stopped" while children are still shutting down.

Also applies to: 287-310
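The lifecycle ordering this finding asks for can be sketched with stubbed operations. `superviseOnce` and the `ops` bag below are hypothetical stand-ins for the real supervisor wiring; only the ordering is the point.

```typescript
// Publish state only after confirming no live supervisor exists, and clear
// it only after workers have drained, so status/stop stay truthful.
interface SupervisorOps {
  isSupervisorAlive: () => boolean
  writeDaemonState: () => void
  runWorkers: () => Promise<void>
  drainWorkers: () => Promise<void>
  removeDaemonState: () => void
}

async function superviseOnce(ops: SupervisorOps): Promise<boolean> {
  if (ops.isSupervisorAlive()) {
    return false // never clobber a live supervisor's published PID
  }
  ops.writeDaemonState()
  try {
    await ops.runWorkers()
  } finally {
    await ops.drainWorkers() // children finish shutting down first
    ops.removeDaemonState() // only now does status report "stopped"
  }
  return true
}
```

The two invariants are: the liveness check strictly precedes the state write, and the state removal strictly follows the drain.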
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/daemon/main.ts` around lines 246 - 261, The daemon state is being written and cleared at incorrect times: move the writeDaemonState call so it only runs after you’ve verified there is no existing live supervisor (i.e., after the check that confirms this process will be the active supervisor) instead of before; likewise, change the shutdown flow so removeDaemonState is invoked only after workers finish draining (after the worker drain/wait logic completes) rather than immediately in the shutdown() function. Update references in main.ts to call writeDaemonState after the liveness check that decides to become supervisor and call removeDaemonState at the end of the graceful shutdown sequence (after worker drain/wait), adjusting the shutdown function and any code paths at lines around the existing shutdown/worker wait logic so state is published/cleared at the correct lifecycle points.docs/features/stub-recovery-design-1-4.md (1)
18-24: ⚠️ Potential issue | 🟡 Minor
Replace the hard-coded Windows paths with repository-relative references.
These `e:/Source_code/...` links are still machine-specific, so the document won't be navigable for other contributors or rendered repo views. The same pattern repeats throughout the file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/features/stub-recovery-design-1-4.md` around lines 18 - 24, The document contains hard-coded Windows absolute links (e:/Source_code/Claude-code-bast/...) that break navigation for other contributors; replace each absolute path with repository-relative references such as "src/daemon/main.ts", "src/daemon/workerRegistry.ts", and "src/commands/remoteControlServer/remoteControlServer.tsx" (using standard Markdown link syntax or repo-root-relative paths) so links resolve in the repo view and on other machines, and sweep the entire file to update every occurrence of the e:/Source_code/... pattern.

src/cli/bg.ts (1)
165-169: ⚠️ Potential issue | 🟠 Major
Legacy detached sessions are still excluded from auto-attach.
The no-target filter still only keeps sessions with `tmuxSessionName` or an explicit `engine === 'detached'`. Older detached records that resolve via `resolveSessionEngine(s) === 'detached'` never enter `bgSessions`, so `attach` can still report that there are no background sessions when one exists.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/cli/bg.ts` around lines 165 - 169, The bgSessions filter excludes legacy detached sessions because it only checks s.tmuxSessionName or s.engine === 'detached'; update the filter used when computing bgSessions (the sessions.filter call) to also treat sessions as detached when resolveSessionEngine(s) === 'detached' (e.g., s.tmuxSessionName || s.engine === 'detached' || resolveSessionEngine(s) === 'detached') so legacy records are included for auto-attach.
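The widened filter from the prompt above reduces to pure predicate logic. `SessionEntry` and `resolveSessionEngine` here are simplified stand-ins for the real registry types, included only to make the legacy-record case concrete.

```typescript
// Treat a session as a background session when it has a tmux name, an
// explicit detached engine, or resolves to 'detached' for legacy records
// that predate the engine field.
type SessionEntry = { id: string; tmuxSessionName?: string; engine?: string }

function resolveSessionEngine(s: SessionEntry): 'tmux' | 'detached' {
  if (s.engine === 'tmux' || s.tmuxSessionName) return 'tmux'
  return 'detached' // legacy records without an engine fall back here
}

function bgSessions(sessions: SessionEntry[]): SessionEntry[] {
  return sessions.filter(
    s =>
      Boolean(s.tmuxSessionName) ||
      s.engine === 'detached' ||
      resolveSessionEngine(s) === 'detached',
  )
}
```

The third disjunct is the fix: without it, a record carrying neither `tmuxSessionName` nor an explicit engine field drops out of the list even though attach could tail its log.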
🧹 Nitpick comments (8)
src/services/langfuse/__tests__/langfuse.test.ts (2)
9-16: Consider keeping the heavy-dependency test pattern aligned with repo guidance.
This subprocess-only approach is valid, but it bypasses the project’s preferred pattern for heavy dependencies (`mock.module()` + inline `await import()`). Consider keeping at least one adjacent in-process test using that pattern for consistency and faster fault localization.

As per coding guidelines: Use mock.module() with inline await import() in test files for testing modules with heavy dependencies.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/langfuse/__tests__/langfuse.test.ts` around lines 9 - 16, Add an adjacent in-process test using the project's heavy-dependency pattern: create a test next to "passes in isolated subprocess" that uses mock.module() with an inline await import() to load the module under test instead of spawning a subprocess; reference the same isolatedPath/test behavior but replace Bun.spawn usage with mock.module() and inline await import() so heavy dependencies are mocked in-process and test failures are easier to localize (keep the original subprocess test but add this additional in-process variant).
24-39: Remove redundant success assertion after explicit failure throw.
After throwing on `exitCode !== 0`, the final `expect(exitCode).toBe(0)` is redundant noise.

♻️ Proposed simplification

```diff
@@
-  expect(exitCode).toBe(0)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/services/langfuse/__tests__/langfuse.test.ts` around lines 24 - 39, The test currently throws an Error when exitCode !== 0 and then still contains a redundant assertion expect(exitCode).toBe(0); remove that final expect to avoid duplicate checks — locate the test block that uses the exitCode variable and the throw on non-zero exit code and delete the expect(exitCode).toBe(0) line so the test relies on the explicit throw for failure.packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts (1)
105-107: Consider logging bridge upload failures for debuggability.
The catch block silently swallows exceptions. While this is intentional for best-effort behavior, `PushNotificationTool` logs similar failures via `logForDebugging`. Adding a debug log here would improve consistency and aid troubleshooting.

```diff
 } catch {
-  // Best-effort upload — local path is always available
+  // Best-effort upload — local path is always available
+  logForDebugging('[SendUserFile] bridge upload failed, falling back to local path')
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts` around lines 105 - 107, The catch in SendUserFileTool silently swallows bridge upload errors; change it to catch the error object and call the existing logForDebugging helper (as used by PushNotificationTool) to record the failure. Locate the try/catch in the SendUserFileTool upload routine, update the catch to "catch (err)" and call this.logForDebugging (or the tool's logForDebugging function) with a concise message like "bridge upload failed" plus the error/stack to preserve the best-effort behavior while improving debuggability.src/utils/__tests__/cronTasks.baseline.test.ts (1)
1-1: Consider aligning baseline test filename with the `<module>.test.ts` convention.

If you want to keep baseline semantics explicit, put `"baseline"` in `describe(...)` while keeping the filename convention (`cronTasks.test.ts`) for consistency with repo standards.

As per coding guidelines, unit tests should be placed adjacent in `__tests__` with `<module>.test.ts` naming.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/utils/__tests__/cronTasks.baseline.test.ts` at line 1, rename the test file to follow the `<module>.test.ts` convention (change cronTasks.baseline.test.ts to cronTasks.test.ts) and update any imports or test runner references accordingly; if you want to retain "baseline" semantics, keep the test suite descriptive by adding "baseline" inside the describe(...) call (e.g., describe('cronTasks - baseline', ...)) so that the filename matches repo standards while the test intent remains explicit; ensure the file stays in the same __tests__ directory and run the test suite to confirm no path or naming breakages.

src/cli/handlers/ant.ts (1)
2-10: Switch internal imports to the `src/` alias.

These relative imports add avoidable path churn in a TS file that is supposed to use the repository alias convention. Please import these modules via `src/...` instead of `../../...`.

As per coding guidelines, "Use `src/` path alias for imports; valid paths like `import { ... } from 'src/utils/...'`".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/cli/handlers/ant.ts` around lines 2-10, the imports for createTask, getTask, updateTask, listTasks, getTasksDir, getRecentActivity and the LogOption type are using relative paths; switch them to the repository alias by replacing '../../utils/tasks.js' and '../../utils/logoV2Utils.js' (and '../../types/logs.js') with the equivalent 'src/utils/tasks', 'src/utils/logoV2Utils' and 'src/types/logs' imports so the file uses the src/ path alias consistently for those symbols.

src/utils/taskSummary.ts (1)
37-43: Replace the direct type assertion with a type guard for `forkContextMessages`.

The code violates the guideline to use type guards instead of direct casting. Instead of `const messages = options.forkContextMessages as Array<...> | undefined`, create a type guard function to safely narrow the type:

```ts
function isForkContextMessages(
  value: unknown
): value is Array<{ type: string; message?: { content?: unknown } }> {
  return Array.isArray(value) && value.every(m => typeof m.type === 'string')
}

// Then in the function:
const messages = isForkContextMessages(options.forkContextMessages)
  ? options.forkContextMessages
  : undefined
```

Note: lines 58-60 use a similar direct assertion for `lastBlock` that could also be refactored to follow the same pattern.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/utils/taskSummary.ts` around lines 37-43, replace the unsafe type assertion on options.forkContextMessages with a proper type guard: add an isForkContextMessages(value: unknown): value is Array<{ type: string; message?: { content?: unknown } }> that checks Array.isArray(value) and that every item has typeof item.type === 'string', then use that guard to assign messages (e.g., const messages = isForkContextMessages(options.forkContextMessages) ? options.forkContextMessages : undefined); also apply the same pattern to the lastBlock assertion (lines ~58-60) by adding a similar guard for its expected shape and using it to narrow the type instead of direct casting.

src/assistant/sessionDiscovery.ts (1)
8-13: Prefer an `interface` for `AssistantSession`.

This is a plain object shape, so an `interface` matches the repo's TS convention better here.

♻️ Suggested change

```diff
-export type AssistantSession = {
+export interface AssistantSession {
   id: string
   title: string
   status: string
   created_at: string
 }
```

As per coding guidelines, in TypeScript files, prefer `interface` for defining object shapes instead of `type`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/assistant/sessionDiscovery.ts` around lines 8-13, replace the exported type alias AssistantSession with an exported interface to follow the repo TS convention: change the declaration of AssistantSession from a "type" to an "interface" while keeping the same property names and types (id, title, status, created_at) so existing usages of AssistantSession remain compatible; update the declaration near the current export to "export interface AssistantSession { ... }".

src/tasks/InProcessTeammateTask/types.ts (1)
22-26: Prefer `interface` for this new state payload.

This is a new exported object shape, so it should follow the repo's TypeScript convention and use `interface` instead of `type`.

♻️ Small cleanup

```diff
-export type PendingTeammateUserMessage = {
+export interface PendingTeammateUserMessage {
   message: string
   autonomyRunId?: string
   origin?: MessageOrigin
 }
```

As per coding guidelines, "In TypeScript files, prefer `interface` for defining object shapes instead of `type`".

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/tasks/InProcessTeammateTask/types.ts` around lines 22-26, replace the exported type alias PendingTeammateUserMessage with an exported interface of the same name so it follows the project convention for object shapes; update the declaration in types.ts to "export interface PendingTeammateUserMessage { message: string; autonomyRunId?: string; origin?: MessageOrigin }" and keep the same property names/optionality so all usages of PendingTeammateUserMessage remain compatible.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/features/daemon-restructure-design.md`:
- Around line 76-96: The daemon entry gating incorrectly requires
feature('DAEMON') only; change the initial conditional to allow
feature('BG_SESSIONS') as well (e.g. feature('DAEMON') ||
feature('BG_SESSIONS')) so the unified "claude daemon ..." route remains
reachable during BG-only rollouts, and ensure the switch handlers (daemonMain,
bg.handleBgStart, bg[`${sub}Handler`]) still behave the same when only
BG_SESSIONS is enabled (preserve handling for
'status'/'bg'/'attach'/'logs'/'kill' and the deprecated delegation block).
In `@docs/test-plans/openclaw-autonomy-baseline.md`:
- Line 11: The baseline goal contains a repository name typo "Claude-code-bast";
update that string to the correct repository name (e.g., "Claude-code-base" or
the actual repo canonical name) wherever it appears in the document
(specifically the baseline goal sentence) so readers/searches aren't confused;
ensure the replacement preserves surrounding backticks and formatting (the token
`Claude-code-bast` in the baseline goal line).
- Around line 34-38: Acceptance criteria mismatch: the baseline test-file list
(`src/proactive/__tests__/state.baseline.test.ts`,
`src/commands/__tests__/proactive.baseline.test.ts`,
`src/utils/__tests__/cronTasks.baseline.test.ts`,
`src/utils/__tests__/cronScheduler.baseline.test.ts`,
`src/__tests__/context.baseline.test.ts`) shows five files but the acceptance
criteria at Line 86 says four; update the acceptance criteria to state five
baseline test files (or remove one file from the list if the intent is four) so
the documented count and the list are consistent, and ensure the same corrected
count appears in both occurrences (Lines ~34–38 and 86–87).
In
`@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts`:
- Line 79: The method signature for PushNotificationTool.async call currently
omits a type for the context parameter; update the signature of call(input:
PushInput, context) to annotate context with the proper Tool context type (e.g.,
ToolContext or the exported type used by src/Tool.js) and add the corresponding
import of that type at the top of the file so the file compiles under strict
TypeScript (bunx tsc --noEmit). Ensure you reference the PushNotificationTool
class and its call method when making the change and use the exact exported type
name from src/Tool.js to avoid creating a new interface.
In `@packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts`:
- Line 73: The call method on SendUserFileTool is missing an explicit type for
the context parameter; update the signature of async call(input:
SendUserFileInput, context) to include the same explicit context type used by
PushNotificationTool (e.g., ToolContext or the project's standard tool context
interface), ensuring imports are added if necessary and the signature becomes
async call(input: SendUserFileInput, context: <SameContextType>).
In `@src/assistant/index.ts`:
- Around line 37-38: initializeAssistantTeam is currently a no-op and must
return the pre-seeded in-process team the caller awaits; implement it to
construct and return the assistant leader team object expected by the caller
(instead of Promise<undefined>) by instantiating the assistant leader via
Agent(name: ...) and building the same team shape used for
initialState.teamContext so callers receive a populated team context. Ensure the
function returns the proper team value (not undefined) and matches the expected
type used where initializeAssistantTeam() is awaited.
In `@src/cli/bg.ts`:
- Around line 35-38: The code currently parses session JSON and casts it to
SessionEntry then trusts entry.pid; instead, change the logic around the
read/parse block (where readFile, jsonParse and sessions.push are used) to parse
the JSON into unknown, implement and call a type guard (e.g.,
isSessionEntry(obj): obj is SessionEntry) that verifies required fields and
their types (including pid as a number), and also check that the parsed
entry.pid strictly equals the pid extracted from the filename; if the guard
fails or pid mismatches, reject/skip the file and do not push into sessions.
Ensure you stop using a direct cast to SessionEntry and use the type guard to
narrow types before using entry elsewhere (affecting functions that rely on
entry.pid such as kill/attach).
- Around line 247-268: The current shutdown removes the session file
unconditionally even if SIGKILL failed; update the logic in the stop sequence
handling around session, isProcessRunning, and the SIGKILL try/catch so you only
call unlink(pidFile) after confirming the PID is no longer running. After
attempting SIGKILL (process.kill(session.pid, 'SIGKILL') in that block),
re-check isProcessRunning(session.pid) (optionally with a short wait/retry loop
and a reasonable timeout), and only when it returns false call
unlink(join(getSessionsDir(), `${session.pid}.json`)); if the process is still
alive, leave the session file in place and log an appropriate message instead of
removing it.
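The parse-then-guard flow the bg.ts prompts describe can be sketched as follows. The `SessionEntry` shape, `isSessionEntry`, and `parseSessionFile` here are illustrative assumptions, not the real module's API — only the pid validation and filename cross-check from the review are taken as given.

```typescript
// Assumed shape for illustration; the real SessionEntry has more fields.
interface SessionEntry {
  pid: number
  engine?: string
  tmuxSessionName?: string
}

function isSessionEntry(value: unknown): value is SessionEntry {
  if (typeof value !== 'object' || value === null) return false
  const obj = value as Record<string, unknown>
  // pid must be a positive integer before the entry is trusted.
  return typeof obj.pid === 'number' && Number.isInteger(obj.pid) && obj.pid > 0
}

// Parse into unknown, narrow with the guard, and cross-check the pid
// embedded in the filename before pushing into the sessions list.
function parseSessionFile(raw: string, pidFromFilename: number): SessionEntry | null {
  let parsed: unknown
  try {
    parsed = JSON.parse(raw)
  } catch {
    return null // malformed JSON: skip the file
  }
  if (!isSessionEntry(parsed)) return null
  if (parsed.pid !== pidFromFilename) return null // filename/content mismatch
  return parsed
}
```

Anything that fails the guard or the pid cross-check is skipped rather than passed on to `kill`/`attach`.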
In `@src/cli/handlers/ant.ts`:
- Around line 88-91: The update logic in the ant CLI handler uses truthiness
checks (if (opts.subject), if (opts.description)) which prevent empty-string
values from being applied; change these to explicit undefined checks (e.g.,
opts.subject !== undefined and opts.description !== undefined) so callers can
intentionally clear fields; update the four conditions around opts.status,
opts.subject, opts.description, and opts.owner in the function handling updates
to use !== undefined to set properties on the updates object.
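A minimal sketch of the explicit-undefined pattern described above; `buildUpdates` and `TaskUpdateOpts` are hypothetical stand-ins for the handler's real types, but the four field names follow the review.

```typescript
interface TaskUpdateOpts {
  status?: string
  subject?: string
  description?: string
  owner?: string
}

function buildUpdates(opts: TaskUpdateOpts): Partial<TaskUpdateOpts> {
  const updates: Partial<TaskUpdateOpts> = {}
  // !== undefined lets callers pass '' to intentionally clear a field,
  // which a plain truthiness check would silently drop.
  if (opts.status !== undefined) updates.status = opts.status
  if (opts.subject !== undefined) updates.subject = opts.subject
  if (opts.description !== undefined) updates.description = opts.description
  if (opts.owner !== undefined) updates.owner = opts.owner
  return updates
}
```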
In `@src/daemon/state.ts`:
- Around line 41-47: The readDaemonState function currently returns JSON parsed
with a type cast which can allow invalid shapes (e.g., non-integer or negative
pid) to propagate; change parsing to parse into unknown and implement a type
guard (e.g., isDaemonStateData(obj): obj is DaemonStateData) that checks
required fields — at minimum verify obj.pid exists, is a number,
Number.isInteger(obj.pid) and obj.pid > 0, and other fields (like name or
socket) have the expected types — then return the narrowed DaemonStateData only
if the guard passes, otherwise return null and log or handle the malformed file;
update readDaemonState to call this guard before returning.
- Around line 89-100: The state file currently only stores pid so
isProcessAlive(state.pid) can misidentify a recycled PID; update the daemon
state to persist a stronger identity (e.g., startTime or a startup token/UUID
saved when the daemon is launched) and use that to validate the PID before
acting. Specifically: extend readDaemonState/writeDaemonState to include the
daemon's startTime or token (set by the code that spawns the daemon), add a
helper like getProcessStartTime(pid) or getProcessCommand(pid) to fetch the
running process' start time/command, and replace raw isProcessAlive checks in
functions referenced (readDaemonState usage in queryDaemonStatus / the status
code block and stopDaemonByPid) with a verification that the running process’
startTime/token matches the recorded value; if it does not match treat the state
as stale and removeDaemonState(name) rather than reporting or signaling the
process.
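The stale-state check against PID recycling might be sketched like this; `getToken` stands in for a real `getProcessStartTime`/`getProcessCommand` helper, and `startToken` for whatever identity value the daemon persists at launch — all names here are illustrative.

```typescript
interface DaemonState {
  pid: number
  startToken: string // start time or startup UUID recorded by the spawner
}

function isStateLive(
  state: DaemonState,
  isProcessAlive: (pid: number) => boolean,
  getToken: (pid: number) => string | null,
): boolean {
  if (!isProcessAlive(state.pid)) return false
  // A recycled PID will report a different start time/command token,
  // so a mismatch means the state file is stale and should be removed.
  return getToken(state.pid) === state.startToken
}
```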
In `@src/hooks/useMasterMonitor.ts`:
- Around line 207-210: The current removal of the /send override inside
useMasterMonitor (the hasSendOverride(name) / removeSendOverride(name) block)
runs before later listeners (like attachPipeEntryEmitter) can inspect the same
'done'/'error' packet, allowing shouldDropMutedMessage() to drop the terminal
event; instead defer the cleanup so other message listeners see the override:
either move the removeSendOverride(name) call into the end of
attachPipeEntryEmitter's processing for that packet (so it runs after those
listeners), or change useMasterMonitor to schedule removal asynchronously (e.g.
via setImmediate/process.nextTick) so removeSendOverride(name) runs after other
message handlers have executed; ensure you keep the same hasSendOverride(name)
check before removing.
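The deferred-cleanup idea above can be sketched with a hypothetical `overrides` set standing in for the real `hasSendOverride`/`removeSendOverride` store: notify every listener first, then schedule the removal asynchronously so all handlers of the same packet still see the override.

```typescript
const overrides = new Set<string>()

function onTerminalPacket(name: string, notifyListeners: () => void): void {
  notifyListeners() // later listeners can still inspect the override
  if (overrides.has(name)) {
    // Defer removal until after every handler for this packet has run.
    setImmediate(() => overrides.delete(name))
  }
}
```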
In `@src/hooks/usePipeMuteSync.ts`:
- Around line 134-140: On unmount the cleanup currently only calls
clearMasterMutedPipes() and clearSendOverrides(), leaving slaves still muted;
before clearing state, iterate the tracked muted pipes (e.g.,
prevMutedRef.current or whatever master-muted collection is used) and send a
matching "relay_unmute" to each slave connection for those pipe IDs, then call
clearMasterMutedPipes() and clearSendOverrides(); reference prevMutedRef,
clearMasterMutedPipes, clearSendOverrides and the "relay_unmute" message name so
you send the unmute messages for all previously-muted pipes during the useEffect
teardown.
- Around line 91-103: The updater passed to setToolUseConfirmQueue currently
invokes side-effecting callbacks (item.onAbort) inside the state updater; move
those calls out so the updater remains pure. Change the setter usage in
usePipeMuteSync to have the updater only compute and return the new queue (e.g.,
const next = queue.filter(item => item.pipeName !== name); return next), capture
the items to abort before or after the setState call, and then invoke their
onAbort callbacks outside the updater (calling (item.onAbort as () => void)() in
a try/catch) so side effects run exactly once and not during render or state
reconciliation. Ensure you reference setToolUseConfirmQueue, the local name
variable, and item.onAbort when making the change.
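The pure-updater refactor can be sketched roughly like this, with `ConfirmItem` and `setQueue` as illustrative stand-ins for the real queue item type and React setter: the updater only filters, and the aborts run exactly once outside it.

```typescript
interface ConfirmItem {
  pipeName: string
  onAbort?: () => void
}

function removePipeFromQueue(
  queue: ConfirmItem[],
  name: string,
  setQueue: (updater: (prev: ConfirmItem[]) => ConfirmItem[]) => void,
): void {
  // Capture the items to abort before touching state.
  const toAbort = queue.filter(item => item.pipeName === name)
  // Updater stays pure: it only computes and returns the next queue.
  setQueue(prev => prev.filter(item => item.pipeName !== name))
  // Side effects run outside the updater, wrapped so one failing
  // callback does not block the rest.
  for (const item of toAbort) {
    try {
      item.onAbort?.()
    } catch {
      // best-effort abort
    }
  }
}
```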
In `@src/proactive/useProactive.ts`:
- Around line 75-90: The detached async IIFE that calls
createProactiveAutonomyCommands must be serialized with the timer lifecycle:
introduce an inFlight flag (and a cancelled flag cleared on cleanup) around
createProactiveAutonomyCommands to prevent concurrent runs, set inFlight=true
before calling it and clear in finally, check cancelled before calling
optsRef.current.onQueueTick for each command so we don't enqueue after disable,
and ensure scheduleTick() is called in the finally block (not immediately) so
the next tick is always scheduled after the generation completes or is aborted;
update the code around createProactiveAutonomyCommands,
optsRef.current.onQueueTick, and scheduleTick to implement these checks and
state transitions.
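A rough sketch of the serialization described above, where `generateCommands`, `onQueueTick`, and `scheduleTick` stand in for `createProactiveAutonomyCommands`, `optsRef.current.onQueueTick`, and the real timer re-arm:

```typescript
function makeTicker(
  generateCommands: () => Promise<string[]>,
  onQueueTick: (cmd: string) => void,
  scheduleTick: () => void,
) {
  let inFlight = false
  let cancelled = false

  async function tick(): Promise<void> {
    if (inFlight) return // serialize: never run two generations at once
    inFlight = true
    try {
      const commands = await generateCommands()
      for (const cmd of commands) {
        if (cancelled) return // don't enqueue after disable
        onQueueTick(cmd)
      }
    } finally {
      inFlight = false
      // Re-arm only in finally, so the next tick is scheduled after the
      // generation completes or aborts — never concurrently with it.
      if (!cancelled) scheduleTick()
    }
  }

  return { tick, cancel: () => { cancelled = true } }
}
```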
In `@src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts`:
- Around line 91-97: The test helper runQueryModel should clear shared captured
state before use: reset the module-level _lastCreateArgs (and any similar shared
captures) at the start of runQueryModel so each invocation begins fresh and
assertions like max_tokens rely only on the current run; locate runQueryModel
and add a statement to reinitialize _lastCreateArgs (and the other shared
captures referenced on lines around 162-163 and 484-486) before wiring
_nextEvents and applying envOverrides so stale values from prior tests cannot
affect chat.completions.create assertions.
In `@src/utils/__tests__/cronTasks.baseline.test.ts`:
- Around line 194-202: The test's intent is to verify that
oneShotJitteredNextCronRunMs returns the exact next fire time when there is no
second match, but it never asserts equality with nextCronRunMs; update the test
to compute exact = nextCronRunMs('0 0 29 2 *', fromMs) and jittered =
oneShotJitteredNextCronRunMs('0 0 29 2 *', fromMs, '89abcdef'), assert both are
not null, and then add an assertion that jittered === exact (or toBe(exact)) so
the baseline enforces the exact-match behavior of oneShotJitteredNextCronRunMs;
ensure you perform the null checks before comparing to avoid type errors.
In `@src/utils/autonomyRuns.ts`:
- Around line 145-147: The catch-all in listAutonomyRuns currently swallows
every error and returns [], risking data loss; change the catch in
listAutonomyRuns so it only returns [] when the error is an ENOENT (file not
found) and rethrows any other errors; use a typed check (e.g. treat the caught
error as NodeJS.ErrnoException or check (err as any).code === 'ENOENT') and
ensure behavior matches listAutonomyFlows' intended fix (return [] only for
ENOENT, otherwise throw).
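The ENOENT-only fallback might look like this; `readRuns` is a placeholder for the real file/directory read inside `listAutonomyRuns`:

```typescript
function listRunsSafe(readRuns: () => string[]): string[] {
  try {
    return readRuns()
  } catch (err) {
    const code = (err as { code?: string })?.code
    if (code === 'ENOENT') return [] // runs file/dir simply not created yet
    throw err // real I/O or parse errors must surface, not vanish as []
  }
}
```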
In `@src/utils/taskSummary.ts`:
- Around line 31-35: The timestamp lastSummaryTime is being advanced prematurely
in maybeGenerateTaskSummary, causing the 30s rate-limit to be consumed even when
no summary is emitted; modify maybeGenerateTaskSummary so that lastSummaryTime
is only updated after payload validation and after successfully enqueueing
updateSessionActivity (i.e., move the assignment to lastSummaryTime to the
success path), and apply the same change to the other early-update sites
referenced (the blocks around the other occurrences currently at the 44-45 and
68-74 regions), ensuring lastSummaryTime is set only once the summary payload is
valid and updateSessionActivity has been scheduled.
- Around line 51-64: The status variable is incorrectly defaulted to 'busy'
causing completed assistant turns to remain active; change the logic in the
lastAssistant handling so status defaults to 'idle' and is only set to 'busy'
when you detect a tool use block (inspect lastAssistant,
lastAssistant.message.content, Array.isArray(content), lastBlock and
lastBlock.type === 'tool_use'), and also ensure waitingFor is only populated for
tool_use; if content is absent or not a tool_use treat status as 'idle' and
clear waitingFor.
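The corrected defaulting can be sketched as follows; the content-block shape is an assumption for illustration, not the SDK's exact type — the point is that `'idle'` is the default and `'busy'`/`waitingFor` appear only for a trailing `tool_use` block.

```typescript
interface ContentBlock {
  type: string
  name?: string
}

function summarizeStatus(content: unknown): { status: 'idle' | 'busy'; waitingFor?: string } {
  if (!Array.isArray(content) || content.length === 0) return { status: 'idle' }
  const lastBlock = content[content.length - 1] as ContentBlock
  if (lastBlock?.type === 'tool_use') {
    return { status: 'busy', waitingFor: lastBlock.name }
  }
  return { status: 'idle' } // completed turn: no waitingFor
}
```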
---
Outside diff comments:
In
`@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts`:
- Line 27: The PushOutput type currently declares only { sent: boolean } but
code (e.g., the return in PushNotificationTool where an object with an
additional error property is returned) includes an error field; update the
PushOutput type to include an optional error property (e.g., error?: string) so
returns from the send/execute method in PushNotificationTool conform to the
type, and adjust any other return sites to either populate or omit the optional
error field accordingly.
In `@packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts`:
- Line 21: Update the SendUserFileOutput type to match the actual returned
object shape: include optional or required properties for size (number),
file_uuid (string), and error (string | null) alongside sent (boolean) and
file_path (string); locate the SendUserFileOutput type declaration and the
SendUserFileTool/send method that returns this object and ensure the return
signature and any places that construct the return value are updated to use the
amended SendUserFileOutput type so TypeScript types align with the actual
returned fields.
In `@src/utils/swarm/inProcessRunner.ts`:
- Around line 1297-1300: The loop currently breaks on lifecycle abort (checking
abortController.signal.aborted) without performing completion/failure
bookkeeping, leaving currentAutonomyRunId stuck as "running"; update the abort
path in the inProcessRunner loop to call the same failure/completion handler
used elsewhere (mark the run failed) before breaking out—use the existing
currentAutonomyRunId and the run-failure helper (the function or method that
normally sets status to failed/completed in the autonomy bookkeeping) so the run
is explicitly marked failed when abortController.signal.aborted is true, then
exit the loop.
- Around line 719-748: The pending user message is dequeued from
task.pendingUserMessages via setAppState before
markAutonomyRunRunning(pending.autonomyRunId) completes, which can orphan an
autonomy run if that await throws; change the flow in inProcessRunner so you
first call await markAutonomyRunRunning(...) (and handle/throw on error) and
only after it succeeds update state to remove the head of pendingUserMessages
for the taskId, or if you prefer keep the current order then requeue the message
inside a catch of markAutonomyRunRunning; ensure you still return the same {
type: 'new_message', message: ..., autonomyRunId: ..., from: 'user' } after
successful transition and reference the same symbols (pendingUserMessages,
setAppState, markAutonomyRunRunning, taskId) when making the change.
---
Duplicate comments:
In `@docs/features/stub-recovery-design-1-4.md`:
- Around line 18-24: The document contains hard-coded Windows absolute links
(e:/Source_code/Claude-code-bast/...) that break navigation for other
contributors; replace each absolute path with repository-relative references
such as "src/daemon/main.ts", "src/daemon/workerRegistry.ts", and
"src/commands/remoteControlServer/remoteControlServer.tsx" (using standard
Markdown link syntax or repo-root-relative paths) so links resolve in the repo
view and on other machines, and sweep the entire file to update every occurrence
of the e:/Source_code/... pattern.
In
`@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts`:
- Around line 117-125: The bridge delivery branch in PushNotificationTool
currently logs failures but falls through to the generic "No Remote Control
bridge configured" response; update the send-via-bridge logic (around
logForDebugging, response, sessionId handling in PushNotificationTool) so that a
non-2xx response returns a distinct error payload (e.g., { data: { sent: false,
error: 'bridge_delivery_failed', status: response.status } }) and exceptions
caught in the catch block return a distinct payload (e.g., { data: { sent:
false, error: 'bridge_delivery_error', message: e?.message || String(e) } })
instead of falling through to the "no bridge configured" message, ensuring the
function returns immediately after these bridge-failure responses.
- Line 118: The debug logs in PushNotificationTool expose sensitive values
(sessionId, raw caught error, and notification title) — update the
logForDebugging calls inside PushNotificationTool (where sessionId, the caught
error, and title are currently logged) to log sanitized values: redact or
truncate sessionId (e.g., show only last 4 chars or hashed), replace the raw
error with a sanitized summary that strips request/response/PII fields (use a
helper like sanitizeError or maskSensitive), and mask the notification title
(e.g., replace content with “[REDACTED]” or a safe truncated version); add or
reuse small helpers (sanitizeText/sanitizeError/maskSessionId) and call them
before passing data to logForDebugging so no raw sensitive data is emitted.
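Possible shapes for the suggested sanitizers; `maskSessionId` and `sanitizeText` are hypothetical helpers sketched from the review's description, not existing project functions.

```typescript
// Show only the last 4 characters of a session id.
function maskSessionId(id: string): string {
  if (id.length <= 4) return '****'
  return `***${id.slice(-4)}`
}

// Redact by default, or truncate to a safe length when some context
// is still useful in the log line.
function sanitizeText(text: string, maxLen = 0): string {
  if (maxLen <= 0) return '[REDACTED]'
  return text.slice(0, maxLen) + (text.length > maxLen ? '…' : '')
}
```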
In `@src/cli/bg.ts`:
- Around line 165-169: The bgSessions filter excludes legacy detached sessions
because it only checks s.tmuxSessionName or s.engine === 'detached'; update the
filter used when computing bgSessions (the sessions.filter call) to also treat
sessions as detached when resolveSessionEngine(s) === 'detached' (e.g.,
s.tmuxSessionName || s.engine === 'detached' || resolveSessionEngine(s) ===
'detached') so legacy records are included for auto-attach.
In `@src/cli/bg/engines/tmux.ts`:
- Around line 26-29: The tmux launch currently calls spawnSync('tmux',
['new-session', '-d', '-s', opts.sessionName, cmd], ...) but does not pass the
requested working directory, so sessions start in the tmux server default;
update the spawnSync invocation to forward opts.cwd by adding a cwd property to
the options object (i.e., include cwd: opts.cwd alongside env: tmuxEnv) where
the spawnSync is called (the tmux new-session code path in
src/cli/bg/engines/tmux.ts) so the new tmux session starts in the intended
directory.
In `@src/cli/handlers/ant.ts`:
- Around line 198-215: The completionHandler currently only calls
regenerateCompletionCache and prints a message but never generates or writes a
shell completion script using the provided shell, opts.output, or _program;
update completionHandler to (1) use _program (the Command instance) to generate
the completion script for the requested shell, (2) if opts.output is set write
the generated script to that file path, otherwise write the script to stdout,
and (3) still call regenerateCompletionCache() as needed; reference the
completionHandler function, the regenerateCompletionCache import, the shell
parameter, opts.output, and the _program Command to locate where to generate and
route the completion script.
- Line 12: DEFAULT_LIST is set to "default" but the documented and intended
fallback for task commands is "tasklist"; change the DEFAULT_LIST constant from
'default' to 'tasklist' so any command that omits --list routes to the correct
directory, and verify the code paths that reference DEFAULT_LIST (the task
command handling/dispatch logic) still behave correctly and tests/fixtures
expecting the "tasklist" flow are updated or passing.
- Around line 162-193: exportHandler currently writes the LogOption summary
(JSON.stringify(log)) when a session match is found; instead, load and export
the actual conversation/transcript for that session. Replace the final
writeFile(JSON.stringify(log,...)) step with logic that retrieves the session
transcript by sessionId (for example call a session loader like
getSession/getTranscript/getSessionById or read the session file referenced on
LogOption if it has a path property), then write the transcript content to
outputFile using writeFile and keep the same console message using
log.sessionId; ensure any async loader errors are caught and set
process.exitCode = 1 as done for the file-path branch.
- Around line 147-159: The current errorHandler only calls getRecentActivity()
and re-lists sessions but never fetches any error records; update errorHandler
to actually load and display error details by replacing or extending the
getRecentActivity() call with a call to the real error-source (e.g. await
getErrorRecords() or await getSessionErrors(sessionId) — implement that helper
if it doesn't exist), then iterate the returned error items instead of just
sessions and print useful fields such as error.message, error.stack (or
error.details) and timestamps (using log.modified as fallback), and handle the
case of no errors found by printing a friendly message.
In `@src/daemon/main.ts`:
- Around line 246-261: The daemon state is being written and cleared at
incorrect times: move the writeDaemonState call so it only runs after you’ve
verified there is no existing live supervisor (i.e., after the check that
confirms this process will be the active supervisor) instead of before;
likewise, change the shutdown flow so removeDaemonState is invoked only after
workers finish draining (after the worker drain/wait logic completes) rather
than immediately in the shutdown() function. Update references in main.ts to
call writeDaemonState after the liveness check that decides to become supervisor
and call removeDaemonState at the end of the graceful shutdown sequence (after
worker drain/wait), adjusting the shutdown function and any code paths at lines
around the existing shutdown/worker wait logic so state is published/cleared at
the correct lifecycle points.
In `@src/main.tsx`:
- Around line 1805-1807: The assistant-mode activation predicate currently
checks assistantModule.isAssistantForced() and (options as Record<string,
unknown>).assistant but omits the persisted setting from getInitialSettings();
update the condition to also consult getInitialSettings().assistant (e.g., const
initial = getInitialSettings(); ...) so the combined boolean becomes
assistantModule && (assistantModule.isAssistantForced() || (options as
Record<string, unknown>).assistant === true || initial?.assistant === true);
ensure you call getInitialSettings() only where available and handle possible
undefined return safely.
---
Nitpick comments:
In `@packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts`:
- Around line 105-107: The catch in SendUserFileTool silently swallows bridge
upload errors; change it to catch the error object and call the existing
logForDebugging helper (as used by PushNotificationTool) to record the failure.
Locate the try/catch in the SendUserFileTool upload routine, update the catch to
"catch (err)" and call this.logForDebugging (or the tool's logForDebugging
function) with a concise message like "bridge upload failed" plus the
error/stack to preserve the best-effort behavior while improving debuggability.
In `@src/assistant/sessionDiscovery.ts`:
- Around line 8-13: Replace the exported type alias AssistantSession with an
exported interface to follow the repo TS convention: change the declaration of
AssistantSession from a "type" to an "interface" while keeping the same property
names and types (id, title, status, created_at) so existing usages of
AssistantSession remain compatible; update the declaration near the current
export to "export interface AssistantSession { ... }".
In `@src/cli/handlers/ant.ts`:
- Around line 2-10: The imports for createTask, getTask, updateTask, listTasks,
getTasksDir, getRecentActivity and the LogOption type are using relative paths;
switch them to the repository alias by replacing '../../utils/tasks.js' and
'../../utils/logoV2Utils.js' (and '../../types/logs.js') with the equivalent
'src/utils/tasks', 'src/utils/logoV2Utils' and 'src/types/logs' imports so the
file uses the src/ path alias consistently for those symbols.
In `@src/services/langfuse/__tests__/langfuse.test.ts`:
- Around line 9-16: Add an adjacent in-process test using the project's
heavy-dependency pattern: create a test next to "passes in isolated subprocess"
that uses mock.module() with an inline await import() to load the module under
test instead of spawning a subprocess; reference the same isolatedPath/test
behavior but replace Bun.spawn usage with mock.module() and inline await
import() so heavy dependencies are mocked in-process and test failures are
easier to localize (keep the original subprocess test but add this additional
in-process variant).
- Around line 24-39: The test currently throws an Error when exitCode !== 0 and
then still contains a redundant assertion expect(exitCode).toBe(0); remove that
final expect to avoid duplicate checks — locate the test block that uses the
exitCode variable and the throw on non-zero exit code and delete the
expect(exitCode).toBe(0) line so the test relies on the explicit throw for
failure.
In `@src/tasks/InProcessTeammateTask/types.ts`:
- Around line 22-26: Replace the exported type alias PendingTeammateUserMessage
with an exported interface of the same name so it follows the project convention
for object shapes; update the declaration in types.ts to "export interface
PendingTeammateUserMessage { message: string; autonomyRunId?: string; origin?:
MessageOrigin }" and keep the same property names/optionality so all usages of
PendingTeammateUserMessage remain compatible.
In `@src/utils/__tests__/cronTasks.baseline.test.ts`:
- Line 1: Rename the test file to follow the <module>.test.ts convention (change
cronTasks.baseline.test.ts to cronTasks.test.ts) and update any imports or test
runner references accordingly; if you want to retain "baseline" semantics keep
the test suite descriptive by adding "baseline" inside the describe(...) call
(e.g., describe('cronTasks - baseline', ...) or similar) so that the filename
matches repo standards while the test intent remains explicit; ensure the file
stays in the same __tests__ directory and run the test suite to confirm no path
or naming breakages.
In `@src/utils/taskSummary.ts`:
- Around line 37-43: Replace the unsafe type assertion on
options.forkContextMessages with a proper type guard: add an
isForkContextMessages(value: unknown): value is Array<{ type: string; message?:
{ content?: unknown } }> that checks Array.isArray(value) and that every item
has typeof item.type === 'string', then use that guard to assign messages (e.g.,
const messages = isForkContextMessages(options.forkContextMessages) ?
options.forkContextMessages : undefined); also apply the same pattern to the
lastBlock assertion (lines ~58–60) by adding a similar guard for its expected
shape and using it to narrow the type instead of direct casting.
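A sketch of the guard the prompt describes; the guard name follows the prompt, while the surrounding `options` shape is an illustrative assumption:

```typescript
// Expected item shape for forkContextMessages (assumed from the prompt).
type ForkContextMessage = { type: string; message?: { content?: unknown } }

// User-defined type guard: narrows unknown to ForkContextMessage[] without
// an unsafe cast.
function isForkContextMessages(value: unknown): value is ForkContextMessage[] {
  return (
    Array.isArray(value) &&
    value.every(
      item =>
        typeof item === 'object' &&
        item !== null &&
        typeof (item as { type?: unknown }).type === 'string',
    )
  )
}

// Usage: narrow instead of asserting.
const options: Record<string, unknown> = {
  forkContextMessages: [{ type: 'assistant', message: { content: 'hi' } }],
}
const messages = isForkContextMessages(options.forkContextMessages)
  ? options.forkContextMessages
  : undefined
```

The same pattern applies to the `lastBlock` cast: write a guard for its expected shape and let the compiler narrow the type.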
```markdown
## Goal

Establish a stable baseline around the parts of `Claude-code-bast` that later autonomy work is most likely to touch:
```
Fix repository name typo in the baseline goal section.
Line 11 references Claude-code-bast, which looks like a misspelling and may confuse future readers/searchability.
✏️ Proposed fix

```diff
-Establish a stable baseline around the parts of `Claude-code-bast` that later autonomy work is most likely to touch:
+Establish a stable baseline around the parts of `claude-code` that later autonomy work is most likely to touch:
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/test-plans/openclaw-autonomy-baseline.md` at line 11, The baseline goal
contains a repository name typo "Claude-code-bast"; update that string to the
correct repository name (e.g., "Claude-code-base" or the actual repo canonical
name) wherever it appears in the document (specifically the baseline goal
sentence) so readers/searches aren't confused; ensure the replacement preserves
surrounding backticks and formatting (the token `Claude-code-bast` in the
baseline goal line).
- `src/proactive/__tests__/state.baseline.test.ts`
- `src/commands/__tests__/proactive.baseline.test.ts`
- `src/utils/__tests__/cronTasks.baseline.test.ts`
- `src/utils/__tests__/cronScheduler.baseline.test.ts`
- `src/__tests__/context.baseline.test.ts`
Align acceptance criteria with the documented test-file count.
Lines 34–38 list five baseline test files, but Line 86 says four. This mismatch makes completion criteria ambiguous.
✅ Proposed fix

```diff
-1. The four new test files pass.
+1. The five new test files pass.
```

Also applies to: 86-87
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/test-plans/openclaw-autonomy-baseline.md` around lines 34 - 38,
Acceptance criteria mismatch: the baseline test-file list
(`src/proactive/__tests__/state.baseline.test.ts`,
`src/commands/__tests__/proactive.baseline.test.ts`,
`src/utils/__tests__/cronTasks.baseline.test.ts`,
`src/utils/__tests__/cronScheduler.baseline.test.ts`,
`src/__tests__/context.baseline.test.ts`) shows five files but the acceptance
criteria at Line 86 says four; update the acceptance criteria to state five
baseline test files (or remove one file from the list if the intent is four) so
the documented count and the list are consistent, and ensure the same corrected
count appears in both occurrences (Lines ~34–38 and 86–87).
```typescript
    sent: false,
    error: 'PushNotification requires the KAIROS transport layer.',
  },
  async call(input: PushInput, context) {
```
Missing type annotation for context parameter.
The context parameter lacks an explicit type, which may cause TypeScript strict mode violations (bunx tsc --noEmit must pass with zero errors per coding guidelines). Add the appropriate type from src/Tool.js or define the expected interface.
```diff
-  async call(input: PushInput, context) {
+  async call(input: PushInput, context: ToolCallContext) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@packages/builtin-tools/src/tools/PushNotificationTool/PushNotificationTool.ts`
at line 79, The method signature for PushNotificationTool.async call currently
omits a type for the context parameter; update the signature of call(input:
PushInput, context) to annotate context with the proper Tool context type (e.g.,
ToolContext or the exported type used by src/Tool.js) and add the corresponding
import of that type at the top of the file so the file compiles under strict
TypeScript (bunx tsc --noEmit). Ensure you reference the PushNotificationTool
class and its call method when making the change and use the exact exported type
name from src/Tool.js to avoid creating a new interface.
```typescript
  async call(_input: SendUserFileInput) {
    // File transfer is handled by the KAIROS assistant transport layer.
    // Without the KAIROS runtime, this tool is not available.
  async call(input: SendUserFileInput, context) {
```
Missing type annotation for context parameter.
Same issue as PushNotificationTool: the context parameter lacks an explicit type, which may cause TypeScript strict mode violations. Add the appropriate type annotation.
```diff
-  async call(input: SendUserFileInput, context) {
+  async call(input: SendUserFileInput, context: ToolCallContext) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/builtin-tools/src/tools/SendUserFileTool/SendUserFileTool.ts` at
line 73, The call method on SendUserFileTool is missing an explicit type for the
context parameter; update the signature of async call(input: SendUserFileInput,
context) to include the same explicit context type used by PushNotificationTool
(e.g., ToolContext or the project's standard tool context interface), ensuring
imports are added if necessary and the signature becomes async call(input:
SendUserFileInput, context: <SameContextType>).
```typescript
async function runQueryModel(
  events: BetaRawMessageStreamEvent[],
  envOverrides: Record<string, string | undefined> = {},
) {
  // Wire events into the mocked stream adapter
  _nextEvents = events
  // Save + apply env overrides
```
Reset captured request args before each run to avoid stale-pass tests.
_lastCreateArgs is shared across tests and never cleared. If chat.completions.create stops being called, the max_tokens assertion can still pass using previous state.
Suggested fix

```diff
 async function runQueryModel(
   events: BetaRawMessageStreamEvent[],
   envOverrides: Record<string, string | undefined> = {},
 ) {
+  _lastCreateArgs = null
   // Wire events into the mocked stream adapter
   _nextEvents = events
```

Also applies to: 162-163, 484-486
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts` around lines
91 - 97, The test helper runQueryModel should clear shared captured state before
use: reset the module-level _lastCreateArgs (and any similar shared captures) at
the start of runQueryModel so each invocation begins fresh and assertions like
max_tokens rely only on the current run; locate runQueryModel and add a
statement to reinitialize _lastCreateArgs (and the other shared captures
referenced on lines around 162-163 and 484-486) before wiring _nextEvents and
applying envOverrides so stale values from prior tests cannot affect
chat.completions.create assertions.
```typescript
test('jitteredNextCronRunMs returns the exact next fire time when no second match exists in range', () => {
  const fromMs = new Date('2026-04-12T10:00:00').getTime()
  const exact = nextCronRunMs('0 0 29 2 *', fromMs)
  const jittered = oneShotJitteredNextCronRunMs('0 0 29 2 *', fromMs, '89abcdef')

  expect(exact).not.toBeNull()
  expect(jittered).not.toBeNull()
  expect(jittered!).toBeGreaterThanOrEqual(fromMs)
})
```
Test intent and assertion are inconsistent in jittered-next-run case.
Line 194 says this should verify the exact next fire time when no second match exists, but the test never checks jittered === exact. This weakens the baseline guardrail.
🧪 Proposed fix

```diff
 test('jitteredNextCronRunMs returns the exact next fire time when no second match exists in range', () => {
   const fromMs = new Date('2026-04-12T10:00:00').getTime()
   const exact = nextCronRunMs('0 0 29 2 *', fromMs)
   const jittered = oneShotJitteredNextCronRunMs('0 0 29 2 *', fromMs, '89abcdef')
   expect(exact).not.toBeNull()
   expect(jittered).not.toBeNull()
-  expect(jittered!).toBeGreaterThanOrEqual(fromMs)
+  expect(jittered).toBe(exact)
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/__tests__/cronTasks.baseline.test.ts` around lines 194 - 202, The
test's intent is to verify that oneShotJitteredNextCronRunMs returns the exact
next fire time when there is no second match, but it never asserts equality with
nextCronRunMs; update the test to compute exact = nextCronRunMs('0 0 29 2 *',
fromMs) and jittered = oneShotJitteredNextCronRunMs('0 0 29 2 *', fromMs,
'89abcdef'), assert both are not null, and then add an assertion that jittered
=== exact (or toBe(exact)) so the baseline enforces the exact-match behavior of
oneShotJitteredNextCronRunMs; ensure you perform the null checks before
comparing to avoid type errors.
```typescript
  } catch {
    return []
  }
```
Same data-loss risk as autonomyFlows: catch block returns empty array for all errors.
listAutonomyRuns has the same pattern as listAutonomyFlows—returning [] for all exceptions can cause the next write to overwrite existing runs. Only ENOENT should return [].
🛡️ Suggested fix

```diff
-  } catch {
-    return []
+  } catch (error) {
+    if ((error as NodeJS.ErrnoException).code === 'ENOENT') {
+      return []
+    }
+    throw error
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/autonomyRuns.ts` around lines 145 - 147, The catch-all in
listAutonomyRuns currently swallows every error and returns [], risking data
loss; change the catch in listAutonomyRuns so it only returns [] when the error
is an ENOENT (file not found) and rethrows any other errors; use a typed check
(e.g. treat the caught error as NodeJS.ErrnoException or check (err as any).code
=== 'ENOENT') and ensure behavior matches listAutonomyFlows' intended fix
(return [] only for ENOENT, otherwise throw).
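The pattern can be exercised in isolation; a hedged sketch, where `readRunsOrEmpty` and `ErrnoLike` are illustrative names, not the actual autonomyRuns API:

```typescript
// Hypothetical helper isolating the ENOENT-only fallback described above.
interface ErrnoLike { code?: string }

function readRunsOrEmpty(read: () => string[]): string[] {
  try {
    return read()
  } catch (error) {
    // A missing file just means "no runs recorded yet".
    if ((error as ErrnoLike).code === 'ENOENT') return []
    // Anything else (EACCES, corrupt data, ...) must surface so a later
    // write cannot silently overwrite existing runs.
    throw error
  }
}

const enoent = Object.assign(new Error('no such file'), { code: 'ENOENT' })
const eacces = Object.assign(new Error('permission denied'), { code: 'EACCES' })

const runs = readRunsOrEmpty(() => { throw enoent }) // → []
```

Rethrowing non-ENOENT errors is what prevents the read-empty-then-write cycle from destroying data when the file exists but cannot be read.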
```typescript
export function maybeGenerateTaskSummary(
  options: Record<string, unknown>,
): void {
  lastSummaryTime = Date.now()
```
Rate-limit timestamp is updated even when no summary is emitted.
Line 34 advances lastSummaryTime before payload validation and before enqueueing updateSessionActivity. Empty/invalid turns consume the 30s window and skip the next valid update.
Proposed fix

```diff
 export function maybeGenerateTaskSummary(
   options: Record<string, unknown>,
 ): void {
-  lastSummaryTime = Date.now()
+  if (!shouldGenerateTaskSummary()) return
   try {
@@
   if (!messages || messages.length === 0) return
@@
   // Fire-and-forget update to session registry
+  lastSummaryTime = Date.now()
   void updateSessionActivity({
     status,
     waitingFor,
   }).catch(err => {
```

Also applies to: 44-45, 68-74
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/taskSummary.ts` around lines 31 - 35, The timestamp lastSummaryTime
is being advanced prematurely in maybeGenerateTaskSummary, causing the 30s
rate-limit to be consumed even when no summary is emitted; modify
maybeGenerateTaskSummary so that lastSummaryTime is only updated after payload
validation and after successfully enqueueing updateSessionActivity (i.e., move
the assignment to lastSummaryTime to the success path), and apply the same
change to the other early-update sites referenced (the blocks around the other
occurrences currently at the 44-45 and 68-74 regions), ensuring lastSummaryTime
is set only once the summary payload is valid and updateSessionActivity has been
scheduled.
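The ordering the prompt asks for can be illustrated standalone; the helper and constant names here are hypothetical, not the real taskSummary internals:

```typescript
// Hypothetical sketch: the timestamp advances only on the success path, so an
// empty or invalid turn does not consume the 30-second window.
const WINDOW_MS = 30_000
let lastSummaryTime = 0

function maybeEmitSummary(
  messages: string[] | undefined,
  now: number,
  emit: (m: string[]) => void,
): boolean {
  if (now - lastSummaryTime < WINDOW_MS) return false // rate-limited
  if (!messages || messages.length === 0) return false // invalid: window untouched
  emit(messages)
  lastSummaryTime = now // advanced only after the update is actually scheduled
  return true
}

const emitted: string[][] = []
maybeEmitSummary(undefined, 31_000, m => emitted.push(m)) // invalid turn, no emit
maybeEmitSummary(['turn'], 32_000, m => emitted.push(m)) // still fires: window intact
```

Advancing the timestamp first (as the reviewed code does) would make the second call return false even though nothing was ever emitted.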
```typescript
let status: 'busy' | 'idle' = 'busy'
let waitingFor: string | undefined

if (lastAssistant?.message?.content) {
  const content = lastAssistant.message.content
  // Check if last block is tool_use
  if (Array.isArray(content)) {
    const lastBlock = content[content.length - 1] as
      | Record<string, unknown>
      | undefined
    if (lastBlock?.type === 'tool_use') {
      status = 'busy'
      waitingFor = `tool: ${lastBlock.name || 'unknown'}`
    }
```
Defaulting to busy keeps completed turns incorrectly active.
Because status starts as 'busy' (Line 51) and is never set to 'idle' for non-tool_use outputs, completed assistant turns can still appear busy in session status.
Proposed fix

```diff
-let status: 'busy' | 'idle' = 'busy'
+let status: 'busy' | 'idle' = 'idle'
 let waitingFor: string | undefined
 if (lastAssistant?.message?.content) {
   const content = lastAssistant.message.content
   // Check if last block is tool_use
   if (Array.isArray(content)) {
     const lastBlock = content[content.length - 1] as
       | Record<string, unknown>
       | undefined
     if (lastBlock?.type === 'tool_use') {
       status = 'busy'
       waitingFor = `tool: ${lastBlock.name || 'unknown'}`
     }
   }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/utils/taskSummary.ts` around lines 51 - 64, The status variable is
incorrectly defaulted to 'busy' causing completed assistant turns to remain
active; change the logic in the lastAssistant handling so status defaults to
'idle' and is only set to 'busy' when you detect a tool use block (inspect
lastAssistant, lastAssistant.message.content, Array.isArray(content), lastBlock
and lastBlock.type === 'tool_use'), and also ensure waitingFor is only populated
for tool_use; if content is absent or not a tool_use treat status as 'idle' and
clear waitingFor.
…eScript error fixes

Squashed merge of:
1. fix/mcp-tsc-errors — fix tsc errors and test failures after the upstream MCP refactor
2. feat/pipe-mute-disconnect — Pipe IPC logical disconnect, /lang command, mute state machine
3. feat/stub-recovery-all — implement all stub recovery (tasks 001-012)
4. feat/kairos-activation — unblock KAIROS activation + tool implementations
5. codex/openclaw-autonomy-pr — autonomy authority system, run records, managed flows

Additional:
6. Hierarchical daemon/job command restructuring (subcommand architecture)
7. Cross-platform background engine abstraction (detached/tmux engines)
8. Fixed 43 pre-existing TypeScript type errors in src/
9. Fixed incomplete mocks in the langfuse isolated test
10. Fixed Critical/Major/Minor issues flagged by the CodeRabbit review
11. Silenced stderr in remote-control-server tests (logger abstraction)
Force-pushed from 1acabda to 1e1b80b
Closing for rework: local tests did not fully pass (Bun 1.3.12 exit code 1); will recreate after fixing.
Summary

Squashed merge of 8 items (109 files, +11487/-1551):

Key Changes

Hierarchical daemon/job commands
- `/daemon` merges the supervisor and bg sessions (`daemon status` shows both in one view)
- `/job` gathers `new`/`list`/`reply` (avoiding top-level command conflicts)

Cross-platform background engine
- `BgEngine` abstraction: Windows → DetachedEngine; macOS/Linux → TmuxEngine (with tmux) or DetachedEngine (without)
- Fixes `--bg` mode failing when tmux is missing

Autonomy authority system

Stub recovery

Test plan
- `bun test` passes in full
- `bunx tsc --noEmit` reports zero errors
- `/daemon status` displays correctly
- `/job list` works
- `--bg` mode uses DetachedEngine on Windows

Summary by CodeRabbit
New Features
Improvements
Documentation
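As a closing illustration, the platform-based engine choice described under Key Changes (Windows → DetachedEngine, macOS/Linux → TmuxEngine or DetachedEngine) might be sketched as follows; `selectEngineKind` and `hasTmux` are hypothetical names, not the repository's actual `BgEngine` API:

```typescript
import { spawnSync } from 'node:child_process'

// Hypothetical sketch of the selection logic: Windows always uses the
// detached engine; elsewhere tmux is preferred when it is installed.
type EngineKind = 'tmux' | 'detached'

function hasTmux(): boolean {
  // `tmux -V` exits 0 when tmux is installed and on PATH.
  const result = spawnSync('tmux', ['-V'], { stdio: 'ignore' })
  return !result.error && result.status === 0
}

function selectEngineKind(platform: string = process.platform): EngineKind {
  if (platform === 'win32') return 'detached'
  return hasTmux() ? 'tmux' : 'detached'
}

// On Windows the detached engine is chosen regardless of tmux availability.
const kind = selectEngineKind('win32')
```

Probing for tmux at runtime (rather than assuming it) is what lets `--bg` degrade gracefully instead of failing when tmux is missing.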