
fix(providers): support Azure AI Foundry (Anthropic) endpoint and corporate proxy#220

Open
nagarjunr wants to merge 7 commits into rohitg00:main from nagarjunr:local/all-fixes

Conversation


@nagarjunr nagarjunr commented Apr 30, 2026

Summary

  • Azure OpenAI / Azure AI Foundry provider: New LLM provider for Azure-hosted models. Auto-detects deployment type from endpoint URL — standard Azure OpenAI ({resource}.openai.azure.com) uses OpenAI chat completions format; Azure AI Foundry Anthropic ({resource}.services.ai.azure.com/anthropic) uses Anthropic Messages API format with x-api-key / anthropic-version headers.
  • Corporate proxy support: Node.js 18+ built-in fetch ignores HTTP_PROXY/HTTPS_PROXY, causing DNS failures in corporate networks. Provider tunnels through HTTP CONNECT proxy via tunnel-agent + node-fetch when proxy env vars are present. Proxy credentials (user:pass@host) are extracted from the URL and forwarded as Proxy-Authorization.
  • Foundry URL double-suffix guard: Endpoint normalized so /v1/messages is not appended when the endpoint already ends with that path.
  • VALID_PROVIDERS gap: azure-openai was missing from the runtime allowlist used by loadFallbackConfig(), so FALLBACK_PROVIDERS=azure-openai was silently dropped despite ProviderType including the value.
  • Viewer tab bar / graph canvas fixes: Tab bar no longer crushes content area; graph canvas fills viewport height correctly.
  • JSDoc coverage: Full JSDoc on all exported and private members of AzureOpenAIProvider to satisfy the 80% docstring coverage check.
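The endpoint auto-detection and double-suffix guard described above can be sketched roughly as follows. This is a minimal illustration, not the actual code: `isFoundry` and `normalizeFoundryUrl` are hypothetical names, and the real logic in src/providers/azure-openai.ts may differ in detail.

```typescript
// Sketch of the endpoint handling described above. `isFoundry` and
// `normalizeFoundryUrl` are illustrative names only.
function isFoundry(endpoint: string): boolean {
  // Azure AI Foundry Anthropic endpoints live under
  // {resource}.services.ai.azure.com/anthropic
  return endpoint.includes(".services.ai.azure.com");
}

function normalizeFoundryUrl(endpoint: string): string {
  // Guard against double-appending the Messages API suffix when the
  // configured endpoint already ends with /v1/messages.
  const base = endpoint.replace(/\/+$/, "");
  return base.endsWith("/v1/messages") ? base : `${base}/v1/messages`;
}
```

With this, both a base endpoint (".../anthropic") and a fully qualified one (".../anthropic/v1/messages") resolve to the same request URL.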

Changes

File | What changed
src/providers/azure-openai.ts | New provider: proxy tunnel, dual-format request/response, Foundry URL normalization, full JSDoc
src/providers/index.ts | Wire up AzureOpenAIProvider
src/config.ts | Azure env var detection; azure-openai added to VALID_PROVIDERS
src/types.ts | ProviderType union includes azure-openai
src/viewer/index.html | Tab bar overflow fix; graph canvas height fix
README.md | Azure OpenAI + Foundry rows in provider table; AZURE_OPENAI_* env vars with proxy note
CHANGELOG.md | [Unreleased] entries for all changes

Environment variables

AZURE_OPENAI_API_KEY=...
AZURE_OPENAI_ENDPOINT=https://<resource>.openai.azure.com          # Standard Azure OpenAI
# or
AZURE_OPENAI_ENDPOINT=https://<resource>.services.ai.azure.com/anthropic  # Azure AI Foundry (Anthropic)
AZURE_OPENAI_DEPLOYMENT=<deployment-name>
AZURE_OPENAI_API_VERSION=2024-08-01-preview   # optional, Azure OpenAI only

# Corporate proxy (optional — auto-detected from env)
HTTPS_PROXY=http://user:pass@proxy.corp.example.com:3128

Test plan

  • Graph extraction works in corporate proxy environment (HTTP_PROXY/HTTPS_PROXY set)
  • Authenticated proxy (user:pass@host:port) forwards Proxy-Authorization correctly
  • Azure AI Foundry Anthropic endpoint returns correct responses
  • Standard Azure OpenAI endpoint works unchanged
  • FALLBACK_PROVIDERS=azure-openai is not silently dropped
  • Endpoint with full /v1/messages suffix does not double-append
  • Viewer graph canvas renders nodes/edges without overflow
  • Tab bar does not crush when many sessions open

Summary by CodeRabbit

  • New Features

    • Added Azure OpenAI as a supported LLM provider with endpoint and deployment configuration.
    • Added Azure AI Foundry (Anthropic) endpoint auto-detection and support.
    • Added proxy support for corporate networks via environment variables.
  • Bug Fixes

    • Fixed tab bar overflow that was crushing content area with multiple sessions.
    • Fixed knowledge graph canvas to properly fill available viewport height.

Nagarjun Rajendran added 3 commits April 30, 2026 12:02
- tunnel-agent + node-fetch proxy tunnel when HTTP_PROXY/HTTPS_PROXY set
- detect Foundry Anthropic endpoints via isFoundry getter
- use Anthropic Messages API format for Foundry, OpenAI format otherwise
- handle both response content shapes

vercel Bot commented Apr 30, 2026

Someone is attempting to deploy a commit to rohitg00's projects Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai Bot commented Apr 30, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Adds Azure OpenAI as a new LLM provider with env-var detection (AZURE_OPENAI_*), a new AzureOpenAIProvider supporting Azure Chat and Foundry/Anthropic message shapes with proxy-aware fetch, registers the provider in the factory and types, updates docs/CHANGELOG/README, and adjusts viewer banner/CSS.

Changes

Cohort / File(s) | Summary
Type System (src/types.ts) | Added "azure-openai" to the ProviderType union.
Configuration & Detection (src/config.ts) | detectProvider now recognizes Azure when AZURE_OPENAI_DEPLOYMENT, AZURE_OPENAI_ENDPOINT, and AZURE_OPENAI_API_KEY are set; startup stderr guidance updated; detectLlmProviderKind treats the Azure triad as "llm"; VALID_PROVIDERS includes "azure-openai".
Provider Implementation (src/providers/azure-openai.ts) | New exported AzureOpenAIProvider implementing MemoryProvider; normalizes endpoint; stores API key/deployment/max tokens/apiVersion; auto-detects Foundry/Anthropic endpoints and builds corresponding request shapes; uses proxy-aware fetch via tunnel-agent + node-fetch when proxy env vars are set, with graceful fallback/warning; validates responses and extracts text content or surfaces detailed errors.
Provider Factory (src/providers/index.ts) | createBaseProvider gains an "azure-openai" branch that reads env/config (endpoint, deployment, apiVersion, apiKey, maxTokens) and constructs AzureOpenAIProvider.
UI / Viewer (src/viewer/index.html) | Feature-flag banner updated to include Azure env var instructions and Anthropic export flow; CSS tweaks (flex-shrink: 0, graph container sizing/overflow) to prevent header/tab/footer collapsing and ensure the graph fills available height.
Docs & Changelog (README.md, CHANGELOG.md) | Documented Azure OpenAI and Azure Foundry (Anthropic) configuration and example .env entries (including optional AZURE_OPENAI_API_VERSION), proxy guidance, and noted viewer/UI fixes.
Package Metadata (package.json) | Added node-fetch and tunnel-agent to optionalDependencies (ordering/commas adjusted).

Sequence Diagram

sequenceDiagram
    participant App as Application
    participant Config as Config Detection
    participant Factory as Provider Factory
    participant Provider as AzureOpenAIProvider
    participant Service as Azure/Foundry

    App->>Config: detectProvider()
    activate Config
    Config->>Config: check AZURE_OPENAI_DEPLOYMENT/ENDPOINT/API_KEY
    alt Azure vars present
        Config-->>App: provider="azure-openai" + baseURL & model
    end
    deactivate Config

    App->>Factory: createBaseProvider(config)
    activate Factory
    Factory->>Provider: new AzureOpenAIProvider(apiKey, endpoint, deployment, maxTokens, apiVersion?)
    Factory-->>App: AzureOpenAIProvider instance
    deactivate Factory

    App->>Provider: compress(systemPrompt, userPrompt)
    activate Provider
    Provider->>Provider: buildRequest(Foundry or Azure chat shape)
    Provider->>Provider: buildFetchOptions(include proxy agent if set)
    Provider->>Service: POST /v1/messages or /openai/deployments/.../chat/completions?api-version=...
    Service-->>Provider: Response (2xx or error)
    Provider->>Provider: extractContent() or throw with payload snippet
    Provider-->>App: compressed/summarized string
    deactivate Provider

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐇 I found the keys and hopped inside,
Endpoints hummed where models hide,
I tunneled through the proxy mist,
Shaped messages and shortened gist,
Hoppy bytes — compress and glide! ✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 25.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4 passed)

Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled.
Title check | ✅ Passed | The pull request title accurately summarizes the main changes: adding Azure AI Foundry (Anthropic) endpoint support and corporate proxy functionality to the providers.
Linked Issues check | ✅ Passed | Check skipped: no linked issues were found for this pull request.
Out of Scope Changes check | ✅ Passed | Check skipped: no linked issues were found for this pull request.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/config.ts`:
- Around line 90-101: The fallback allowlist omits "azure-openai" so
loadFallbackConfig() filters out FALLBACK_PROVIDERS entries even though
ProviderType and createBaseProvider() support it; update the VALID_PROVIDERS
set/array (used by loadFallbackConfig) to include "azure-openai" so that
specifying FALLBACK_PROVIDERS=azure-openai actually registers the provider, and
ensure any related validation logic around loadFallbackConfig() and
VALID_PROVIDERS acknowledges the new string value.

In `@src/providers/azure-openai.ts`:
- Around line 59-75: The buildRequest method unconditionally appends
"/v1/messages" when this.isFoundry is true which can produce a double-suffixed
URL; update buildRequest to normalize this.endpoint for Foundry by trimming
trailing slashes and checking if it already ends with "v1/messages" (or
"/anthropic/v1/messages") and only append "/v1/messages" when missing, then
construct the url accordingly; use the existing symbols (buildRequest,
this.isFoundry, this.endpoint, url) to locate and change the logic so both forms
(base "/anthropic" or full "/anthropic/v1/messages") are handled without
producing duplicate suffixes.
- Around line 6-23: The proxy handling in buildFetchOptions silently falls back
when requiring node-fetch (ERR_REQUIRE_ESM) and does not support proxy
authentication; update buildFetchOptions to (1) avoid requiring ESM-only
node-fetch—use the global fetch/undici or dynamic import strategy instead so the
configured agent is actually used, (2) replace tunnel-agent with the maintained
tunnel package (or another maintained tunneling library) that accepts proxyAuth,
and (3) extract proxy credentials from proxyUrl (URL.username/URL.password) or
env vars and pass them as proxyAuth into the tunnel/tunneling options so
authenticated corporate proxies work; keep returning { fetchFn, agent } from
buildFetchOptions and document that proxy support is best-effort if a suitable
tunneling lib cannot be loaded.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c3e8465e-b1aa-4a8f-b437-9e76cc06acc3

📥 Commits

Reviewing files that changed from the base of the PR and between 94fc119 and 725c731.

📒 Files selected for processing (5)
  • src/config.ts
  • src/providers/azure-openai.ts
  • src/providers/index.ts
  • src/types.ts
  • src/viewer/index.html

Comment thread src/providers/azure-openai.ts
Comment on lines +6 to +23
function buildFetchOptions(): { fetchFn: typeof fetch; agent?: unknown } {
  const proxyUrl = process.env.HTTPS_PROXY || process.env.https_proxy ||
                   process.env.HTTP_PROXY || process.env.http_proxy;
  if (!proxyUrl) return { fetchFn: fetch };
  try {
    const tunnel = require("tunnel-agent") as {
      httpsOverHttp: (opts: { proxy: { host: string; port: number } }) => unknown;
    };
    const nodeFetch = require("node-fetch") as typeof fetch;
    const parsed = new URL(proxyUrl);
    const agent = tunnel.httpsOverHttp({
      proxy: { host: parsed.hostname, port: parseInt(parsed.port || "3128") },
    });
    return { fetchFn: nodeFetch, agent };
  } catch {
    return { fetchFn: fetch };
  }
}


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check if file exists and examine package.json configuration
cat -n package.json | head -20

Repository: rohitg00/agentmemory

Length of output: 962


🏁 Script executed:

# Examine the actual file to see the complete context
cat -n src/providers/azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 5095


🏁 Script executed:

# Check for any dependencies and versions in package.json
rg '"(node-fetch|tunnel-agent)"' package.json

Repository: rohitg00/agentmemory

Length of output: 46


🏁 Script executed:

# Try different search patterns for dependencies
fd package.json | head -10

Repository: rohitg00/agentmemory

Length of output: 169


🏁 Script executed:

# Look for lock files that show dependencies
fd -e "lock.json" -e "yarn.lock" -e "pnpm-lock.yaml" | head -5

Repository: rohitg00/agentmemory

Length of output: 46


🏁 Script executed:

# Search more broadly in package.json
cat package.json | grep -A 50 "dependencies"

Repository: rohitg00/agentmemory

Length of output: 617


🌐 Web query:

node-fetch 3.x require() CommonJS compatibility ESM

💡 Result:

node-fetch 3.x is an ESM-only module and does not support direct import via require in CommonJS environments. Attempting const fetch = require('node-fetch') will throw ERR_REQUIRE_ESM. Official recommendation: Use node-fetch@2 for CommonJS compatibility, which receives critical bug fixes. Workaround for using 3.x in CommonJS: dynamic async import. const fetch = (...args) => import('node-fetch').then(({default: fetch}) => fetch(...args)); Note: Node.js 18+ has native fetch, making node-fetch unnecessary in modern environments.



🌐 Web query:

tunnel-agent httpsOverHttp HTTPS proxy authentication support

💡 Result:

The npm package tunnel-agent (https://www.npmjs.com/package/tunnel-agent), a standalone HTTP proxy tunneling agent formerly part of the request module, does not explicitly document support for HTTPS over HTTP proxy authentication in its README or official sources. Its README is minimal, lacking usage examples or configuration details for proxy auth. However, practical usage in projects shows it is used for HTTPS over HTTP tunneling, e.g.: const agent = require('tunnel-agent').httpsOverHttp({ proxy: { host: 'proxyhost', port: 8080 } }); No explicit proxyAuth option like 'user:password' is shown in docs for tunnel-agent. In contrast, the tunnel package (https://www.npmjs.com/package/tunnel, koichik/node-tunnel) explicitly supports httpsOverHttp with proxy authentication via proxy: { ..., proxyAuth: 'user:password' } for Basic auth, as documented in its README and GitHub repo. Example: var tunnelingAgent = tunnel.httpsOverHttp({ proxy: { host: proxyHost, port: proxyPort, proxyAuth: 'user:password' } }); This adds Proxy-Authorization header for the CONNECT method to the HTTP proxy. tunnel-agent (last updated 2017, still widely downloaded) likely supports proxy auth implicitly by passing auth via headers or options to Node's http.request for the CONNECT, but lacks explicit documentation. For clear HTTPS over HTTP proxy auth support, use tunnel package instead. Note: tunnel-agent repo has open issues and minimal docs; consider modern alternatives like https-proxy-agent for better maintenance and features.



Proxy support will silently degrade and lacks authentication credentials.

The code attempts require("node-fetch") which fails with ERR_REQUIRE_ESM since node-fetch 3.x is ESM-only; this silently falls back to native fetch without the configured proxy agent, causing requests to bypass the proxy entirely. Additionally, the tunnel-agent library (last updated 2017, minimally documented) receives only host and port—it has no support for proxy authentication credentials, making it unsuitable for authenticated corporate proxies. For environments requiring proxy auth, the fallback to native fetch is unreliable unless CONNECT tunneling works without credentials (rare in corporate settings). Consider using the tunnel package instead, which explicitly supports proxyAuth, or clarify that this proxy handling is best-effort and document the limitations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 6 - 23, The proxy handling in
buildFetchOptions silently falls back when requiring node-fetch
(ERR_REQUIRE_ESM) and does not support proxy authentication; update
buildFetchOptions to (1) avoid requiring ESM-only node-fetch—use the global
fetch/undici or dynamic import strategy instead so the configured agent is
actually used, (2) replace tunnel-agent with the maintained tunnel package (or
another maintained tunneling library) that accepts proxyAuth, and (3) extract
proxy credentials from proxyUrl (URL.username/URL.password) or env vars and pass
them as proxyAuth into the tunnel/tunneling options so authenticated corporate
proxies work; keep returning { fetchFn, agent } from buildFetchOptions and
document that proxy support is best-effort if a suitable tunneling lib cannot be
loaded.

Comment thread src/providers/azure-openai.ts
…S gap

- Extract proxy credentials from URL (username:password) and pass as
  proxyAuth to tunnel-agent so authenticated corporate proxies work
- Normalize Foundry endpoint: skip appending /v1/messages if endpoint
  already ends with that path to avoid double-suffixed URLs
- Add azure-openai to VALID_PROVIDERS so FALLBACK_PROVIDERS=azure-openai
  is not silently filtered out (ProviderType already included it)

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (1)
src/providers/azure-openai.ts (1)

10-30: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Proxy fallback can silently disable proxying in common ESM/HTTPS-proxy setups.

When proxy env vars are set, this path can still bypass the proxy: require("node-fetch") may fail in ESM/node-fetch@3 environments, and httpsOverHttp is hardcoded even if the proxy URL is https://. The catch then silently falls back to direct fetch.

🔧 Suggested fix
-function buildFetchOptions(): { fetchFn: typeof fetch; agent?: unknown } {
+async function buildFetchOptions(): Promise<{ fetchFn: typeof fetch; agent?: unknown }> {
   const proxyUrl = process.env.HTTPS_PROXY || process.env.https_proxy ||
                    process.env.HTTP_PROXY  || process.env.http_proxy;
   if (!proxyUrl) return { fetchFn: fetch };
   try {
     const tunnel = require("tunnel-agent") as {
       httpsOverHttp: (opts: { proxy: { host: string; port: number; proxyAuth?: string } }) => unknown;
+      httpsOverHttps: (opts: { proxy: { host: string; port: number; proxyAuth?: string } }) => unknown;
     };
-    const nodeFetch = require("node-fetch") as typeof fetch;
+    const { default: nodeFetch } = (await import("node-fetch")) as { default: typeof fetch };
     const parsed = new URL(proxyUrl);
     const proxyAuth =
       parsed.username
         ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
         : undefined;
-    const agent = tunnel.httpsOverHttp({
+    const makeTunnel = parsed.protocol === "https:" ? tunnel.httpsOverHttps : tunnel.httpsOverHttp;
+    const agent = makeTunnel({
       proxy: {
         host: parsed.hostname,
         port: parseInt(parsed.port || "3128"),
         ...(proxyAuth ? { proxyAuth } : {}),
       },
     });
     return { fetchFn: nodeFetch, agent };
   } catch {
     return { fetchFn: fetch };
   }
 }
@@
-    const { fetchFn, agent } = buildFetchOptions();
+    const { fetchFn, agent } = await buildFetchOptions();
#!/bin/bash
set -euo pipefail

echo "== dependency versions =="
fd -HI 'package.json' -x sh -c 'echo "--- $1"; jq -r \
  ".dependencies[\"node-fetch\"], .devDependencies[\"node-fetch\"], .dependencies[\"tunnel-agent\"], .devDependencies[\"tunnel-agent\"]" "$1"' sh {}

echo "== lockfile resolved node-fetch versions (if npm lock exists) =="
fd -HI 'package-lock.json' -x jq -r '.. | objects | select(has("node_modules/node-fetch")) | .["node_modules/node-fetch"].version' {}

echo "== proxy implementation usage =="
rg -n -C2 'buildFetchOptions|require\("node-fetch"\)|httpsOverHttp|httpsOverHttps' src/providers/azure-openai.ts

As per coding guidelines, "Use TypeScript and ESM only with "type": "module" in package.json".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 10 - 30, The current proxy setup
can silently bypass proxies because require("node-fetch") may fail in ESM and
httpsOverHttp is hardcoded; update the block that handles proxyUrl to (1)
attempt a dynamic import of node-fetch via await
import("node-fetch").then(m=>m.default||m) if require fails, falling back only
after imports fail, (2) choose the tunnel method based on parsed.protocol (use
tunnel.httpsOverHttps for "https:" and tunnel.httpsOverHttp for "http:"), (3)
preserve proxyAuth and port logic when building agent, and (4) if any
import/agent construction fails, surface a warning/error instead of silently
returning { fetchFn: fetch } so callers know proxying was disabled; refer to the
symbols tunnel, nodeFetch, parsed, proxyAuth, agent, and fetchFn in
azure-openai.ts when making these changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/providers/azure-openai.ts`:
- Around line 113-122: The call to fetch in the private async call(systemPrompt,
userPrompt) method has no timeout, so wrap the outbound request with an
AbortController: create an AbortController, start a setTimeout to call
controller.abort() after a reasonable timeout, pass controller.signal into the
fetch options (alongside agent), and clear the timer after fetch resolves;
ensure the fetch invocation from buildFetchOptions() uses that signal and
surface an appropriate error when aborted so summarize()/compress() callers fail
fast instead of hanging.

---

Duplicate comments:
In `@src/providers/azure-openai.ts`:
- Around line 10-30: The current proxy setup can silently bypass proxies because
require("node-fetch") may fail in ESM and httpsOverHttp is hardcoded; update the
block that handles proxyUrl to (1) attempt a dynamic import of node-fetch via
await import("node-fetch").then(m=>m.default||m) if require fails, falling back
only after imports fail, (2) choose the tunnel method based on parsed.protocol
(use tunnel.httpsOverHttps for "https:" and tunnel.httpsOverHttp for "http:"),
(3) preserve proxyAuth and port logic when building agent, and (4) if any
import/agent construction fails, surface a warning/error instead of silently
returning { fetchFn: fetch } so callers know proxying was disabled; refer to the
symbols tunnel, nodeFetch, parsed, proxyAuth, agent, and fetchFn in
azure-openai.ts when making these changes.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 02067ebc-ee08-4ecf-8672-6cdd27b961fa

📥 Commits

Reviewing files that changed from the base of the PR and between 49d4acf and 093a35c.

📒 Files selected for processing (2)
  • src/config.ts
  • src/providers/azure-openai.ts

Comment on lines +113 to +122
private async call(systemPrompt: string, userPrompt: string): Promise<string> {
  const { url, headers, body } = this.buildRequest(systemPrompt, userPrompt);

  const { fetchFn, agent } = buildFetchOptions();
  const response = await (fetchFn as (url: string, opts: Record<string, unknown>) => Promise<Response>)(url, {
    method: "POST",
    headers,
    body: JSON.stringify(body),
    ...(agent ? { agent } : {}),
  });


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
set -euo pipefail
echo "Node engine constraint:"
jq -r '.engines.node // "engines.node not set"' package.json
echo
echo "Fetch call sites and timeout/signal usage in provider:"
rg -n -C2 'fetch\(|signal:|AbortController|AbortSignal\.timeout' src/providers/azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 153


🏁 Script executed:

wc -l src/providers/azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 98


🏁 Script executed:

cat -n src/providers/azure-openai.ts | head -150

Repository: rohitg00/agentmemory

Length of output: 5506


🏁 Script executed:

rg -n 'AbortController|AbortSignal|timeout|signal' src/providers/ -A 2 -B 2

Repository: rohitg00/agentmemory

Length of output: 46


Add a request timeout for outbound model calls.

This external call has no timeout/signal, so a stalled upstream can hang summarize() or compress() calls indefinitely. Use AbortController with a timeout to prevent this.

⏱️ Timeout pattern
   private async call(systemPrompt: string, userPrompt: string): Promise<string> {
     const { url, headers, body } = this.buildRequest(systemPrompt, userPrompt);

     const { fetchFn, agent } = buildFetchOptions();
+    const controller = new AbortController();
+    const timeout = setTimeout(() => controller.abort(), 30_000);
     const response = await (fetchFn as (url: string, opts: Record<string, unknown>) => Promise<Response>)(url, {
       method: "POST",
       headers,
       body: JSON.stringify(body),
+      signal: controller.signal,
       ...(agent ? { agent } : {}),
     });
+    clearTimeout(timeout);
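On Node 17.3+, the same effect is available without manual timer bookkeeping via AbortSignal.timeout. A sketch under that assumption (`postWithTimeout` is an illustrative name; the 30s value mirrors the suggestion above):

```typescript
// Sketch, assuming Node >= 17.3 where AbortSignal.timeout(ms) exists.
// It returns a signal that aborts automatically after `ms`, replacing
// the AbortController + setTimeout + clearTimeout pattern.
type FetchInit = NonNullable<Parameters<typeof fetch>[1]>;

async function postWithTimeout(url: string, init: FetchInit, ms = 30_000): Promise<Response> {
  return fetch(url, { ...init, signal: AbortSignal.timeout(ms) });
}
```

One caveat: if the provider ends up routing through node-fetch for proxy support, the signal must be one node-fetch accepts, so the explicit AbortController pattern may remain the safer choice there.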
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 113 - 122, The call to fetch in
the private async call(systemPrompt, userPrompt) method has no timeout, so wrap
the outbound request with an AbortController: create an AbortController, start a
setTimeout to call controller.abort() after a reasonable timeout, pass
controller.signal into the fetch options (alongside agent), and clear the timer
after fetch resolves; ensure the fetch invocation from buildFetchOptions() uses
that signal and surface an appropriate error when aborted so
summarize()/compress() callers fail fast instead of hanging.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
src/providers/azure-openai.ts (1)

165-174: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Restore the outbound request timeout.

This fetch call still has no AbortController/timeout, so a stalled Azure endpoint or proxy can hang compress()/summarize() indefinitely. This is the same failure mode already called out in the previous review and it remains unresolved here.

⏱️ Suggested fix
   private async call(systemPrompt: string, userPrompt: string): Promise<string> {
     const { url, headers, body } = this.buildRequest(systemPrompt, userPrompt);

     const { fetchFn, agent } = buildFetchOptions();
+    const controller = new AbortController();
+    const timeout = setTimeout(() => controller.abort(), 30_000);
     const response = await (fetchFn as (url: string, opts: Record<string, unknown>) => Promise<Response>)(url, {
       method: "POST",
       headers,
       body: JSON.stringify(body),
+      signal: controller.signal,
       ...(agent ? { agent } : {}),
     });
+    clearTimeout(timeout);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 165 - 174, The outbound fetch in
AzureOpenAIProvider.call is missing request timeout handling, which can hang
compress()/summarize(); wrap the POST call with an AbortController and a timeout
(e.g., setTimeout that calls controller.abort()) and pass controller.signal into
the fetch options; use the existing buildFetchOptions() result (agent) and
include signal alongside headers/body, and clear the timeout on success to avoid
leaks. Ensure the controller is created inside call() so each request has its
own timeout and propagate any abort errors appropriately.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/providers/azure-openai.ts`:
- Around line 27-36: The proxy agent creation hard-codes port "3128" when
parsed.port is empty; update the logic around parsed and the
tunnel.httpsOverHttp call to compute a defaultPort based on parsed.protocol (use
'443' for 'https:' and '80' otherwise) and use parseInt(parsed.port ||
defaultPort) for the proxy port; adjust the proxy object passed to
tunnel.httpsOverHttp (and any related variables like proxyAuth and parsed) so
the agent uses the protocol-derived default instead of 3128.
- Around line 151-158: The extractContent method currently returns only the
first Anthropic text block when this.isFoundry is true; update the logic in
extractContent (the isFoundry branch that reads the local variable content) to
collect all entries where b.type === "text" and concatenate them (in order) into
a single string (e.g., join their .text values with an empty string or a
newline) instead of using .find(); preserve the existing return type (string |
undefined) by returning undefined when there are no text blocks and the
concatenated string when there are.

---

Duplicate comments:
In `@src/providers/azure-openai.ts`:
- Around line 165-174: The outbound fetch in AzureOpenAIProvider.call is missing
request timeout handling, which can hang compress()/summarize(); wrap the POST
call with an AbortController and a timeout (e.g., setTimeout that calls
controller.abort()) and pass controller.signal into the fetch options; use the
existing buildFetchOptions() result (agent) and include signal alongside
headers/body, and clear the timeout on success to avoid leaks. Ensure the
controller is created inside call() so each request has its own timeout and
propagate any abort errors appropriately.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3cfb9b63-402b-4fad-838f-8d297a3e01bc

📥 Commits

Reviewing files that changed from the base of the PR and between 093a35c and a0329c7.

📒 Files selected for processing (1)
  • src/providers/azure-openai.ts

Comment on lines +27 to +36
```ts
const parsed = new URL(proxyUrl);
const proxyAuth =
  parsed.username
    ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
    : undefined;
const agent = tunnel.httpsOverHttp({
  proxy: {
    host: parsed.hostname,
    port: parseInt(parsed.port || "3128"),
    ...(proxyAuth ? { proxyAuth } : {}),
```
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd -t f azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 94


🏁 Script executed:

cat -n src/providers/azure-openai.ts | sed -n '1,60p'

Repository: rohitg00/agentmemory

Length of output: 2883


🏁 Script executed:

cat -n src/providers/azure-openai.ts | sed -n '18,43p'

Repository: rohitg00/agentmemory

Length of output: 1215


🏁 Script executed:

rg -n "3128|proxyUrl|proxy" src/providers/azure-openai.ts

Repository: rohitg00/agentmemory

Length of output: 939


Derive the proxy port default from the URL scheme instead of hard-coding 3128.

Line 35 defaults to 3128 when the proxy URL omits an explicit port. This breaks compatibility with corporate proxies using standard defaults (port 80 for http://, port 443 for https://). The correct approach is:

```ts
const defaultPort = parsed.protocol === 'https:' ? '443' : '80';
port: parseInt(parsed.port || defaultPort)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 27 - 36, The proxy agent creation
hard-codes port "3128" when parsed.port is empty; update the logic around parsed
and the tunnel.httpsOverHttp call to compute a defaultPort based on
parsed.protocol (use '443' for 'https:' and '80' otherwise) and use
parseInt(parsed.port || defaultPort) for the proxy port; adjust the proxy object
passed to tunnel.httpsOverHttp (and any related variables like proxyAuth and
parsed) so the agent uses the protocol-derived default instead of 3128.
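
As a self-contained sketch of the protocol-derived default (hedged: `resolveProxyPort` is a hypothetical helper name, not the PR's actual code — the real fix lives inline in `buildFetchOptions`):

```typescript
// Derive the default proxy port from the URL scheme instead of
// hard-coding 3128. Illustrative helper, not the PR's exact code.
function resolveProxyPort(proxyUrl: string): number {
  const parsed = new URL(proxyUrl);
  // URL.port is "" when the port is omitted or is the scheme default
  const defaultPort = parsed.protocol === "https:" ? "443" : "80";
  return parseInt(parsed.port || defaultPort, 10);
}
```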

Comment on lines +151 to +158
```ts
private extractContent(data: Record<string, unknown>): string | undefined {
  if (this.isFoundry) {
    const content = data.content as Array<{ type: string; text: string }> | undefined;
    return content?.find((b) => b.type === "text")?.text;
  }
  const choices = data.choices as Array<{ message: { content: string } }> | undefined;
  return choices?.[0]?.message?.content;
}
```
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n src/providers/azure-openai.ts | head -180

Repository: rohitg00/agentmemory

Length of output: 8153


🏁 Script executed:

# Check remaining lines of the file
sed -n '180,$p' src/providers/azure-openai.ts | head -30

Repository: rohitg00/agentmemory

Length of output: 357


🏁 Script executed:

# Search for test files related to this provider
find . -type f -name "*.test.*" -o -name "*.spec.*" | grep -i azure

Repository: rohitg00/agentmemory

Length of output: 46


🏁 Script executed:

# Search for any references to extractContent or response handling
rg "extractContent|content.*text|Anthropic.*response" --type ts -A 2 -B 2

Repository: rohitg00/agentmemory

Length of output: 23026


🏁 Script executed:

cat -n src/providers/anthropic.ts

Repository: rohitg00/agentmemory

Length of output: 2223


🏁 Script executed:

# Search for any Azure AI Foundry or Anthropic API response documentation in comments
rg "content.*block|text.*block|anthropic.*response" -i --type ts -B 2 -A 2

Repository: rohitg00/agentmemory

Length of output: 2801


🏁 Script executed:

# Check if there are any type definitions or SDK imports that show the response shape
rg "type.*content.*array|Array.*type.*text" --type ts

Repository: rohitg00/agentmemory

Length of output: 596


🏁 Script executed:

# Let's check if package.json has anthropic SDK version
cat package.json | grep -A 5 -B 5 anthropic

Repository: rohitg00/agentmemory

Length of output: 372


🏁 Script executed:

# Search for any test or mock responses from Azure Foundry/Anthropic
rg "foundry|isFoundry" --type ts -B 5 -A 10

Repository: rohitg00/agentmemory

Length of output: 3120


🏁 Script executed:

# Check if there's any documentation or comments about response format
rg "Messages API|content\[" --type ts -i

Repository: rohitg00/agentmemory

Length of output: 2307


🌐 Web query:

Anthropic Messages API response content multiple text blocks

💡 Result:

In Anthropic's Messages API, the response content is an array of content blocks (Vec). Each block has a type, such as "text" for generated text. Currently, responses primarily contain "text" blocks, but can include multiple blocks like "tool_use" when tools are involved. For example: [{"type": "text", "text": "Hi, I'm Claude."}]. With structured outputs, the JSON is returned in response.content[0].text as a single text block. To extract text, iterate over the array, filter by type "text", and concatenate the texts.


🏁 Script executed:

# Check the return type signature and how it's used
rg "extractContent" --type ts -B 2 -A 5

Repository: rohitg00/agentmemory

Length of output: 1204


Join all text blocks from Anthropic responses instead of returning only the first.

The Anthropic Messages API response can contain multiple content blocks with type === "text", especially when combined with tool use or other features. The current implementation using .find() returns only the first text block, discarding subsequent ones. Per Anthropic API documentation, all text blocks should be concatenated.

🔧 Suggested fix
```diff
  private extractContent(data: Record<string, unknown>): string | undefined {
    if (this.isFoundry) {
      const content = data.content as Array<{ type: string; text: string }> | undefined;
-     return content?.find((b) => b.type === "text")?.text;
+     return content
+       ?.filter((b) => b.type === "text")
+       .map((b) => b.text)
+       .join("");
    }
    const choices = data.choices as Array<{ message: { content: string } }> | undefined;
    return choices?.[0]?.message?.content;
  }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 151 - 158, The extractContent
method currently returns only the first Anthropic text block when this.isFoundry
is true; update the logic in extractContent (the isFoundry branch that reads the
local variable content) to collect all entries where b.type === "text" and
concatenate them (in order) into a single string (e.g., join their .text values
with an empty string or a newline) instead of using .find(); preserve the
existing return type (string | undefined) by returning undefined when there are
no text blocks and the concatenated string when there are.

@Qodo-Free-For-OSS

Hi! The new Azure provider unconditionally references `tunnel-agent` and `node-fetch`, but these packages are not declared in the root `package.json`, so the build and/or runtime will fail when this code path is exercised (and the advertised corporate proxy support cannot work reliably).

Severity: action required | Category: reliability

How to fix: Add proxy deps to package.json

Agent prompt to fix - you can give this to your LLM of choice:

Issue description

The Azure provider’s proxy implementation requires tunnel-agent and node-fetch, but they are not declared in the root package.json. This can break builds (dependency resolution) and/or cause runtime failures/silent proxy disablement.

Issue Context

The implementation uses createRequire() to load CJS deps at runtime when HTTP_PROXY/HTTPS_PROXY is present.

Fix Focus Areas

  • package.json[56-78]
  • src/providers/azure-openai.ts[18-42]

Suggested fix

  • Add tunnel-agent and node-fetch to dependencies (or optionalDependencies if you want proxy support to be optional), and ensure the chosen node-fetch major is compatible with require().
  • If using optional deps, consider emitting a clear warning when proxy env vars are set but deps are missing (see separate finding).

Found by Qodo. Free code review for open-source maintainers.

…encies

Both packages were used at runtime for corporate proxy support but not
declared in package.json — silent transitive dep that breaks on updates.
Added to optionalDependencies (proxy is optional; try/catch fallback handles
missing deps). Updated fallback to emit a clear warning when proxy env vars
are set but deps cannot be loaded.
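
The optional-dependency pattern the commit describes could be sketched like this — a hedged sketch only; the helper name and warning text are illustrative, not the PR's exact code:

```typescript
import { createRequire } from "node:module";

// Resolve optional CJS deps at runtime relative to the working directory;
// a missing package warns and degrades gracefully instead of crashing.
const optionalRequire = createRequire(process.cwd() + "/index.js");

function loadOptional<T>(name: string): T | undefined {
  try {
    return optionalRequire(name) as T;
  } catch {
    process.stderr.write(
      `[agentmemory] optional dependency "${name}" is not installed; ` +
        "related features will be disabled\n",
    );
    return undefined;
  }
}
```

With this shape, proxy support would call `loadOptional("tunnel-agent")` and `loadOptional("node-fetch")` and fall back to global `fetch` when either returns `undefined`.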

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/providers/azure-openai.ts`:
- Around line 22-47: The current try block around require("tunnel-agent") also
wraps URL parsing, so a malformed proxyUrl will be swallowed and trigger the
optional-deps fallback; fix by validating/parsing proxyUrl (new URL(proxyUrl)
and computing proxyAuth/parsed) before the try and let a thrown TypeError
propagate (or handle it explicitly) so invalid proxy values do not fall back to
global fetch; then restrict the try block to only the dynamic requires/agent
creation (tunnel.httpsOverHttp and node-fetch) and keep the existing fallback
message for missing optional deps.
- Around line 104-106: The isFoundry getter currently checks this.endpoint
string for "/anthropic" which can match hostnames; update the getter to parse
the endpoint as a URL and test the pathname (e.g., new
URL(this.endpoint).pathname.includes("/anthropic")) so only path segments are
considered; handle invalid/relative URLs by falling back to the original string
check to avoid throwing. Apply this change to the private get isFoundry()
accessor referencing the endpoint property.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: abd30334-d4d3-4fb3-8fa1-a730dab6feba

📥 Commits

Reviewing files that changed from the base of the PR and between a0329c7 and 1079b69.

📒 Files selected for processing (2)
  • package.json
  • src/providers/azure-openai.ts
✅ Files skipped from review due to trivial changes (1)
  • package.json

Comment on lines +22 to +47
```ts
try {
  const tunnel = require("tunnel-agent") as {
    httpsOverHttp: (opts: { proxy: { host: string; port: number; proxyAuth?: string } }) => unknown;
  };
  const nodeFetch = require("node-fetch") as typeof fetch;
  const parsed = new URL(proxyUrl);
  const proxyAuth =
    parsed.username
      ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
      : undefined;
  const agent = tunnel.httpsOverHttp({
    proxy: {
      host: parsed.hostname,
      port: parseInt(parsed.port || "3128"),
      ...(proxyAuth ? { proxyAuth } : {}),
    },
  });
  return { fetchFn: nodeFetch, agent };
} catch {
  process.stderr.write(
    "[agentmemory] azure-openai: proxy env vars detected but tunnel-agent/node-fetch " +
      "could not be loaded — falling back to global fetch (proxy will be bypassed). " +
      "Install optional deps to enable proxy support: npm install tunnel-agent node-fetch\n",
  );
  return { fetchFn: fetch };
}
```
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Don’t silently bypass proxy when proxy URL itself is invalid.

Because new URL(proxyUrl) is inside the broad try, a malformed proxy value also falls into the optional-deps fallback path. That can unexpectedly bypass enforced corporate egress policy instead of failing fast.

Suggested fix
```diff
 function buildFetchOptions(): { fetchFn: typeof fetch; agent?: unknown } {
   const proxyUrl = process.env.HTTPS_PROXY || process.env.https_proxy ||
                    process.env.HTTP_PROXY  || process.env.http_proxy;
   if (!proxyUrl) return { fetchFn: fetch };
+  const parsed = new URL(proxyUrl);

   try {
     const tunnel = require("tunnel-agent") as {
       httpsOverHttp: (opts: { proxy: { host: string; port: number; proxyAuth?: string } }) => unknown;
     };
     const nodeFetch = require("node-fetch") as typeof fetch;
-    const parsed = new URL(proxyUrl);
     const proxyAuth =
       parsed.username
         ? `${decodeURIComponent(parsed.username)}:${decodeURIComponent(parsed.password)}`
         : undefined;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 22 - 47, The current try block
around require("tunnel-agent") also wraps URL parsing, so a malformed proxyUrl
will be swallowed and trigger the optional-deps fallback; fix by
validating/parsing proxyUrl (new URL(proxyUrl) and computing proxyAuth/parsed)
before the try and let a thrown TypeError propagate (or handle it explicitly) so
invalid proxy values do not fall back to global fetch; then restrict the try
block to only the dynamic requires/agent creation (tunnel.httpsOverHttp and
node-fetch) and keep the existing fallback message for missing optional deps.

Comment on lines +104 to +106
```ts
private get isFoundry(): boolean {
  return this.endpoint.includes("/anthropic");
}
```
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Use URL pathname (not full URL string) for Foundry detection.

Line 105 can misclassify non-Foundry endpoints because it checks "/anthropic" against the entire URL, including hostname/authority. That can route requests to the wrong API shape and headers.

Suggested fix
```diff
   private get isFoundry(): boolean {
-    return this.endpoint.includes("/anthropic");
+    const normalizedPath = new URL(this.endpoint).pathname.replace(/\/+$/, "");
+    return normalizedPath === "/anthropic" || normalizedPath.startsWith("/anthropic/");
   }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/providers/azure-openai.ts` around lines 104 - 106, The isFoundry getter
currently checks this.endpoint string for "/anthropic" which can match
hostnames; update the getter to parse the endpoint as a URL and test the
pathname (e.g., new URL(this.endpoint).pathname.includes("/anthropic")) so only
path segments are considered; handle invalid/relative URLs by falling back to
the original string check to avoid throwing. Apply this change to the private
get isFoundry() accessor referencing the endpoint property.

@rohitg00
Owner

rohitg00 commented May 8, 2026

Reviewed — really thorough work, especially the Azure AI Foundry / Anthropic-via-Azure detection. The endpoint-path-as-routing-signal (/anthropic → Anthropic Messages API format with x-api-key + anthropic-version headers) is the right shape, and the corporate-proxy tunneling via tunnel-agent + node-fetch is exactly the pattern Node 18+ users hit when fetch ignores HTTPS_PROXY.

What I like:

  • node-fetch and tunnel-agent are in optionalDependencies, with a runtime check that prints a clear "install these to enable proxy support" hint. No bloat for users who don't need it.
  • README + env block + CHANGELOG entries all updated.
  • Handles both HTTPS_PROXY and https_proxy casing.
  • Carries the viewer layout fixes from fix(viewer): prevent tab bar crush and fix graph canvas overflow #218 — please confirm those are bundled intentionally (looks like they came along on the rebase).

Two coordination notes:

1. This supersedes #219.

You opened both. #220 has everything #219 has plus AI Foundry + proxy. I've asked over there which path you prefer; if you want to consolidate into this one, please close #219 with a quick comment and we'll review/merge #220 as the single Azure provider PR. Either way the Azure work lands attributed to you.

2. Endpoint-trailing-slash + path-detection ordering.

Just checked: `endpoint.replace(/\/$/, "")` happens before the `.includes("/anthropic")` check, which is fine. But if a user sets `AZURE_OPENAI_ENDPOINT=https://foo.services.ai.azure.com/anthropic/` (trailing slash), the strip is correct. If they set `.../anthropic/something`, the `includes` still matches. Worth either documenting "endpoint must end at `/anthropic` for AI Foundry detection" in the env block, or tightening the check to `endsWith("/anthropic")`. Minor.
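
The tightened check could look like the sketch below — `isFoundryEndpoint` is an illustrative free function standing in for the provider's private getter, under the assumption that the one-trailing-slash strip stays as-is:

```typescript
// Strip one trailing slash, then require the endpoint to end at /anthropic.
// Illustrative sketch of the suggestion above, not the PR's actual code.
function isFoundryEndpoint(endpoint: string): boolean {
  const normalized = endpoint.replace(/\/$/, "");
  return normalized.endsWith("/anthropic");
}
```

Unlike `.includes()`, this rejects `.../anthropic/something` and cannot match `anthropic` appearing only in a hostname.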

Approving the provider once the consolidation question on #219 is resolved.
