Conversation
packages/ai-providers/server-ai-openai/src/ldai_openai/openai_agent_graph_runner.py
packages/ai-providers/server-ai-openai/src/ldai_openai/openai_agent_graph_runner.py
Force-pushed from 226dfdf to e57a147
Force-pushed from 886e3b7 to a183f12
…aphRunner

Implements PR 5 — ManagedAgentGraph + create_agent_graph():

ldai:
- managed_agent_graph.py: ManagedAgentGraph wrapper holding AgentGraphRunner + AIGraphTracker; exposes run(), get_agent_graph_runner(), get_tracker()
- LDAIClient.create_agent_graph(key, context, tools): resolves graph via agent_graph(), delegates to RunnerFactory, returns ManagedAgentGraph
- Exports ManagedAgentGraph from top-level ldai package

ldai_openai:
- OpenAIAgentGraphRunner(AgentGraphRunner): builds agents via reverse_traverse using the openai-agents SDK; auto-tracks path, tool calls, handoffs, latency, invocation success/failure
- OpenAIRunnerFactory.create_agent_graph(graph_def, tools) -> OpenAIAgentGraphRunner

ldai_langchain:
- LangGraphAgentGraphRunner(AgentGraphRunner): builds a LangGraph StateGraph via traverse(); auto-tracks latency and invocation success/failure
- LangChainRunnerFactory.create_agent_graph(graph_def, tools) -> LangGraphAgentGraphRunner

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…n rollup

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Force-pushed from e57a147 to be26d6d
packages/ai-providers/server-ai-langchain/src/ldai_langchain/langgraph_agent_graph_runner.py
```python
def invoke(state: WorkflowState) -> WorkflowState:
    exec_path.append(node_key)
    if not model:
        return state
```
No-op node duplicates messages via operator.add reducer
Medium Severity
When a node has no model, invoke returns the full state dict as-is. Because the messages field uses Annotated[List[AnyMessage], operator.add] as its reducer, LangGraph merges the returned value by concatenating: current_messages + returned_messages. Since returned_messages IS current_messages, this doubles every message in the state. Returning {'messages': []} would be the correct no-op.
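The reducer behavior behind this finding can be simulated outside LangGraph with a standalone sketch. A state field annotated with `operator.add` is merged by concatenating the current value with whatever the node returns, so `merge_messages` below (a hypothetical stand-in for LangGraph's internal merge, not the SDK's code) shows why returning the full state from a no-op node doubles every message:

```python
import operator
from typing import List

# LangGraph-style reducer: a field declared as
# Annotated[List[AnyMessage], operator.add] is merged by
# new = operator.add(current, returned).
def merge_messages(current: List[str], returned: List[str]) -> List[str]:
    return operator.add(current, returned)

current = ["user: hi", "assistant: hello"]

# Buggy no-op: the node returned the full state, so every message doubles.
buggy = merge_messages(current, current)
# Correct no-op: the node returned {'messages': []}, so state is unchanged.
correct = merge_messages(current, [])

print(buggy)    # ['user: hi', 'assistant: hello', 'user: hi', 'assistant: hello']
print(correct)  # ['user: hi', 'assistant: hello']
```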
```python
else:
    print(f"  {name}: {repr(val)}")
except Exception as e:
    print(f"  {name}: (error reading: {e})")
```
Debugging function with print statements left in code
Low Severity
_log_run_result_shape is a debugging helper that uses print() to dump object attributes. It's never called — the only reference is a commented-out line (# _log_run_result_shape(result)). This is dead debugging code that likely wasn't intended for production.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 3 total unresolved issues (including 2 from previous reviews).
```python
model = None
if node_config.model:
    lc_model = init_chat_model(model=node_config.model.name)
```
LangGraph runner ignores provider mapping and model parameters
Medium Severity
init_chat_model(model=node_config.model.name) passes only the model name, ignoring the provider and all model parameters (temperature, max_tokens, etc.). The existing create_langchain_model helper in the same package already handles provider mapping (e.g., Gemini → google-genai, Bedrock handling) and passes **parameters. This call bypasses all of that, so non-OpenAI providers will fail or be misrouted, and configured model parameters are silently dropped.
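The fix the reviewer is pointing at can be sketched as a small kwargs builder that preserves the provider mapping and model parameters before calling `init_chat_model` (which does accept a `model_provider` keyword and extra kwargs). The mapping table and `build_init_kwargs` below are illustrative stand-ins, not the package's real `create_langchain_model` helper:

```python
# Hypothetical provider mapping, mirroring what the reviewer says the
# existing create_langchain_model helper already does.
PROVIDER_MAP = {
    "gemini": "google_genai",        # Gemini routes to the google-genai integration
    "bedrock": "bedrock_converse",   # Bedrock needs its own handling
}

def build_init_kwargs(name: str, provider: str, parameters: dict) -> dict:
    """Assemble kwargs for init_chat_model, keeping the provider and
    model parameters instead of silently dropping them."""
    return {
        "model": name,
        "model_provider": PROVIDER_MAP.get(provider, provider),
        **parameters,  # temperature, max_tokens, etc. are preserved
    }

kwargs = build_init_kwargs("gemini-1.5-pro", "gemini", {"temperature": 0.2})
# The runner would then call init_chat_model(**kwargs) rather than
# init_chat_model(model=node_config.model.name).
```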


feat: Add OpenAIAgentGraphRunner support
feat: Add LangGraphAgentGraphRunner support
Note
Medium Risk
Introduces new agent-graph execution paths and provider integrations (LangGraph/OpenAI Agents) with dynamic tool wiring and metric tracking, which could affect runtime behavior across providers. Risk is moderate due to new async execution/exception handling and reliance on optional external dependencies.
Overview
Adds first-class managed agent graph execution to the server AI SDK via ManagedAgentGraph and LDAIClient.create_agent_graph(), including usage tracking and a wrapper that delegates run() to a provider-specific runner.

Extends the OpenAI and LangChain provider packages to support agent graphs: OpenAIRunnerFactory.create_agent_graph() returns a new OpenAIAgentGraphRunner (OpenAI Agents SDK with handoffs/tools + token/latency tracking), and LangChainRunnerFactory.create_agent_graph() returns a new LangGraphAgentGraphRunner (LangGraph StateGraph compilation/invocation with tool binding + metrics/tool-call tracking). Exports and tests are added for both runners and the new managed API, including graceful failure behavior when optional dependencies aren't installed.

Written by Cursor Bugbot for commit b32f562. This will update automatically on new commits.