
feat: Add ManagedAgentGraph support #111

Open
jsonbailey wants to merge 3 commits into main from jb/aic-1664/managed-agent-graph

Conversation

@jsonbailey
Contributor

@jsonbailey jsonbailey commented Mar 25, 2026

feat: Add OpenAIAgentGraphRunner support
feat: Add LangGraphAgentGraphRunner support


Note

Medium Risk
Introduces new agent-graph execution paths and provider integrations (LangGraph/OpenAI Agents) with dynamic tool wiring and metric tracking, which could affect runtime behavior across providers. Risk is moderate due to new async execution/exception handling and reliance on optional external dependencies.

Overview
Adds first-class managed agent graph execution to the server AI SDK via ManagedAgentGraph and LDAIClient.create_agent_graph(), including usage tracking and a wrapper that delegates run() to a provider-specific runner.

Extends the OpenAI and LangChain provider packages to support agent graphs: OpenAIRunnerFactory.create_agent_graph() returns a new OpenAIAgentGraphRunner (OpenAI Agents SDK with handoffs/tools + token/latency tracking), and LangChainRunnerFactory.create_agent_graph() returns a new LangGraphAgentGraphRunner (LangGraph StateGraph compilation/invocation with tool binding + metrics/tool-call tracking). Exports and tests are added for both runners and the new managed API, including graceful failure behavior when optional dependencies aren’t installed.

Written by Cursor Bugbot for commit b32f562. This will update automatically on new commits.

@jsonbailey jsonbailey changed the title feat: Add ManagedAgentGraph support feat: Add ManagedAgentGraph support (PR-6) Mar 25, 2026
@jsonbailey jsonbailey marked this pull request as ready for review March 25, 2026 22:05
@jsonbailey jsonbailey requested a review from a team as a code owner March 25, 2026 22:05
@jsonbailey jsonbailey force-pushed the jb/aic-1664/managed-agent-graph branch from 226dfdf to e57a147 on March 25, 2026 at 22:22
@jsonbailey jsonbailey changed the base branch from jb/aic-1664/runner-abcs to jb/aic-1664/graph-tracking-improvements March 25, 2026 22:23
@jsonbailey jsonbailey force-pushed the jb/aic-1664/graph-tracking-improvements branch from 886e3b7 to a183f12 on March 26, 2026 at 17:49
Base automatically changed from jb/aic-1664/graph-tracking-improvements to main March 26, 2026 18:51
@jsonbailey jsonbailey changed the title feat: Add ManagedAgentGraph support (PR-6) feat: Add ManagedAgentGraph support Mar 26, 2026
jsonbailey and others added 2 commits March 26, 2026 14:26
…aphRunner

Implements PR 5 — ManagedAgentGraph + create_agent_graph():

ldai:
- managed_agent_graph.py: ManagedAgentGraph wrapper holding AgentGraphRunner
  + AIGraphTracker; exposes run(), get_agent_graph_runner(), get_tracker()
- LDAIClient.create_agent_graph(key, context, tools): resolves graph via
  agent_graph(), delegates to RunnerFactory, returns ManagedAgentGraph
- Exports ManagedAgentGraph from top-level ldai package

ldai_openai:
- OpenAIAgentGraphRunner(AgentGraphRunner): builds agents via reverse_traverse
  using the openai-agents SDK; auto-tracks path, tool calls, handoffs,
  latency, invocation success/failure
- OpenAIRunnerFactory.create_agent_graph(graph_def, tools) -> OpenAIAgentGraphRunner

ldai_langchain:
- LangGraphAgentGraphRunner(AgentGraphRunner): builds a LangGraph StateGraph
  via traverse(); auto-tracks latency and invocation success/failure
- LangChainRunnerFactory.create_agent_graph(graph_def, tools) -> LangGraphAgentGraphRunner

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…n rollup

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@jsonbailey jsonbailey force-pushed the jb/aic-1664/managed-agent-graph branch from e57a147 to be26d6d on March 26, 2026 at 19:39
def invoke(state: WorkflowState) -> WorkflowState:
    exec_path.append(node_key)
    if not model:
        return state

No-op node duplicates messages via operator.add reducer

Medium Severity

When a node has no model, invoke returns the full state dict as-is. Because the messages field uses Annotated[List[AnyMessage], operator.add] as its reducer, LangGraph merges the returned value by concatenating: current_messages + returned_messages. Since returned_messages IS current_messages, this doubles every message in the state. Returning {'messages': []} would be the correct no-op.
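The doubling is a direct consequence of how an operator.add reducer merges a node's return value into channel state. A minimal sketch of the merge semantics (plain Python, not LangGraph itself), showing why echoing the state back duplicates messages while returning an empty list is a true no-op:

```python
import operator

# LangGraph applies the channel's reducer as: new = reducer(current, returned).
# For Annotated[List[AnyMessage], operator.add] that is list concatenation.
def merge(current_messages, returned_messages):
    return operator.add(current_messages, returned_messages)

current = ["msg1", "msg2"]

# Buggy no-op: the node returns the full state, so the reducer
# concatenates the message list with itself.
buggy = merge(current, current)

# Correct no-op: return no new messages, so concatenation changes nothing.
fixed = merge(current, [])
```

With the fix, a model-less node returns `{'messages': []}` (plus any other unchanged channels) rather than the whole state dict.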


    else:
        print(f"  {name}: {repr(val)}")
except Exception as e:
    print(f"  {name}: (error reading: {e})")

Debugging function with print statements left in code

Low Severity

_log_run_result_shape is a debugging helper that uses print() to dump object attributes. It's never called — the only reference is a commented-out line (# _log_run_result_shape(result)). This is dead debugging code that likely wasn't intended for production.
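If the shape-dumping behavior is worth keeping rather than deleting, a common alternative is to route it through the logging module at DEBUG level so it is silent in production. A sketch of that replacement (the helper name mirrors the one in the review; the logger name is an assumption):

```python
import logging

logger = logging.getLogger("ldai.debug")


def log_result_shape(result):
    """Dump public attributes of `result` at DEBUG level instead of print()."""
    for name in dir(result):
        if name.startswith("_"):
            continue
        try:
            logger.debug("%s: %r", name, getattr(result, name))
        except Exception as exc:
            # Attribute access can raise (e.g. on properties); log and continue.
            logger.debug("%s: (error reading: %s)", name, exc)
```

Unlike the print()-based helper, this emits nothing unless DEBUG logging is enabled, so it can stay in the codebase without leaking output.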


Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

@cursor cursor bot left a comment


Cursor Bugbot has reviewed your changes and found 1 potential issue.

There are 3 total unresolved issues (including 2 from previous reviews).




model = None
if node_config.model:
    lc_model = init_chat_model(model=node_config.model.name)


LangGraph runner ignores provider mapping and model parameters

Medium Severity

init_chat_model(model=node_config.model.name) passes only the model name, ignoring the provider and all model parameters (temperature, max_tokens, etc.). The existing create_langchain_model helper in the same package already handles provider mapping (e.g., Gemini → google-genai, Bedrock handling) and passes **parameters. This call bypasses all of that, so non-OpenAI providers will fail or be misrouted, and configured model parameters are silently dropped.
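The fix the review points at is to forward the provider (mapped to LangChain's provider id) and the model parameters instead of just the name, either by reusing create_langchain_model or by building full kwargs for init_chat_model. A sketch of the kwargs-building half; the provider mapping below is illustrative and only approximates what the package's helper actually does:

```python
# Illustrative mapping from config provider names to LangChain provider ids.
# The real mapping lives in the package's create_langchain_model helper.
PROVIDER_MAP = {
    "gemini": "google_genai",
    "bedrock": "bedrock_converse",
}


def build_init_chat_model_kwargs(name, provider=None, parameters=None):
    """Build kwargs that forward provider and parameters to init_chat_model,
    instead of dropping them as init_chat_model(model=name) does."""
    kwargs = {"model": name, **(parameters or {})}
    if provider:
        kwargs["model_provider"] = PROVIDER_MAP.get(provider, provider)
    return kwargs


kwargs = build_init_chat_model_kwargs(
    "gemini-1.5-pro", provider="gemini", parameters={"temperature": 0.2}
)
```

These kwargs would then be splatted into `init_chat_model(**kwargs)`, so temperature, max_tokens, and the mapped provider all reach the model instead of being silently dropped.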

