feat: Introduce ManagedAgent and AgentRunner implementations #110
jsonbailey wants to merge 3 commits into main
Conversation
Force-pushed from c3f2da2 to bc8b945
packages/ai-providers/server-ai-langchain/src/ldai_langchain/langchain_runner_factory.py
    output=content,
    raw=raw_response,
    metrics=metrics,
    )
LangChain agent runner doesn't aggregate multi-turn token usage
Medium Severity
Unlike OpenAIAgentRunner, which accumulates total_input and total_output across all model invocations in the agentic loop, LangChainAgentRunner only extracts metrics from the final response. In multi-turn agent conversations with tool calls, token usage from intermediate model invocations is silently lost, leading to under-reported usage metrics.
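To make the concern concrete, here is a minimal sketch of an agentic loop that accumulates usage across every model invocation rather than reading only the final response. All names here (`Usage`, `run_agent_loop`, the response dict shape) are illustrative assumptions, not the SDK's actual API:

```python
from dataclasses import dataclass

@dataclass
class Usage:
    input_tokens: int = 0
    output_tokens: int = 0

def run_agent_loop(invoke, messages, max_turns=5):
    """Sum token usage over the whole loop, not just the last response."""
    total = Usage()
    response = {}
    for _ in range(max_turns):
        response = invoke(messages)
        usage = response.get("usage", {})
        total.input_tokens += usage.get("input", 0)
        total.output_tokens += usage.get("output", 0)
        if not response.get("tool_calls"):
            break  # final answer; no further tool round-trips
        # A real runner would execute the requested tools and append results.
        messages.append({"role": "tool", "content": "<tool result>"})
    return response, total
```

If only the final response were inspected, the tokens spent on intermediate tool-calling turns would vanish from the reported metrics.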
Not sure if that's true but seems worth investigating.
    "parameters": td.get("parameters", {"type": "object", "properties": {}}),
    },
    })
    return tools
Identical _build_openai_tools duplicated across two runners
Low Severity
_build_openai_tools is identically implemented in both LangChainAgentRunner and OpenAIAgentRunner. This duplication means any future fix or format change needs to be applied in both places. This logic could live in a shared utility (e.g., in the SDK core or a shared helper) since both runners convert the same LD tool definition format to OpenAI function-calling format.
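A possible shape for that shared helper, hypothetically named `build_openai_tools`. The `parameters` default is taken from the snippet above; the `name` and `description` fields are assumptions about the LD tool definition format:

```python
def build_openai_tools(tool_definitions):
    """Convert LD-style tool definitions into OpenAI function-calling format."""
    tools = []
    for td in tool_definitions:
        tools.append({
            "type": "function",
            "function": {
                "name": td.get("name", ""),
                "description": td.get("description", ""),
                # Default mirrors the snippet above: an empty object schema.
                "parameters": td.get("parameters", {"type": "object", "properties": {}}),
            },
        })
    return tools
```

Both runners could then import this one function, so any future format change lands in a single place.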
Additional Locations (1)
keelerm84
left a comment
Looks like bugbot has some good feedback on this one.
Force-pushed from 886e3b7 to a183f12
- feat: Add OpenAIAgentRunner with agentic tool-calling loop
- feat: Add LangChainAgentRunner with agentic tool-calling loop
- feat: Add OpenAIRunnerFactory.create_agent(config, tools) -> OpenAIAgentRunner
- feat: Add LangChainRunnerFactory.create_agent(config, tools) -> LangChainAgentRunner
- feat: Add ManagedAgent wrapper holding AgentRunner and LDAIConfigTracker
- feat: Add LDAIClient.create_agent() returning ManagedAgent
- …ider helper tests
- feat: add TestGetAIUsageFromResponse and TestGetToolCallsFromResponse test coverage for LangChainHelper
- feat: add TestGetAIUsageFromResponse test coverage for OpenAIHelper
- fix: update ManagedAgent.invoke to use track_metrics_of_async
Force-pushed from bc8b945 to c1b87a6
packages/ai-providers/server-ai-langchain/src/ldai_langchain/langchain_runner_factory.py
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
There are 3 total unresolved issues (including 2 from previous reviews).
    model_name = model_dict.get('name', '')
    parameters = dict(model_dict.get('parameters') or {})
    tool_definitions = parameters.pop('tools', []) or []
    instructions = config.instructions or '' if hasattr(config, 'instructions') else ''
Operator precedence on the instructions hasattr guard
High Severity (appears to be a false positive)
Bugbot reports that Python's ternary if...else binds more tightly than or, so that config.instructions would be accessed before the hasattr guard runs. This has the precedence backwards: conditional expressions bind less tightly than or (only lambda and := are lower), so the line parses as (config.instructions or '') if hasattr(config, 'instructions') else ''. The hasattr condition is evaluated first, and no AttributeError occurs when the config lacks an instructions attribute. Explicit parentheses, or getattr(config, 'instructions', '') or '', would still make the intent clearer.
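A quick runnable check of how that line actually parses (the `NoInstructions` config stub is invented for the demonstration):

```python
class NoInstructions:
    """A config stub with no `instructions` attribute."""

# Conditional expressions bind *less* tightly than `or`, so this parses as
# (cfg.instructions or '') if hasattr(cfg, 'instructions') else ''
# and the hasattr condition runs first -- no AttributeError is raised.
cfg = NoInstructions()
instructions = cfg.instructions or '' if hasattr(cfg, 'instructions') else ''
assert instructions == ''

# The grouping is visible with constants: `1 or 2 if False else 3`
# is ((1 or 2) if False else 3), which yields 3 rather than 1.
assert (1 or 2 if False else 3) == 3
```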


feat: Add OpenAIAgentRunner with agentic tool-calling loop
feat: Add LangChainAgentRunner with agentic tool-calling loop
feat: Add OpenAIRunnerFactory.create_agent(config, tools) -> OpenAIAgentRunner
feat: Add LangChainRunnerFactory.create_agent(config, tools) -> LangChainAgentRunner
feat: Add ManagedAgent wrapper holding AgentRunner and LDAIConfigTracker
feat: Add LDAIClient.create_agent() returning ManagedAgent
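How the pieces listed above might fit together, as a heavily simplified sketch. The class names come from the commit titles, but every field and signature below is an assumption; the real invoke is async and uses track_metrics_of_async, while this sketch substitutes a synchronous stand-in:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ManagedAgent:
    runner: Any   # an AgentRunner (OpenAI- or LangChain-backed)
    tracker: Any  # an LDAIConfigTracker recording metrics

    def invoke(self, prompt: str) -> Any:
        # Delegate to the runner while the tracker observes the call.
        return self.tracker.track(lambda: self.runner.run(prompt))
```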
Requirements
Related issues
Provide links to any issues in this repository or elsewhere relating to this pull request.
Describe the solution you've provided
Provide a clear and concise description of what you expect to happen.
Describe alternatives you've considered
Provide a clear and concise description of any alternative solutions or features you've considered.
Additional context
Add any other context about the pull request here.
Note
Medium Risk
Adds new agent execution paths that invoke external tools and loop until completion; failures or mis-specified tool schemas could affect runtime behavior and token/usage reporting.
Overview
Introduces first-class agent support across the SDK:
- LDAIClient.create_agent() now returns a new ManagedAgent wrapper that runs an AgentRunner with automatic metric tracking.
- Adds provider-specific tool-calling runners (OpenAIAgentRunner, LangChainAgentRunner) plus create_agent() factory methods to build them from agent config, instructions, and tools definitions/registry.
- Token-usage extraction in both providers now returns None when all counts are zero, with expanded tests covering usage precedence, tool-call extraction, agent loops, and error handling.

Written by Cursor Bugbot for commit 90b548f. This will update automatically on new commits.
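The zero-usage behavior described in the overview can be pictured with a tiny hypothetical helper; the real extraction functions live in the provider helpers, and the names and dict shape here are invented:

```python
def usage_or_none(input_tokens, output_tokens, total_tokens):
    """Return None instead of an all-zero usage record, per the new behavior."""
    if not (input_tokens or output_tokens or total_tokens):
        return None
    return {"input": input_tokens, "output": output_tokens, "total": total_tokens}
```

Returning None lets callers skip reporting usage entirely when the provider gave no counts, rather than recording a misleading zero.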
OpenAIAgentRunner,LangChainAgentRunner) pluscreate_agent()factory methods to build them from agent config, instructions, andtoolsdefinitions/registry. Token-usage extraction in both providers now returnsNonewhen all counts are zero, with expanded tests covering usage precedence, tool-call extraction, agent loops, and error handling.Written by Cursor Bugbot for commit 90b548f. This will update automatically on new commits. Configure here.