I would like to implement tracing for an agent and am attempting to adhere to this OpenInference spec: https://github.com/Arize-ai/openinference/blob/main/spec/README.md
My assumption is that callbacks are the right way to do this: https://docs.activeagents.ai/docs/active-agent/callbacks
The `around_generation` and `around_action` callbacks provide hooks to trace the overall generation (agent) and individual actions (tool), but I cannot figure out a way to trace each individual LLM call.
I have a hunch this is covered in v1 #259, but on a skim I haven't been able to confirm it.
For example:
- Say I have a simple agent with instructions + 1 tool
- I invoke `prompt = Agent.with(message:).prompt_context`
- Then `response = prompt.generate_now`
Under the hood, my understanding is:
- LLM call (e.g. the OpenAI generation provider's `generate` call)
- Assistant response requesting a tool call
- 2nd LLM call with previous messages + tool call response
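If my mental model above is right, the provider-level loop looks roughly like this. This is a plain-Ruby sketch of the flow only; `generate_with_tools`, `run_tool`, `FakeProvider`, and the message shapes are all hypothetical stand-ins, not ActiveAgent internals:

```ruby
# Hypothetical stand-in for executing a requested tool.
def run_tool(tool_call)
  "result of #{tool_call}"
end

# Sketch of the generate loop: call the LLM, run a tool if requested,
# then call the LLM again with the tool result appended.
def generate_with_tools(provider, messages)
  llm_calls = 0
  loop do
    llm_calls += 1
    response = provider.generate(messages) # <- each of these calls is the span I want
    return [response, llm_calls] unless response[:tool_call]

    messages << { role: "assistant", tool_call: response[:tool_call] }
    messages << { role: "tool", content: run_tool(response[:tool_call]) }
  end
end

# A fake provider that requests a tool once, then answers.
class FakeProvider
  def generate(messages)
    if messages.any? { |m| m[:role] == "tool" }
      { content: "final answer" }
    else
      { tool_call: "lookup" }
    end
  end
end

response, calls = generate_with_tools(FakeProvider.new, [{ role: "user", content: "hi" }])
# calls == 2: one LLM call before the tool runs, one after
```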
With the `around_generation` and `around_action` callbacks I can trace all 3 steps (the entire generation) and the single tool call (the middle step), but I cannot individually trace the first and third steps (the 1st and 2nd LLM calls).
Am I missing an obvious way to do this already? Or would adding callbacks around the generation providers' `generate` calls make sense and unlock this? For example, I think I'm looking for an `around_generate` (not to be confused with `around_generation`)?
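Absent a built-in hook, the workaround I'm considering is prepending a module onto the provider so every `generate` call gets wrapped. A plain-Ruby sketch of the idea; `Provider` and the `SPANS` recorder are hypothetical stand-ins, not real ActiveAgent classes:

```ruby
# Stand-in for a generation provider (not the real ActiveAgent class).
class Provider
  def generate(prompt)
    "response to #{prompt}"
  end
end

# Wrap each `generate` call by prepending a tracing module, so `super`
# reaches the original method and we record a span around it.
module GenerateTracing
  SPANS = []

  def generate(*args)
    SPANS << { name: "llm.generate", started_at: Time.now }
    result = super
    SPANS.last[:ended_at] = Time.now
    result
  end
end

Provider.prepend(GenerateTracing)

Provider.new.generate("hello")
# GenerateTracing::SPANS now holds one span per LLM call
```

A built-in `around_generate` would presumably make this monkey-patching unnecessary, which is why I'm asking.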
My desired outcome trace:
- Agent generation trace
  - LLM call span (missing)
  - Tool call span
  - LLM call span (missing)
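In OpenInference terms, each of those spans would carry an `openinference.span.kind` attribute (AGENT, LLM, and TOOL are kinds defined by the spec). A minimal sketch of the tree I'm after, with the `span` helper standing in for whatever real tracer API ends up being used (e.g. OpenTelemetry's `tracer.in_span`):

```ruby
# Build a nested span tree tagged with OpenInference span kinds.
# `span` is a hypothetical helper, not part of any library.
def span(name, kind, children = [])
  { name: name, "openinference.span.kind" => kind, children: children }
end

trace = span("agent generation", "AGENT", [
  span("llm call 1", "LLM"),  # missing today
  span("tool call",  "TOOL"),
  span("llm call 2", "LLM")   # missing today
])

trace[:children].map { |c| c["openinference.span.kind"] }
# => ["LLM", "TOOL", "LLM"]
```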