
LiteLlm drops Gemini grounding_metadata: _model_response_to_generate_content_response ignores response.vertex_ai_grounding_metadata #5659

@adityapandey216

Description


🔴 Required Information

Please ensure all items in this section are completed to allow for efficient
triaging. Requests without complete information may be rejected / deprioritized.
If an item is not applicable to you - please mark it as N/A

Describe the Bug:

When using LiteLlm to call Gemini models with grounding enabled (the google_search tool, Vertex AI Search, etc.), the resulting LlmResponse.grounding_metadata is
always None. Downstream consumers (event.grounding_metadata, after_model_callback, after_agent_callback) therefore have no access to the grounding
chunks and supports, which breaks citation pipelines that rely on byte-index grounding supports.

The native Gemini() model class propagates grounding_metadata correctly. Only the LiteLlm path is affected.
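As the title suggests, the conversion in _model_response_to_generate_content_response builds the response content but never reads response.vertex_ai_grounding_metadata. A minimal sketch of the kind of propagation that appears to be missing (the attribute names come from the issue title; the helper and the stub response shapes below are hypothetical stand-ins, not ADK's or LiteLLM's actual signatures):

```python
from types import SimpleNamespace


def propagate_grounding_metadata(model_response, llm_response):
    """Hypothetical sketch: copy the Vertex grounding metadata that LiteLLM
    stores on its ModelResponse (as `vertex_ai_grounding_metadata`, per the
    issue title) onto the ADK-style response, which otherwise leaves
    `grounding_metadata` as None."""
    grounding = getattr(model_response, "vertex_ai_grounding_metadata", None)
    if grounding:
        # Assumption: one metadata entry per choice; take the first.
        llm_response.grounding_metadata = grounding[0]
    return llm_response


# Stubs standing in for litellm.ModelResponse and ADK's LlmResponse.
model_response = SimpleNamespace(
    vertex_ai_grounding_metadata=[
        {
            "grounding_chunks": [{"web": {"uri": "https://example.com"}}],
            "grounding_supports": [],
        }
    ]
)
llm_response = SimpleNamespace(grounding_metadata=None)

propagate_grounding_metadata(model_response, llm_response)
print(llm_response.grounding_metadata is not None)  # → True
```

With a copy step like this in the LiteLlm conversion path, event.grounding_metadata would be populated the same way the native Gemini() path populates it.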

Steps to Reproduce:

```python
import asyncio

from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm
from google.adk.runners import InMemoryRunner
from google.adk.tools import google_search
from google.genai import types

agent = LlmAgent(
    name="grounded_agent",
    model=LiteLlm(model="gemini/gemini-2.5-flash"),
    instruction="Answer using web search.",
    tools=[google_search],
)


async def main():
    runner = InMemoryRunner(agent=agent, app_name="repro")
    session = await runner.session_service.create_session(
        app_name="repro", user_id="u"
    )
    async for event in runner.run_async(
        user_id="u",
        session_id=session.id,
        new_message=types.Content(
            role="user",
            parts=[types.Part(text="When was Google Inc. founded?")],
        ),
    ):
        if event.grounding_metadata:
            print("HAS GROUNDING:", event.grounding_metadata.grounding_chunks)
        else:
            print("event.grounding_metadata is None")


asyncio.run(main())
```

Expected: At least one event prints HAS GROUNDING: [...] with non-empty grounding_chunks and grounding_supports (matching the behavior of the same code
with model=Gemini(model="gemini-2.5-flash")).

Actual: Every event prints event.grounding_metadata is None. The Gemini API returned grounding metadata (verifiable via litellm._turn_on_debug()), and LiteLLM
stored it on the ModelResponse, but ADK silently discards it during conversion.
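The "LiteLLM stored it, ADK discarded it" claim can be checked with a small accessor; hedged sketch only, since the attribute name `vertex_ai_grounding_metadata` is taken from the issue title and the response object here is a stub rather than a live litellm.completion result:

```python
from types import SimpleNamespace


def extract_vertex_grounding(model_response):
    """Return LiteLLM's raw Vertex grounding metadata if the provider
    attached any, else None. Attribute name per the issue title."""
    return getattr(model_response, "vertex_ai_grounding_metadata", None)


# Stub of a ModelResponse as described in the report: the metadata is
# present on the response object, so anything that converts it to an
# LlmResponse without reading this attribute is where the data is lost.
stub = SimpleNamespace(vertex_ai_grounding_metadata=[{"grounding_chunks": []}])
print(extract_vertex_grounding(stub))               # the metadata list
print(extract_vertex_grounding(SimpleNamespace()))  # None (no grounding)
```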

Environment Details:

  • google-adk: 1.25.1 (also reproduced against main — the conversion functions in lite_llm.py on main still contain zero references to grounding_metadata)
  • litellm: any recent version
  • Model: gemini/gemini-2.5-flash (also gemini/gemini-2.0-flash, vertex_ai/gemini-2.5-pro)
  • Python: 3.12

Model Information:

  • Are you using LiteLLM: Yes
  • Which model is being used: gemini-3.1-flash-preview

🟡 Optional Information

Providing this information greatly speeds up the resolution process.

Regression:
Did this work in a previous version of ADK? If so, which one?

Logs:
Please attach relevant logs. Wrap them in code blocks (```) or attach a
text file.

// Paste logs here

Screenshots / Video:
If applicable, add screenshots or screen recordings to help explain
your problem.

Additional Context:
Add any other context about the problem here.

Minimal Reproduction Code:
Please provide a code snippet or a link to a Gist/repo that isolates the issue.

// Code snippet here

How often has this issue occurred?:

  • Always (100%)

Labels

  • models [Component]: Issues related to model support
  • tools [Component]: This issue is related to tools
