🔴 Required Information
Please ensure all items in this section are completed to allow for efficient
triaging. Requests without complete information may be rejected / deprioritized.
If an item is not applicable to you - please mark it as N/A
Describe the Bug:
When using `LiteLlm` to call Gemini models with grounding enabled (the `google_search` tool, Vertex AI Search, etc.), the resulting `LlmResponse.grounding_metadata` is always `None`. Downstream consumers (`event.grounding_metadata`, `after_model_callback`, `after_agent_callback`) therefore have no access to grounding chunks/supports, breaking citation pipelines that rely on byte-index supports.
The native `Gemini` model class propagates `grounding_metadata` correctly; only the `LiteLlm` path is affected.
Steps to Reproduce:
```python
import asyncio

from google.adk.agents import LlmAgent
from google.adk.models.lite_llm import LiteLlm
from google.adk.runners import InMemoryRunner
from google.adk.tools import google_search
from google.genai import types

agent = LlmAgent(
    name="grounded_agent",
    model=LiteLlm(model="gemini/gemini-2.5-flash"),
    instruction="Answer using web search.",
    tools=[google_search],
)


async def main():
    runner = InMemoryRunner(agent=agent, app_name="repro")
    session = await runner.session_service.create_session(
        app_name="repro", user_id="u"
    )
    async for event in runner.run_async(
        user_id="u",
        session_id=session.id,
        new_message=types.Content(
            role="user",
            parts=[types.Part(text="When was Google Inc. founded?")],
        ),
    ):
        if event.grounding_metadata:
            print("HAS GROUNDING:", event.grounding_metadata.grounding_chunks)
        else:
            print("event.grounding_metadata is None")


asyncio.run(main())
```
Expected: At least one event prints `HAS GROUNDING: [...]` with non-empty `grounding_chunks` and `grounding_supports` (matching the behavior of the same code with `model=Gemini(model="gemini-2.5-flash")`).
Actual: Every event prints `event.grounding_metadata is None`. The Gemini API returned grounding (verifiable via `litellm._turn_on_debug()`), LiteLLM stored it on the `ModelResponse`, but ADK silently discards it.
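If the root cause is what the debug output suggests (the grounding payload survives on the LiteLLM `ModelResponse` but is never copied during response conversion), the missing step might look roughly like the sketch below. The attribute name `vertex_ai_grounding_metadata` and the helper `extract_grounding_metadata` are illustrative assumptions based on the debug output, not confirmed ADK or LiteLLM API; a stub stands in for the real response object.

```python
# Hypothetical sketch of the propagation step that appears to be missing from
# lite_llm.py's response conversion. The "vertex_ai_grounding_metadata"
# attribute name is an assumption for illustration.
from types import SimpleNamespace


def extract_grounding_metadata(model_response):
    """Pull the raw grounding payload off a LiteLLM-style response, if any.

    LiteLLM attaches provider-specific fields dynamically, so probe
    defensively instead of relying on a typed attribute.
    """
    return getattr(model_response, "vertex_ai_grounding_metadata", None)


# Stub standing in for a ModelResponse that carried grounding data.
stub = SimpleNamespace(
    vertex_ai_grounding_metadata=[{
        "groundingChunks": [{"web": {"uri": "https://example.com"}}],
        "groundingSupports": [{"segment": {"startIndex": 0, "endIndex": 10}}],
    }]
)

print(extract_grounding_metadata(stub))          # raw payload survives
print(extract_grounding_metadata(SimpleNamespace()))  # no grounding -> None
```

The converted `LlmResponse` could then validate this raw payload into `types.GroundingMetadata` so downstream `event.grounding_metadata` consumers see the same shape the native `Gemini` path produces.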
Environment Details:
- google-adk: 1.25.1 (also reproduced against `main`; the conversion functions in `lite_llm.py` on `main` still contain zero references to `grounding_metadata`)
- litellm: any recent version
- Model: `gemini/gemini-2.5-flash` (also `gemini/gemini-2.0-flash`, `vertex_ai/gemini-2.5-pro`)
- Python: 3.12
Model Information:
- Are you using LiteLLM: Yes
- Which model is being used: `gemini/gemini-2.5-flash` (see Environment Details for other affected models)
🟡 Optional Information
Providing this information greatly speeds up the resolution process.
Regression:
Did this work in a previous version of ADK? If so, which one?
Logs:
Please attach relevant logs. Wrap them in code blocks (```) or attach a
text file.
Screenshots / Video:
If applicable, add screenshots or screen recordings to help explain
your problem.
Additional Context:
Add any other context about the problem here.
Minimal Reproduction Code:
Please provide a code snippet or a link to a Gist/repo that isolates the issue.
How often has this issue occurred?:
- Always (100%)
- Often (50%+)
- Intermittently (<50%)
- Once / Rare