Description
Hi team,
I am using the Reflection Pattern workflow from the Agent Framework Python sample (workflow_as_agent_reflection_pattern.py).
This workflow works reliably with gpt-4o, but after switching the model to gpt-4.1 / gpt-5, the reflection/feedback-application step frequently returns invalid output: long runs of \t characters and truncated JSON. Because I enforce structured output with Pydantic, the workflow fails during validation/parsing.
✅ Expected behavior:
The reflection step should return only a valid JSON object matching my Pydantic schema (TobeAzureResourceResponse).
No tabs, no markdown, no partial/truncated output.
Pydantic validation should succeed consistently.
✅ Screenshot / logs
(Attach screenshot)
You can see that the response starts as JSON, but "properties" becomes a long repeated \t sequence and the output is truncated.
Code Sample
✅ Pydantic schema used
```python
from typing import Dict, List, Optional, Union

from pydantic import BaseModel, Field, Json


class ExternalResourceNode(BaseModel):
    model_config = {"extra": "forbid"}

    type: str = Field(description="ARM resource type of the external/shared resource")
    name: str = Field(description="Name resolved by environment")
    connected_services: List[str] = Field(
        default_factory=list,
        description="List of Resource Ids that connect to external resource",
    )


class TobeAzureResourceResponse(BaseModel):
    model_config = {"extra": "forbid"}

    id: str
    name: str
    type: str
    kind: Optional[str] = None
    location: Optional[str] = None
    tags: Optional[Union[Json, str, Dict[str, str]]] = None
    sku: Optional[Union[Json, str, Dict[str, str]]] = None
    identity: Optional[Union[Json, str, Dict[str, str]]] = None
    properties: Optional[Json] = None
    environment_values: Optional[Union[Json, str, Dict[str, str]]] = None
    external_resources: List[ExternalResourceNode] = Field(default_factory=list)
```

Error Messages / Stack Traces
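For what it's worth, the bad output already fails at plain `json.loads`, before Pydantic validation even runs. A minimal illustration with a synthetic stand-in payload (the string below is fabricated to mimic the failure, not actual model output):

```python
import json

# Synthetic stand-in for the bad model output: JSON that degenerates
# into a run of raw tab characters and is cut off mid-string.
bad_output = (
    '{"id": "res-1", "name": "demo", "type": "Microsoft.Web/sites", '
    '"properties": "' + "\t" * 40
)

try:
    json.loads(bad_output)
except json.JSONDecodeError as exc:
    # json reports where parsing stopped, which is how the truncation shows up
    print(f"invalid JSON: {exc.msg} at position {exc.pos}")
```
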
With gpt-4.1 / gpt-5, during feedback/reflection application, the model output contains many tab escape characters like \t\t\t\t..., especially under the "properties" key.
The JSON response is often truncated and not valid JSON.
Pydantic parsing/validation fails.
Package Versions
agent-framework==1.0.0b251209
Python Version
Python 3.12.8
Additional Context
✅ Steps to reproduce:
Run the reflection pattern sample workflow (workflow_as_agent_reflection_pattern.py) in Python.
Replace the model from gpt-4o to gpt-4.1 (or gpt-5).
Use structured output enforcement with the schema above.
Trigger a case where reflection feedback is applied (2nd pass / correction pass).
Observe that the reflection output may contain many \t characters and becomes truncated, failing JSON parsing.
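As a diagnostic, the failure can be caught by pre-screening the raw model text before handing it to Pydantic. This is just a sketch; the function name and the tab-run threshold are my own, not part of the framework:

```python
import json
import re


def screen_raw_output(text: str, max_tab_run: int = 8) -> list[str]:
    """Return a list of problems found in raw model output (empty if clean)."""
    problems = []
    # Long runs of literal tab characters are the symptom seen with gpt-4.1/gpt-5
    run = re.search(r"\t{%d,}" % max_tab_run, text)
    if run:
        problems.append(f"tab run of {len(run.group())} chars at offset {run.start()}")
    try:
        json.loads(text)
    except json.JSONDecodeError as exc:
        problems.append(f"invalid JSON: {exc.msg} at position {exc.pos}")
    return problems


# Example on a deliberately broken payload
print(screen_raw_output('{"properties": "' + "\t" * 20))
```
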
✅ Question / Request
Is there a recommended way in Agent Framework to enforce strict JSON-only structured output (especially for the reflection step) for gpt-4.1 / gpt-5?
If there are sample updates or best practices (e.g., JSON-only mode, output token sizing, or reflection prompts), guidance would help.
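In the meantime, the workaround I am considering is a validate-and-retry wrapper around the reflection call. A sketch below, where `generate` stands in for whatever invokes the model (hypothetical helper, not a framework API), and `json.loads` stands in for `TobeAzureResourceResponse.model_validate_json`:

```python
import json
from typing import Callable


def generate_validated(generate: Callable[[], str],
                       validate: Callable[[str], object],
                       max_attempts: int = 3) -> object:
    """Call the model up to max_attempts times until the output validates.

    validate should raise on bad output (e.g. a Pydantic model's
    model_validate_json); json.loads works as a minimal stand-in here.
    """
    last_error: Exception | None = None
    for _ in range(max_attempts):
        raw = generate()
        try:
            return validate(raw)
        except Exception as exc:  # ValidationError / JSONDecodeError
            last_error = exc
    raise RuntimeError(f"no valid output after {max_attempts} attempts") from last_error


# Stubbed demo: first call returns truncated JSON, second returns valid JSON.
outputs = iter(['{"id": "x", "properties": "' + "\t" * 10, '{"id": "x"}'])
result = generate_validated(lambda: next(outputs), json.loads)
print(result)  # → {'id': 'x'}
```
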