
fix(models): surface error when model returns STOP with empty content#5636

Open
Oppong08 wants to merge 2 commits into google:main from Oppong08:fix-empty-final-output-after-tool-call

Conversation


@Oppong08 Oppong08 commented May 7, 2026

Tighten LlmResponse.create() so a Gemini candidate with empty parts and finish_reason=STOP no longer passes through as a successful empty response. It now routes to the error branch with error_code='MODEL_RETURNED_NO_CONTENT' and a descriptive error_message, so callers see an actionable error event instead of a silent empty final agent output. Reproduces against gemini-2.5-flash-lite when the second turn after a tool call returns zero output tokens.

Also broadens the skip-empty guard in BaseLlmFlow._postprocess_async to treat Content(parts=[]) as no-content (defense in depth) and updates the two existing tests that codified the old behavior.


Link to Issue or Description of Change

1. Link to an existing issue (if applicable):

2. Or, if no issue exists, describe the change:

Problem:

With gemini-2.5-flash-lite and an LlmAgent that calls a tool, the run can sometimes terminate with final_output: "".

The reported flow is:

  1. The model returns a function_call, such as a python_executor tool call.
  2. ADK executes the tool successfully and emits the function-response event.
  3. The follow-up model response returns Content(role="model", parts=[]) with finish_reason=STOP and zero output tokens.
  4. ADK treats that empty model response as the final event, causing the agent's final output to become an empty string.

This happened because LlmResponse.create() accepted finish_reason=STOP as a successful response even when content.parts was empty. In addition, the skip-empty guard in BaseLlmFlow._postprocess_async only checked whether llm_response.content existed, so a Content object with parts=[] could still pass through as a final response.
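
The old acceptance behavior can be illustrated with a minimal sketch. The types below are simplified stand-ins for the real Gemini/ADK classes, not the actual implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

# Simplified stand-ins for the Gemini response types; illustrative only.
@dataclass
class Content:
    role: str = "model"
    parts: list = field(default_factory=list)

@dataclass
class Candidate:
    content: Optional[Content] = None
    finish_reason: str = "STOP"

def old_guard_passes(candidate: Candidate) -> bool:
    # The old check only asked whether `content` exists, so
    # Content(parts=[]) was still treated as a valid final response.
    return candidate.content is not None

empty = Candidate(content=Content(parts=[]), finish_reason="STOP")
print(old_guard_passes(empty))  # True: the empty response slips through
```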

Solution:

This PR tightens LlmResponse.create() so a Gemini candidate with empty parts and finish_reason=STOP no longer passes through as a successful empty response.

Instead, it routes to the error branch with:

  • error_code="MODEL_RETURNED_NO_CONTENT"
  • a descriptive error_message

This gives callers an actionable error event instead of a silent empty final agent output.
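
A minimal sketch of the tightened routing, again using simplified stand-in types rather than the real `LlmResponse`/`Candidate` classes:

```python
from dataclasses import dataclass, field
from typing import Optional

# Simplified stand-ins; not the real ADK Candidate/LlmResponse classes.
@dataclass
class Content:
    parts: list = field(default_factory=list)

@dataclass
class Candidate:
    content: Optional[Content] = None
    finish_reason: str = "STOP"

def create_response(candidate: Candidate) -> dict:
    parts = candidate.content.parts if candidate.content else []
    if parts:
        return {"content": candidate.content, "error_code": None}
    if candidate.finish_reason == "STOP":
        # New behavior: an empty STOP response becomes an explicit error.
        return {"content": None,
                "error_code": "MODEL_RETURNED_NO_CONTENT",
                "error_message": "Model returned STOP with no content parts."}
    # Non-STOP empty responses keep their finish reason as the error code.
    return {"content": None,
            "error_code": candidate.finish_reason,
            "error_message": f"No content; finish_reason={candidate.finish_reason}."}
```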

This PR also broadens the skip-empty guard in BaseLlmFlow._postprocess_async to treat Content(parts=[]) as no content unless an error is present. This acts as defense in depth and prevents empty content objects from being emitted as meaningful final responses.
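
The broadened guard can be sketched as follows; `FakeLlmResponse` and its fields are hypothetical stand-ins, not the real ADK types:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical stand-ins; `FakeLlmResponse` is not the real class.
@dataclass
class Content:
    parts: list = field(default_factory=list)

@dataclass
class FakeLlmResponse:
    content: Optional[Content] = None
    error_code: Optional[str] = None

def should_skip(resp: FakeLlmResponse) -> bool:
    # Content(parts=[]) now counts as no content, but responses carrying
    # an error are still emitted so callers see the error event.
    has_parts = bool(resp.content and resp.content.parts)
    return not has_parts and resp.error_code is None
```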

This approach was preferred over adding retry behavior because it keeps the change small, avoids extra latency/cost, and surfaces the underlying model behavior clearly to callers. Non-STOP empty responses, such as MAX_TOKENS or SAFETY, continue to preserve their existing finish_reason as the error code.

Testing Plan

Unit Tests:

  • I have added or updated unit tests for my change.
  • All unit tests pass locally.

Added/updated coverage includes:

  • LlmResponse.create() returns error_code="MODEL_RETURNED_NO_CONTENT" when a candidate has finish_reason=STOP with empty parts.
  • LlmResponse.create() returns the same no-content error when candidate content is missing with finish_reason=STOP.
  • Non-empty content with finish_reason=STOP still succeeds.
  • Non-STOP empty responses preserve their existing finish reason as the error code.
  • BaseLlmFlow surfaces an error event for the post-tool empty response case instead of emitting a silent empty final event.
  • Existing tests that codified the old empty-response behavior were updated.
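
The cases above can be expressed as pytest-style regression tests. `create_response` here is a local stand-in that mirrors the intended routing, not the real `LlmResponse.create()`:

```python
# Pytest-style sketch of the regression cases. `create_response` is a
# local stand-in mirroring the intended routing, not the real ADK API.
def create_response(parts, finish_reason):
    if parts:
        return {"error_code": None, "parts": parts}
    if finish_reason == "STOP":
        return {"error_code": "MODEL_RETURNED_NO_CONTENT"}
    # Non-STOP empty responses keep their finish reason as the error code.
    return {"error_code": finish_reason}

def test_stop_with_empty_parts_surfaces_error():
    assert create_response([], "STOP")["error_code"] == "MODEL_RETURNED_NO_CONTENT"

def test_missing_content_with_stop_surfaces_error():
    assert create_response(None, "STOP")["error_code"] == "MODEL_RETURNED_NO_CONTENT"

def test_non_empty_stop_succeeds():
    assert create_response(["text"], "STOP")["error_code"] is None

def test_non_stop_empty_preserves_finish_reason():
    assert create_response([], "MAX_TOKENS")["error_code"] == "MAX_TOKENS"
```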

Passed locally:

pytest tests/unittests/models/test_llm_response.py \
  tests/unittests/flows/llm_flows/test_base_llm_flow.py \
  tests/unittests/utils/test_streaming_utils.py -q


**Manual End-to-End (E2E) Tests:**


The original issue was reproduced from the reported model response shape, where the second model turn after a successful tool call returned zero output tokens with finish_reason=STOP and empty content.parts. This PR verifies the behavior with unit-level regression coverage instead of relying on a live model call, since the original model behavior is nondeterministic.

Manual reproduction recipe matching the original report:

  1. Define an LlmAgent using gemini-2.5-flash-lite, a python_executor-style tool, functionCallingConfig.mode=AUTO, and automatic function calling enabled.
  2. Send a HumanEval-style Python code-completion prompt.
  3. When the second model turn returns empty parts with finish_reason=STOP, ADK should now surface error_code="MODEL_RETURNED_NO_CONTENT" with a non-empty error message instead of silently returning final_output: "".
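
The empty-STOP condition in step 3 can be checked mechanically against a raw response shaped like the originally reported payload. Field names below follow the report, not necessarily the live Gemini API:

```python
import json

# Detects the empty-STOP condition from a raw response shaped like the
# originally reported payload. Field names follow the report, not
# necessarily the live Gemini API.
def is_empty_stop(raw: dict) -> bool:
    parts = raw.get("content", {}).get("parts", [])
    finish = raw.get("raw_response", {}).get("finish_reason")
    return finish == "STOP" and not parts

reported = json.loads("""{
  "role": "model",
  "content": {"parts": [], "role": "model"},
  "raw_response": {"finish_reason": "STOP",
                   "usage_metadata": {"candidates_token_count": 0}}
}""")
print(is_empty_stop(reported))  # True
```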

### Checklist

- [x] I have read the [CONTRIBUTING.md](https://github.com/google/adk-python/blob/main/CONTRIBUTING.md) document.
- [x] I have performed a self-review of my own code.
- [x] I have commented my code, particularly in hard-to-understand areas.
- [x] I have added tests that prove my fix is effective or that my feature works.
- [x] New and existing unit tests pass locally with my changes.
- [x] I have manually tested my changes end-to-end.
- [x] Any dependent changes have been merged and published in downstream modules.

### Additional context

The originally reported response shape:

```json
{
  "role": "model",
  "text": "",
  "content": { "parts": [], "role": "model" },
  "raw_response": {
    "finish_reason": "STOP",
    "usage_metadata": { "candidates_token_count": 0 }
  }
}
```
Oppong08 added 2 commits May 7, 2026 16:27

Development

Successfully merging this pull request may close this issue:

Empty final output after successful tool call when functionCallingConfig.mode=AUTO

1 participant