Overview
Filter Incomplete Turns is an LLM-powered feature that detects when a user’s conversational turn was incomplete (they were cut off or need time to think) and suppresses the bot’s response accordingly. Instead of responding to partial input, the bot waits for the user to continue, then automatically re-engages if they remain silent. This creates more natural conversations by:
- Preventing the bot from responding to incomplete thoughts
- Giving users time to finish speaking without interruption
- Automatically prompting users to continue after pauses
How It Works
When enabled, the LLM outputs a turn completion marker as the first character of every response:

| Marker | Meaning | Bot Behavior |
|---|---|---|
| ✓ | Complete - User finished their thought | Respond normally |
| ○ | Incomplete Short - User was cut off mid-sentence | Suppress response, re-prompt after the short timeout |
| ◐ | Incomplete Long - User needs time to think | Suppress response, re-prompt after the long timeout |
Behind the scenes, the feature does the following (a simplified sketch of the marker handling appears after this list):
- Injects turn completion instructions into the LLM’s system prompt
- Detects markers in the LLM’s streaming response
- Suppresses bot speech for incomplete turns
- Starts a timeout based on the incomplete type
- Re-prompts the LLM when the timeout expires
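The sketch below is a simplified, self-contained illustration of the marker protocol, not the actual Pipecat implementation: it shows how a response prefixed with ✓, ○, or ◐ maps to the behaviors in the table above, and how an unmarked response falls back to normal handling (see Graceful Degradation).

```python
# Simplified illustration of the marker protocol; not the actual Pipecat code.
COMPLETE, INCOMPLETE_SHORT, INCOMPLETE_LONG = "✓", "○", "◐"

def classify_llm_response(response: str) -> tuple[str, str]:
    """Return (turn_status, text_without_marker) for an LLM response."""
    if response and response[0] in (COMPLETE, INCOMPLETE_SHORT, INCOMPLETE_LONG):
        marker, text = response[0], response[1:].lstrip()
        if marker == COMPLETE:
            return "complete", text          # push the response normally
        if marker == INCOMPLETE_SHORT:
            return "incomplete_short", text  # suppress, start the short timeout
        return "incomplete_long", text       # suppress, start the long timeout
    # No marker found: degrade gracefully and treat the turn as complete.
    return "complete", response

print(classify_llm_response("✓ Sure, I can help with that."))
# -> ('complete', 'Sure, I can help with that.')
```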
Configuration
Enable the feature via LLMUserAggregatorParams when creating an LLMContextAggregatorPair:
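For example, a minimal sketch of constructing the params (the import path is an assumption and may vary by Pipecat version):

```python
# Sketch: the import path is an assumption that may vary by Pipecat version.
from pipecat.processors.aggregators.llm_response import LLMUserAggregatorParams

user_params = LLMUserAggregatorParams(
    filter_incomplete_user_turns=True,  # enable LLM-based turn completion detection
)
```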
LLMUserAggregatorParams
- filter_incomplete_user_turns: Enable LLM-based turn completion detection. When True, the system automatically injects turn completion instructions into the LLM context and configures the LLM service to process turn markers.
- Turn completion config (UserTurnCompletionConfig, optional): Configuration object for customizing turn completion behavior. If not provided, default values are used.
UserTurnCompletionConfig
Use UserTurnCompletionConfig to customize timeouts, prompts, and instructions:
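A sketch under stated assumptions: the field names (short_timeout_secs, long_timeout_secs, short_timeout_prompt, long_timeout_prompt), the config parameter name on LLMUserAggregatorParams, and the import path are illustrative guesses, not confirmed API; check the class reference for the exact names.

```python
# Sketch only: field names, the config parameter name, and the import path are assumptions.
from pipecat.processors.aggregators.llm_response import (
    LLMUserAggregatorParams,
    UserTurnCompletionConfig,
)

turn_config = UserTurnCompletionConfig(
    short_timeout_secs=3.0,   # assumed name: seconds to wait after ○ before re-prompting
    long_timeout_secs=20.0,   # assumed name: seconds to wait after ◐ before re-prompting
    short_timeout_prompt="Briefly and naturally encourage the user to continue.",  # assumed name
    long_timeout_prompt="Gently check in and ask if the user is still there.",     # assumed name
)

user_params = LLMUserAggregatorParams(
    filter_incomplete_user_turns=True,
    user_turn_completion_config=turn_config,  # assumed parameter name
)
```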
Parameters
- Short timeout: Seconds to wait after detecting ○ (incomplete short) before re-prompting the LLM. Use shorter values for more responsive re-engagement.
- Long timeout: Seconds to wait after detecting ◐ (incomplete long) before re-prompting the LLM. Use longer values to give users more time to think.
- Short timeout prompt: System prompt sent to the LLM when the short timeout expires. Should instruct the LLM to generate a brief, natural prompt encouraging the user to continue.
- Long timeout prompt: System prompt sent to the LLM when the long timeout expires. Should instruct the LLM to generate a friendly check-in message.
- Turn completion instructions: Complete turn completion instructions appended to the system prompt. Override this to customize how the LLM determines turn completeness.
Markers Explained
Complete (✓)
The user has provided enough information for a meaningful response. The ✓ marker tells the system to push the response normally. The marker itself is not spoken (it is marked with skip_tts).
Incomplete Short (○)
The user was cut off mid-sentence and will likely continue soon. The ○ marker suppresses the bot’s response entirely. After 5 seconds (configurable), the LLM is prompted to re-engage with something like “Go ahead, I’m listening.”
Incomplete Long (◐)
The user needs more time to think or explicitly asked for time. The ◐ marker also suppresses the response, but waits 15 seconds (configurable) before prompting. This handles cases like:
- “Hold on a second”
- “Let me think about that”
- “Hmm, that’s interesting…”
Usage Examples
Basic Usage
Enable turn completion with default settings:
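A minimal sketch of a typical setup; the import paths and context class are assumptions that may vary by Pipecat version, while filter_incomplete_user_turns is the feature-specific flag:

```python
# Sketch of a typical setup; import paths and the context class are assumptions.
from pipecat.processors.aggregators.llm_response import LLMUserAggregatorParams
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(api_key="...", model="gpt-4o")
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful voice assistant."}]
)

# The returned pair's user() and assistant() processors go into your pipeline as usual.
context_aggregator = llm.create_context_aggregator(
    context,
    user_params=LLMUserAggregatorParams(filter_incomplete_user_turns=True),
)
```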
You don’t need to modify your system prompt. Turn completion instructions are automatically appended when filter_incomplete_user_turns is enabled.
Custom Timeouts
Adjust timeouts for your use case:
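A sketch, reusing the assumed field names from the configuration sketch above (they may not match the real UserTurnCompletionConfig attributes):

```python
# Sketch: field and parameter names are assumptions; check the reference for exact names.
from pipecat.processors.aggregators.llm_response import (
    LLMUserAggregatorParams,
    UserTurnCompletionConfig,
)

user_params = LLMUserAggregatorParams(
    filter_incomplete_user_turns=True,
    user_turn_completion_config=UserTurnCompletionConfig(  # parameter name assumed
        short_timeout_secs=3.0,   # assumed name: re-prompt sooner after ○ (incomplete short)
        long_timeout_secs=20.0,   # assumed name: allow more thinking time after ◐ (incomplete long)
    ),
)
```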
Custom Prompts
Customize what the LLM says when re-engaging:
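A sketch with the same caveat: the prompt field names and the config parameter name are assumptions for illustration:

```python
# Sketch: prompt field names and the config parameter name are assumptions.
from pipecat.processors.aggregators.llm_response import (
    LLMUserAggregatorParams,
    UserTurnCompletionConfig,
)

user_params = LLMUserAggregatorParams(
    filter_incomplete_user_turns=True,
    user_turn_completion_config=UserTurnCompletionConfig(  # parameter name assumed
        short_timeout_prompt=(
            "The user paused mid-sentence. Briefly and warmly invite them to continue."
        ),  # assumed name
        long_timeout_prompt=(
            "The user asked for time to think. Check in gently and ask if they're ready."
        ),  # assumed name
    ),
)
```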
With Smart Turn Detection
Combine with smart turn detection for better end-of-turn detection:
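A sketch of combining the two: the smart-turn analyzer class, VAD analyzer, and transport wiring shown here are assumptions based on common Pipecat setups (see the Smart Turn Detection docs for the exact API); turn completion filtering itself is still configured on the user aggregator params:

```python
# Sketch: analyzer classes and transport wiring are assumptions based on common setups.
from pipecat.audio.turn.smart_turn.local_smart_turn_v2 import LocalSmartTurnAnalyzerV2
from pipecat.audio.vad.silero import SileroVADAnalyzer
from pipecat.processors.aggregators.llm_response import LLMUserAggregatorParams
from pipecat.transports.base_transport import TransportParams

# Smart turn detection decides when the user's audio turn has ended...
transport_params = TransportParams(
    audio_in_enabled=True,
    audio_out_enabled=True,
    vad_analyzer=SileroVADAnalyzer(),
    turn_analyzer=LocalSmartTurnAnalyzerV2(),
)

# ...while turn completion filtering decides whether the transcribed turn was complete.
user_params = LLMUserAggregatorParams(filter_incomplete_user_turns=True)
```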
Transcripts
Turn completion markers are automatically stripped from assistant transcripts emitted via the on_assistant_turn_stopped event. Your transcript handlers will receive clean text without markers:
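A sketch of a handler; where exactly on_assistant_turn_stopped is exposed and its handler signature are assumptions here (the assistant context aggregator and a single text argument), so verify against your Pipecat version:

```python
# Sketch: the event source and handler signature are assumptions; only the event name
# on_assistant_turn_stopped comes from the docs above.
assistant_aggregator = context_aggregator.assistant()

@assistant_aggregator.event_handler("on_assistant_turn_stopped")
async def handle_assistant_turn(aggregator, transcript):
    # Assumed payload: the assistant's text, already stripped of ✓/○/◐ markers.
    print(f"Assistant: {transcript}")
```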
Supported LLM Services
Turn completion detection works with any LLM service that inherits from LLMService (see the swap sketch after this list):
- OpenAI (OpenAILLMService)
- Anthropic (AnthropicLLMService)
- Google Gemini (GoogleLLMService)
- AWS Bedrock (AWSLLMService)
- And other compatible services
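For instance, a sketch of swapping in AnthropicLLMService while keeping the same aggregator configuration (import paths and the model name are illustrative):

```python
# Sketch: only the LLM service changes; the aggregator params stay the same.
from pipecat.processors.aggregators.llm_response import LLMUserAggregatorParams
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.anthropic.llm import AnthropicLLMService  # import path assumed

llm = AnthropicLLMService(api_key="...", model="claude-sonnet-4-20250514")
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful voice assistant."}]
)

context_aggregator = llm.create_context_aggregator(
    context,
    user_params=LLMUserAggregatorParams(filter_incomplete_user_turns=True),
)
```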
Graceful Degradation
If the LLM fails to output a turn marker:
- The system logs a warning indicating markers were expected but not found
- The buffered text is pushed normally to avoid losing the response
- The conversation continues without interruption
Related
- User Turn Strategies - Configure turn detection
- Smart Turn Detection - AI-powered end-of-turn detection
- Transcriptions - Working with conversation transcripts