Previously, when block streaming was disabled (the default), text generated
between tool calls only appeared after all tools had completed. The cause:
onBlockReply wasn't passed to the subscription when block streaming was off,
so the flushBlockReplyBuffer() call before tool execution did nothing.
Now onBlockReply is always passed; when block streaming is disabled, block
replies are sent directly during the tool flush, and directly sent payloads
are tracked so they are not duplicated in the final payloads.
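A minimal sketch of that dedupe bookkeeping, assuming hypothetical payload and
send shapes (names here are illustrative, not the project's actual types):

```ts
// Sketch: deliver buffered block replies during the tool flush and remember
// which ones went out, so final payload assembly can skip them.
// All names are illustrative assumptions, not the real implementation.
interface BlockReplyPayload {
  id: string;
  text: string;
}

const directlySentIds = new Set<string>();

async function flushBlockReplyBuffer(
  buffer: BlockReplyPayload[],
  send: (p: BlockReplyPayload) => Promise<void>,
): Promise<void> {
  for (const payload of buffer) {
    await send(payload);              // text appears before tools run
    directlySentIds.add(payload.id);  // remember what was sent directly
  }
  buffer.length = 0;
}

function buildFinalPayloads(all: BlockReplyPayload[]): BlockReplyPayload[] {
  // Anything already delivered during a tool flush is filtered out here.
  return all.filter((p) => !directlySentIds.has(p.id));
}
```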
Also fixes a race condition where tool summaries could be emitted before the
typing indicator started: tool handlers now await onAgentEvent so the
indicator is in place before summaries go out.
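A rough illustration of that ordering fix, with hypothetical handler and event
names:

```ts
// Sketch: await the event that starts the typing indicator before emitting the
// tool summary. Handler and event names are illustrative assumptions.
type AgentEvent =
  | { type: "typing-start" }
  | { type: "tool-summary"; text: string };

async function handleToolCall(
  onAgentEvent: (e: AgentEvent) => Promise<void>,
  summarize: () => string,
): Promise<void> {
  // Previously fire-and-forget, so the summary could land before the indicator.
  await onAgentEvent({ type: "typing-start" });
  await onAgentEvent({ type: "tool-summary", text: summarize() });
}
```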
Adds support for template variables in `messages.responsePrefix` that
resolve dynamically at runtime with the actual model used (including
after fallback).
Supported variables (case-insensitive):
- {model} - short model name (e.g., "claude-opus-4-5", "gpt-4o")
- {modelFull} - full model identifier (e.g., "anthropic/claude-opus-4-5")
- {provider} - provider name (e.g., "anthropic", "openai")
- {thinkingLevel} or {think} - thinking level ("high", "low", "off")
- {identity.name} or {identityName} - agent identity name
Example: "[{model} | think:{thinkingLevel}]" → "[claude-opus-4-5 | think:high]"
Variables resolve to the model actually used after fallback, not the
originally requested model. Unresolved variables remain as literal text.
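A simplified sketch of that interpolation behavior (the real module is
src/auto-reply/reply/response-prefix-template.ts; this is not its exact API):

```ts
// Sketch: case-insensitive {variable} substitution; unknown variables are left
// as literal text. Mirrors the behavior described above, not the module's API.
interface PrefixContext {
  model: string;         // e.g. "claude-opus-4-5"
  modelFull: string;     // e.g. "anthropic/claude-opus-4-5"
  provider: string;      // e.g. "anthropic"
  thinkingLevel: string; // "high" | "low" | "off"
  identityName?: string;
}

function renderResponsePrefix(template: string, ctx: PrefixContext): string {
  const vars: Record<string, string | undefined> = {
    model: ctx.model,
    modelfull: ctx.modelFull,
    provider: ctx.provider,
    thinkinglevel: ctx.thinkingLevel,
    think: ctx.thinkingLevel,
    "identity.name": ctx.identityName,
    identityname: ctx.identityName,
  };
  return template.replace(/\{([^}]+)\}/g, (match, name: string) => {
    const value = vars[name.toLowerCase()];
    return value !== undefined ? value : match; // unresolved stays literal
  });
}

// renderResponsePrefix("[{model} | think:{thinkingLevel}]", ctx)
//   → "[claude-opus-4-5 | think:high]"
```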
Implementation:
- New module: src/auto-reply/reply/response-prefix-template.ts
- Template interpolation in normalize-reply.ts via context provider
- onModelSelected callback in agent-runner-execution.ts (see the sketch after
  this list)
- Updated all 6 provider message handlers (web, signal, discord,
telegram, slack, imessage)
- 27 unit tests covering all variables and edge cases
- Documentation in docs/gateway/configuration.md and JSDoc
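A hypothetical sketch of how those runtime pieces could fit together: the
runner reports the model actually selected (after any fallback), and the reply
normalizer reads it through a context provider when rendering the prefix. All
names and shapes below are illustrative assumptions, not the actual code.

```ts
// Sketch only: plumbing the selected model into prefix interpolation.
// Names are illustrative, not the project's real API.
interface SelectedModel {
  provider: string;  // e.g. "anthropic"
  model: string;     // e.g. "claude-opus-4-5"
}

let selectedModel: SelectedModel | undefined;

// agent-runner-execution would call something like this once the model
// (possibly a fallback) is actually known.
function onModelSelected(provider: string, model: string): void {
  selectedModel = { provider, model };
}

// normalize-reply would pull live values through a provider like this when it
// interpolates messages.responsePrefix.
function prefixContextProvider(): SelectedModel | undefined {
  return selectedModel; // undefined until a model has been chosen
}
```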
Fixes #923