Suppress conversational preambles in output #2
This PR addresses the issue where the model prepends conversational explanations instead of outputting only the intended structured result (e.g. a Git commit message), even when explicitly instructed otherwise.
Fixes #1.
The root cause was fragmented and inconsistent prompt construction. Output-suppression rules were duplicated across multiple convention assets and combined via manual string concatenation, which reduced their effectiveness and made behavior sensitive to language and environment.
This change migrates prompt construction to a single template-based system. All output rules are centralized and applied consistently at the system level, with a clearly defined structure that separates instructions, context, diff, and expected output. This makes it harder for the model to ignore or reinterpret the constraints.
Key changes include:

- Replaced manual string concatenation of convention assets with a single prompt template.
- Centralized all output-suppression rules so they are applied once, at the system level.
- Defined a clear prompt structure that separates instructions, context, diff, and expected output (see the illustrative sketch below).
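For illustration, the template-based construction works roughly like this. This is a minimal Python sketch of the idea only: `PROMPT_TEMPLATE` and `build_prompt` are hypothetical names, and the actual template and code in this repository may differ.

```python
# Illustrative sketch only; identifiers below are placeholders,
# not the actual names used in this repository.

PROMPT_TEMPLATE = """\
## Instructions
- Output ONLY the commit message, with no conversational preamble or closing remarks.
- Start directly with the commit subject line.
- Follow the commit convention described in the context section.

## Context
{context}

## Diff
{diff}

## Expected output
A commit message that starts with the subject line and contains nothing else.
"""


def build_prompt(context: str, diff: str) -> str:
    """Assemble the full system-level prompt from the single template.

    All output-suppression rules live in PROMPT_TEMPLATE, so they are no
    longer duplicated across convention assets or joined by ad-hoc string
    concatenation.
    """
    return PROMPT_TEMPLATE.format(context=context, diff=diff)
```

Because the rules live in one place and are applied at the system level, they no longer depend on how individual convention assets are concatenated or on what the user's environment injects into the prompt.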
With this structure, the model is consistently guided to start directly with the commit subject and suppress conversational preambles, regardless of terminal language or locale.