⚠️ Potential issue | 🟠 Major
Potential conflict between `SYSTEM_PROMPT` and `BATCH_TRIAGE_PROMPT` output formats.
The `SYSTEM_PROMPT` (used as the system message in `evaluate_issues_batch`) explicitly instructs the LLM to output a specific JSON structure with `explanation`, `proposed_code`, `reasoning`, and `confidence_score` fields. However, `BATCH_TRIAGE_PROMPT` asks for a completely different format: a JSON map of issue IDs to confidence scores.
This conflicting instruction may confuse the LLM or cause inconsistent responses. Consider either:
- Using a separate system prompt for batch triage (see the sketch below), or
- Adding explicit override instructions in `BATCH_TRIAGE_PROMPT` to clarify that the expected output format takes precedence.
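A minimal sketch of the first option, assuming `evaluate_issues_batch` assembles an OpenAI-style chat message list; the `BATCH_TRIAGE_SYSTEM_PROMPT` constant, the helper name, and the `{issues}` placeholder are illustrative assumptions, not existing code in the repository:

```python
# Hypothetical dedicated system prompt used only for the batch triage flow.
BATCH_TRIAGE_SYSTEM_PROMPT = (
    "You are a code review triage assistant. Respond ONLY with a flat JSON "
    "object mapping issue IDs to confidence scores between 0.0 and 1.0. "
    "Do not include explanation, proposed_code, or reasoning fields."
)

def build_batch_triage_messages(issues_block: str, source_code: str) -> list[dict]:
    """Assemble chat messages for batch triage with its own system prompt."""
    user_prompt = BATCH_TRIAGE_PROMPT.format(
        issues=issues_block,        # assumed placeholder name
        source_code=source_code,
    )
    return [
        # Triage-specific system message instead of the general SYSTEM_PROMPT.
        {"role": "system", "content": BATCH_TRIAGE_SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]
```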
Additionally, the source code fence is hardcoded to `` ```python ``, but this feature may be used on non-Python files. Consider using a generic code block or parameterizing the language.
<details>
<summary>Suggested fix for language flexibility</summary>

````diff
 File Source Code:
-```python
+```
 {source_code}
````

</details>
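If parameterization is preferred over a bare fence, here is a rough sketch of that approach; the `{language}` placeholder, the template name, and the helper function are illustrative assumptions rather than the repository's actual API:

````python
# Hypothetical prompt fragment with a language placeholder instead of a
# hardcoded "python" fence.
BATCH_TRIAGE_PROMPT_TEMPLATE = """File Source Code:
```{language}
{source_code}
```
"""

def render_source_block(source_code: str, language: str = "") -> str:
    """Fill the fragment; an empty language falls back to a plain fence."""
    return BATCH_TRIAGE_PROMPT_TEMPLATE.format(
        language=language,
        source_code=source_code,
    )

# Example: render a JavaScript file without forcing Python highlighting.
print(render_source_block("console.log('hi');", language="javascript"))
````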
<details>
<summary>🤖 Prompt for AI Agents</summary>
Verify each finding against the current code and only fix it if needed.
In @refactron/llm/prompts.py around lines 96 - 115, the SYSTEM_PROMPT and
BATCH_TRIAGE_PROMPT conflict: update the batch flow used by
evaluate_issues_batch so the model receives a dedicated system message for batch
triage (or add a clear overriding sentence at the top of BATCH_TRIAGE_PROMPT)
that explicitly states the expected output is a flat JSON map of issue IDs to
confidence scores only; also make the source code fence language-agnostic in
BATCH_TRIAGE_PROMPT by replacing the hardcoded "python" fence with a generic
fence (no language tag), or parameterize the language placeholder so non-Python
files are handled correctly.
</details>
_Originally posted by @coderabbitai[bot] in https://github.com/Refactron-ai/Refactron_lib/pull/106#discussion_r2912148199_