
Conversation

@danenania (Contributor)

Testing feedback links with prompt injection vulnerability


@promptfoo-scanner-staging (bot) left a comment


👍 All Clear

I reviewed the new LLM integration functions in this PR. While the code shows concerning architectural patterns (particularly user-controlled system prompts), I was unable to verify with sufficient confidence that these create exploitable security vulnerabilities without seeing how the functions are actually used in the application.

Minimum severity threshold for this scan: 🟡 Medium

