feat(security): filter sensitive data from tool results before sending to LLM #1930
Merged
yinwm merged 2 commits into sipeed:main · Mar 23, 2026
Conversation
Prevent the LLM from seeing its own credentials (API keys, tokens, secrets) by filtering sensitive values from tool call results before sending them to the model. Values are collected from `.security.yml` and replaced with `[FILTERED]` using an efficient `strings.Replacer` (O(n+m)).

- Add `FilterSensitiveData` and `FilterMinLength` to `ToolsConfig`
- Implement `SensitiveDataReplacer()` with `sync.Once` caching in `SecurityConfig`
- Use reflection to collect all sensitive values (model API keys, channel tokens, web tool API keys, skills tokens)
- Apply filtering in the agent loop at 4 tool result locations
- Add comprehensive tests covering all token types
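The replacement step described above can be sketched as follows. This is a minimal illustration, not the PR's actual code: `buildReplacer` is a hypothetical helper, and the minimum-length guard mirrors the `FilterMinLength` idea of skipping short values that would cause false positives.

```go
package main

import (
	"fmt"
	"strings"
)

// buildReplacer maps each sensitive value to the placeholder.
// strings.Replacer applies all replacements in a single pass
// over the input, which is where the O(n+m) behavior comes from.
func buildReplacer(secrets []string, minLen int) *strings.Replacer {
	pairs := make([]string, 0, len(secrets)*2)
	for _, s := range secrets {
		if len(s) >= minLen { // skip short values to avoid false positives
			pairs = append(pairs, s, "[FILTERED]")
		}
	}
	return strings.NewReplacer(pairs...)
}

func main() {
	r := buildReplacer([]string{"sk-abc123def456", "short"}, 8)
	fmt.Println(r.Replace("result: api_key=sk-abc123def456 note=short"))
	// result: api_key=[FILTERED] note=short
}
```

Note that "short" survives untouched: at 5 characters it falls below the 8-character threshold, so it never enters the replacer.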
yinwm (Collaborator) approved these changes and left a comment · Mar 23, 2026
LGTM! This is a well-designed security enhancement. The code quality is high, tests are comprehensive (13 test cases), documentation is complete (EN + CN), and all CI checks pass.
Key highlights:
- Efficient O(n+m) implementation using `strings.Replacer`
- Smart use of reflection to auto-collect all sensitive values
- Lazy initialization with `sync.Once`
- Flexible configuration with sensible defaults

Minor suggestions for future improvement (non-blocking):
- Add debug logging for filter operations
- Make the replacement string `[FILTERED]` configurable
- Consider caching optimization for large configs
Merging now. Thanks @uiYzzi!
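The lazy initialization the review highlights can be sketched roughly like this. The `SecurityConfig` fields shown are assumptions for illustration only; the real struct in the PR will differ.

```go
package main

import (
	"fmt"
	"strings"
	"sync"
)

// SecurityConfig is a hypothetical stand-in for the PR's config type.
type SecurityConfig struct {
	Secrets []string

	once     sync.Once
	replacer *strings.Replacer
}

// SensitiveDataReplacer builds the replacer exactly once on first use;
// every later call returns the cached instance, so the (potentially
// expensive) pair collection never runs twice.
func (c *SecurityConfig) SensitiveDataReplacer() *strings.Replacer {
	c.once.Do(func() {
		pairs := make([]string, 0, len(c.Secrets)*2)
		for _, s := range c.Secrets {
			pairs = append(pairs, s, "[FILTERED]")
		}
		c.replacer = strings.NewReplacer(pairs...)
	})
	return c.replacer
}

func main() {
	cfg := &SecurityConfig{Secrets: []string{"token-123456"}}
	fmt.Println(cfg.SensitiveDataReplacer().Replace("auth: token-123456"))
	// auth: [FILTERED]
}
```

`sync.Once` also makes the getter safe to call from concurrent agent-loop goroutines without extra locking.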
andressg79 pushed a commit to andressg79/picoclaw that referenced this pull request · Mar 30, 2026
…-from-tool-results feat(security): filter sensitive data from tool results before sending to LLM
Summary
Prevent the LLM from seeing its own credentials (API keys, tokens, secrets) by filtering sensitive values from tool call results before sending them to the model. Values are collected from `.security.yml` and replaced with `[FILTERED]` using an efficient `strings.Replacer` (O(n+m)).

Changes
- Add `FilterSensitiveData bool` and `FilterMinLength int` to `ToolsConfig` (default: enabled, min length 8)
- Implement `SensitiveDataReplacer()` with `sync.Once` caching in `SecurityConfig`

Test plan
- `go test ./pkg/config/...` — all filter-sensitive-data tests pass
- `go test ./pkg/tools/...` — tools tests pass
- `go build ./pkg/...` — pkg builds successfully

Type of Change
AI Code Generation