Replies: 1 comment
Determining whether a file is sensitive could be an issue in itself; our team will discuss it.
I’ve been testing A.I.G in an enterprise setup, and one thing I’m not sure how to evaluate is sensitive-data leakage.
When an AI app has access to internal data, that information carries different sensitivity levels, and not all of it should appear in the model’s output.
It would be helpful if A.I.G could do some basic checks here — for example, warning when a response might include data that’s considered too sensitive for that workflow.
Just wondering whether this kind of sensitive-data leakage assessment is something the maintainers are considering.
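To make the request concrete, here is a minimal sketch of what such a basic check might look like. This is purely illustrative and not part of A.I.G: the pattern names, the regexes, and the `flag_sensitive` helper are all my own assumptions; a real deployment would presumably use org-specific classifiers rather than hardcoded patterns.

```python
import re

# Hypothetical sensitivity patterns -- placeholders, not A.I.G's actual rules.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def flag_sensitive(response: str) -> list[str]:
    """Return the names of sensitive-data categories detected in a model response."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(response)]

# A workflow could then warn (or block) before the response reaches the user.
hits = flag_sensitive("Contact me at alice@example.com, key sk-abcdef1234567890XYZ")
print(hits)  # → ['email', 'api_key']
```

The idea is just that the assessment hook sits between the model output and the user, and returns which categories were triggered so the workflow can decide whether that level of sensitivity is acceptable.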