Mindgard/openai-llm-guard-proxy


Docs

An OpenAI-compatible API with full LLM Guard scanning on both input and output:

  • input_scanners = [Anonymize(vault), Toxicity(), TokenLimit(), PromptInjection()]
  • output_scanners = [Deanonymize(vault), NoRefusal(), Relevance(), Sensitive()]
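The proxy chains these scanners over every request and response. A minimal sketch of that pipeline shape, using toy stand-ins rather than llm-guard's real classes (every name below is illustrative; the only assumption carried over is llm-guard's `scan(text) -> (sanitized_text, is_valid, risk_score)` convention):

```python
class ToyTokenLimit:
    """Toy stand-in for TokenLimit(): rejects prompts over a word budget."""
    def __init__(self, limit=8):
        self.limit = limit

    def scan(self, text):
        is_valid = len(text.split()) <= self.limit
        return text, is_valid, 0.0 if is_valid else 1.0


class ToyAnonymize:
    """Toy stand-in for Anonymize(vault): redacts a known name."""
    def scan(self, text):
        return text.replace("Alice", "[REDACTED]"), True, 0.0


def run_scanners(text, scanners):
    """Chain scanners in order; block at the first failure, as the proxy would."""
    for scanner in scanners:
        text, is_valid, risk = scanner.scan(text)
        if not is_valid:
            raise ValueError(f"blocked by {type(scanner).__name__} (risk={risk})")
    return text


safe = run_scanners("Alice asked a short question", [ToyAnonymize(), ToyTokenLimit()])
print(safe)  # [REDACTED] asked a short question
```

Output scanners work the same way over the model's reply, with `Deanonymize(vault)` restoring what `Anonymize(vault)` redacted.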

Mess around with it at your discretion:

  • uv sync to install dependencies
  • have a valid OPENAI_API_KEY in your environment
  • python main.py to run (the first run downloads a few GB of assets needed by llm-guard)
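The steps above as shell commands (the API key value is a placeholder):

```shell
uv sync                       # install dependencies into the project environment
export OPENAI_API_KEY=sk-...  # placeholder; use your real key
python main.py                # first run downloads a few GB of llm-guard assets
```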

You can point the Mindgard CLI at this proxy with the following .toml:

target = "llm-guarded-model"
model_name = "gpt-4o"
preset = "openai-compatible"
url = "http://localhost:8000/v1/chat/completions"
api-key = "blah"
system-prompt = 'You are a helpful LLM assistant!'
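The settings above map onto a standard OpenAI chat-completions request. A hand-assembled sketch of the body the CLI would POST to the proxy, using only the standard library (nothing is sent here; the user message is made up, and the values are copied from the .toml):

```python
import json

# Values taken from the .toml config above.
config = {
    "model_name": "gpt-4o",
    "url": "http://localhost:8000/v1/chat/completions",
    "api_key": "blah",
    "system_prompt": "You are a helpful LLM assistant!",
}

# Standard chat-completions payload: system prompt first, then the user turn.
payload = {
    "model": config["model_name"],
    "messages": [
        {"role": "system", "content": config["system_prompt"]},
        {"role": "user", "content": "Hello!"},
    ],
}

# The api-key travels as a bearer token, as with the real OpenAI API.
headers = {
    "Authorization": f"Bearer {config['api_key']}",
    "Content-Type": "application/json",
}

body = json.dumps(payload)
print(body)
```

The proxy runs the input scanners over the messages before forwarding to OpenAI, and the output scanners over the reply before returning it.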

About

A Mindgard CLI-compatible OpenAI proxy with LLM Guard input and output checking.
