
Proxene – Developer-First AI Governance Proxy

Point your LLM requests at Proxene for instant cost savings, PII protection, and compliance auditing. Policies are plain YAML, debuggable locally and in VS Code.

License: MIT · Python 3.11+

🚀 Quick Start

```bash
# Install
pip install proxene

# Start the proxy
proxene start --port 8081

# Point your LLM calls at Proxene
# Old: https://api.openai.com/v1/chat/completions
# New: http://localhost:8081/v1/chat/completions
```
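In application code, the redirect is just a base-URL swap. A minimal stdlib sketch of that rewrite (the `to_proxene` helper is illustrative, not part of Proxene's API; the port assumes the `proxene start --port 8081` example above):

```python
from urllib.parse import urlsplit, urlunsplit

PROXENE_BASE = "http://localhost:8081"  # assumed local proxy address

def to_proxene(provider_url: str) -> str:
    """Rewrite a provider endpoint so the request flows through Proxene."""
    parts = urlsplit(provider_url)
    proxy = urlsplit(PROXENE_BASE)
    # Keep the original path (e.g. /v1/chat/completions); swap scheme and host.
    return urlunsplit((proxy.scheme, proxy.netloc, parts.path, parts.query, ""))

print(to_proxene("https://api.openai.com/v1/chat/completions"))
# http://localhost:8081/v1/chat/completions
```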

✨ Features

  • Policy-as-Code - Define cost limits, PII detection, and routing rules in YAML
  • Cost Guard - Real-time token counting with per-request/minute/daily caps
  • PII Shield - Auto-detect and redact sensitive data (SSN, emails, credit cards)
  • Local-First - Debug policies with CLI replay, no cloud dependency
  • VS Code Integration - Live request monitoring and policy debugging
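The PII Shield's `redact` action can be pictured as pattern-based masking. A hedged sketch — these regexes and `[REDACTED-*]` placeholders are illustrative, not Proxene's actual detectors, which would need to be far more robust:

```python
import re

# Illustrative patterns only; real detection handles many more formats.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("SSN 123-45-6789, mail alice@example.com"))
```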

📋 Example Policy

```yaml
# policies/default.yaml
cost_limits:
  max_per_request: 0.03
  daily_cap: 100.00

pii_detection:
  enabled: true
  action: redact  # or: block, warn

routing:
  - if: complexity < 3
    use: claude-3-haiku
  - else:
    use: gpt-4o
```
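How such a policy might be evaluated per request — a sketch with the policy pre-parsed into a dict (Proxene would load it from YAML; the function names here are illustrative, not Proxene's API):

```python
# The example policy above, pre-parsed into Python structures.
POLICY = {
    "cost_limits": {"max_per_request": 0.03, "daily_cap": 100.00},
    "routing": [
        {"if": lambda req: req["complexity"] < 3, "use": "claude-3-haiku"},
        {"if": lambda req: True, "use": "gpt-4o"},  # the `else` branch
    ],
}

def route(request: dict) -> str:
    """Pick the first routing rule whose condition matches."""
    for rule in POLICY["routing"]:
        if rule["if"](request):
            return rule["use"]

def within_cost_limit(estimated_cost: float) -> bool:
    """Check the per-request cap; a real guard would also track the daily cap."""
    return estimated_cost <= POLICY["cost_limits"]["max_per_request"]

print(route({"complexity": 2}))   # claude-3-haiku
print(within_cost_limit(0.05))    # False
```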

🛠️ Architecture

```
Your App → Proxene Proxy → LLM Provider
             ↓
         [Policy Engine]
         [Cost Guard]
         [PII Detector]
         [OTEL Logging]
```
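The stages in the diagram behave like a middleware chain: each check sees the request before it is forwarded upstream. A simplified sketch (stage names mirror the diagram; the interfaces are assumptions, not Proxene internals):

```python
def policy_engine(req: dict) -> dict:
    # Routing decision (mirrors the example policy's complexity rule).
    req["model"] = "claude-3-haiku" if req.get("complexity", 5) < 3 else "gpt-4o"
    return req

def cost_guard(req: dict) -> dict:
    # Reject requests whose estimated cost exceeds the per-request cap.
    if req.get("estimated_cost", 0.0) > 0.03:
        raise ValueError("per-request cost cap exceeded")
    return req

def pii_detector(req: dict) -> dict:
    # Stub: record how many PII findings were redacted.
    req["pii_hits"] = 0
    return req

PIPELINE = [policy_engine, cost_guard, pii_detector]

def handle(req: dict) -> dict:
    """Run the request through every stage; it would then be forwarded."""
    for stage in PIPELINE:
        req = stage(req)
    return req

print(handle({"complexity": 2, "estimated_cost": 0.01}))
```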

📊 Observability

Built-in OpenTelemetry support for monitoring:

  • Request costs and token usage
  • PII detection hits
  • Policy violations
  • Model routing decisions
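These metrics map naturally onto OpenTelemetry span attributes. A stdlib-only sketch of the attribute shape a per-request span might carry — the `proxene.*` keys are illustrative, not a documented schema:

```python
def span_attributes(cost_usd: float, tokens: int, pii_hits: int,
                    violations: int, model: str) -> dict:
    """Flatten per-request governance data into OTEL-style key/value attributes."""
    return {
        "proxene.cost.usd": cost_usd,            # request cost
        "proxene.tokens.total": tokens,          # token usage
        "proxene.pii.hits": pii_hits,            # PII detection hits
        "proxene.policy.violations": violations, # policy violations
        "proxene.routing.model": model,          # routing decision
    }

print(span_attributes(0.012, 842, 1, 0, "claude-3-haiku"))
```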

🤝 Contributing

We welcome contributions! Please see our Contributing Guide.

📄 License

MIT License - see LICENSE file


Built with ❤️ for developers who need simple, powerful AI governance
