
vibesafex

Requires Python 3.9+. Licensed under Apache 2.0. Available on PyPI.

Stop shipping AI-generated code you haven't reviewed.

vibesafex catches the bugs your AI coding agent won't tell you about: hallucinated imports, hardcoded secrets, security vulnerabilities, and dead code.

Built for the vibe coding era. Works with code from Claude Code, Cursor, Copilot, Windsurf, and any AI coding tool.

Quick Start

pip install vibesafex
vibesafex scan .

What It Catches

| Code | Category | Severity | What |
|------|----------|----------|------|
| VS100-VS110 | Security | error | eval(), exec(), shell=True, SQL injection, os.system(), unsafe YAML, weak hashes |
| VS200-VS210 | Secrets | error | OpenAI/AWS/GitHub/Anthropic/Stripe API keys, private keys, JWTs, hardcoded credentials |
| VS300 | Imports | warning | Hallucinated imports — packages that don't exist (AI's favorite mistake) |
| VS400-VS403 | Dead Code | warning | Unused imports, unreachable code, empty except: pass, bare except |
| VS500-VS507 | AI Patterns | warning | TODO/FIXME left by AI, placeholder functions, NotImplementedError stubs, mutable defaults, star imports |
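To make the categories concrete, here is a small hypothetical file (not from the project) exhibiting a few of these patterns; a scan of it should report the mutable default argument, the bare except with an empty body, and the placeholder function:

```python
# demo.py - hypothetical snippet showing patterns a scan would flag

def append_item(item, bucket=[]):  # mutable default: the list persists across calls
    bucket.append(item)
    return bucket

def parse_config(path):
    try:
        return open(path).read()
    except:          # bare except swallows every error...
        pass         # ...and the empty handler fails silently

def process(data):
    pass             # placeholder body left unimplemented
```

Note how the mutable default makes `append_item` accumulate state between unrelated calls — exactly the kind of subtle bug that survives a quick review.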

Usage

Scan a directory

vibesafex scan src/

Scan specific files

vibesafex scan main.py utils.py

Check code from stdin

echo 'x = eval(input())' | vibesafex check

JSON output (for CI/CD)

vibesafex scan . --format json

Filter by severity

vibesafex scan . --severity error          # Only errors
vibesafex scan . --fail-on warning         # Fail CI on warnings too
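For CI, these flags slot into any pipeline runner. A minimal GitHub Actions sketch (workflow and job names are illustrative, not part of the project):

```yaml
# .github/workflows/vibesafex.yml (illustrative)
name: vibesafex
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install vibesafex
      - run: vibesafex scan . --fail-on error
```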

Python API

from vibesafex import scan_code, scan_file, scan_directory

# Scan a string
issues = scan_code('x = eval(input())')
for issue in issues:
    print(f"{issue.code}: {issue.message}")

# Scan a file
issues = scan_file("main.py")

# Scan a project
result = scan_directory("src/")
print(f"{result.error_count} errors found in {result.files_scanned} files")

Custom scanner configuration

from vibesafex import Scanner

scanner = Scanner(
    severity_threshold="warning",  # Skip info-level
    exclude_dirs={".venv", "migrations"},
)
result = scanner.scan_directory(".")

Example Output


  ✗ main.py:5:0 [error] VS100: Use of eval() - potential code injection vulnerability
  ✗ main.py:8:0 [error] VS200: Possible OpenAI API key
  ⚠ main.py:12:0 [warning] VS300: Import 'magic_ai_lib' - package 'magic_ai_lib' not found (hallucinated import?)
  ⚠ main.py:15:0 [warning] VS501: Function 'process' has empty body (pass) - placeholder
  ℹ main.py:20:0 [info] VS500: TODO comment - AI may have left incomplete implementation

5 files scanned: 2 errors, 2 warnings, 1 info

Pre-commit Hook

# .pre-commit-config.yaml
repos:
  - repo: local
    hooks:
      - id: vibesafex
        name: vibesafex
        entry: vibesafex scan --fail-on error
        language: python
        types: [python]
        additional_dependencies: [vibesafex]

Why Not Just Use Ruff/Pylint?

vibesafex focuses specifically on AI-generated code patterns that traditional linters miss:

  • Hallucinated imports: AI confidently imports packages that don't exist. vibesafex checks against stdlib, installed packages, and 200+ known popular packages.
  • Secret leakage: AI copies real-looking API keys into code. vibesafex detects patterns for 12+ providers.
  • Placeholder code: AI leaves pass, ..., NotImplementedError stubs that slip through review.
  • AI anti-patterns: Mutable defaults, star imports, excessive Any — patterns AI generates more often than humans.
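The hallucinated-import idea can be sketched in plain Python: a top-level module name that resolves neither to the stdlib nor to an installed package is a candidate hallucination. This is only an approximation of the approach, not vibesafex's actual implementation:

```python
import ast
import importlib.util

def suspicious_imports(source: str) -> list[str]:
    """Return imported top-level names that don't resolve to any
    stdlib module or installed package (candidate hallucinations)."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        else:
            continue
        for name in names:
            root = name.split(".")[0]  # only the top-level package matters
            if importlib.util.find_spec(root) is None:
                flagged.append(root)
    return flagged
```

A real checker also needs an allowlist of known-popular packages that happen not to be installed locally, which is why vibesafex ships its own package list.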

Use vibesafex alongside your existing linter, not instead of it.

See Also

Part of the stef41 LLM toolkit — open-source tools for every stage of the LLM lifecycle:

| Project | What it does |
|---------|--------------|
| tokonomics | Token counting & cost management for LLM APIs |
| datacrux | Training data quality — dedup, PII, contamination |
| castwright | Synthetic instruction data generation |
| datamix | Dataset mixing & curriculum optimization |
| toksight | Tokenizer analysis & comparison |
| trainpulse | Training health monitoring |
| ckpt | Checkpoint inspection, diffing & merging |
| quantbench | Quantization quality analysis |
| infermark | Inference benchmarking |
| modeldiff | Behavioral regression testing |
| injectionguard | Prompt injection detection |

License

Apache 2.0
