
# CLAUDE.md - SnailHunter AI Assistant Guide

## Project Overview

SnailHunter is an AI-powered bug bounty hunting automation platform that orchestrates security scanning tools with intelligent analysis, false positive filtering, and report generation.

## Core Philosophy

- AI enhances traditional security tools rather than replacing them
- Prioritize high-value targets (low competition + high bounty)
- Quality over quantity in findings (confidence scoring, FP filtering)
- Submission-ready reports with a business impact focus

## Architecture

### Pipeline Stages

Scope Ingest → Reconnaissance → Discovery & Mapping → Vulnerability Scanning → AI Validation → Report Generation

#### 1. Scope Ingest

- Parse program rules from HackerOne/Bugcrowd JSON or manual input
- Extract in-scope assets, exclusions, vulnerability types, payout tiers
- Calculate a target priority score: Expected Value = P(valid) × P(unique) × Average Bounty
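The priority formula above can be sketched as a small scoring helper. This is a hypothetical sketch; the function name, target names, and probability values are illustrative, not part of the codebase:

```python
def expected_value(p_valid: float, p_unique: float, avg_bounty: float) -> float:
    """Expected Value = P(valid) x P(unique) x Average Bounty."""
    if not (0.0 <= p_valid <= 1.0 and 0.0 <= p_unique <= 1.0):
        raise ValueError("probabilities must be in [0, 1]")
    return p_valid * p_unique * avg_bounty

# Rank candidate targets by expected value, highest first.
targets = [
    ("api.example.com", expected_value(0.6, 0.3, 1500.0)),
    ("legacy.example.com", expected_value(0.4, 0.7, 500.0)),
]
targets.sort(key=lambda t: t[1], reverse=True)
```

The point of the score is triage: a scarce, likely-valid finding on a mid-tier program can outrank a crowded high-bounty target.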

#### 2. Reconnaissance

| Phase | Tools/Sources |
| --- | --- |
| Passive | crt.sh, SecurityTrails, Wayback/GAU, GitHub dorking, Shodan/Censys, Google dorks |
| Semi-Passive | DNS resolution, robots.txt/sitemap, SSL cert analysis, header analysis |
| Active | httpx probe, port scanning, tech fingerprinting |
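Passive sources such as crt.sh expose a JSON endpoint (`output=json`) that is straightforward to consume. The sketch below is a hypothetical stdlib-only version of what a recon module might do; the real project plans to use async `httpx`, and the URL template and parsing rules here are assumptions:

```python
import json
import urllib.request

# %25 is the URL-encoded '%' wildcard, so this queries all subdomains.
CRTSH_URL = "https://crt.sh/?q=%25.{domain}&output=json"

def parse_crtsh(entries: list[dict]) -> set[str]:
    """Extract unique hostnames from crt.sh JSON entries.

    Each entry's name_value field may hold several newline-separated names.
    """
    hosts: set[str] = set()
    for entry in entries:
        for name in entry.get("name_value", "").splitlines():
            name = name.strip().lower().lstrip("*.")  # drop wildcard prefixes
            if name:
                hosts.add(name)
    return hosts

def fetch_crtsh(domain: str) -> set[str]:
    """Fetch and parse certificate transparency entries for a domain."""
    with urllib.request.urlopen(CRTSH_URL.format(domain=domain), timeout=30) as resp:
        return parse_crtsh(json.load(resp))
```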

#### 3. Discovery & Mapping

- **Content Discovery:** ffuf/feroxbuster, custom wordlists, Wayback URLs
- **Parameter Mining:** Arjun/ParamSpider, JS file analysis, Wayback params
- **API Discovery:** OpenAPI/Swagger, GraphQL introspection, API versioning

#### 4. Vulnerability Scanning

| Category | Tools |
| --- | --- |
| Web | Nuclei templates, SQLMap, Dalfox (XSS), Commix (command injection) |
| Cloud | ScoutSuite, Prowler (AWS), CloudFox, metadata SSRF checks |
| Infrastructure | Subdomain takeover, S3/GCS/Blob public checks, exposed services |

#### 5. AI Analysis & Validation

- **False Positive Filtering:** response context analysis, WAF signature detection, confidence scoring (0-1)
- **Vulnerability Chaining:** detect exploitable chains (SSRF → Metadata → Keys, IDOR → Admin → RCE, XSS → Session → ATO)
- **Severity Assessment:** CVSS v3.1 calculation, business impact analysis
- **Duplicate Checking:** Hacktivity API (if available), hash-based dedup
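Hash-based dedup can be as simple as a normalized fingerprint over the fields that identify "the same bug in the same place". This is a minimal sketch; the field choice and normalization rules are assumptions, not the project's final scheme:

```python
import hashlib

def finding_fingerprint(vuln_type: str, host: str, path: str, parameter: str = "") -> str:
    """Stable hash for duplicate detection: the same vulnerability class at
    the same location yields the same fingerprint regardless of scan run."""
    normalized = "|".join(
        part.strip().lower()
        for part in (vuln_type, host, path.rstrip("/"), parameter)
    )
    return hashlib.sha256(normalized.encode()).hexdigest()
```

Storing fingerprints in the SQLite findings table lets a new scan skip anything already reported.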

#### 6. Report Generation

AI generates submission-ready reports with:

- A clear, specific title
- Severity + CVSS vector
- Copy-paste-ready steps to reproduce
- Proof of concept (curl commands, screenshots)
- Business-focused impact ("So what?" framing)
- Remediation recommendations

Output formats: Markdown, JSON, platform-specific.
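A minimal sketch of template-driven Markdown rendering, assuming a simple finding dict; the actual templates in `templates/` and the field names used here are hypothetical:

```python
from string import Template

# Illustrative report skeleton; real templates would live in templates/.
REPORT_TEMPLATE = Template("""\
# $title

**Severity:** $severity ($cvss_vector)

## Steps to Reproduce
$steps

## Impact
$impact
""")

def render_report(finding: dict) -> str:
    """Render one finding as a submission-ready Markdown report."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(finding["steps"], 1))
    return REPORT_TEMPLATE.substitute(
        title=finding["title"],
        severity=finding["severity"],
        cvss_vector=finding["cvss_vector"],
        steps=steps,
        impact=finding["impact"],
    )
```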


## Project Structure (Planned)

```
snailhunter/
├── src/
│   ├── core/              # Core orchestration engine
│   │   ├── pipeline.py    # Main pipeline coordinator
│   │   ├── state.py       # State management
│   │   └── config.py      # Configuration handling
│   ├── stages/            # Pipeline stage implementations
│   │   ├── scope.py       # Scope ingestion
│   │   ├── recon.py       # Reconnaissance
│   │   ├── discovery.py   # Discovery & mapping
│   │   ├── scanning.py    # Vulnerability scanning
│   │   ├── validation.py  # AI validation & FP filtering
│   │   └── reporting.py   # Report generation
│   ├── tools/             # External tool wrappers
│   │   ├── nuclei.py
│   │   ├── ffuf.py
│   │   ├── httpx.py
│   │   └── ...
│   ├── ai/                # AI/LLM integration
│   │   ├── llm.py         # LLM client abstraction
│   │   ├── prompts/       # Prompt templates
│   │   └── chains.py      # Vulnerability chain detection
│   ├── models/            # Data models
│   │   ├── target.py
│   │   ├── finding.py
│   │   └── report.py
│   └── utils/             # Utilities
│       ├── rate_limit.py
│       └── cache.py
├── tests/
├── config/                # Default configurations
├── wordlists/             # Custom wordlists
├── templates/             # Report templates
├── data/                  # SQLite DB, scan results
└── docs/
```

## Development Guidelines

### Code Style

- Python 3.11+ with type hints throughout
- Use async/await for I/O-bound operations
- Pydantic models for data validation
- Structured logging with context

### Key Conventions

1. **Tool Wrappers:** Each external tool gets a wrapper class that:
   - Checks whether the tool is installed (graceful degradation)
   - Normalizes output to common data models
   - Handles timeouts and errors
   - Respects rate limits
2. **AI Integration:**
   - Abstract the LLM provider (support Anthropic, OpenAI, Ollama)
   - Cache LLM responses to manage costs
   - Use structured output (JSON mode) where possible
   - Store prompt templates in `src/ai/prompts/`
3. **State Management:**
   - SQLite for persistence (scans, findings, duplicates)
   - Each scan gets a unique ID
   - Resume capability for interrupted scans
4. **Error Handling:**
   - A failing tool must not crash the pipeline
   - Log warnings and continue with available data
   - Set finding confidence based on validation success

## Security Considerations

This tool is for **authorized security testing only**:

- Only test targets you have explicit permission to test
- Respect program scope and exclusions
- Never store credentials in code
- Rate-limit requests to avoid causing denial of service
- All findings must be responsibly disclosed

## Commands (Planned)

```bash
# Full pipeline scan
snailhunter scan --target example.com --program hackerone-program-name

# Individual stages
snailhunter recon --target example.com
snailhunter discover --target example.com
snailhunter validate --input findings.json

# Report generation
snailhunter report --finding FINDING_ID --format markdown

# Configuration
snailhunter config --set llm.provider=anthropic
```

## Key Methodologies

### OWASP Top 10 Coverage

- Injection (SQLi, command injection, LDAP, XPath)
- Broken Authentication
- Sensitive Data Exposure
- XXE
- Broken Access Control (IDOR, privilege escalation)
- Security Misconfiguration
- XSS (reflected, stored, DOM-based)
- Insecure Deserialization
- Using Components with Known Vulnerabilities
- Insufficient Logging & Monitoring

### Cloud Attack Surfaces

- **AWS:** metadata service (169.254.169.254), IAM misconfigurations, S3 buckets, Lambda
- **GCP:** metadata service, service account keys, GCS buckets
- **Azure:** IMDS, managed identities, blob storage

### Vulnerability Chaining Patterns

| Chain | Steps |
| --- | --- |
| Cloud Takeover | SSRF → Cloud Metadata → IAM Keys → Full Access |
| Account Takeover | IDOR → Admin Access → User Data |
| Session Hijack | XSS → Cookie Theft → Account Takeover |
| OAuth Bypass | Open Redirect → OAuth Flow → Token Theft |
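One way to express these patterns in code is as ordered link sequences checked per host: a chain "fires" when every vuln class in it has been found on the same target. The chain names below mirror the table, but the vuln-type labels and data shapes are assumptions for illustration:

```python
# Hypothetical chain definitions; labels are illustrative, not a fixed schema.
CHAINS: dict[str, tuple[str, ...]] = {
    "Cloud Takeover": ("ssrf", "cloud_metadata", "iam_keys"),
    "Session Hijack": ("xss", "cookie_theft"),
    "OAuth Bypass": ("open_redirect", "oauth_token_theft"),
}

def detect_chains(findings_by_host: dict[str, set[str]]) -> dict[str, list[str]]:
    """Map each host to the named chains whose links are all present."""
    hits: dict[str, list[str]] = {}
    for host, vuln_types in findings_by_host.items():
        matched = [name for name, links in CHAINS.items() if set(links) <= vuln_types]
        if matched:
            hits[host] = matched
    return hits
```

A detected chain can then be reported as a single high-severity finding instead of several low-impact ones.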

## AI Assistant Instructions

When working on this codebase:

1. **Security First:** All code must follow secure coding practices. Never introduce vulnerabilities.
2. **Ethical Boundaries:** This tool is for authorized testing only. Refuse to add features that:
   - Enable mass scanning without authorization
   - Bypass security controls maliciously
   - Exfiltrate data beyond PoC needs
   - Cause denial of service
3. **Testing:** All new features need tests. Security tools must be reliable.
4. **Documentation:** Update this file when the architecture changes significantly.
5. **Dependencies:** Prefer well-maintained security tools. Check CVEs before adding dependencies.
6. **LLM Prompts:** When modifying AI prompts:
   - Test with multiple inputs
   - Ensure structured output format
   - Consider cost implications
   - Add prompt versioning

## Dependencies & Tools

### Python Packages (Core)

- `httpx` - async HTTP client
- `pydantic` - data validation
- `typer` - CLI framework
- `rich` - terminal output
- `anthropic` / `openai` - LLM clients
- `aiosqlite` - async SQLite

### External Security Tools

- `nuclei` - template-based vulnerability scanner
- `ffuf` - web fuzzer
- `httpx` - HTTP probe (the ProjectDiscovery binary, distinct from the Python package above)
- `subfinder` - subdomain discovery
- `amass` - attack surface mapping
- `sqlmap` - SQL injection testing
- `dalfox` - XSS scanner

### Optional

- `prowler` - AWS security assessment
- `scoutsuite` - multi-cloud security audit
- `gau` - Wayback URL fetcher

## Configuration

Environment variables:

```bash
SNAILHUNTER_LLM_PROVIDER=anthropic  # or openai, ollama
SNAILHUNTER_API_KEY=your-api-key
SNAILHUNTER_RATE_LIMIT=10           # requests per second per target
SNAILHUNTER_DB_PATH=./data/snailhunter.db
```
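A stdlib-only sketch of loading these variables into a typed settings object; the project may well prefer pydantic settings instead, and this class shape is an assumption:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    """Typed view of the SNAILHUNTER_* environment variables, with the
    defaults shown in the configuration example above."""
    llm_provider: str = "anthropic"
    api_key: str = ""
    rate_limit: int = 10
    db_path: str = "./data/snailhunter.db"

    @classmethod
    def from_env(cls) -> "Settings":
        return cls(
            llm_provider=os.getenv("SNAILHUNTER_LLM_PROVIDER", cls.llm_provider),
            api_key=os.getenv("SNAILHUNTER_API_KEY", cls.api_key),
            rate_limit=int(os.getenv("SNAILHUNTER_RATE_LIMIT", str(cls.rate_limit))),
            db_path=os.getenv("SNAILHUNTER_DB_PATH", cls.db_path),
        )
```

Freezing the dataclass keeps runtime configuration immutable once a scan starts.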

## Contributing

1. Fork the repository
2. Create a feature branch
3. Write tests for new functionality
4. Ensure all tests pass
5. Submit a pull request with a clear description

## License

[TBD - Recommend Apache 2.0 or MIT for security tools]


Last updated: 2026-01-24