SnailHunter is an AI-powered bug bounty hunting automation platform that orchestrates security scanning tools with intelligent analysis, false positive filtering, and report generation.
- AI enhances traditional security tools rather than replacing them
- Prioritize high-value targets (low competition + high bounty)
- Quality over quantity in findings (confidence scoring, FP filtering)
- Submission-ready reports with business impact focus
Scope Ingest → Reconnaissance → Discovery & Mapping → Vulnerability Scanning → AI Validation → Report Generation
- Parse program rules from HackerOne/Bugcrowd JSON or manual input
- Extract in-scope assets, exclusions, vulnerability types, payout tiers
- Calculate target priority score using:
Expected Value = P(valid) × P(unique) × Average Bounty
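This score might be computed with a small helper; the field names below are illustrative, not the actual `models/target.py` schema:

```python
from dataclasses import dataclass

@dataclass
class TargetScore:
    p_valid: float     # estimated probability a finding here is valid (0-1)
    p_unique: float    # estimated probability it is not a duplicate (0-1)
    avg_bounty: float  # program's average payout, in USD

def expected_value(s: TargetScore) -> float:
    """Expected Value = P(valid) x P(unique) x Average Bounty."""
    return s.p_valid * s.p_unique * s.avg_bounty
```

Targets are then ranked by descending expected value, so a mid-probability finding on a high-payout program can outrank a near-certain finding on a low-payout one.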
| Phase | Tools/Sources |
|---|---|
| Passive | crt.sh, SecurityTrails, Wayback/GAU, GitHub dorking, Shodan/Censys, Google dorks |
| Semi-Passive | DNS resolution, robots.txt/sitemap, SSL cert analysis, header analysis |
| Active | httpx probe, port scanning, tech fingerprinting |
- Content Discovery: ffuf/feroxbuster, custom wordlists, Wayback URLs
- Parameter Mining: Arjun/ParamSpider, JS file analysis, Wayback params
- API Discovery: OpenAPI/Swagger, GraphQL introspection, API versioning
| Category | Tools |
|---|---|
| Web | Nuclei templates, SQLMap, Dalfox (XSS), Commix (Cmd Inj) |
| Cloud | ScoutSuite, Prowler (AWS), CloudFox, metadata SSRF checks |
| Infrastructure | Subdomain takeover, S3/GCS/Blob public checks, exposed services |
- False Positive Filtering: Response context analysis, WAF signature detection, confidence scoring (0-1)
- Vulnerability Chaining: Detect exploitable chains (SSRF→Metadata→Keys, IDOR→Admin→RCE, XSS→Session→ATO)
- Severity Assessment: CVSS v3.1 calculation, business impact analysis
- Duplicate Checking: Hacktivity API (if available), hash-based dedup
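Hash-based dedup can be sketched as a normalized fingerprint; the normalization rules here (lowercasing, trailing-slash stripping) are illustrative:

```python
import hashlib

def finding_fingerprint(vuln_type: str, host: str, path: str, parameter: str = "") -> str:
    """Stable hash for duplicate checking; fields are normalized so
    cosmetic differences (case, trailing slashes) do not defeat dedup."""
    key = "|".join([
        vuln_type.lower(),
        host.lower(),
        path.rstrip("/").lower(),
        parameter.lower(),
    ])
    return hashlib.sha256(key.encode()).hexdigest()
```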
AI generates submission-ready reports with:
- Clear, specific title
- Severity + CVSS vector
- Copy-paste ready Steps to Reproduce
- Proof of Concept (curl commands, screenshots)
- Business-focused impact ("So What?" framing)
- Remediation recommendations
Output formats: Markdown, JSON, platform-specific
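A minimal sketch of rendering a finding to Markdown; the `Finding` fields shown are assumptions, not the real `models/finding.py` schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    title: str
    severity: str
    cvss_vector: str
    steps: list[str]
    impact: str

def to_markdown(f: Finding) -> str:
    """Render a finding as a submission-ready Markdown report."""
    steps = "\n".join(f"{i}. {s}" for i, s in enumerate(f.steps, 1))
    return (
        f"# {f.title}\n\n"
        f"**Severity:** {f.severity} ({f.cvss_vector})\n\n"
        f"## Steps to Reproduce\n{steps}\n\n"
        f"## Impact\n{f.impact}\n"
    )
```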
```
TheMothership/
├── src/
│   ├── core/               # Core orchestration engine
│   │   ├── pipeline.py     # Main pipeline coordinator
│   │   ├── state.py        # State management
│   │   └── config.py       # Configuration handling
│   ├── stages/             # Pipeline stage implementations
│   │   ├── scope.py        # Scope ingestion
│   │   ├── recon.py        # Reconnaissance
│   │   ├── discovery.py    # Discovery & mapping
│   │   ├── scanning.py     # Vulnerability scanning
│   │   ├── validation.py   # AI validation & FP filtering
│   │   └── reporting.py    # Report generation
│   ├── tools/              # External tool wrappers
│   │   ├── nuclei.py
│   │   ├── ffuf.py
│   │   ├── httpx.py
│   │   └── ...
│   ├── ai/                 # AI/LLM integration
│   │   ├── llm.py          # LLM client abstraction
│   │   ├── prompts/        # Prompt templates
│   │   └── chains.py       # Vulnerability chain detection
│   ├── models/             # Data models
│   │   ├── target.py
│   │   ├── finding.py
│   │   └── report.py
│   └── utils/              # Utilities
│       ├── rate_limit.py
│       └── cache.py
├── tests/
├── config/                 # Default configurations
├── wordlists/              # Custom wordlists
├── templates/              # Report templates
├── data/                   # SQLite DB, scan results
└── docs/
```
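The coordinator in `src/core/pipeline.py` could be as simple as an ordered list of async stages; this sketch uses a plain `dict` for state, whereas the real code would use Pydantic models:

```python
import asyncio
from typing import Awaitable, Callable

# A stage takes the accumulated scan state and returns the updated state.
Stage = Callable[[dict], Awaitable[dict]]

class Pipeline:
    """Runs stages in order, mirroring the Scope -> Recon -> Discovery ->
    Scanning -> Validation -> Reporting flow."""

    def __init__(self, stages: list[Stage]):
        self.stages = stages

    async def run(self, state: dict) -> dict:
        for stage in self.stages:
            state = await stage(state)
        return state
```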
- Python 3.11+ with type hints throughout
- Use `async`/`await` for I/O-bound operations
- Pydantic models for data validation
- Structured logging with context
- **Tool Wrappers**: Each external tool gets a wrapper class that:
- Checks if tool is installed (graceful degradation)
- Normalizes output to common data models
- Handles timeouts and errors
- Respects rate limits
- **AI Integration**:
- Abstract LLM provider (support Anthropic, OpenAI, Ollama)
- Cache LLM responses to manage costs
- Use structured output (JSON mode) where possible
- Prompt templates stored in `src/ai/prompts/`
- **State Management**:
- SQLite for persistence (scans, findings, duplicates)
- Each scan gets a unique ID
- Resume capability for interrupted scans
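A minimal sketch of the scan table and resume logic, assuming a `stage` column tracks pipeline progress (schema is illustrative):

```python
import sqlite3
import uuid

SCHEMA = """
CREATE TABLE IF NOT EXISTS scans (
    id     TEXT PRIMARY KEY,
    target TEXT NOT NULL,
    stage  TEXT NOT NULL DEFAULT 'scope'
);
"""

def start_scan(conn: sqlite3.Connection, target: str) -> str:
    """Create a scan with a unique ID; the current stage is persisted
    so an interrupted scan can be resumed where it left off."""
    conn.executescript(SCHEMA)
    scan_id = uuid.uuid4().hex
    conn.execute("INSERT INTO scans (id, target) VALUES (?, ?)", (scan_id, target))
    return scan_id

def resume_stage(conn: sqlite3.Connection, scan_id: str) -> str:
    row = conn.execute("SELECT stage FROM scans WHERE id = ?", (scan_id,)).fetchone()
    return row[0]
```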
- **Error Handling**:
- Tools failing should not crash the pipeline
- Log warnings and continue with available data
- Mark findings with confidence based on validation success
This tool is for authorized security testing only:
- Only test targets you have explicit permission to test
- Respect program scope and exclusions
- Never store credentials in code
- Rate limit requests to avoid DoS
- All findings must be responsibly disclosed
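The rate-limiting requirement can be met with a simple pacing limiter; a sketch of what `src/utils/rate_limit.py` might contain (not the actual implementation):

```python
import asyncio
import time

class RateLimiter:
    """Allows at most `rate` acquisitions per second by spacing
    callers one interval apart."""

    def __init__(self, rate: float):
        self.interval = 1.0 / rate
        self._next = 0.0  # monotonic time of the next free slot

    async def acquire(self) -> None:
        now = time.monotonic()
        wait = self._next - now
        self._next = max(now, self._next) + self.interval
        if wait > 0:
            await asyncio.sleep(wait)
```

Each target would typically get its own limiter instance so one slow host does not throttle the rest of the scan.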
```bash
# Full pipeline scan
snailhunter scan --target example.com --program hackerone-program-name

# Individual stages
snailhunter recon --target example.com
snailhunter discover --target example.com
snailhunter validate --input findings.json

# Report generation
snailhunter report --finding FINDING_ID --format markdown

# Configuration
snailhunter config --set llm.provider=anthropic
```

- Injection (SQLi, Command Injection, LDAP, XPath)
- Broken Authentication
- Sensitive Data Exposure
- XXE
- Broken Access Control (IDOR, privilege escalation)
- Security Misconfiguration
- XSS (Reflected, Stored, DOM-based)
- Insecure Deserialization
- Using Components with Known Vulnerabilities
- Insufficient Logging & Monitoring
- AWS: Metadata service (169.254.169.254), IAM misconfig, S3 buckets, Lambda
- GCP: Metadata service, service account keys, GCS buckets
- Azure: IMDS, managed identities, blob storage
| Chain | Steps |
|---|---|
| Cloud Takeover | SSRF → Cloud Metadata → IAM Keys → Full Access |
| Account Takeover | IDOR → Admin Access → User Data |
| Session Hijack | XSS → Cookie Theft → Account Takeover |
| OAuth Bypass | Open Redirect → OAuth Flow → Token Theft |
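Chain detection can start as simple pattern matching over the finding types seen on a target; the type labels and chain definitions below are illustrative:

```python
# Each chain is a list of finding types that must all be present.
CHAINS: dict[str, list[str]] = {
    "Cloud Takeover": ["ssrf", "cloud_metadata", "iam_keys"],
    "Account Takeover": ["idor", "admin_access"],
    "Session Hijack": ["xss", "cookie_theft"],
}

def detect_chains(finding_types: set[str]) -> list[str]:
    """Return every chain whose links all appear among a target's findings."""
    return [
        name for name, links in CHAINS.items()
        if all(link in finding_types for link in links)
    ]
```

A fuller version in `src/ai/chains.py` could ask the LLM to judge whether the specific findings are actually composable, rather than relying on type labels alone.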
When working on this codebase:
- **Security First**: All code must follow secure coding practices. Never introduce vulnerabilities.
- **Ethical Boundaries**: This tool is for authorized testing only. Refuse to add features that:
- Enable mass scanning without authorization
- Bypass security controls maliciously
- Exfiltrate data beyond PoC needs
- Cause denial of service
- **Testing**: All new features need tests. Security tools must be reliable.
- **Documentation**: Update this file when architecture changes significantly.
- **Dependencies**: Prefer well-maintained security tools. Check CVEs before adding dependencies.
- **LLM Prompts**: When modifying AI prompts:
- Test with multiple inputs
- Ensure structured output format
- Consider cost implications
- Add prompt versioning
- `httpx` - Async HTTP client
- `pydantic` - Data validation
- `typer` - CLI framework
- `rich` - Terminal output
- `anthropic`/`openai` - LLM clients
- `aiosqlite` - Async SQLite

- `nuclei` - Template-based vulnerability scanner
- `ffuf` - Web fuzzer
- `httpx` - HTTP probe
- `subfinder` - Subdomain discovery
- `amass` - Attack surface mapping
- `sqlmap` - SQL injection testing
- `dalfox` - XSS scanner

- `prowler` - AWS security assessment
- `scoutsuite` - Multi-cloud security audit
- `gau` - Wayback URL fetcher
Environment variables:
```bash
SNAILHUNTER_LLM_PROVIDER=anthropic   # or openai, ollama
SNAILHUNTER_API_KEY=your-api-key
SNAILHUNTER_RATE_LIMIT=10            # requests per second per target
SNAILHUNTER_DB_PATH=./data/snailhunter.db
```

- Fork the repository
- Create a feature branch
- Write tests for new functionality
- Ensure all tests pass
- Submit a pull request with clear description
[TBD - Recommend Apache 2.0 or MIT for security tools]
Last updated: 2026-01-24