---
description: The NeuroLink CLI provides a professional command-line interface for AI text generation, provider management, and workflow automation.
---
import Tabs from '@theme/Tabs'; import TabItem from '@theme/TabItem';
The NeuroLink CLI provides a professional command-line interface for AI text generation, provider management, and workflow automation.
The CLI is designed for:
- Developers who want to integrate AI into scripts and workflows
- Content creators who need quick AI text generation
- System administrators who manage AI provider configurations
- Researchers who experiment with different AI models and providers
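For the script-integration case above, a minimal sketch of a wrapper function might look like the following. The function name `gen_or_fail` and the guard logic are illustrative, not part of the CLI; the commented-out invocation uses only the documented `generate` command and `--output` flag.

```shell
#!/usr/bin/env sh
# Sketch: guard wrapper for scripted generation (hypothetical helper, not part of NeuroLink).
# Refuses empty prompts before the CLI would be invoked.
gen_or_fail() {
  prompt="$1"
  out="$2"
  if [ -z "$prompt" ]; then
    echo "error: empty prompt" >&2
    return 1
  fi
  # In a real script this line would run the documented command:
  # npx @juspay/neurolink generate "$prompt" --output "$out"
  echo "would run: neurolink generate \"$prompt\" --output $out"
}
```

A guard like this keeps automation pipelines from burning API calls on malformed input before the CLI is ever reached.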
```bash
# Text generation (primary commands)
neurolink generate "Your prompt here"
neurolink gen "Your prompt" # Short form

# Real-time streaming
neurolink stream "Tell me a story"

# Provider management
neurolink status # Check all providers
neurolink provider status --verbose # Detailed diagnostics
```

```bash
# With analytics and evaluation
neurolink generate "Write code" --enable-analytics --enable-evaluation

# Custom provider and model
neurolink gen "Explain AI" --provider openai --model gpt-4

# Batch processing from a file
neurolink batch prompts.txt

# Output to file
neurolink generate "Documentation" --output result.md
```

```bash
# Built-in tools (working)
neurolink generate "What time is it?" --debug

# Disable tools
neurolink generate "Pure text" --disable-tools

# MCP server management
neurolink mcp discover
neurolink mcp list
neurolink mcp install <server>
```

```bash
# Start server in foreground
neurolink serve --port 3000 --framework hono

# Background server management
neurolink server start --port 8080
neurolink server status
neurolink server stop

# Claude proxy + local telemetry
neurolink proxy setup
neurolink proxy status
neurolink proxy telemetry setup
neurolink proxy telemetry status

# View and manage routes
neurolink server routes
neurolink server routes --group agent --format json

# Configuration management
neurolink server config
neurolink server config --set defaultPort=8080
```

- Commands Reference — Complete reference for all CLI commands, options, and parameters with detailed explanations.
- Examples — Practical examples and common usage patterns for different scenarios and workflows.
- Advanced Usage — Advanced features like batch processing, streaming, analytics, and custom configurations.
- Claude Proxy — Multi-account Claude proxy setup, lifecycle commands, routing, and local service management.
- Claude Proxy Observability — Local OpenObserve stack setup, dashboard import, and how to read proxy logs, traces, and metrics.
The CLI requires no installation for basic usage:

```bash
# Direct usage (recommended)
npx @juspay/neurolink generate "Hello, AI"

# Global installation (optional)
npm install -g @juspay/neurolink
neurolink generate "Hello, AI"
```

The CLI automatically loads configuration from:

- Environment variables (`.env` file)
- Command-line options
- Auto-detection of available providers

```bash
# Create .env file
echo 'OPENAI_API_KEY="sk-your-key"' > .env
echo 'GOOGLE_AI_API_KEY="AIza-your-key"' >> .env

# Test configuration
neurolink status
```

The CLI includes several interactive and automation features:
:::tip[Auto-Provider Selection] NeuroLink automatically selects the best available provider based on configuration and performance. :::
:::info[Built-in Tools] All commands include 6 built-in tools by default: time, file operations, math calculations, and more. :::
:::note[Streaming Support] Real-time streaming displays results as they're generated, perfect for long-form content. :::
The CLI works seamlessly with:
- Shell scripts and automation
- CI/CD pipelines for automated content generation
- Git hooks for documentation updates
- Cron jobs for scheduled AI tasks
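As a sketch of the cron-job pattern above, a crontab entry might look like the following. The schedule, prompt, and output path are placeholders; only `npx @juspay/neurolink`, the `generate` command, and the `--output` flag come from the documentation above.

```shell
# Hypothetical crontab entry: generate a report at 02:00 every day.
# Schedule, prompt, and file path are illustrative only.
0 2 * * * npx @juspay/neurolink generate "Summarize yesterday's activity" --output /var/reports/daily.md
```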
```bash
# General help
neurolink --help

# Command-specific help
neurolink generate --help
neurolink mcp --help

# Check provider status
neurolink status --verbose
```

For troubleshooting, see our Troubleshooting Guide or FAQ.