Real-time observability for AI agents. Track costs, monitor errors, replay prompts.
Website · Documentation · Get Started Free · @agentpulses
AgentPulse is a real-time monitoring and observability platform for AI agents. It captures every LLM API call your agent makes and gives you a dashboard to track costs, monitor errors, analyze token usage, and replay exact prompts.
The problem: AI agents make hundreds of LLM calls per session. Without observability, you're flying blind — burning money, missing errors, and unable to debug why your agent behaved a certain way.
The solution: Install the AgentPulse plugin alongside your agent. It captures every LLM call and streams it to your dashboard in real-time.
- Accurate Cost Tracking — Exact per-call costs parsed directly from LLM API responses (not estimates)
- 50+ Models Supported — OpenAI, Anthropic, MiniMax, Google Gemini, Mistral, Cohere, DeepSeek, Grok, Llama, and more
- Prompt Replay — See the exact prompts your agent sent and responses it received
- Error Monitoring — Catch rate limits, API failures, and auth errors before your users notice
- Smart Alerts — Get notified when costs spike, errors accumulate, or rate limits hit
- Model Analytics — Compare token usage, latency, and costs across providers
- AI Recommendations — Get suggestions to optimize costs and reduce errors
- Multi-agent Support — Monitor multiple agents from a single dashboard
- Mobile Responsive — Check your agents from your phone
```bash
npm install @agentpulse/agentpulse
```

Then add to the top of your app:

```js
const agentpulse = require('@agentpulse/agentpulse');

agentpulse.init({ apiKey: 'ap_...', agentName: 'my-bot' });
agentpulse.autoInstrument();
// All OpenAI / Anthropic calls are now tracked automatically
```

Or install globally for the CLI:

```bash
sudo npm install -g @agentpulse/agentpulse
agentpulse init
agentpulse start -d
```

SSH into your server and run:

```bash
sudo apt install -y pipx && pipx install "git+https://github.com/sru4ka/agentpulse.git#subdirectory=plugin" && pipx ensurepath && source ~/.bashrc
agentpulse init          # paste your API key
agentpulse enable-proxy  # enables prompt & response capture
agentpulse start -d      # runs in background, tracks all LLM calls
source ~/.bashrc         # loads proxy env vars into current shell
```

That's it. Sign up at agentpulses.com/signup to get your API key, then view your dashboard at agentpulses.com/dashboard.
**Prompt capture:** The `enable-proxy` step sets up a local proxy (port 8787) that captures exact prompts and responses. Without it, you only get token counts and costs. After enabling, restart agentpulse and run `source ~/.bashrc` in any open terminals.

**Important:** Do NOT use `pip install agentpulse` — that's an unrelated PyPI package. Always install from the git URL above.
| Problem | Solution |
|---|---|
| `AgentPulse is not a constructor` | The SDK exports functions, not a class. Use `agentpulse.init()`, not `new AgentPulse()` |
| `Cannot find module '@agentpulse/agentpulse'` | Run `npm install @agentpulse/agentpulse` in your project |
| `EACCES: permission denied` on global install | Use `sudo npm install -g @agentpulse/agentpulse` |
| No events on dashboard | Make sure `init()` is called before `autoInstrument()`, and your API key is correct |
| `agentpulse: command not found` after global install | Open a new terminal, or check that npm's global bin directory is in your `PATH` |
| Streaming calls not tracked | Streaming responses are not tracked by `autoInstrument()` — use `agentpulse.track(response)` manually |
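The last row deserves a sketch. With OpenAI-style streaming (and `stream_options: { include_usage: true }`), usage arrives only on the final chunk, so a small helper can collect it before handing it to `agentpulse.track()` manually. The helper below is illustrative, not part of the SDK:

```js
// Illustrative helper: collect text and usage from an OpenAI-style stream
// so the result can be passed to agentpulse.track() by hand.
function usageFromChunks(chunks) {
  let usage = null;
  let text = '';
  for (const chunk of chunks) {
    // Content deltas arrive chunk by chunk...
    const delta = chunk.choices?.[0]?.delta?.content;
    if (delta) text += delta;
    // ...but usage is only present on the final chunk.
    if (chunk.usage) usage = chunk.usage;
  }
  return { text, usage };
}

// Example with fake chunks shaped like OpenAI streaming responses:
const chunks = [
  { choices: [{ delta: { content: 'Hello' } }] },
  { choices: [{ delta: { content: ' world' } }] },
  { choices: [{ delta: {} }], usage: { prompt_tokens: 5, completion_tokens: 2 } },
];
const result = usageFromChunks(chunks);
// result.text === 'Hello world', result.usage.completion_tokens === 2
// Then: agentpulse.track(result);  // the manual call from the table above
```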
| Problem | Solution |
|---|---|
| `agentpulse: command not found` | Run `source ~/.bashrc` or open a new terminal. Check `pipx list` |
| Dashboard shows 0 events | Run `agentpulse status` to confirm it's running, then `agentpulse test` |
| API key invalid / 401 | Re-run `agentpulse init` with a fresh key from Settings |
| Upgrade to latest | `pipx upgrade agentpulse`, then `agentpulse stop && agentpulse start -d` |
AgentPulse extracts exact token counts and costs from API responses. Any model that returns usage data is supported automatically.
| Provider | Models | Cost Tracking |
|---|---|---|
| Anthropic | Claude Opus 4, Sonnet 4.5, Haiku 4, and older | Exact (from API response) |
| OpenAI | GPT-4o, GPT-4o-mini, o3, o1, GPT-4 Turbo | Exact (from API response) |
| MiniMax | MiniMax-M2.5, MiniMax-M1, Text-02 | Exact (from API response) |
| Google | Gemini 2.0 Flash/Pro, 1.5 Pro/Flash | Exact (from API response) |
| Mistral | Large, Small, Codestral, Mixtral | Exact (from API response) |
| DeepSeek | DeepSeek-R1, V3, Chat, Coder | Exact (from API response) |
| xAI | Grok-3, Grok-2 | Exact (from API response) |
| Cohere | Command R+, Command R | Exact (from API response) |
| Meta | Llama 3.3, 3.1 (via any provider) | Exact (from API response) |
| Amazon | Nova Pro, Lite, Micro | Exact (from API response) |
| Any other | OpenAI-compatible APIs | Exact if usage field present |
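To illustrate the "Exact" column: once the `usage` field is in hand, per-call cost is just token counts times the provider's per-token price. A minimal sketch — the price table here is a hypothetical example for illustration, not AgentPulse's actual rate data:

```js
// Sketch: compute per-call cost from an API response's usage field.
// Prices are per million tokens and are EXAMPLE values only.
const EXAMPLE_PRICES = {
  'claude-sonnet-4-5': { input: 3.0, output: 15.0 },
  'gpt-4o-mini': { input: 0.15, output: 0.6 },
};

function costFromUsage(model, usage) {
  const p = EXAMPLE_PRICES[model];
  if (!p) return null; // unknown model: report token counts only
  return (usage.input_tokens * p.input + usage.output_tokens * p.output) / 1e6;
}

// 1200 input / 450 output tokens → roughly $0.0104 under these example prices
const cost = costFromUsage('claude-sonnet-4-5', { input_tokens: 1200, output_tokens: 450 });
```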
| Framework | Status | Integration |
|---|---|---|
| Any Node.js app | Full support | npm SDK (auto-instrument) |
| Any Python agent | Full support | Python SDK (auto-instrument) |
| OpenClaw | Full support | SDK or Daemon |
| LangChain | Full support | Python SDK |
| CrewAI | Full support | Python SDK |
| AutoGen | Full support | Python SDK |
┌─────────────┐ ┌──────────────────┐ ┌─────────────────┐
│ Your Agent │────▶│ AgentPulse SDK │────▶│ AgentPulse │
│ (any │ │ (Python / npm) │ │ Dashboard │
│ framework) │ │ │ │ (Next.js) │
│ │ │ Intercepts LLM │ │ │
│ OpenAI / │ │ SDK calls │ │ Real-time │
│ Anthropic / │ │ Exact tokens │ │ charts, costs, │
│ MiniMax ... │ │ POSTs to API │ │ prompt replay │
└─────────────┘ └──────────────────┘ └─────────────────┘
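The "Intercepts LLM SDK calls" box is the interesting part. Auto-instrumentation of this kind typically works by wrapping the client's request method so every call is timed and its usage recorded before the result is returned to the caller. A minimal sketch of the idea — not the actual SDK internals:

```js
// Minimal sketch of auto-instrumentation by method wrapping.
// `events` stands in for the queue the SDK would POST to the dashboard.
const events = [];

function instrument(client, methodName) {
  const original = client[methodName].bind(client);
  client[methodName] = async (...args) => {
    const start = Date.now();
    try {
      const response = await original(...args);
      events.push({
        model: response.model,
        usage: response.usage,
        latency_ms: Date.now() - start,
        status: 'success',
      });
      return response; // caller sees the response unchanged
    } catch (err) {
      events.push({ latency_ms: Date.now() - start, status: 'error', error: String(err) });
      throw err; // errors are recorded, then re-thrown
    }
  };
}

// Demo with a fake client standing in for an LLM SDK:
const fakeClient = {
  async create() {
    return { model: 'demo-model', usage: { input_tokens: 10, output_tokens: 3 } };
  },
};
instrument(fakeClient, 'create');
fakeClient.create(); // recorded in `events`, returned as usual
```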
- Frontend: Next.js 16, React 19, Tailwind CSS 4, Recharts
- Backend: Next.js API routes, Supabase (PostgreSQL + Auth)
- Plugin: Python 3.10+ or Node.js 18+ (`@agentpulse/agentpulse`), auto-instruments OpenAI & Anthropic SDKs
- Payments: Stripe (credit card), Ethereum (crypto)
- Hosting: Vercel
```bash
# Clone the repo
git clone https://github.com/sru4ka/agentpulse.git
cd agentpulse

# Install dependencies
npm install

# Set up environment variables
cp .env.example .env.local
# Add your Supabase and Stripe keys

# Run development server
npm run dev
```

To work on the Python plugin:

```bash
cd plugin

# Install in development mode
pip install -e .

# Run tests
agentpulse test

# Check status
agentpulse status
```

Send agent events to the dashboard:
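A batch can be assembled programmatically before sending. A sketch — the endpoint URL is an assumption for illustration (check your dashboard settings for the real one), and the payload shape mirrors the example request body that follows:

```js
// Sketch: assemble an event batch for the events API.
function buildBatch(apiKey, agentName, events) {
  return {
    api_key: apiKey,
    agent_name: agentName,
    framework: 'openclaw',
    events,
  };
}

const batch = buildBatch('ap_...', 'my-bot', [
  {
    timestamp: new Date().toISOString(),
    provider: 'anthropic',
    model: 'claude-sonnet-4-5',
    input_tokens: 1200,
    output_tokens: 450,
    cost_usd: 0.0103,
    latency_ms: 2100,
    status: 'success',
  },
]);

// Hypothetical endpoint — replace with the URL from your dashboard:
// await fetch('https://agentpulses.com/api/events', {
//   method: 'POST',
//   headers: { 'Content-Type': 'application/json' },
//   body: JSON.stringify(batch),
// });
```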
```json
{
  "api_key": "your-api-key",
  "agent_name": "my-agent",
  "framework": "openclaw",
  "events": [
    {
      "timestamp": "2026-02-16T14:32:01Z",
      "provider": "anthropic",
      "model": "claude-sonnet-4-5",
      "input_tokens": 1200,
      "output_tokens": 450,
      "cost_usd": 0.0103,
      "latency_ms": 2100,
      "status": "success",
      "prompt_messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the weather?"}
      ],
      "response_text": "I don't have access to real-time weather data...",
      "tools_used": ["web_search"],
      "task_context": "weather-query"
    }
  ]
}
```

| Plan | Price | Agents | History | Features |
|---|---|---|---|---|
| Free | $0/mo | 1 | 7 days | Basic dashboard |
| Pro | $29/mo | 5 | 90 days | Alerts, prompt replay, recommendations |
| Team | $99/mo | 25 | 1 year | Team dashboard, API, webhooks |
| Enterprise | Custom | Unlimited | Unlimited | Custom integrations, dedicated support |
Pay with credit card (Stripe) or crypto (ETH for lifetime access).
Contributions are welcome! Please feel free to submit a Pull Request.
MIT License. See LICENSE for details.
Built for developers who run AI agents in production.
agentpulses.com · DM us on X