Currently the parser relies on pattern matching and heuristics. Add support for multiple LLM providers as the NLP backbone so teams can choose based on cost, latency, or preference.
Providers to support:
- `anthropic` — Claude API (`claude-haiku` for speed, `claude-sonnet` for accuracy)
- `openai` — GPT-4o / GPT-4o-mini
- `google` — Gemini 1.5 Flash
- `mistral` — Mistral Small (good for cost-sensitive deployments)
- `cohere` — Command R (strong on structured extraction tasks)
Proposed interface:
```js
const parser = new IntentParser({
  provider: "anthropic", // swap to "openai" | "google" | "mistral" | "cohere"
  apiKey: process.env.API_KEY,
  model: "claude-haiku-4-5" // optional override
})
```
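A minimal sketch of the config shape the constructor above might accept. The `IntentParserConfig` type, the `resolveModel` helper, and the non-Anthropic default model names are assumptions for illustration, not part of the current API:

```typescript
// Hypothetical config type for the proposed constructor options.
type Provider = "anthropic" | "openai" | "google" | "mistral" | "cohere";

interface IntentParserConfig {
  provider: Provider;
  apiKey: string;
  model?: string; // optional override, as in the snippet above
}

// Assumed per-provider defaults, based on the models listed earlier.
const DEFAULT_MODELS: Record<Provider, string> = {
  anthropic: "claude-haiku-4-5",
  openai: "gpt-4o-mini",
  google: "gemini-1.5-flash",
  mistral: "mistral-small",
  cohere: "command-r",
};

// Use the explicit override when given, otherwise the provider default.
function resolveModel(config: IntentParserConfig): string {
  return config.model ?? DEFAULT_MODELS[config.provider];
}
```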
What this unlocks:
- Teams not on Anthropic can still use intent-parser
- Benchmark accuracy and latency across providers per corridor
- Fallback to a secondary provider if primary is down
- Cost optimization — route simple intents to cheaper models
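The fallback point above could look something like this sketch; `ParseFn` and `parseWithFallback` are hypothetical names, and the real implementation would wrap the provider adapters:

```typescript
// A provider call is modeled as an async function from text to a result.
type ParseFn = (text: string) => Promise<string>;

// Try the primary provider; if it throws (outage, rate limit, 5xx),
// retry once against the secondary and flag that a fallback happened.
async function parseWithFallback(
  text: string,
  primary: ParseFn,
  secondary: ParseFn
): Promise<{ result: string; usedFallback: boolean }> {
  try {
    return { result: await primary(text), usedFallback: false };
  } catch {
    return { result: await secondary(text), usedFallback: true };
  }
}
```

The same shape extends naturally to cost routing: pick `primary` per request based on intent complexity rather than hardcoding it.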
Suggested approach:
- Abstract LLM calls behind a `ProviderAdapter` interface
- Each provider gets its own adapter — `AnthropicAdapter`, `OpenAIAdapter`, etc.
- Keep regex heuristics as the fast path, LLM as the enrichment layer
- Add a `parsed_by` field in the response showing which provider and model was used
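The approach above could be sketched as follows. This is illustrative only: the method names (`complete`, `parseIntent`) and the stubbed response are assumptions, and a real `AnthropicAdapter` would call the Claude API:

```typescript
// Result shape including the parsed_by field from the list above.
interface ParsedIntent {
  intent: string;
  parsed_by: string; // e.g. "anthropic/claude-haiku-4-5"
}

// Each provider implements this; the parser never talks to an SDK directly.
interface ProviderAdapter {
  readonly provider: string;
  readonly model: string;
  complete(prompt: string): Promise<string>; // raw LLM call
}

// Stub adapter: returns a canned string where the real one would hit the API.
class AnthropicAdapter implements ProviderAdapter {
  readonly provider = "anthropic";
  constructor(readonly model: string = "claude-haiku-4-5") {}
  async complete(prompt: string): Promise<string> {
    return `intent-for:${prompt}`; // placeholder response
  }
}

// The parser depends only on the interface, so adapters are interchangeable,
// and every result is stamped with the provider/model that produced it.
async function parseIntent(
  text: string,
  adapter: ProviderAdapter
): Promise<ParsedIntent> {
  const intent = await adapter.complete(text);
  return { intent, parsed_by: `${adapter.provider}/${adapter.model}` };
}
```

Swapping providers then means constructing a different adapter; the regex fast path can short-circuit before `parseIntent` is ever called.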