feat: Integrate Helicone Provider w/ LlamaIndex TS #2235
H2Shami wants to merge 6 commits into run-llama:main
Conversation
🦋 Changeset detected. Latest commit: 4d01837. The changes in this PR will be included in the next version bump.
Pull request overview
This PR integrates the Helicone AI Gateway provider with LlamaIndex TypeScript, enabling users to route OpenAI-compatible requests through Helicone for observability, control, and provider routing.
Key Changes:
- Added a new `@llamaindex/helicone` provider package that extends the OpenAI adapter
- Implemented configuration for the Helicone API key and base URL with environment variable support
- Added comprehensive tests and documentation with usage examples
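As a rough sketch of how the environment-variable fallback described above might work, the snippet below resolves the API key and base URL from explicit options first, then from the environment. The helper name `resolveHeliconeConfig` and the default gateway URL are illustrative assumptions, not taken from the PR; only the `HELICONE_API_KEY` and `HELICONE_BASE_URL` variable names come from the changes being reviewed.

```typescript
// Hypothetical helper illustrating the env-var fallback the PR describes.
interface HeliconeConfig {
  apiKey: string;
  baseURL: string;
}

function resolveHeliconeConfig(init?: Partial<HeliconeConfig>): HeliconeConfig {
  // Explicit option wins; otherwise fall back to the environment.
  const apiKey = init?.apiKey ?? process.env.HELICONE_API_KEY;
  if (!apiKey) {
    throw new Error(
      "Helicone API key is required: pass apiKey or set HELICONE_API_KEY",
    );
  }
  return {
    apiKey,
    // Assumed default gateway URL; the real package may use a different one.
    baseURL:
      init?.baseURL ??
      process.env.HELICONE_BASE_URL ??
      "https://ai-gateway.helicone.ai",
  };
}
```

Failing fast when no key is available (rather than sending unauthenticated requests to the gateway) gives users an actionable error at construction time.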
Reviewed changes
Copilot reviewed 9 out of 11 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| tsconfig.json | Added TypeScript project reference for the new Helicone provider package |
| pnpm-lock.yaml | Added dependency entries for the Helicone provider in the workspace |
| packages/providers/helicone/tsconfig.json | TypeScript configuration for the Helicone provider package |
| packages/providers/helicone/package.json | Package manifest defining the Helicone provider module structure and dependencies |
| packages/providers/helicone/src/llm.ts | Core implementation of the Helicone class extending OpenAI with gateway-specific configuration |
| packages/providers/helicone/src/index.ts | Package entry point exporting the Helicone LLM |
| packages/providers/helicone/tests/index.test.ts | Unit tests covering basic functionality, configuration, and error handling |
| examples/package.json | Added Helicone provider as a dependency for the examples package |
| examples/models/helicone-basic.ts | Basic usage example demonstrating Helicone integration with custom headers |
| docs/src/content/docs/framework/modules/models/llms/helicone.mdx | Documentation page explaining installation, usage, and configuration options |
| .changeset/free-months-lick.md | Changeset entry documenting the integration of Helicone with LlamaIndex |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
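The core pattern the `llm.ts` row above describes, a subclass of the OpenAI adapter that injects gateway-specific defaults, might look roughly like the sketch below. The `OpenAI` class here is a minimal stand-in stub (not the real `@llamaindex/openai` export), and the gateway URL and `Helicone-Auth` header are assumptions based on Helicone's proxy-style auth convention, not code from this PR.

```typescript
// Minimal stand-in for an OpenAI-style adapter, just enough to show the shape.
class OpenAI {
  model: string;
  apiKey?: string;
  baseURL?: string;
  additionalSessionOptions?: { defaultHeaders?: Record<string, string> };

  constructor(init: {
    model: string;
    apiKey?: string;
    baseURL?: string;
    additionalSessionOptions?: { defaultHeaders?: Record<string, string> };
  }) {
    this.model = init.model;
    this.apiKey = init.apiKey;
    this.baseURL = init.baseURL;
    this.additionalSessionOptions = init.additionalSessionOptions;
  }
}

// Sketch: route requests through the Helicone gateway by overriding the
// base URL and attaching the key. URL and header name are illustrative.
class Helicone extends OpenAI {
  constructor(init: { model: string; apiKey?: string; baseURL?: string }) {
    const apiKey = init.apiKey ?? process.env.HELICONE_API_KEY;
    super({
      model: init.model,
      apiKey,
      baseURL: init.baseURL ?? "https://ai-gateway.helicone.ai",
      additionalSessionOptions: {
        defaultHeaders: { "Helicone-Auth": `Bearer ${apiKey}` },
      },
    });
  }
}
```

Subclassing keeps the full OpenAI-compatible surface (chat, streaming, tool calls) while only overriding connection details, which is why the PR extends the adapter rather than wrapping it.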
examples/package.json (Outdated)

```json
"@llamaindex/openai": "^0.4.22",
"@llamaindex/ovhcloud": "^1.0.0",
"@llamaindex/perplexity": "^0.0.35",
"@llamaindex/helicone": "^0.1.0",
```
The version specified is "^0.1.0", but this package is being introduced in this PR and hasn't been published yet. This should use "workspace:*" like other provider packages in the monorepo (as seen in pnpm-lock.yaml line 412).
```diff
- "@llamaindex/helicone": "^0.1.0",
+ "@llamaindex/helicone": "workspace:*",
```
pnpm-lock.yaml

```yaml
'@llamaindex/helicone':
  specifier: workspace:*
  version: link:../packages/providers/helicone
```
The @llamaindex/helicone entry is not in alphabetical order. It should be placed after @llamaindex/groq (line 410) and before @llamaindex/huggingface (line 414) to maintain consistency with the rest of the file.
```ts
// ---- OR ----

// Use the Helicone AI Gateway
const llm = new Helicone({
  model: "gpt-4o-mini",
  apiKey: "<helicone-api-key>", // Use if HELICONE_API_KEY isn't set in your env
});

const res = await llm.chat({
  messages: [{ role: "user", content: "Hello from Helicone!" }],
});

console.log("Helicone response:", res.message.content);
}
```
The comment "// ---- OR ----" on line 69 creates a code example that shows two different approaches but runs them sequentially in the same function, which is confusing. The second approach (starting at line 72) would duplicate the query execution. Consider restructuring this to show two separate, complete examples or clarify that only one approach should be used at a time.
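One way to apply the suggestion above is to split the two approaches into separate, complete functions so neither duplicates the other's query. The sketch below uses minimal local stand-ins for `Helicone` and `Settings` so it is self-contained; the real example would import them from `@llamaindex/helicone` and `llamaindex`, and the echoing `chat` stub exists only to make the structure runnable.

```typescript
// Stand-ins so the restructured example runs without the real packages.
type ChatMessage = { role: string; content: string };

class Helicone {
  constructor(public init: { model: string; apiKey?: string }) {}
  async chat(req: { messages: ChatMessage[] }) {
    // Stub: echo the prompt instead of calling the gateway.
    return {
      message: { role: "assistant", content: `echo: ${req.messages[0].content}` },
    };
  }
}

const Settings: { llm?: Helicone } = {};

// Approach 1: register the gateway LLM globally via Settings.
async function viaSettings(): Promise<string> {
  const llm = new Helicone({ model: "gpt-4o-mini" });
  Settings.llm = llm;
  const res = await llm.chat({
    messages: [{ role: "user", content: "Hello from Helicone!" }],
  });
  return res.message.content;
}

// Approach 2: use a local instance directly, without touching Settings.
async function viaInstance(): Promise<string> {
  const llm = new Helicone({ model: "gpt-4o-mini" });
  const res = await llm.chat({
    messages: [{ role: "user", content: "Hello from Helicone!" }],
  });
  return res.message.content;
}
```

Keeping each approach in its own function makes it clear that a reader picks one, rather than running both sequentially.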
docs/src/content/docs/framework/modules/models/llms/helicone.mdx

````mdx
const document = new Document({ text: essay, id_: "essay" });
const index = await VectorStoreIndex.fromDocuments([document]);
```

## Query

```ts
const queryEngine = index.asQueryEngine();
const response = await queryEngine.query({ query: "What is the meaning of life?" });
console.log(response.response);
```

## Full Example

```ts
import { Helicone } from "@llamaindex/helicone";
import { Document, Settings, VectorStoreIndex } from "llamaindex";

async function main() {
  // Use the Helicone AI Gateway
  Settings.llm = new Helicone({
    model: "gpt-4o-mini",
    apiKey: "<helicone-api-key>", // Use if HELICONE_API_KEY isn't set in your env
  });

  const document = new Document({ text: essay, id_: "essay" });
````
The documentation references an essay variable on lines 63 and 37 that is never defined in the examples. This will cause the code to fail if users try to run it. Consider either importing/defining the essay variable or using a placeholder string with a comment indicating where users should provide their own text.
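A minimal way to address the undefined `essay` variable, assuming the docs want a self-contained snippet, is to define it as a placeholder the reader replaces (or loads from a file). The `ESSAY_PATH` environment variable here is hypothetical, used only to illustrate loading real text.

```typescript
import { readFileSync } from "node:fs";

// Placeholder for the undefined `essay` variable the review flags.
// In real usage, load your own text, e.g. from a file path you control.
const essay: string =
  process.env.ESSAY_PATH !== undefined
    ? readFileSync(process.env.ESSAY_PATH, "utf8")
    : "Replace this placeholder with the text you want to index.";
```

With `essay` defined, the `new Document({ text: essay, id_: "essay" })` call from the docs works as written.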
examples/package.json (Outdated)

```json
"@llamaindex/openai": "^0.4.22",
"@llamaindex/ovhcloud": "^1.0.0",
"@llamaindex/perplexity": "^0.0.35",
"@llamaindex/helicone": "^0.1.0",
```
The @llamaindex/helicone dependency is not in alphabetical order. It should be placed after @llamaindex/groq (line 31) and before @llamaindex/huggingface (line 32) to maintain consistency with the rest of the dependencies.
packages/providers/helicone/src/llm.ts

```ts
 * Convenience function to create a new HeliconeLLM instance.
 * @param init - Optional initialization parameters for the HeliconeLLM instance.
 * @returns A new HeliconeLLM instance.
```
The documentation comment refers to "HeliconeLLM" but the class is actually named "Helicone". This inconsistency should be corrected for clarity.
```diff
- * Convenience function to create a new HeliconeLLM instance.
- * @param init - Optional initialization parameters for the HeliconeLLM instance.
- * @returns A new HeliconeLLM instance.
+ * Convenience function to create a new Helicone instance.
+ * @param init - Optional initialization parameters for the Helicone instance.
+ * @returns A new Helicone instance.
```
packages/providers/helicone/package.json

```json
{
  "name": "@llamaindex/helicone",
  "description": "Helicone AI Gateway Adapter for LlamaIndex",
  "version": "0.1.0",
```
[nitpick] The package version is set to "0.1.0" but the changeset indicates this is a "major" release. For a major release of a new package starting from 0.1.0, the version should typically be "1.0.0" (if following semantic versioning for stable releases) or remain at "0.1.0" if treating as pre-1.0 (where breaking changes are allowed). Consider aligning the version with the changeset type or updating the changeset to "minor" if this is an initial pre-1.0 release.
```diff
- "version": "0.1.0",
+ "version": "1.0.0",
```
hey @marcusschiesser! I made a fix that should get the CI tests passing, but it required setting the package version. My understanding is that there's a CI workflow in this repo that releases the package, updates package versions, and then publishes it. Am I mistaken?
No description provided.