The official TypeScript SDK for LlamaParse - the enterprise platform for agentic OCR and document processing.
With this SDK, you can build powerful workflows across many features:
- Parse - Agentic OCR and parsing for 130+ formats
- Extract - Structured data extraction with custom schemas
- Classify - Document categorization with natural-language rules
- Agents - Deploy document agents as APIs
- Index - Document ingestion and embedding for RAG
## Installation

```sh
npm install @llamaindex/llama-cloud
```

## Usage

```ts
import LlamaCloud from '@llamaindex/llama-cloud';

const client = new LlamaCloud({
  apiKey: process.env['LLAMA_CLOUD_API_KEY'], // This is the default and can be omitted
});

// Parse a document
const job = await client.parsing.create({
  tier: 'agentic',
  version: 'latest',
  file_id: 'your-file-id',
});
console.log(job.id);
```

## File uploads

```ts
import fs from 'fs';
import LlamaCloud from '@llamaindex/llama-cloud';

const client = new LlamaCloud();

// Upload using a file stream
await client.files.create({
  file: fs.createReadStream('/path/to/document.pdf'),
  purpose: 'purpose',
});

// Or using a File object
await client.files.create({
  file: new File(['content'], 'document.txt'),
  purpose: 'purpose',
});
```

## MCP Server

Use the Llama Cloud MCP Server to enable AI assistants to interact with the API.
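If the server follows the standard MCP client-configuration shape, an entry might look like the sketch below. The command and package name (`@llamaindex/mcp-server-llamacloud`) are illustrative assumptions, not taken from this README; check the MCP Server's own documentation for the exact values.

```json
{
  "mcpServers": {
    "llama-cloud": {
      "command": "npx",
      "args": ["-y", "@llamaindex/mcp-server-llamacloud"],
      "env": {
        "LLAMA_CLOUD_API_KEY": "<your-api-key>"
      }
    }
  }
}
```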
## Error handling

When the API returns a non-success status code, an `APIError` subclass is thrown:

```ts
await client.pipelines.list({ project_id: 'my-project-id' }).catch((err) => {
  if (err instanceof LlamaCloud.APIError) {
    console.log(err.status); // 400
    console.log(err.name); // BadRequestError
  }
});
```

| Status Code | Error Type |
| --- | --- |
| 400 | `BadRequestError` |
| 401 | `AuthenticationError` |
| 403 | `PermissionDeniedError` |
| 404 | `NotFoundError` |
| 422 | `UnprocessableEntityError` |
| 429 | `RateLimitError` |
| >=500 | `InternalServerError` |
| N/A | `APIConnectionError` |
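The table above maps directly from status code to error class. The helper below is an illustrative sketch of that mapping (the function is hypothetical, not part of the SDK); connection failures have no status code and raise `APIConnectionError` instead.

```ts
// Illustrative only: mirrors the status-code table above.
// The SDK performs this mapping internally when it throws.
function errorNameForStatus(status: number): string {
  if (status >= 500) return 'InternalServerError';
  const names: Record<number, string> = {
    400: 'BadRequestError',
    401: 'AuthenticationError',
    403: 'PermissionDeniedError',
    404: 'NotFoundError',
    422: 'UnprocessableEntityError',
    429: 'RateLimitError',
  };
  // Other non-success statuses fall back to the base class.
  return names[status] ?? 'APIError';
}
```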
## Retries and timeouts

The SDK automatically retries requests 2 times on connection errors, timeouts, rate limits, and 5xx errors. Requests time out after 1 minute by default. Functions that combine multiple API calls (e.g. `client.parsing.parse()`) have larger default timeouts to account for the multiple requests and polling.
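Retries like these are typically spaced with a capped exponential backoff. The helper below is a sketch of that pattern only; the SDK's actual retry schedule is internal, and the function name and constants here are assumptions.

```ts
// Capped exponential backoff: the delay doubles on each retry
// (500ms, 1000ms, 2000ms, ...) up to a fixed ceiling.
function backoffDelayMs(retry: number, baseMs = 500, capMs = 8_000): number {
  return Math.min(capMs, baseMs * 2 ** retry);
}
```

The client-wide defaults can be changed at construction time: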
```ts
const client = new LlamaCloud({
  maxRetries: 0, // Disable retries (default: 2)
  timeout: 30 * 1000, // 30 second timeout (default: 1 minute)
});
```

## Pagination

List methods support auto-pagination with `for await...of`:
```ts
for await (const run of client.extraction.runs.list({
  extraction_agent_id: 'agent-id',
  limit: 20,
})) {
  console.log(run);
}
```

Or fetch one page at a time:
```ts
let page = await client.extraction.runs.list({ extraction_agent_id: 'agent-id', limit: 20 });
for (const run of page.items) {
  console.log(run);
}

while (page.hasNextPage()) {
  page = await page.getNextPage();
  // Process page.items for each subsequent page
}
```

## Logging

Configure logging via the `LLAMA_CLOUD_LOG` environment variable or the `logLevel` option:
```ts
const client = new LlamaCloud({
  logLevel: 'debug', // 'debug' | 'info' | 'warn' | 'error' | 'off'
});
```

## Requirements

- TypeScript >= 4.9
- Node.js 20+, Deno 1.28+, Bun 1.0+, or modern browsers
## Contributing

See CONTRIBUTING.md.