A Memory-Informed Retrieval-Augmented Generation (RAG) system that uses VoyageAI embeddings and reranking, with planned MongoDB Atlas Vector Search integration, to evaluate LLM behavior at extreme context lengths.
This project evaluates Large Language Models' (LLMs) ability to process extremely long context windows (128k to 2M tokens) and demonstrates how intelligent semantic retrieval can improve inference accuracy while reducing costs.
Key Goals:
- Benchmark LLMs using the BABILong "needle-in-haystack" evaluation framework
- Implement semantic retrieval with VoyageAI embeddings and MongoDB Atlas Vector Search (planned)
- Compare model performance across context lengths (0k-128k tokens)
- Demonstrate cost-effective RAG strategies for production use
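To make the "needle-in-haystack" idea concrete, here is a minimal, dependency-free sketch. The filler sentence, needle fact, and substring-match scoring are simplified stand-ins for the actual BABILong tasks and metrics:

```python
# Simplified needle-in-haystack probe. BABILong draws its "needles" from
# bAbI task facts and its filler from long natural text; this toy version
# only shows the mechanics of placing a fact at a chosen depth.

def build_haystack(needle: str, filler_sentences: list[str], depth: float) -> str:
    """Insert the needle at a relative depth (0.0 = start, 1.0 = end)."""
    idx = round(depth * len(filler_sentences))
    return " ".join(filler_sentences[:idx] + [needle] + filler_sentences[idx:])

def exact_match(prediction: str, gold: str) -> bool:
    """Toy scoring: case-insensitive substring match on the gold answer."""
    return gold.lower() in prediction.lower()

filler = ["The sky was grey that morning."] * 100
context = build_haystack("Mary travelled to the office.", filler, depth=0.5)
# A model would then be asked "Where is Mary?" given `context`; accuracy is
# averaged over needles, insertion depths, and context lengths.
```

The benchmark repeats this at growing context lengths, which is what produces the accuracy-vs-length curves reported below.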
This repository contains two independent workstreams that share supporting code but have separate entry points and configurations:
| | BABILong Benchmark | VoyageAI RAG Pipeline |
|---|---|---|
| Purpose | Evaluate LLM accuracy across context lengths | Semantic retrieval with embedding + reranking |
| Entry point | `resources/notebooks/benchmark.py` | `main.py` |
| Config | `resources/notebooks/.env` (needs `OPENAI_API_KEY`) | `.env` at repo root (needs `OPENAI_API_KEY` + `VOYAGE_API_KEY`) |
| Output | CSVs, heatmaps, JSON in `resources/notebooks/` | Console output |
- Two-Stage Semantic Retrieval: VoyageAI embeddings (voyage-3.5) + reranking (rerank-2.5)
- Dual-Model Benchmarking: Side-by-side comparison of OpenAI models
- Flexible Context Lengths: Support for 0k to 128k token contexts
- Cost Analytics: Per-model cost tracking and optimization recommendations
- Visualization: Heatmaps and JSON exports for analysis
```
┌─────────────────────────────────────────────────────────────────┐
│                           User Query                            │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                 VoyageAI Embedding (voyage-3.5)                 │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                Cosine Similarity K-NN Retrieval                 │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                 VoyageAI Reranking (rerank-2.5)                 │
└─────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌─────────────────────────────────────────────────────────────────┐
│                   OpenAI Response Generation                    │
└─────────────────────────────────────────────────────────────────┘
```
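The flow above can be sketched end to end. Both scorers below are toy stand-ins so the two-stage control flow is visible without API keys; in the real pipeline, stage 1 operates on voyage-3.5 embeddings and stage 2 calls the rerank-2.5 reranker:

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def knn_retrieve(query_vec, doc_vecs, k):
    """Stage 1: indices of the top-k documents by cosine similarity."""
    order = sorted(range(len(doc_vecs)),
                   key=lambda i: cosine(query_vec, doc_vecs[i]),
                   reverse=True)
    return order[:k]

def rerank(query, docs, candidate_idx, top_k):
    """Stage 2: re-score the k-NN candidates with a finer signal.
    Stand-in scorer: query/document token overlap (real system: rerank-2.5)."""
    q_tokens = set(query.lower().split())
    score = lambda i: len(q_tokens & set(docs[i].lower().split()))
    return sorted(candidate_idx, key=score, reverse=True)[:top_k]

docs = ["apple pie recipe", "car engine repair", "apple tart bakery"]
vecs = [[1.0, 0.0], [0.0, 1.0], [0.9, 0.3]]       # pretend embeddings
hits = knn_retrieve([1.0, 0.1], vecs, k=2)        # stage 1 narrows the pool
best = rerank("apple bakery order", docs, hits, top_k=1)  # stage 2 picks
```

Swapping the stubs for real VoyageAI and OpenAI API calls changes the scorers, not the retrieve-then-rerank shape.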
- Python 3.10+
- OpenAI API key
- VoyageAI API key (for the RAG pipeline)
```bash
# Clone the repository
git clone https://github.com/Zoetron-art/MongoDB-Agentic-context-window.git
cd MongoDB-Agentic-context-window

# Create a virtual environment
python -m venv myenv

# Activate it
# Linux/macOS:
source myenv/bin/activate
# Windows:
myenv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

This project uses two separate `.env` files because the two workstreams have different API key requirements.
Root `.env` (for `main.py` — RAG pipeline):

```
OPENAI_API_KEY=sk-your-openai-key-here
VOYAGE_API_KEY=your-voyage-api-key-here
```

`resources/notebooks/.env` (for `benchmark.py` — benchmarks):

```
OPENAI_API_KEY=sk-your-openai-key-here
```

```bash
# Run the main RAG pipeline
python main.py
```

```bash
# Navigate to the notebooks directory
cd resources/notebooks

# Run the benchmarking script
python benchmark.py

# Follow prompts to select:
# - Model 1 (e.g., gpt-4.1)
# - Model 2 (e.g., gpt-4o-mini)
# - Task (e.g., qa1)
# - Context lengths (e.g., 0-5 for 0k-16k)
```

The heatmaps compare gpt-4.1 and gpt-4o-mini on qa1 (location tracking) across context lengths 0k–64k. Green cells indicate high accuracy; red/−1 cells mark untested configurations.
| Model | 0k | 64k | 128k |
|---|---|---|---|
| gpt-4.1 | 100% | 91% | 87% |
| gpt-4o-mini | 100% | 87.5% | - |
- Perfect baseline: Both models achieve 100% accuracy at 0k (minimal context)
- Performance degradation: Accuracy decreases as context length increases
- gpt-4.1 advantage: Better long-context handling at 64k (91% vs 87.5%)
- Cost trade-off: gpt-4.1 is 40x more expensive than gpt-4o-mini
Individual per-model heatmaps are also available in resources/notebooks/media/heatmaps/ (7 PNGs total, including per-model breakdowns at different context ranges).
- Use gpt-4o-mini for 0k-16k testing (~$2-5 per full run)
- Reserve gpt-4.1 for production/critical 64k+ scenarios
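A back-of-envelope cost model makes the trade-off concrete. The per-million-token prices below are illustrative placeholders chosen to reflect the ~40x ratio above, not actual OpenAI pricing; substitute current rates before relying on the numbers:

```python
# Hypothetical input-token prices in USD per 1M tokens (placeholders only).
PRICES_PER_1M_INPUT = {"gpt-4.1": 2.00, "gpt-4o-mini": 0.05}

def run_cost(model: str, context_tokens: int, n_samples: int) -> float:
    """Input-token cost of evaluating n_samples questions at one context length."""
    return PRICES_PER_1M_INPUT[model] * context_tokens * n_samples / 1_000_000

# Example: 100 samples at a 64k-token context for each model.
for model in PRICES_PER_1M_INPUT:
    print(model, round(run_cost(model, 64_000, 100), 2))
```

The cost scales linearly with context length and sample count, which is why restricting long-context runs to the cheaper model during development pays off quickly.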
```
MongoDB-Agentic-context-window/
├── main.py                              # Main RAG pipeline
├── rag_pipeline.ipynb                   # RAG pipeline notebook
├── requirements.txt                     # Python dependencies
├── README.md                            # This file
├── UPDATE.md                            # V2.0 update documentation
├── james-technical-doc.md               # Technical specifications
├── James_orchestration_voyageAI_doc.md  # VoyageAI integration docs
│
└── resources/
    ├── requirements.txt                 # Benchmark-specific dependencies
    ├── notebooks/
    │   ├── benchmark.py                 # BABILong benchmarking script
    │   ├── visualize_results.py         # Standalone visualization (requires CSVs from a prior benchmark.py run)
    │   ├── .env                         # API keys (create this)
    │   ├── babilong_evals/              # Benchmark results
    │   └── media/
    │       ├── heatmaps/                # Visualization outputs (7 PNGs)
    │       └── results/                 # JSON exports
    │
    ├── babilong/
    │   ├── prompts.py                   # Task prompts (qa1-qa20)
    │   ├── metrics.py                   # Evaluation metrics
    │   └── babilong_utils.py            # Dataset utilities
    │
    └── data/
        └── tasks_1-20_v1-2.zip          # bAbI dataset
```
Note: `visualize_results.py` depends on CSV output files generated by `benchmark.py`. It will not work on a fresh clone; run a benchmark first.
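For orientation, here is a sketch of how such CSV results could be folded into a model × context-length accuracy grid. The column names (`model`, `context_len`, `accuracy`) are assumptions for illustration, not necessarily the script's actual schema:

```python
# Load benchmark CSVs into a nested dict: grid[model][context_len] -> accuracy.
# SAMPLE stands in for a file produced by a benchmark.py run.
import csv
import io

SAMPLE = """model,context_len,accuracy
gpt-4.1,0k,1.00
gpt-4.1,64k,0.91
gpt-4o-mini,0k,1.00
gpt-4o-mini,64k,0.875
"""

def load_grid(fh):
    grid = {}
    for row in csv.DictReader(fh):
        grid.setdefault(row["model"], {})[row["context_len"]] = float(row["accuracy"])
    return grid

grid = load_grid(io.StringIO(SAMPLE))
# grid["gpt-4.1"]["64k"] → 0.91
```

A grid in this shape maps directly onto the heatmaps in `resources/notebooks/media/heatmaps/`.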
| Document | Description |
|---|---|
| UPDATE.md | V2.0 features, benchmark results, usage guide |
| james-technical-doc.md | Technical specifications and project scope |
| James_orchestration_voyageAI_doc.md | VoyageAI integration and RAG pipeline details |
| resources/README.md | BABILong benchmark documentation |
| Category | Technology | Purpose |
|---|---|---|
| Embeddings | VoyageAI (voyage-3.5) | Semantic document/query embeddings |
| Reranking | VoyageAI (rerank-2.5) | Relevance refinement |
| LLM | OpenAI (GPT-4, GPT-4o) | Response generation |
| Vector Search | MongoDB Atlas (planned) | Semantic retrieval |
| Benchmarking | BABILong | Long-context evaluation |
| Data | HuggingFace Datasets | Dataset loading and caching |
| Visualization | Matplotlib, Seaborn | Heatmap generation |
```
voyageai>=0.3.0
openai>=2.0.0
numpy>=1.26.0
scikit-learn>=1.3.0
python-dotenv>=1.0.0
datasets>=2.19.0
pandas>=2.2.0
matplotlib>=3.10.0
seaborn>=0.13.0
tqdm>=4.66.0
langchain>=0.3.0
langchain-openai>=0.3.0
langchain-core>=0.3.0
langchain-community>=0.3.0
nltk>=3.8.0
```
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see below for details.
MIT License
Copyright (c) 2025 MongoDB Agentic Context Window Project
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
- BABILong Benchmark by RMT-team
- VoyageAI for embeddings and reranking
- OpenAI for LLM APIs
- MongoDB Atlas for vector search capabilities
