A simple tool for making efficient use of LLMs. It leverages Retrieval-Augmented Generation (RAG) to quickly pull relevant information from a provided corpus of text and to reduce the chance of hallucination.
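The RAG idea behind the tool can be sketched in a few lines of Python. This is a toy illustration only: the keyword-overlap retrieval below stands in for whatever retrieval `main.py` actually performs, and the prompt wording is an assumption.

```python
def retrieve(question: str, chunks: list[str]) -> str:
    """Return the corpus chunk sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(chunks, key=lambda c: len(q_words & set(c.lower().split())))

def build_prompt(question: str, chunks: list[str]) -> str:
    """Ground the model's answer in retrieved context to limit hallucination."""
    context = retrieve(question, chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

corpus = [
    "The mitochondria is the powerhouse of the cell.",
    "Python was created by Guido van Rossum.",
]
print(build_prompt("Who created Python?", corpus))
```

The prompt sent to the LLM then contains only the retrieved chunk, so the model is answering from the corpus rather than from memory.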
- Create a virtual environment in a directory:
  ```
  mkdir <name> && cd <name>
  python3 -m venv .venv
  ```
- Activate it with:
  ```
  source .venv/bin/activate
  ```
- Dependencies are listed in `requirements.txt`. Install them with:
  ```
  pip install -r requirements.txt
  ```
- Configure the prompt, corpus, and model to be used in `config.toml`
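A `config.toml` along these lines is expected. The key names and values below are illustrative assumptions; consult the `config.toml` shipped with the repository for the real ones.

```toml
# Illustrative config.toml; actual key names may differ.
prompt = "Answer the question using only the provided context."
corpus = "corpus.txt"       # path to the text corpus to retrieve from
model = "gpt-4o-mini"       # OpenAI or Gemini model name
```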
- Generate an API key for either OpenAI or Gemini
- Populate `.env` with `OPENAI_API_KEY` or `GEMINI_API_KEY`, and perform `source .env` before running the program
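Since the file is loaded with `source`, the variables need `export` so they reach the program's environment. A minimal `.env` might look like this (set only the key for the provider you use; the values here are placeholders):

```
export OPENAI_API_KEY=sk-...
# or
export GEMINI_API_KEY=...
```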
- Run `python3 main.py`; output is written to `response.txt`