GitHub Status:
GitLab Status (requires access and sign-in to code.ornl.gov):
Documentation: https://chathpc-app.readthedocs.io/
Internal Development Documentation: https://devdocs.ornl.gov/ChatHPC/ChatHPC-app
Internal Coverage Report: https://devdocs.ornl.gov/ChatHPC/ChatHPC-app/coverage
Creating a new ChatHPC application.
For development in a folder:

```
git clone git@github.com:ORNL/ChatHPC-app.git
cd ChatHPC-app
python3 -m venv --upgrade-deps --prompt $(basename $PWD) .venv
source .venv/bin/activate
pip install -e .
```

For use in a virtual environment:

```
python3 -m venv --upgrade-deps --prompt $(basename $PWD) .venv
source .venv/bin/activate
pip install git+ssh://git@github.com/ORNL/ChatHPC-app.git
```

Pull the latest image from GitHub Container Registry:
```
docker pull ghcr.io/ornl/chathpc-app:latest
```

Run the ChatHPC CLI:

```
# Show help
docker run --rm ghcr.io/ornl/chathpc-app:latest

# Run with your data (mount volumes as needed)
docker run --rm -v $(pwd):/data ghcr.io/ornl/chathpc-app:latest chathpc --config /data/config.json

# Interactive shell
docker run --rm -it ghcr.io/ornl/chathpc-app:latest /bin/bash
```

To use GPU acceleration with the Docker container, you need to install the NVIDIA Container Toolkit and grant Docker access to the GPUs.
Install NVIDIA Container Toolkit:

```
# Add NVIDIA package repositories
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list

# Install nvidia-container-toolkit
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit

# Restart Docker daemon
sudo systemctl restart docker
```

Run with GPU access:
```
# Run with all GPUs
docker run --rm --gpus all ghcr.io/ornl/chathpc-app:latest

# Run with specific GPU(s)
docker run --rm --gpus '"device=0"' ghcr.io/ornl/chathpc-app:latest
docker run --rm --gpus '"device=0,1"' ghcr.io/ornl/chathpc-app:latest

# Run with GPU and mount data
docker run --rm --gpus all -v $(pwd):/data ghcr.io/ornl/chathpc-app:latest chathpc --config /data/config.json

# Verify GPU access inside container
docker run --rm --gpus all ghcr.io/ornl/chathpc-app:latest nvidia-smi
```

Note: The Docker image includes CUDA libraries, but the host system must have compatible NVIDIA drivers installed.
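The `docker run` invocations above all follow the same pattern. As an illustration only (this helper is not part of ChatHPC; the function name and defaults are hypothetical), a small Python sketch that assembles the argument list:

```python
from typing import List, Optional

# Image tag taken from the pull command above.
IMAGE = "ghcr.io/ornl/chathpc-app:latest"

def docker_run_args(image: str = IMAGE,
                    gpus: Optional[str] = None,
                    volume: Optional[str] = None,
                    command: Optional[List[str]] = None) -> List[str]:
    """Assemble a `docker run` argv following the pattern above.

    gpus    -- value passed to --gpus, e.g. "all" or 'device=0,1'
    volume  -- host:container bind mount passed to -v
    command -- command to run inside the container (defaults to the
               image's entrypoint when omitted)
    """
    args = ["docker", "run", "--rm"]
    if gpus is not None:
        args += ["--gpus", gpus]
    if volume is not None:
        args += ["-v", volume]
    args.append(image)
    if command:
        args += command
    return args

# Mirrors: docker run --rm --gpus all -v $(pwd):/data \
#          ghcr.io/ornl/chathpc-app:latest chathpc --config /data/config.json
argv = docker_run_args(gpus="all", volume="/host/data:/data",
                       command=["chathpc", "--config", "/data/config.json"])
print(" ".join(argv))
```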
Build the Docker image from the repository:

```
git clone git@github.com:ORNL/ChatHPC-app.git
cd ChatHPC-app
docker build -t chathpc-app .
```

Run the locally built image:

```
docker run --rm chathpc-app
```

Available image tags:

- `latest` - Latest build from the main branch
- `<branch-name>` - Latest build from a specific branch
- `<sha>` - Specific commit SHA

Example:

```
docker pull ghcr.io/ornl/chathpc-app:main
docker pull ghcr.io/ornl/chathpc-app:abc1234
```

Use hatch or install pre-commit inside the Python virtual environment.
```
hatch shell
```

or

```
pip install pre-commit
```

Then install the hooks:

```
pre-commit install
```

Note: You might have to upgrade pre-commit:

```
pre-commit autoupdate
```

Note: The markdown linter requires Ruby (the `gem` command) to be installed in order to auto-install and run mdl. On Ubuntu this can be done with:

```
sudo apt install ruby-full
```

See Creating a new ChatHPC application.
Get Help:

```
$ chathpc --help
Usage: chathpc [OPTIONS] COMMAND [ARGS]...

Options:
  -h, --help  Show this message and exit.

Commands:
  config      Print current config
  run         Interact with the model.
  run-base    Interact with the base model.
  run-fine    Interact with the finetuned model.
  run-merged  Interact with the merged model.
  train       Finetune the model.
```

Run interactively:

```
chathpc run
```

Example interactive session:
```
$ chathpc run
chathpc ()> /context
Context: Introduction to Kokkos programming model
chathpc (Introduction to Kokkos programming model)> Which kind of Kokkos views are?
<s> You are a powerful LLM model for Kokkos. Your job is to answer questions about Kokkos programming model. You are given a question and context regarding Kokkos programming model.
You must output the Kokkos question that answers the question.

### Input:
Which kind of Kokkos views are?

### Context:
Introduction to Kokkos programming model

### Response:
There are two different layouts; LayoutLeft and LayoutRight.
</s>
chathpc (Introduction to Kokkos programming model)> \bye
```

Train:

```
chathpc train
```

Get Help:
```
$ chathpc-json-to-md -h
usage: chathpc-json-to-md [-h] [--debug] [--log_level LOG_LEVEL] [--add_rating_template] [json]

Convert Json files to Markdown for ease of reading.

positional arguments:
  json                  Json string or path to json file.

options:
  -h, --help            show this help message and exit
  --debug               Open debug port (5678).
  --log_level LOG_LEVEL
                        Log level.
  --add_rating_template
                        Add rating template to markdown.
```

Example:

```
chathpc-json-to-md input.json > output.md
```

See Upgrading locked package versions.
With an existing uv.lock file, uv will prefer the previously locked versions of packages when running uv sync and uv lock. Package versions will only change if the project's dependency constraints exclude the previous, locked version.
To upgrade all packages:
```
uv lock --upgrade
```

Enter the hatch environment:

```
hatch shell
```

Run the tests:

```
hatch run test
```

To test on all Python versions:

```
hatch run all:test
```

Run tests and print the output:

```
hatch run test -v -s
```

Format the code:

```
hatch fmt
```

Update default ruff rules:

```
hatch fmt --check --sync
```

Show the current version:

```
hatch version
```

Set a new version:

```
hatch version <new version>
```

An automated script is provided to update the version using a date-based version scheme. This script determines the next version to use and then updates the version, updates the changelog, and commits the changes. Lastly, it tags the commit.
```
scripts/version_bump.py
```

Documentation is built with mkdocs using the Read the Docs theme.

- `mkdocs new [dir-name]` - Create a new project.
- `mkdocs serve` - Start the live-reloading docs server.
- `mkdocs build` - Build the documentation site.
- `mkdocs -h` - Print help message and exit.

Other useful commands:

- `mkdocs serve -a 0.0.0.0:8000` - Serve with external access to the site. (Useful in ExCL to view using FoxyProxy.)

View environment:

```
hatch env show docs
```

Build documentation:

```
hatch run docs:build
```

Serve documentation:

```
hatch run docs:serve
```

or

```
hatch run docs:serve -a 0.0.0.0:8000
```
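For reference, a minimal `mkdocs.yml` for a site using the Read the Docs theme might look like the following. This is a sketch only; the project's actual configuration may set a different site name and define additional plugins and navigation entries.

```yaml
site_name: ChatHPC-app   # assumed name; match the project's actual title
theme:
  name: readthedocs      # built-in Read the Docs theme
nav:
  - Home: index.md
```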