feat: Add support for Docker Compose models #392
mathieu-benoit merged 7 commits into score-spec:main

Conversation
Please have a look. This is my first PR in this project. 😊

Thanks, @UMMAN2005, sounds great! FYI: I'm fixing the failing CI, which is unrelated to your PR. The review of this PR will happen shortly.
astromechza left a comment
The code looks fine. Can you please update `06-resource-provisioning` with the new model template bit and ensure the output is as expected? Also update the README with a description of the `models` field.
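For readers skimming the thread, the new template bit on a provisioner looks roughly like this (a minimal sketch anticipating the full examples tested below; the URI and model name are illustrative):

```yaml
- uri: template://example-llm-model   # illustrative URI
  type: llm-model
  # New in this PR: the `models` template renders entries for the
  # top-level Docker Compose `models` section.
  models: |
    {{ .Id }}:
      model: ai/smollm2:135M-Q4_0
```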
Hi @astromechza, could you please have a look?
Thanks, @UMMAN2005, for this implementation.
I just tested with this `score.yaml`:
```yaml
apiVersion: score.dev/v1b1
metadata:
  name: open-webui
containers:
  open-webui:
    image: .
resources:
  gemma3:
    type: llm-model
    params:
      model: ai/gemma3:270M-UD-IQ2_XXS
```

With this provisioner:
```yaml
- uri: template://dmr-llm-model
  type: llm-model
  description: Generates the LLM model via the Docker Model Runner (DMR) provider.
  supported_params:
    - model
  outputs: |
    model: {{ .Init.model }}
  expected_outputs:
    - model
  init: |
    model: {{ .Params.model | default "ai/smollm2:135M-Q4_0" }}
  models: |
    {{ .Id }}:
      model: {{ .Init.model }}
```

By running these commands:
```bash
./score-compose init \
  --provisioners model.provisioners.yaml
./score-compose generate score.yaml \
  --image ghcr.io/open-webui/open-webui:main-slim \
  --publish 8080:open-webui:8080 \
  --output compose.yaml
```

It's generating this compose.yaml:
```yaml
name: score-compose
services:
  open-webui-open-webui:
    annotations:
      compose.score.dev/workload-name: open-webui
    hostname: open-webui
    image: ghcr.io/open-webui/open-webui:main-slim
models:
  open-webui-gemma3:
    model: ai/gemma3:270M-UD-IQ2_XXS
```

But when running `docker compose up -d`, the local LLM models are not pulled (`docker model list`).
Here is what's missing in the generated compose.yaml file:

```diff
 name: score-compose
 services:
   open-webui-open-webui:
     annotations:
       compose.score.dev/workload-name: open-webui
     hostname: open-webui
     image: ghcr.io/open-webui/open-webui:main-slim
+    models:
+      - open-webui-gemma3
 models:
   open-webui-gemma3:
     model: ai/gemma3:270M-UD-IQ2_XXS
```

Reference: https://docs.docker.com/ai/compose/models-and-compose/#basic-model-definition.
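For reference, the basic pattern in the linked Docker docs pairs a service-level `models` reference with the top-level `models` map (a paraphrased sketch, not the exact snippet from that page; service and model names are illustrative):

```yaml
services:
  app:
    image: my-app:latest   # illustrative image
    models:
      - demo-model         # the service-level reference is what triggers the model pull
models:
  demo-model:
    model: ai/smollm2      # illustrative model reference
```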
And then when doing `docker compose up -d`, we can see that the local LLM model is pulled (`docker model list`):

```
MODEL NAME              PARAMETERS  QUANTIZATION  ARCHITECTURE  MODEL ID      CREATED       CONTEXT SIZE
gemma3:270M-UD-IQ2_XXS  268.10 M    IQ4_NL        gemma3        bae1ee104a16  3 months ago  165.54 MiB
```
@UMMAN2005, could you please look at how this could be injected?
@astromechza, maybe you have some advice on how to accomplish this part?
Hi @mathieu-benoit, the changes you mentioned are integrated and tested to be working. Please have a look.
@UMMAN2005, working as expected now, thanks!
score.yaml:

```yaml
apiVersion: score.dev/v1b1
metadata:
  name: open-webui
containers:
  open-webui:
    image: .
    variables:
      OPENAI_API_BASE_URL: "${resources.smollm2.url}"
      WEBUI_NAME: "Hello, DMR with Score Compose!"
    volumes:
      /app/backend/data:
        source: ${resources.data}
resources:
  data:
    type: volume
  gemma3:
    type: llm-model
    params:
      model: ai/gemma3:270M-UD-IQ2_XXS
  smollm2:
    type: llm-model
    params:
      model: ai/smollm2:135M-Q2_K
service:
  ports:
    tcp:
      port: 8080
      targetPort: 8080
```

model.provisioners.yaml:
```yaml
- uri: template://dmr-llm-model
  type: llm-model
  description: Generates the LLM model via the Docker Model Runner (DMR) provider.
  supported_params:
    - model
    - context_size
  outputs: |
    model: {{ .Init.model }}
    url: "http://172.17.0.1:12434/engines/v1/"
  expected_outputs:
    - model
    - url
  init: |
    model: {{ .Params.model | default "ai/smollm2:135M-Q4_0" }}
  models: |
    {{ .Id }}:
      model: {{ .Init.model }}
      context_size: {{ .Params.context_size | default 2048 }}
```

```bash
./score-compose init --provisioners model.provisioners.yaml
./score-compose generate score.yaml --image ghcr.io/open-webui/open-webui:main-slim --publish 8080:open-webui:8080 --output compose.yaml
```

compose.yaml generated:
```yaml
name: score-compose
services:
  open-webui-open-webui:
    annotations:
      compose.score.dev/workload-name: open-webui
    environment:
      OPENAI_API_BASE_URL: http://172.17.0.1:12434/engines/v1/
      WEBUI_NAME: Hello, DMR with Score Compose!
    hostname: open-webui
    image: ghcr.io/open-webui/open-webui:main-slim
    models:
      open-webui.gemma3: {}
      open-webui.smollm2: {}
    ports:
      - target: 8080
        published: "8080"
    volumes:
      - type: volume
        source: open-webui-data-K3WUDl
        target: /app/backend/data
volumes:
  open-webui-data-K3WUDl:
    name: open-webui-data-K3WUDl
    driver: local
    labels:
      dev.score.compose.res.uid: volume.default#open-webui.data
models:
  open-webui.gemma3:
    model: ai/gemma3:270M-UD-IQ2_XXS
    context_size: 2048
  open-webui.smollm2:
    model: ai/smollm2:135M-Q2_K
    context_size: 2048
```

After `docker compose up -d`, `docker ps`:

```
CONTAINER ID  IMAGE                                    COMMAND              CREATED        STATUS                  PORTS                                                    NAMES
bd92d6a541a8  ghcr.io/open-webui/open-webui:main-slim  "bash start.sh"      7 minutes ago  Up 7 minutes (healthy)  0.0.0.0:8080->8080/tcp, [::]:8080->8080/tcp              score-compose-open-webui-open-webui-1
cb731dcfbe22  docker/model-runner:latest               "/app/model-runner"  7 minutes ago  Up 7 minutes            127.0.0.1:12434->12434/tcp, 172.17.0.1:12434->12434/tcp  docker-model-runner
```
`docker model list`:

```
MODEL NAME              PARAMETERS  QUANTIZATION  ARCHITECTURE  MODEL ID      CREATED       CONTEXT SIZE
gemma3:270M-UD-IQ2_XXS  268.10 M    IQ4_NL        gemma3        bae1ee104a16  3 months ago  165.54 MiB
smollm2:135M-Q2_K       134.52 M    Q2_K          llama         eba11bf8f361  7 months ago  82.41 MiB
```
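As a quick end-to-end check (assuming the DMR endpoint is OpenAI-compatible, as the `OPENAI_API_BASE_URL` wiring above implies), the pulled models can also be queried directly:

```bash
# List the models served by the Docker Model Runner endpoint.
curl http://172.17.0.1:12434/engines/v1/models

# Hypothetical smoke test against one of the pulled models.
curl http://172.17.0.1:12434/engines/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "ai/smollm2:135M-Q2_K", "messages": [{"role": "user", "content": "Hello!"}]}'
```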
🥳 🚀
LGTM
@chris-stephenson or @astromechza, a second pair of eyes on the code please?
@chris-stephenson and @astromechza, kind reminder!
astromechza left a comment
LGTM, thanks for updating the docs and tests @UMMAN2005. Much appreciated.
Who will merge this PR?

Thank you for your contribution with this PR, @UMMAN2005, much appreciated! This feature is now included in this release.
Furthermore, JFYI:

[screenshot: `models` documentation]
Description

This PR adds support for Docker Compose's `models` feature at the root level, as introduced in the Docker Compose specification for AI/ML workloads.

What does this PR do?

Implements support for the `models` section in Docker Compose, allowing provisioners to define AI/ML models that can be referenced by services.

Changes:
- Added a `ComposeModels` field to the `ProvisionOutputs` struct in `core.go`
- Applied `ComposeModels` to the project in `ApplyToStateAndProject`
- Added `models` template support to the template provisioner (`templateprov`)

This enables provisioners to generate compose files with models, as sketched below.
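A minimal sketch of such a generated compose file, reusing the names from the test earlier in this thread:

```yaml
services:
  open-webui-open-webui:
    models:
      open-webui.gemma3: {}   # service-level reference provisioned from the workload's resources
models:
  open-webui.gemma3:
    model: ai/gemma3:270M-UD-IQ2_XXS
```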
Closes #367
Types of changes
- Bug fix (non-breaking change which fixes an issue)
- New feature (non-breaking change which adds functionality)
- Breaking change (fix or feature that would cause existing functionality to change)
- New chore (expected functionality to be implemented)

Checklist:
- My change requires a change to the documentation.
- I've signed off with an email address that matches the commit author.