
Commit 99f1680

baonudesifeizhai, BBuf, and mickqian authored and committed

[diffusion] doc: add vae path to cli doc #14004 (sgl-project#14355)

Co-authored-by: BBuf <1182563586@qq.com>
Co-authored-by: Mick <mickjagger19@icloud.com>
1 parent c06dde6 commit 99f1680

File tree

1 file changed: +1 −0 lines changed

  • python/sglang/multimodal_gen/docs

python/sglang/multimodal_gen/docs/cli.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -13,6 +13,7 @@ The SGLang-diffusion CLI provides a quick way to access the inference pipeline f
 ### Server Arguments
 
 - `--model-path {MODEL_PATH}`: Path to the model or model ID
+- `--vae-path {VAE_PATH}`: Path to a custom VAE model or HuggingFace model ID (e.g., `fal/FLUX.2-Tiny-AutoEncoder`). If not specified, the VAE will be loaded from the main model path.
 - `--num-gpus {NUM_GPUS}`: Number of GPUs to use
 - `--tp-size {TP_SIZE}`: Tensor parallelism size (only for the encoder; should not be larger than 1 if text encoder offload is enabled, as layer-wise offload plus prefetch is faster)
 - `--sp-size {SP_SIZE}`: Sequence parallelism size (typically should match the number of GPUs)
```
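The documented flags can be combined into a single launch command. A minimal sketch, assuming a 4-GPU setup with text encoder offload enabled (hence `--tp-size 1`, per the note above) and a standalone VAE; the `python -m sglang.multimodal_gen` entry point and the model path are assumptions not shown in this diff:

```shell
# Hypothetical invocation: the entry-point module name and model path are
# assumptions; only the flags come from the documented CLI arguments.
python -m sglang.multimodal_gen \
  --model-path /path/to/model \
  --vae-path fal/FLUX.2-Tiny-AutoEncoder \
  --num-gpus 4 \
  --tp-size 1 \
  --sp-size 4
```

Here `--sp-size` matches `--num-gpus`, following the doc's guidance that sequence parallelism size should typically match the number of GPUs.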
