
Releases: ggml-org/llama.cpp

b8933

25 Apr 21:18
dcad77c


b8931

25 Apr 14:54
9725a31


b8929

25 Apr 09:31
9d34231


llama-quant : default ftype param Q5_1 --> Q8_0 (#20828)

Change the default ftype in llama_model_quantize_params from
LLAMA_FTYPE_MOSTLY_Q5_1 to LLAMA_FTYPE_MOSTLY_Q8_0.

In case some external program naively uses the default quantization
params, we should default to a known-good type like Q8_0 rather than
Q5_1, which is rather old.
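The change amounts to flipping one field in the default-params helper. A minimal self-contained sketch of the before/after (the struct, helper name, and enum values mirror llama.h but are reproduced here illustratively, not taken verbatim from the header):

```c
/* Illustrative mirror of the llama.h quantization ftype enum
 * (numeric values are placeholders, not the real header's). */
enum llama_ftype {
    LLAMA_FTYPE_MOSTLY_Q8_0 = 7,
    LLAMA_FTYPE_MOSTLY_Q5_1 = 9,
};

struct llama_model_quantize_params {
    int             nthread;
    enum llama_ftype ftype;
};

/* Sketch of the default-params helper after this change:
 * the default ftype is now Q8_0 instead of Q5_1. */
struct llama_model_quantize_params quantize_default_params(void) {
    struct llama_model_quantize_params p = {0};
    p.ftype = LLAMA_FTYPE_MOSTLY_Q8_0; /* was LLAMA_FTYPE_MOSTLY_Q5_1 */
    return p;
}
```

An external program that never sets `ftype` explicitly now gets Q8_0, which is near-lossless at 8 bits per weight, rather than the legacy Q5_1 scheme.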


b8927

25 Apr 08:57
eddd7a1


b8926

25 Apr 08:12
dd2914d


b8925

24 Apr 23:39
0adede8


b8924

24 Apr 23:15
361fe72


b8922

24 Apr 19:58
13d36cf


ggml-webgpu: enable FLASH_ATTN_EXT on browser without subgroup matrix (#22199)

  • ggml-webgpu: add tile flash attention fallback

  • ggml-webgpu: add new fields and discard usage of mnk for tile version

  • ggml-webgpu: modify the vec path to discard the mnk parameter

  • ggml-webgpu: enable flash attention vec and tile version for browser

  • ggml-webgpu: staging KV for flash attention tile version

  • formatting

  • turn on subgroup uniformity check

  • remove Q_TILE as it is always 1 for vec path

  • keep row_max and exp_sum in local registers

  • give bindings that share the same underlying buffer the same usage flags

  • move path selection into the shader library and have the host consume a single flash-attn decision object

  • turn off skip_validation and address buffer overlapping when nwg==1

  • formatting

  • merge bindings when the K/V buffers overlap


b8920

24 Apr 15:24
15fa3c4


b8919

24 Apr 14:25
dc80c52
