Releases: tc-mb/llama.cpp

b5607

09 Jun 04:41
056eb74
CANN: Enable labeler for Ascend NPU (#13914)

b5165

22 Apr 08:38
1d735c0
ggml : add SSE 4.2 and x64 base variant for CPUs without AVX (#12871)

* ggml : add SSE 4.2 variant for CPUs without AVX

* ggml : add x64 base ABI variant

b5145

17 Apr 09:04
12b1750
opencl: fix incorrect local_size index in profiling log (#12868)

b5129

14 Apr 08:25
sync : ggml

ggml-ci

b4974

27 Mar 09:28
sync : ggml

ggml-ci

b4909

18 Mar 08:43
fd123cf
Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentat…

b4869

11 Mar 06:34
2c9f833
mat vec double buffer (#12188)

b4466

13 Jan 04:04
924518e
Reset color before we exit (#11205)

We don't want colors to leak after llama-run terminates.

Signed-off-by: Eric Curtin <[email protected]>

b4263

04 Dec 10:44
253b7fd
Fix HF repo commit to clone lora test models (#10649)

b4049

08 Nov 07:48
76c6e7f
server : minor UI fix (#10207)