Releases: tc-mb/llama.cpp
b5607
b5165
ggml : add SSE 4.2 and x64 base variant for CPUs without AVX (#12871)
* ggml : add SSE 4.2 variant for CPUs without AVX
* ggml : add x64 base ABI variant
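The release above adds CPU backend variants for machines without AVX. A minimal sketch of how such runtime fallback selection can work on GCC/Clang, using the compiler's `__builtin_cpu_supports` builtin — the function and variant names here are illustrative, not llama.cpp's actual dispatch code:

```cpp
// Hypothetical sketch: pick a CPU backend variant at runtime, falling back
// from AVX to SSE 4.2 to a plain x64 baseline, mirroring the variants this
// release adds. Not the project's real dispatch logic.
#include <string>

std::string pick_cpu_variant() {
#if defined(__GNUC__) && (defined(__x86_64__) || defined(_M_X64))
    if (__builtin_cpu_supports("avx"))    return "avx";      // AVX available
    if (__builtin_cpu_supports("sse4.2")) return "sse42";    // SSE 4.2 variant
    return "x64-base";   // baseline x64 ABI variant, no SIMD extensions assumed
#else
    return "generic";    // non-x86 or non-GNU compiler: portable fallback
#endif
}
```

On an AVX-capable host this returns `"avx"`; on an older x86-64 CPU it degrades gracefully instead of crashing on unsupported instructions.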
b5145
opencl: fix incorrect local_size index in profiling log (#12868)
b5129
sync : ggml ggml-ci
b4974
sync : ggml ggml-ci
b4909
Vulkan: Default to 1GB allocations instead of 4GB to avoid fragmentation
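The Vulkan change above caps individual device allocations to dodge address-space fragmentation. A hedged illustration of the underlying idea — splitting one large buffer request into chunks no bigger than a fixed cap — with names invented for this sketch, not taken from the actual backend:

```cpp
// Illustrative only: cap each device allocation at 1 GiB and split larger
// requests into chunks, so a driver can place them even when contiguous
// address space is fragmented.
#include <cstddef>
#include <vector>

constexpr std::size_t MAX_ALLOC = std::size_t(1) << 30;  // 1 GiB cap per allocation

std::vector<std::size_t> split_allocation(std::size_t total) {
    std::vector<std::size_t> chunks;
    while (total > 0) {
        std::size_t n = total < MAX_ALLOC ? total : MAX_ALLOC;
        chunks.push_back(n);   // one device allocation of size n
        total -= n;
    }
    return chunks;
}
```

For example, a request of 2 GiB + 5 bytes becomes three allocations: 1 GiB, 1 GiB, and 5 bytes.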
b4869
mat vec double buffer (#12188)
b4466
Reset color before we exit (#11205)
We don't want colors to leak post termination of llama-run.
Signed-off-by: Eric Curtin <[email protected]>
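The fix above prevents terminal colors from leaking after the program ends. A minimal sketch of the general technique, assuming the standard ANSI SGR reset sequence and an `atexit` handler — the helper names are hypothetical, not llama-run's actual code:

```cpp
// Sketch: emit the ANSI "reset all attributes" sequence on every normal
// exit path so colored output cannot bleed into the user's shell prompt.
#include <cstdio>
#include <cstdlib>

// ANSI SGR reset escape sequence (CSI 0 m)
inline const char * ansi_reset() { return "\033[0m"; }

static void reset_terminal_color() {
    std::fputs(ansi_reset(), stdout);  // clear any active color/attributes
    std::fflush(stdout);
}

// Call once at startup; std::atexit then runs the handler on normal exit.
inline void install_color_reset() {
    std::atexit(reset_terminal_color);
}
```

Note that `atexit` handlers do not run on abnormal termination (e.g. a fatal signal), so robust tools may also reset colors from signal handlers.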
b4263
Fix HF repo commit to clone lora test models (#10649)
b4049
server : minor UI fix (#10207)