Actions: ZipXuan/llama.cpp

Workflow: Server

8 workflow runs

llama : add OLMo November 2024 support (#10394)
Server #8: Commit a88ad00 pushed by ZipXuan
November 19, 2024 10:54 10m 47s master
sycl: Revert MUL_MAT_OP support changes (#10385)
Server #7: Commit 557924f pushed by ZipXuan
November 19, 2024 04:13 7m 16s master
musa: enable building fat binaries, enable unified memory, and disabl…
Server #6: Commit c35e586 pushed by ZipXuan
September 23, 2024 02:56 25m 35s master
CUDA: enable Gemma FA for HIP/Pascal (#9581)
Server #5: Commit a5b57b0 pushed by ZipXuan
September 22, 2024 12:02 28m 46s master
llama: remove redundant loop when constructing ubatch (#9574)
Server #4: Commit ecd5d6b pushed by ZipXuan
September 22, 2024 02:54 11m 36s master
Server #3: July 29, 2024 13:48 8m 4s
[SYCL] add conv support (#8688)
Server #2: Commit 0832de7 pushed by ZipXuan
July 29, 2024 07:51 9m 19s master
server : handle content array in chat API (#8449)
Server #1: Commit 4e24cff pushed by ZipXuan
July 13, 2024 07:17 16m 46s master