Actions: ZipXuan/llama.cpp
Workflow: Publish Docker image

8 workflow runs

Publish Docker image #8: llama : add OLMo November 2024 support (#10394)
Commit a88ad00 pushed by ZipXuan to master, November 19, 2024 10:54 (ran 13m 44s)

Publish Docker image #7: sycl: Revert MUL_MAT_OP support changes (#10385)
Commit 557924f pushed by ZipXuan to master, November 19, 2024 04:13 (ran 9m 5s)

Publish Docker image #6: musa: enable building fat binaries, enable unified memory, and disabl…
Commit c35e586 pushed by ZipXuan to master, September 23, 2024 02:56 (ran 25m 44s)

Publish Docker image #5: CUDA: enable Gemma FA for HIP/Pascal (#9581)
Commit a5b57b0 pushed by ZipXuan to master, September 22, 2024 12:02 (ran 16m 39s)

Publish Docker image #4: llama: remove redundant loop when constructing ubatch (#9574)
Commit ecd5d6b pushed by ZipXuan to master, September 22, 2024 02:54 (ran 9m 31s)

Publish Docker image #3: cuda : organize vendor-specific headers into vendors directory (#8746)
Commit 439b3fc pushed by ZipXuan to master, July 29, 2024 13:48 (ran 8m 35s)

Publish Docker image #2: [SYCL] add conv support (#8688)
Commit 0832de7 pushed by ZipXuan to master, July 29, 2024 07:51 (ran 23m 26s)

Publish Docker image #1: server : handle content array in chat API (#8449)
Commit 4e24cff pushed by ZipXuan to master, July 13, 2024 07:17 (ran 8m 10s)