
Bug: ggml-4-x86-cuda-v100 CI Failure probably caused by #10318 #10356

Closed
@FirstTimeEZ

Description

What happened?

The ggml-4-x86-cuda-v100 CI failure is probably caused by #10318: the run aborts with GGML_ASSERT(src1->type == GGML_TYPE_F32) in ggml-cuda/mmv.cu while executing a MUL_MAT test with type_b=f16 (see the log below and the sketch after it).

Name and Version

https://github.com/ggml-org/ci/tree/results/llama.cpp/cf/32a9b93ad859ea592c31785b6bd3b4b2121463/ggml-4-x86-cuda-v100

What operating system are you seeing the problem on?

No response

Relevant log output

OK
  MUL_MAT(type_a=f16,type_b=f16,m=16,n=1,k=256,bs=[1,1],nr=[1,1],per=[0,1,2,3]): ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
ggml_backend_cuda_graph_compute: disabling CUDA graphs due to GPU architecture
/home/ggml/work/llama.cpp/ggml/src/ggml-cuda/mmv.cu:156: GGML_ASSERT(src1->type == GGML_TYPE_F32) failed
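
For context, here is a minimal sketch of a graph node that matches the failing test case. It is not taken from the issue: the ggml C API usage, the context size, and the tensor shapes are assumptions that simply mirror MUL_MAT(type_a=f16,type_b=f16,m=16,n=1,k=256) from the log above.

    #include "ggml.h"

    int main(void) {
        // Small scratch context; the size here is an arbitrary assumption.
        struct ggml_init_params params = {
            /* .mem_size   = */ 16 * 1024 * 1024,
            /* .mem_buffer = */ NULL,
            /* .no_alloc   = */ false,
        };
        struct ggml_context * ctx = ggml_init(params);

        // Mirrors MUL_MAT(type_a=f16, type_b=f16, m=16, n=1, k=256):
        // in ggml's convention a is [k, m] and b is [k, n].
        struct ggml_tensor * a = ggml_new_tensor_2d(ctx, GGML_TYPE_F16, 256, 16);
        struct ggml_tensor * b = ggml_new_tensor_2d(ctx, GGML_TYPE_F16, 256, 1);

        // With n == 1 the CUDA backend dispatches this node to the
        // mat-vec kernel in ggml-cuda/mmv.cu, whose
        // GGML_ASSERT(src1->type == GGML_TYPE_F32) rejects the f16 b
        // once the graph is actually computed on that backend.
        struct ggml_tensor * c = ggml_mul_mat(ctx, a, b);
        (void) c;

        ggml_free(ctx);
        return 0;
    }

Computing this graph through the CUDA backend on a V100 should, under these assumptions, hit the same abort as the CI run; the sketch only builds the graph and does not set up the backend itself.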

Labels

bug-unconfirmed, low severity (used to report low severity bugs in llama.cpp, e.g. cosmetic issues, non-critical UI glitches)
