
ggml: unify backend logging mechanism #9709


Merged · 14 commits · Oct 3, 2024

Conversation

bandoti
Collaborator

@bandoti bandoti commented Oct 1, 2024

This change updates the GGML logging mechanism so that all backends may share the same logger. Please see issue #9706 and the initial discussion in #8830.

Although this introduces some minor duplication between llama.cpp (llama_logger_state) and ggml.c (ggml_logger_state), it is required to keep each library's internal details isolated.
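For context, the pattern the PR describes — each library keeping its own small logger state (callback plus user data) behind a default handler, with every backend funneling through one internal function — can be sketched as below. This is a hypothetical reconstruction: the names `logger_state`, `log_internal`, and `set_log_callback` stand in for the real ggml/llama.cpp internals, which differ in naming and detail.

```c
#include <stdarg.h>
#include <stdio.h>
#include <stddef.h>

enum log_level { LOG_LEVEL_INFO, LOG_LEVEL_WARN, LOG_LEVEL_ERROR };

typedef void (*log_callback)(enum log_level level, const char * text, void * user_data);

// Per-library logger state: one callback shared by all backends.
struct logger_state {
    log_callback callback;
    void *       user_data;
};

static void default_log_callback(enum log_level level, const char * text, void * user_data) {
    (void) level; (void) user_data;
    fputs(text, stderr);
}

static struct logger_state g_logger = { default_log_callback, NULL };

// Every backend logs through this one function instead of holding its own hook.
static void log_internal(enum log_level level, const char * fmt, ...) {
    char buf[256];
    va_list args;
    va_start(args, fmt);
    vsnprintf(buf, sizeof(buf), fmt, args);
    va_end(args);
    g_logger.callback(level, buf, g_logger.user_data);
}

void set_log_callback(log_callback cb, void * user_data) {
    // Guard against a NULL callback by restoring the default handler.
    g_logger.callback  = cb ? cb : default_log_callback;
    g_logger.user_data = user_data;
}
```

Keeping one such state per library (rather than sharing a single global across llama.cpp and ggml) is what produces the "minor duplication" mentioned above.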

@github-actions github-actions bot added the Nvidia GPU (issues specific to Nvidia GPUs), ggml (changes relating to the ggml tensor library for machine learning), and Apple Metal (https://en.wikipedia.org/wiki/Metal_(API)) labels Oct 1, 2024
@bandoti bandoti changed the title from "ggml: unify backend logging mechanism (#9706)" to "ggml: unify backend logging mechanism" Oct 1, 2024
@bandoti
Collaborator Author

bandoti commented Oct 3, 2024

@slaren
In light of the #9707 changes to the logging backend, I will need your advice on what should change in this PR. Should I simply rip out all the logging mechanisms added in #9707?

@slaren
Member

slaren commented Oct 3, 2024

Yes, just remove the set_log_callback function pointer from ggml_backend_reg_i and all its uses. I wasn't sure when this PR would be complete, so I left it there, but it can be removed if it is no longer necessary.
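The removal described here can be illustrated with a minimal before/after sketch of a registry vtable. The struct names and member signatures below are hypothetical; the real ggml_backend_reg_i contains more members with different signatures.

```c
// Before: each backend registration carried its own logging hook.
struct backend_reg_i_before {
    const char * (*get_name)(void);
    void         (*set_log_callback)(void (*cb)(const char * text, void * ud), void * user_data);
};

// After: logging is no longer part of the per-backend interface, since all
// backends now report through the shared ggml logger.
struct backend_reg_i_after {
    const char * (*get_name)(void);
};
```

With the shared logger in place, keeping per-backend callback setters would only duplicate state that the unified mechanism already owns.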

@bandoti
Collaborator Author

bandoti commented Oct 3, 2024

Sounds good. I'll brandish my hatchet (emacs) and start chopping away! 😅

@slaren slaren requested a review from ggerganov October 3, 2024 15:14
@slaren
Member

slaren commented Oct 3, 2024

Let's merge this before adapting more backends to the registry interface to avoid creating more conflicts.

@slaren slaren merged commit d6fe7ab into ggml-org:master Oct 3, 2024
53 checks passed
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request Oct 29, 2024
* Add scaffolding for ggml logging macros

* Metal backend now uses GGML logging

* Cuda backend now uses GGML logging

* Cann backend now uses GGML logging

* Add enum tag to parameters

* Use C memory allocation funcs

* Fix compile error

* Use GGML_LOG instead of GGML_PRINT

* Rename llama_state to llama_logger_state

* Prevent null format string

* Fix whitespace

* Remove log callbacks from ggml backends

* Remove cuda log statement
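The "Use GGML_LOG instead of GGML_PRINT" step in the list above amounts to replacing a bare printf-style macro with level-tagged macros that funnel through one internal function. A minimal sketch follows, with illustrative names rather than the actual GGML_LOG_* definitions from ggml.c:

```c
#include <stdarg.h>
#include <stdio.h>

enum log_level { LOG_LEVEL_INFO, LOG_LEVEL_WARN, LOG_LEVEL_ERROR };

static int g_warn_count = 0; // illustrative: lets callers observe the level tag

static void log_internal(enum log_level level, const char * fmt, ...) {
    if (level == LOG_LEVEL_WARN) {
        g_warn_count++;
    }
    va_list args;
    va_start(args, fmt);
    vfprintf(stderr, fmt, args);
    va_end(args);
}

// Level-tagged macros replace a single untagged print macro, so a user
// callback can filter or redirect by severity.
#define LOG_INFO(...)  log_internal(LOG_LEVEL_INFO,  __VA_ARGS__)
#define LOG_WARN(...)  log_internal(LOG_LEVEL_WARN,  __VA_ARGS__)
#define LOG_ERROR(...) log_internal(LOG_LEVEL_ERROR, __VA_ARGS__)
```

The enum tag mentioned in the commit list ("Add enum tag to parameters") corresponds to the level argument threaded through log_internal here.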
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 15, 2024
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Nov 18, 2024
Labels
Apple Metal (https://en.wikipedia.org/wiki/Metal_(API)) · ggml (changes relating to the ggml tensor library for machine learning) · Nvidia GPU (issues specific to Nvidia GPUs)
3 participants