# Thread-based log handler for better performance
Don't let logging slow down your Python program.
Most of the time we log to disk, and all I/O is time consuming. By offloading that I/O to a background thread, you can speed up your Python application considerably.
```python
import logging

from backgroundlog.handlers.thread_handler import ThreadHandler

# Setting up the logging thread handler
file_handler = logging.FileHandler('/var/log/myapp.log', mode="a", encoding="utf-8")
thread_handler = ThreadHandler(file_handler)

# Creating a new logger
bg_logger = logging.getLogger('bg_logger')
bg_logger.setLevel(logging.INFO)

# Adding the thread handler
bg_logger.addHandler(thread_handler)

# Using the logger
bg_logger.info('This is a log message')
```

The size of the internal queue is configurable:

```python
from backgroundlog.handlers.thread_handler import ThreadHandler

thread_handler = ThreadHandler(file_handler, queue_size=5000)
```

By default, the queue size is 1000.
When records are put on the queue, the queue may fill up. ThreadHandler provides a way to deal with this: a blocking policy set per log record level. Records at blocking levels wait until the queue has room; records at non-blocking levels are discarded and a dropped-record counter is incremented.
```python
from backgroundlog.handlers.thread_handler import ThreadHandler
from logging import CRITICAL, ERROR, INFO

# Block INFO, ERROR, and CRITICAL records when the queue is full
thread_handler = ThreadHandler(file_handler, blocking_levels={INFO, ERROR, CRITICAL})
```

```python
from backgroundlog.handlers.thread_handler import ThreadHandler
from logging import CRITICAL, ERROR

# Block only ERROR and CRITICAL records (the default policy)
thread_handler = ThreadHandler(file_handler, blocking_levels={ERROR, CRITICAL})
```

```python
from backgroundlog.handlers.thread_handler import ThreadHandler
from logging import CRITICAL

# Block only CRITICAL records
thread_handler = ThreadHandler(file_handler, blocking_levels={CRITICAL})
```

```python
from backgroundlog.handlers.thread_handler import ThreadHandler

# Never block: discard any record that does not fit in the queue
thread_handler = ThreadHandler(file_handler, blocking_levels=None)
```

By default, ERROR and CRITICAL records are blocking and all other levels are not.
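With a non-blocking policy it can be useful to see how many records were discarded. Here is a minimal sketch, assuming the handler exposes the counter as an attribute; the name `dropped_records` is hypothetical, so check the package documentation for the actual name:

```python
# Continuing from the setup above (bg_logger and thread_handler already created).
# Hypothetical: `dropped_records` is an assumed attribute name, not a confirmed API.
for _ in range(10_000):
    bg_logger.info('burst message')  # a burst like this may overflow the queue

print(f"Log records dropped so far: {thread_handler.dropped_records}")
```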
We ran several local benchmarks with different logging handlers. See the file run_performance_comparison.py for the full catalog of performance tests.
We used two builds of the same Python version, one with and one without the global interpreter lock (GIL):
- Python 3.15.3 (with GIL)
- Python 3.15.3t (without GIL)
Each test runs 100,000 iterations of creating the same logging message, and was run on a MacBook Pro M1 with 16 GB of RAM.
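For illustration, a minimal version of such a timing loop might look like the sketch below. This is not the actual run_performance_comparison.py; the file names and logger names are arbitrary:

```python
import logging
import time

from backgroundlog.handlers.thread_handler import ThreadHandler

ITERATIONS = 100_000

def time_handler(handler: logging.Handler, name: str) -> float:
    """Log the same message ITERATIONS times and return the elapsed seconds."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.INFO)
    logger.addHandler(handler)
    start = time.perf_counter()
    for _ in range(ITERATIONS):
        logger.info('This is a log message')
    return time.perf_counter() - start

baseline = time_handler(logging.FileHandler('baseline.log'), 'bench.baseline')
# Note: with ThreadHandler this measures enqueue time; the actual disk
# writes happen later, in the background thread.
threaded = time_handler(ThreadHandler(logging.FileHandler('threaded.log')), 'bench.threaded')
print(f"FileHandler: {baseline:.3f}s  ThreadHandler(FileHandler): {threaded:.3f}s")
```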
Results with Python 3.15.3 (with GIL):

| Logging Handler | Mean Time (ms) | Std Dev (ms) | vs. Baseline |
|---|---|---|---|
| StreamHandler | 0.685 | 0.006 | baseline |
| FileHandler | 0.685 | 0.018 | +0.03% |
| ThreadHandler (StreamHandler) | 0.487 | 0.030 | -28.91% |
| ThreadHandler (FileHandler) | 0.475 | 0.002 | -30.66% |
The thread handler gives a ~30% improvement.
Results with Python 3.15.3t (without GIL):

| Logging Handler | Mean Time (ms) | Std Dev (ms) | vs. Baseline |
|---|---|---|---|
| StreamHandler | 0.539 | 0.004 | baseline |
| FileHandler | 0.545 | 0.013 | +1.11% |
| ThreadHandler (StreamHandler) | 0.344 | 0.002 | -36.30% |
| ThreadHandler (FileHandler) | 0.339 | 0.001 | -37.12% |
The thread handler gives a ~36% improvement, about 6 percentage points more than on the GIL build.
By keeping logging I/O out of your program's main flow, backgroundlog gives you roughly a 30% speed gain.
This package has no dependencies.
The minimum supported Python version is 3.10.
The package is MIT licensed; if you need a different license, contact me.