
Scraparr

An Exporter for the *arr Suite



Scraparr is a Prometheus exporter for the *arr suite (Sonarr, Radarr, Lidarr, etc.). It provides metrics that can be scraped by Prometheus to monitor and visualize the health and performance of your *arr applications.

Features

  • Exposes detailed *arr metrics
  • Easy integration with Prometheus
  • Lightweight and efficient
  • Built for extensibility

Installation

Local Setup

  1. Clone this repository:
git clone https://github.com/thecfu/scraparr.git
cd scraparr/src
  2. Install dependencies:
pip install -r scraparr/requirements.txt
  3. Set up environment variables (see sample.env for the available variables), for example:
export SONARR_URL=http://localhost:8989
  4. Run the exporter:
python -m scraparr.scraparr

Docker Setup (Recommended)

You can either clone the repository and build the Docker image locally, or use the image published in the GitHub Registry. You can also check the Docker Compose file.

GitHub Registry: docker run -v ./config.yaml:/scraparr/config/config.yaml -p 7100:7100 ghcr.io/thecfu/scraparr

Docker Hub: docker run -v ./config.yaml:/scraparr/config/config.yaml -p 7100:7100 thegameprofi/scraparr
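The docker run commands above can also be expressed as a Compose service. A minimal sketch, assuming the GHCR image and the default port (the service name is arbitrary):

```yaml
services:
  scraparr:
    image: ghcr.io/thecfu/scraparr
    ports:
      - "7100:7100"
    volumes:
      - ./config.yaml:/scraparr/config/config.yaml
```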

Note

If you're using a v1 version, check the README of the v1 branch.

Note

If you want to access new features before they are released, use the main tag.

Kubernetes (Community-Maintained)

Deployment on Kubernetes is possible via the imgios/scraparr Helm Chart, which simplifies the process into two steps:

  1. Add the imgios/scraparr Helm repository:
$ helm repo add imgios https://imgios.github.io/scraparr
  2. Run the installation command:
$ helm install <release-name> imgios/scraparr \
--namespace scraparr \
--create-namespace \
--values values.yaml

See the Helm Chart repository README for details on deployment and how to fill the values.

Unraid Template (Community-Maintained)

An Unraid template is available in the repository of jordan-dalby: https://github.com/jordan-dalby/unraidtemplates

Note: This template is approved by TheCfU but is not monitored or maintained by us.

Configuration

Note

If you're using a v1 version, check the README of the v1 branch.

Warning

If you are using the Docker variant, you need to use the host's IP address, or configure and use the extra_host host.docker.internal:host-gateway.

Scraparr can be configured either by using a config.yaml file or by setting environment variables.
For environment variables, please refer to the sample.env file. You can set them directly as environment options or create an .env file and import it using your container host.

Important

The environment variables don't support configuring multiple instances; to monitor multiple instances, you need to switch to the config file.

Make sure the configuration specifies the URLs and API keys for the *arr services you want to monitor.

Config.yaml

Template for Service inside the config.yaml:

sonarr:
  url: http://sonarr:8989
  api_key: key
  # alias: sonarr # Optional, to differentiate between multiple services
  # api_version: v3 # Optional, to use a different API version
  # interval: 30 # Optional, to set a different scrape interval in seconds
  # detailed: true # Optional, to get data per series

Multiple Instances

To configure multiple instances of the same service, list them like this:

Caution

When running multiple instances of the same service, you need to set an alias for each; otherwise the metrics overwrite each other.

sonarr:
  - url: http://sonarr:8989
    api_key: key
    alias: sonarr1
  - url: http://sonarr2:8989
    api_key: key
    alias: sonarr2
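The single-mapping and list-of-mappings shapes above can be handled uniformly. A minimal sketch, assuming the config has already been parsed into Python structures; the normalize_instances helper is hypothetical, not part of Scraparr:

```python
# Example configs as Python structures, mirroring the two YAML shapes above.
single = {"sonarr": {"url": "http://sonarr:8989", "api_key": "key"}}
multi = {"sonarr": [
    {"url": "http://sonarr:8989", "api_key": "key", "alias": "sonarr1"},
    {"url": "http://sonarr2:8989", "api_key": "key", "alias": "sonarr2"},
]}

def normalize_instances(service_config):
    """Return a list of instance dicts, whether the YAML held one mapping or a list."""
    if isinstance(service_config, dict):
        return [service_config]
    return list(service_config)

for inst in normalize_instances(multi["sonarr"]):
    # Without distinct aliases, the metrics of one instance would
    # overwrite those of the other.
    print(inst["alias"], inst["url"])
```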

Usage

Once the service is running, it will expose metrics at http://localhost:7100/metrics (default port). You can configure Prometheus to scrape these metrics by adding the following job to your Prometheus configuration:

scrape_configs:
  - job_name: 'scraparr'
    static_configs:
      - targets: ['localhost:7100']
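To sanity-check the exporter, you can fetch http://localhost:7100/metrics and inspect the Prometheus text format. A minimal parsing sketch; the payload and metric names below are illustrative, not guaranteed Scraparr output, and label values containing spaces are not handled:

```python
def parse_metrics(text):
    """Parse Prometheus text-format lines into {sample_name: value}, skipping comments."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the last space: everything before is the name (with labels),
        # everything after is the numeric value.
        name, _, value = line.rpartition(" ")
        samples[name] = float(value)
    return samples

# Illustrative payload; actual Scraparr metric names may differ.
payload = """\
# HELP up Whether the last scrape of the target succeeded
# TYPE up gauge
up{service="sonarr"} 1
series_total{service="sonarr"} 42
"""

metrics = parse_metrics(payload)
print(metrics['up{service="sonarr"}'])  # 1.0
```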

Grafana Dashboards

The main dashboard is also available on Grafana Dashboards as Scraparr.

For further example Grafana dashboards, have a look at Dashboards.

Contributing

Contributions are welcome! Please feel free to open an issue or submit a pull request, and make sure to follow the contribution guidelines.

Important

Please fork from the dev branch to include any un-released changes.

You can set up a local testing environment using compose-dev.yaml, which mounts a config.save.yaml and the src folder inside the container, so you are not required to rebuild it each time.
Please note that if you change something in requirements.txt, you need to rebuild the image locally.
If you want to verify your code before pushing, to prevent the pipeline from failing, use the linter service inside the compose file, which builds the image and runs pylint against the code.

# remove the build arg if not needed
docker compose --file compose-dev.yaml up --build

License

This project is licensed under the GNU General Public License v3.0. See the LICENSE file for details.


___________.__           _________   _____ ____ ___ 
\__    ___/|  |__   ____ \_   ___ \_/ ____\    |   \
  |    |   |  |  \_/ __ \/    \  \/\   __\|    |   /
  |    |   |   Y  \  ___/\     \____|  |  |    |  / 
  |____|   |___|  /\___  >\______  /|__|  |______/  
                \/     \/        \/                 
