
Conversation


@Xmaster6y Xmaster6y commented Oct 8, 2025

What does this PR do?

Key insights about the PR.

Linked Issues

  • Closes #?
  • #?

Summary by Sourcery

Add support for a new optional callback init_attr_cache_in across attribution methods to enable custom initialization of the input cache before attribution.

New Features:

  • Introduce init_attr_cache_in parameter in the base attribution helper and all concrete methods to allow users to provide custom cache-initialization logic.

Enhancements:

  • Invoke init_attr_cache_in in the attribution function to preprocess the input cache and enforce that it returns a valid TensorDict.
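
A rough sketch of what such a callback might look like is shown below. This is an illustration only: the (cache_in, additional_init_tensors) signature and the TensorDict return contract are inferred from this PR's description and Sourcery's diagrams, not from tdhook's documentation, and the example preprocessing is arbitrary.

import torch
from tensordict import TensorDict


def my_init_attr_cache_in(cache_in: TensorDict, additional_init_tensors) -> TensorDict:
    """Hypothetical user callback: preprocess the input cache before attribution.

    The attributor requires the return value to be a TensorDict.
    """
    # Example preprocessing (arbitrary): merge the extra init tensors into the
    # cache and detach everything so attribution starts from fresh leaves.
    if additional_init_tensors is not None:
        cache_in = cache_in.update(additional_init_tensors)
    return cache_in.apply(torch.Tensor.detach)


# Self-contained check of the callback alone (no tdhook needed).
cache = TensorDict({"input": torch.randn(2, 3, requires_grad=True)}, batch_size=[2])
extra = TensorDict({"baseline": torch.zeros(2, 3)}, batch_size=[2])
print(my_init_attr_cache_in(cache, extra))

# With this PR, the same callable would be passed to any attribution method,
# e.g. Saliency(..., init_attr_cache_in=my_init_attr_cache_in)  (signature assumed).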


sourcery-ai bot commented Oct 8, 2025

Reviewer's Guide

Introduced a new optional init_attr_cache_in callback parameter across all attribution methods and integrated its application into the core attributor pipeline with type validation.

Sequence diagram for the new init_attr_cache_in callback in the attribution pipeline

sequenceDiagram
    participant "AttributionCommon"
    participant "init_attr_cache_in (callback)"
    participant "TensorDict"
    participant "additional_init_tensors"
    "AttributionCommon"->>"TensorDict": Get cache_in
    alt init_attr_cache_in is provided
        "AttributionCommon"->>"init_attr_cache_in (callback)": Call with cache_in, additional_init_tensors
        "init_attr_cache_in (callback)"-->>"AttributionCommon": Return new cache_in
        "AttributionCommon"->>"TensorDict": Validate type of cache_in
    end
    "AttributionCommon"->>"TensorDict": Continue attribution process with cache_in
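
In code, the branch in this diagram amounts to roughly the sketch below. The helper name is hypothetical; only the callback invocation and the TensorDict check mirror what the guide describes for src/tdhook/attribution/gradient_helpers/common.py.

from typing import Callable, Optional

from tensordict import TensorDict


def _maybe_init_cache_in(
    init_attr_cache_in: Optional[Callable],
    cache_in: TensorDict,
    additional_init_tensors,
) -> TensorDict:
    # Hypothetical paraphrase of the attributor-side logic, not the actual tdhook source.
    if init_attr_cache_in is not None:
        # Let the user callback rebuild or augment the input cache.
        cache_in = init_attr_cache_in(cache_in, additional_init_tensors)
        # Enforce the contract: the callback must hand back a TensorDict.
        if not isinstance(cache_in, TensorDict):
            raise ValueError("init_attr_cache_in must return a TensorDict")
    # Attribution then continues with the (possibly preprocessed) cache_in.
    return cache_in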

Class diagram for updated attribution method classes with init_attr_cache_in

classDiagram
    class AttributionCommon {
        - target_modules: Optional[List[str]]
        - init_attr_targets: Optional[Callable]
        - init_attr_inputs: Optional[Callable]
        - init_attr_cache_in: Optional[Callable]
        - init_attr_grads: Optional[Callable]
        - additional_init_keys: Optional[List[UnraveledKey]]
        - output_grad_callbacks: Optional[Dict[str, Callable]]
        - cache_callback: Optional[Callable]
        ...
    }
    class GradCAM {
        - target_modules: Optional[List[str]]
        - init_attr_targets: Optional[Callable]
        - init_attr_inputs: Optional[Callable]
        - init_attr_cache_in: Optional[Callable]
        - init_attr_grads: Optional[Callable]
        - additional_init_keys: Optional[List[UnraveledKey]]
        - output_grad_callbacks: Optional[Dict[str, Callable]]
        ...
    }
    class GuidedBackpropagation {
        - target_modules: Optional[List[str]]
        - init_attr_targets: Optional[Callable]
        - init_attr_inputs: Optional[Callable]
        - init_attr_cache_in: Optional[Callable]
        - init_attr_grads: Optional[Callable]
        - additional_init_keys: Optional[List[UnraveledKey]]
        - output_grad_callbacks: Optional[Dict[str, Callable]]
        ...
    }
    class IntegratedGradients {
        - target_modules: Optional[List[str]]
        - init_attr_targets: Optional[Callable]
        - init_attr_inputs: Optional[Callable]
        - init_attr_cache_in: Optional[Callable]
        - init_attr_grads: Optional[Callable]
        - additional_init_keys: Optional[List[UnraveledKey]]
        - output_grad_callbacks: Optional[Dict[str, Callable]]
        ...
    }
    class LRP {
        - target_modules: Optional[List[str]]
        - init_attr_targets: Optional[Callable]
        - init_attr_inputs: Optional[Callable]
        - init_attr_cache_in: Optional[Callable]
        - init_attr_grads: Optional[Callable]
        - additional_init_keys: Optional[List[UnraveledKey]]
        - output_grad_callbacks: Optional[Dict[str, Callable]]
        ...
    }
    class Saliency {
        - target_modules: Optional[List[str]]
        - init_attr_targets: Optional[Callable]
        - init_attr_inputs: Optional[Callable]
        - init_attr_cache_in: Optional[Callable]
        - init_attr_grads: Optional[Callable]
        - additional_init_keys: Optional[List[UnraveledKey]]
        - output_grad_callbacks: Optional[Dict[str, Callable]]
        ...
    }
    GradCAM --|> AttributionCommon
    GuidedBackpropagation --|> AttributionCommon
    IntegratedGradients --|> AttributionCommon
    LRP --|> AttributionCommon
    Saliency --|> AttributionCommon

File-Level Changes

Change: Extend attribution constructors to accept init_attr_cache_in and propagate it.
  Details:
  • Add init_attr_cache_in parameter to all __init__ signatures
  • Assign self._init_attr_cache_in in the common gradient helper
  • Pass init_attr_cache_in through super() calls in each attribution implementation
  Files:
  • src/tdhook/attribution/gradient_helpers/common.py
  • src/tdhook/attribution/grad_cam.py
  • src/tdhook/attribution/guided_backpropagation.py
  • src/tdhook/attribution/integrated_gradients.py
  • src/tdhook/attribution/lrp.py
  • src/tdhook/attribution/saliency.py

Change: Integrate and validate init_attr_cache_in in the attributor function.
  Details:
  • Invoke self._init_attr_cache_in(cache_in, additional_init_tensors) when provided
  • Ensure the returned value is a TensorDict or raise ValueError
  Files:
  • src/tdhook/attribution/gradient_helpers/common.py
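
To make the first change concrete, the constructor threading presumably looks roughly like the following; class and attribute names are taken from the diagrams above, but the real signatures carry many more parameters.

from typing import Callable, Optional


class AttributionCommon:
    """Simplified stand-in for the common gradient helper; it only stores the callback."""

    def __init__(self, init_attr_cache_in: Optional[Callable] = None, **kwargs):
        self._init_attr_cache_in = init_attr_cache_in


class Saliency(AttributionCommon):
    """Each concrete method simply forwards the new argument through super()."""

    def __init__(self, init_attr_cache_in: Optional[Callable] = None, **kwargs):
        super().__init__(init_attr_cache_in=init_attr_cache_in, **kwargs)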



@sourcery-ai sourcery-ai bot left a comment


Hey there - I've reviewed your changes and they look great!




codecov bot commented Oct 8, 2025

Codecov Report

❌ Patch coverage is 50.00000% with 3 lines in your changes missing coverage. Please review.
✅ Project coverage is 96.28%. Comparing base (cc34bff) to head (e2a3311).
⚠️ Report is 1 commit behind head on main.

Files with missing lines                            Patch %   Lines
src/tdhook/attribution/gradient_helpers/common.py   50.00%    3 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #21      +/-   ##
==========================================
- Coverage   96.42%   96.28%   -0.15%     
==========================================
  Files          30       30              
  Lines        1904     1909       +5     
==========================================
+ Hits         1836     1838       +2     
- Misses         68       71       +3     

☔ View full report in Codecov by Sentry.

@Xmaster6y Xmaster6y merged commit 4290509 into main Oct 8, 2025
5 of 7 checks passed
@Xmaster6y Xmaster6y deleted the cache branch October 8, 2025 14:08
