
Conversation

Collaborator

@gitttt-1234 gitttt-1234 commented May 3, 2024

Add VideoReader module to run inference on videos.
Issue #26
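As a rough illustration of the pattern such a module typically follows (a background thread that pushes decoded frames onto a bounded queue, with a sentinel marking the end of the stream), here is a minimal sketch. All names and the stand-in frame source below are hypothetical, not the actual sleap-nn API:

```python
import queue
import threading


class VideoReaderSketch(threading.Thread):
    """Read frames from a source in a background thread (illustrative only).

    Frames are pushed onto a bounded queue; a None sentinel marks the
    end of the stream so the consumer knows when to stop.
    """

    def __init__(self, frame_source, maxsize=4):
        super().__init__(daemon=True)
        self.frame_source = frame_source  # any iterable yielding frames
        self.frame_buffer = queue.Queue(maxsize=maxsize)

    def run(self):
        try:
            for idx, frame in enumerate(self.frame_source):
                self.frame_buffer.put({"frame_idx": idx, "image": frame})
        finally:
            self.frame_buffer.put(None)  # sentinel: no more frames


def iter_frames(reader):
    """Yield frames from the reader until the sentinel is seen."""
    while True:
        item = reader.frame_buffer.get()
        if item is None:
            return
        yield item


# Stand-in frame source (three fake "frames" instead of a real video).
reader = VideoReaderSketch(frame_source=["f0", "f1", "f2"])
reader.start()
images = [item["image"] for item in iter_frames(reader)]
reader.join()
print(images)  # ['f0', 'f1', 'f2']
```

In the real module the frame source would be a video file decoded frame by frame; the bounded queue keeps decoding from running arbitrarily far ahead of inference.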

Summary by CodeRabbit

  • New Features

    • Updated testing environments to include the latest operating systems for enhanced compatibility.
    • Introduced new pose estimation models with advanced configurations to improve accuracy.
    • Added functions for image conversion and resizing to provide more flexibility in data processing.
  • Bug Fixes

    • Adjusted data processing pipelines to ensure correct output shapes and data integrity.
  • Documentation

    • Updated configuration documentation to reflect new features and settings for inference, data handling, and model training.

gitttt-1234 and others added 30 commits December 28, 2023 18:37
* Add swint

* Add param count

* Update model_trainer.py

* Add swintv1

* Change default width

* modify inference class

* Update model_trainer.py

* Update model_trainer.py

* Update xavier init

* Update bias init in xavier

* Add steps per epoch

* Add tests for swint

* Format files

* Fix instance cropper test

* Fix to sigma in confmaps

* Add Centroid model pipeline (#42)

* Add Centroid model pipeline

* Auto select max instances

* Add test cases

* Format files

* Fix centroid crop inference

* Update CI (#43)

* Initial CI update

* Update deps

* Add Cycler from torchdata to remove torchdata dep

* Fix import

* Fix ruff and lint

* Fix wandb test

* Fix attrs

* Ignore vendored torchdata module

* Add Mac

* Update mac conda versions

* More conda

* Typo: exclude cycleR from coverage

* More conda packages

* Even more conda packages

---------

Co-authored-by: Talmo Pereira <[email protected]>

---------

Co-authored-by: Talmo Pereira <[email protected]>

codecov bot commented May 3, 2024

Codecov Report

Attention: Patch coverage is 96.55172%, with 10 lines in your changes missing coverage. Please review.

Project coverage is 96.73%. Comparing base (f093ce2) to head (7a88cd3).
Report is 1 commit behind head on main.

Files Patch % Lines
sleap_nn/inference/inference.py 94.59% 8 Missing ⚠️
sleap_nn/data/normalization.py 91.66% 1 Missing ⚠️
sleap_nn/data/providers.py 97.67% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main      #45      +/-   ##
==========================================
+ Coverage   96.64%   96.73%   +0.09%     
==========================================
  Files          23       26       +3     
  Lines        1818     2423     +605     
==========================================
+ Hits         1757     2344     +587     
- Misses         61       79      +18     

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@gitttt-1234 gitttt-1234 changed the title Divya/video reader Add VideoReader May 3, 2024
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

Out of diff range and nitpick comments (15)
tests/data/test_augmentation.py (1)

Line range hint 10-10: Add a docstring to test_uniform_noise to explain its purpose and usage.

tests/data/test_instance_cropping.py (1)

Line range hint 10-10: Add a docstring to test_make_centered_bboxes to explain its purpose and usage.

tests/architectures/test_model.py (3)

Line range hint 98-98: Add a docstring to the test_get_head function to explain its purpose and usage.


Line range hint 118-118: Add a docstring to the first instance of test_unet_model to explain its purpose and usage.


Line range hint 118-118: Add a docstring to the second instance of test_unet_model to explain its purpose and usage.

sleap_nn/data/providers.py (1)

Line range hint 30-74: Consider adding type hints for better code clarity and static analysis.

- def __init__(
+ def __init__(self: LabelsReader,
sleap_nn/data/confidence_maps.py (1)

Line range hint 154-203: Add documentation for public methods to improve code maintainability.

+    """Generate confidence maps for each example."""
sleap_nn/data/pipelines.py (1)

Line range hint 129-173: Add error handling for data provider failures to improve robustness.

+    try:
+        $$$
+    except Exception as e:
+        handle_error(e)
tests/fixtures/datasets.py (1)

Line range hint 42-258: Refactor configuration to use environment variables for paths to improve flexibility.

-                    "labels_path": f"{sleap_data_dir}/minimal_instance.pkg.slp",
+                    "labels_path": os.getenv('LABELS_PATH', f"{sleap_data_dir}/minimal_instance.pkg.slp"),
tests/test_model_trainer.py (1)

Line range hint 234-291: Add a docstring to test_topdown_centered_instance_model to explain its purpose and functionality.

+    """Test TopDownCenteredInstanceModel configurations and output."""
sleap_nn/evaluation.py (4)

Line range hint 36-72: Consider refactoring to reduce complexity and improve efficiency, especially in handling user_labels_only.


Line range hint 92-176: Improve handling of missing data to ensure robustness in compute_oks.


Line range hint 178-260: Consider using more efficient data structures for handling instances in match_instances to improve performance.


Line range hint 412-563: Improve exception handling in the Evaluator class to provide more specific error messages and recovery options.

sleap_nn/inference/inference.py (1)

[!TIP]
Codebase Verification

Several files in the codebase still lack unit tests for the updated _predict_generator method.

  • The method modifications to handle both LabelsReader and VideoReader are not covered by unit tests, which could lead to untested behavior in production.
Analysis chain

Line range hint 113-175: The _predict_generator method has been significantly modified to handle different data providers. Ensure that these changes are reflected in the unit tests to cover both LabelsReader and VideoReader scenarios.
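The kind of provider dispatch being reviewed here might look roughly like the sketch below: one generator branching on whether the provider carries ground-truth labels. All class and key names are hypothetical stand-ins, not the actual sleap-nn implementation.

```python
class LabelsReaderStub:
    """Stand-in for a labeled-data provider (hypothetical names)."""

    def __init__(self, examples):
        self.examples = examples

    def __iter__(self):
        return iter(self.examples)


class VideoReaderStub:
    """Stand-in for a raw-video provider (hypothetical names)."""

    def __init__(self, frames):
        self.frames = frames

    def __iter__(self):
        return ({"frame_idx": i, "image": f} for i, f in enumerate(self.frames))


def predict_generator(provider, model):
    """Yield predictions, dispatching on the provider type."""
    if isinstance(provider, LabelsReaderStub):
        # Labeled data: examples also carry ground-truth instances.
        for ex in provider:
            yield {"frame_idx": ex["frame_idx"],
                   "pred": model(ex["image"]),
                   "gt": ex["instances"]}
    elif isinstance(provider, VideoReaderStub):
        # Raw video: predictions only, no ground truth available.
        for ex in provider:
            yield {"frame_idx": ex["frame_idx"], "pred": model(ex["image"])}
    else:
        raise TypeError(f"Unsupported provider: {type(provider).__name__}")


model = lambda img: f"pred({img})"  # stand-in for a trained model
video_results = list(predict_generator(VideoReaderStub(["a", "b"]), model))
print([r["pred"] for r in video_results])  # ['pred(a)', 'pred(b)']
```

A unit test covering both branches would exercise exactly the gap the verification above flags: one case driving a `LabelsReaderStub` and asserting the `"gt"` key is present, and one driving a `VideoReaderStub` and asserting it is absent.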

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify if there are unit tests covering both data providers in `_predict_generator`.

# Test: Search for unit tests related to `_predict_generator` handling both `LabelsReader` and `VideoReader`. Expect: Matches.
rg --lang python '_predict_generator' --files-with-matches

Length of output: 467



Script:

#!/bin/bash
# Description: Verify if there are unit tests covering both data providers in `_predict_generator`.

# Test: Search for unit tests related to `_predict_generator` handling both `LabelsReader` and `VideoReader`. Expect: Matches.
rg '_predict_generator' --files-with-matches

Length of output: 76

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 9

Out of diff range and nitpick comments (7)
tests/data/test_instance_cropping.py (1)

Line range hint 10-10: Add a docstring to explain the purpose and functionality of the function.

tests/test_model_trainer.py (3)

Line range hint 22-59: Add a docstring to explain the purpose and functionality of the function.


321-321: Add a docstring to explain the purpose and functionality of the function.


233-233: Add a docstring to explain the purpose and functionality of the function.

tests/data/test_pipelines.py (2)

60-60: Add a docstring to explain the purpose and functionality of the function.


Line range hint 195-195: Add a docstring to explain the purpose and functionality of the function.

tests/inference/test_inference.py (1)

Line range hint 1-1: Please add a module-level docstring to describe the purpose and scope of this test module.

@gitttt-1234 gitttt-1234 changed the base branch from main to divya/convnext May 4, 2024 18:55
Base automatically changed from divya/convnext to main May 7, 2024 17:37
@talmo talmo changed the base branch from main to divya/inference May 7, 2024 17:39
@talmo talmo changed the base branch from divya/inference to main May 7, 2024 17:39
@talmo talmo linked an issue May 7, 2024 that may be closed by this pull request
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Out of diff range and nitpick comments (5)
tests/inference/test_inference.py (2)

Line range hint 1-1: Add a module-level docstring to describe the purpose and scope of this test module.


41-41: Ensure there is a blank line after the function docstring for better readability and adherence to PEP 257.

sleap_nn/inference/inference.py (3)

221-221: Add a blank line before the "Attributes" section for better readability.


808-808: Add a blank line after the "Returns" section for better readability.


813-813: Add a blank line before the "Notes" section for better readability.

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits Files that changed from the base of the PR and between 691f545 and ca1b802.
Files selected for processing (8)
  • docs/config.md (1 hunks)
  • sleap_nn/data/normalization.py (2 hunks)
  • sleap_nn/data/providers.py (4 hunks)
  • sleap_nn/inference/inference.py (20 hunks)
  • tests/data/test_instance_centroids.py (1 hunks)
  • tests/data/test_providers.py (2 hunks)
  • tests/data/test_resizing.py (1 hunks)
  • tests/inference/test_inference.py (15 hunks)
Files skipped from review as they are similar to previous changes (1)
  • sleap_nn/data/normalization.py
Additional Context Used
LanguageTool (68)
docs/config.md (68)

Near line 6: Loose punctuation mark.
Context: ...ing a data pipeline. - 2. model_config: Initialise the sleap-nn backbone and he...


Near line 7: Loose punctuation mark.
Context: ...e and head models. - 3. trainer_config: Hyperparameters required to train the m...


Near line 8: Loose punctuation mark.
Context: ... with Lightning. - 4. inference_config: Inference related configs. Note:...


Near line 12: Loose punctuation mark.
Context: ... for val_data_loader. - data_config: - provider: (str) Provider class...


Near line 13: You might be missing the article “the” here.
Context: ...iles. Only "LabelsReader" supported for training pipeline. - pipeline: (str) Pipel...


Near line 17: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 18: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 19: You might be missing the article “the” here.
Context: ...s replicated along the channel axis. If input has three channels if this is s...


Near line 19: Possible missing comma found.
Context: ...ng the channel axis. If input has three channels if this is set to False, then w...


Near line 20: You might be missing the article “a” here.
Context: ... to False, then we convert the image to grayscale (single-channel) image. ...


Near line 26: Loose punctuation mark.
Context: ...Default: None. - preprocessing: - anchor_ind: (int) Index...


Near line 27: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 29: Possible missing comma found.
Context: ...space. Larger values are easier to learn but are less precise with respect to the pe...


Near line 29: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 30: Loose punctuation mark.
Context: ...ion. - augmentation_config: - random crop: (Dict[...


Near line 48: After the expression ‘for example’ a comma is usually used.
Context: ...rizontal and vertical translations. For example translate=(a, b), then horizontal shift...


Near line 53: A determiner appears to be missing. Consider inserting it.
Context: ...float) min-max value of mixup strength. Default is 0-1. Default: None. ...


Near line 58: Loose punctuation mark.
Context: ... to train structure) - model_config: - init_weight: (str) model weigh...


Near line 59: You might be missing the article “the” here.
Context: ...niform initialization and "xavier" uses Xavier initialization method. - `pre_train...


Near line 61: Loose punctuation mark.
Context: ...win_B_Weights"]. - backbone_config: - backbone_type: (str) Backbo...


Near line 64: A determiner appears to be missing. Consider inserting it.
Context: ...nnels: (int) Number of input channels. Default is 1. - kernel_size`: (int...


Near line 65: A determiner appears to be missing. Consider inserting it.
Context: ...int) Size of the convolutional kernels. Default is 3. - filters: (int) Ba...


Near line 66: A determiner appears to be missing. Consider inserting it.
Context: ... Base number of filters in the network. Default is 32 - filters_rate: (fl...


Near line 68: A determiner appears to be missing. Consider inserting it.
Context: ...: (int) Number of downsampling blocks. Default is 4. - up_blocks`: (int) ...


Near line 69: A determiner appears to be missing. Consider inserting it.
Context: ...er of upsampling blocks in the decoder. Default is 3. - convs_per_block: ...


Near line 70: A determiner appears to be missing. Consider inserting it.
Context: ...mber of convolutional layers per block. Default is 2. - backbone_config: (for...


Near line 75: A determiner appears to be missing. Consider inserting it.
Context: ...onvolutional kernels in the stem layer. Default is 4. - stem_patch_stride...


Near line 76: A determiner appears to be missing. Consider inserting it.
Context: ...Convolutional stride in the stem layer. Default is 2. - in_channels: (int...


Near line 77: A determiner appears to be missing. Consider inserting it.
Context: ...nnels: (int) Number of input channels. Default is 1. - kernel_size`: (int...


Near line 78: A determiner appears to be missing. Consider inserting it.
Context: ...int) Size of the convolutional kernels. Default is 3. - filters: (int) Ba...


Near line 79: A determiner appears to be missing. Consider inserting it.
Context: ... Base number of filters in the network. Default is 32 - filters_rate: (fl...


Near line 81: A determiner appears to be missing. Consider inserting it.
Context: ...er of upsampling blocks in the decoder. Default is 3. - convs_per_block: ...


Near line 82: A determiner appears to be missing. Consider inserting it.
Context: ...mber of convolutional layers per block. Default is 2. - backbone_config: (for...


Near line 83: A determiner appears to be missing. Consider inserting it.
Context: ... - backbone_config: (for SwinT. Default is Tiny architecture.) - ...


Near line 85: A determiner appears to be missing. Consider inserting it.
Context: ...em_stride: (int) Stride for the patch. Default is 2. - embed_dim`: (int) ...


Near line 90: A determiner appears to be missing. Consider inserting it.
Context: ...nnels: (int) Number of input channels. Default is 1. - kernel_size`: (int...


Near line 91: A determiner appears to be missing. Consider inserting it.
Context: ...int) Size of the convolutional kernels. Default is 3. - filters_rate: (fl...


Near line 93: A determiner appears to be missing. Consider inserting it.
Context: ...er of upsampling blocks in the decoder. Default is 3. - convs_per_block: ...


Near line 94: A determiner appears to be missing. Consider inserting it.
Context: ...mber of convolutional layers per block. Default is 2. - head_configs - `h...


Near line 99: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 100: Possible missing comma found.
Context: ...space. Larger values are easier to learn but are less precise with respect to the pe...


Near line 100: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 111: You might be missing the article “the” here.
Context: ...ds on the set value for num_workers. If value of num_workers=0 default is None. Other...


Near line 111: You might be missing the article “the” here.
Context: ...orkers=0 default is None. Otherwise, if value of num_workers > 0 default is 2). -...


Near line 114: Possible typo: you repeated a word
Context: ...ease note that the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and...


Near line 114: Possible typo: you repeated a word
Context: ... the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and the callback is...


Near line 116: Unpaired symbol: ‘'’ seems to be missing
Context: ...l contain the metric name. For example, filename='checkpoint_{epoch:02d}-{acc:02.0f} with...


Near line 117: Possible missing comma found.
Context: ... - monitor: (str) Quantity to monitor for e.g., "val_loss". When None, this saves...


Near line 144: Possible missing comma found.
Context: ...onitored has stopped decreasing; in max mode it will be reduced when the quantity mo...


Near line 148: Possible missing comma found.
Context: ...tience`: (int) Number of epochs with no improvement after which learning rate will be reduc...


Near line 150: Possible missing comma found.
Context: ...arning rate of all param groups or each group respectively. Default: 0. - `inferen...


Near line 152: Loose punctuation mark.
Context: ...ely. Default: 0. - inference_config: - device: (str) Device on which t...


Near line 154: Loose punctuation mark.
Context: ... "ideep", "hip", "msnpu"). - data: - path: (str) Path to .slp ...


Near line 160: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 161: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 162: You might be missing the article “the” here.
Context: ...s replicated along the channel axis. If input has three channels if this is s...


Near line 162: Possible missing comma found.
Context: ...ng the channel axis. If input has three channels if this is set to False, then w...


Near line 163: You might be missing the article “a” here.
Context: ... to False, then we convert the image to grayscale (single-channel) image. ...


Near line 166: You might be missing the article “the” here.
Context: ...ta_loader) This section is used only if provider is LabelsReader. - `video_loa...


Near line 167: You might be missing the article “the” here.
Context: ...er. - video_loader: (only if provider is VideoReader). - batch...


Near line 171: You might be missing the article “the” here.
Context: ...d index of the frames to read. If None, length of the video is set as the ...


Near line 173: Loose punctuation mark.
Context: ... the default. - preprocessing: - anchor_ind: (int) Inde...


Near line 174: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 177: Loose punctuation mark.
Context: ...atial resolution. - peak_threshold: float between 0 and 1. Minimum confid...


Near line 177: The modal verb ‘will’ requires the verb’s base form.
Context: ...hold. Peaks with values below this will ignored. - integral_refinement: If None...


Near line 178: Loose punctuation mark.
Context: ...ill ignored. - integral_refinement: If None, returns the grid-aligned pea...


Near line 179: Loose punctuation mark.
Context: ... regression. - integral_patch_size: Size of patches to crop around each rou...


Near line 180: Loose punctuation mark.
Context: ... integer scalar. - return_confmaps: If True, predicted confidence maps wi...

Ruff (7)
sleap_nn/inference/inference.py (3)

221-221: Missing blank line before section ("Attributes")


808-808: Missing blank line after section ("Returns")


813-813: Missing blank line before section ("Notes")

tests/data/test_instance_centroids.py (1)

1-1: Missing docstring in public module

tests/data/test_providers.py (1)

1-1: Missing docstring in public module

tests/data/test_resizing.py (1)

1-1: Missing docstring in public module

tests/inference/test_inference.py (1)

1-1: Missing docstring in public module

Additional comments not posted (16)
tests/data/test_resizing.py (1)

12-12: Updated docstring provides better clarity on what the test covers.

tests/data/test_instance_centroids.py (1)

11-11: Updated docstring provides better clarity on what the test covers.

tests/data/test_providers.py (2)

3-3: Import of VideoReader is correctly added to support new functionality.


25-25: Introduction of test_videoreader_provider effectively tests the new VideoReader class.

sleap_nn/data/providers.py (2)

44-44: Proper handling of instance keys in LabelsReader enhances flexibility in data processing.


192-193: Refactoring error handling to use logging improves the production readiness of the VideoReader.

tests/inference/test_inference.py (7)

153-176: Ensure that the VideoReader is properly integrated and tested in different scenarios as intended.


Line range hint 205-298: The tests for TopDownInferenceModel are comprehensive, covering various scenarios and configurations.


Line range hint 328-377: The tests for FindInstancePeaksGroundTruth effectively validate the model's accuracy using ground truth data.


Line range hint 384-406: The tests for FindInstancePeaks are thorough, covering different configurations and validating the presence of expected output keys.


Line range hint 428-497: The tests for SingleInstanceInferenceModel are comprehensive, ensuring the model handles single instance predictions correctly.


Line range hint 502-614: The tests for SingleInstancePredictor cover various data providers and configurations, ensuring the predictor's robustness.


Line range hint 615-660: The tests for CentroidCrop effectively validate the model's ability to handle centroid data and produce expected outputs.

sleap_nn/inference/inference.py (3)

Line range hint 27-45: Ensure that the model configuration loading and predictor instantiation are robust against potential file path or configuration errors.


Line range hint 673-883: Review the initialization and error handling in the make_pipeline method to ensure robustness, especially in the handling of the provider attribute and the initialization of data loaders.

Verification successful

The review of the make_pipeline method in the TopDownPredictor class has been completed based on the provided code snippet and the additional context from the shell script output. The method appears to handle different provider types (LabelsReader and VideoReader) and initializes data pipelines accordingly. It also includes error handling for cases where the necessary configurations or models are not loaded, which is crucial for robustness.

Here are the key observations:

  • The method checks the provider attribute and initializes different pipelines based on whether the provider is LabelsReader or VideoReader.
  • For LabelsReader, it sets up a data loading pipeline that returns a Torch DataLoader.
  • For VideoReader, it does not return a pipeline but instead starts a thread for reading video frames.
  • The method includes error handling to ensure that both models are loaded when predicting on non-ground-truth data, which is a good practice for robustness.

Given these observations, the method seems to be implemented with appropriate checks and balances to handle different scenarios effectively.

Final Conclusion:
The implementation of the make_pipeline method in the TopDownPredictor class is robust and handles different scenarios appropriately, as per the requirements specified in the review comment.
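The dispatch behavior summarized above can be sketched as follows. This is a simplified illustration of the described control flow, not the actual `TopDownPredictor.make_pipeline` code; the stub classes and return values are hypothetical.

```python
import threading


class LabelsReaderStub:
    """Stand-in labels provider (hypothetical)."""


class VideoReaderStub(threading.Thread):
    """Stand-in threaded video provider (hypothetical)."""

    def run(self):
        pass  # the frame-reading loop would live here


def make_pipeline(provider):
    """Mirror the behavior described above: return a batched pipeline
    for labeled data, or start the reader thread (returning None) for
    video, where frames are consumed from the thread's queue instead.
    """
    if isinstance(provider, LabelsReaderStub):
        return ["batch0", "batch1"]  # stands in for a torch DataLoader
    if isinstance(provider, VideoReaderStub):
        provider.start()  # no pipeline is returned for video input
        return None
    raise ValueError(f"Unsupported provider: {type(provider).__name__}")


loader = make_pipeline(LabelsReaderStub())
video = VideoReaderStub()
pipeline = make_pipeline(video)
video.join()
print(loader, pipeline)  # ['batch0', 'batch1'] None
```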

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify the robustness of the `make_pipeline` method.

# Test: Check for proper error handling and initialization logic.
rg --type py 'make_pipeline' -A 20 -B 20

Length of output: 9920


Line range hint 219-239: Ensure that the documentation for the CentroidCrop class accurately reflects its functionality and usage, especially the return_crops attribute.

Verification successful

The search results from the rg command indicate that the return_crops attribute is indeed used within the CentroidCrop class and its functionality is documented in the code. The attribute controls whether the output dictionary includes the cropped image data (instance_image). This behavior is consistent across various parts of the codebase, including test scripts and the main inference module.

The documentation provided in the snippet from the sleap_nn/inference/inference.py file matches the actual usage of the return_crops attribute in the code. The attribute is used to conditionally add the cropped images to the output dictionary, which is aligned with the description in the documentation.

Given this information, it appears that the documentation for the return_crops attribute in the CentroidCrop class accurately reflects its implementation and usage in the codebase.

Final Conclusion:
The review comment concerning the accuracy of the documentation for the return_crops attribute in the CentroidCrop class is verified to be correct. The attribute is both implemented in the code and described accurately in the documentation.
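The conditional behavior being verified here can be sketched as below. The function and key names are illustrative stand-ins for the documented behavior, not the actual `CentroidCrop` code:

```python
def centroid_crop_output(example, crops, return_crops=True):
    """Build the output dict, optionally attaching cropped images.

    When `return_crops` is True, the (hypothetical) output carries the
    cropped instance images under "instance_image"; otherwise only the
    centroid data is returned.
    """
    out = {"frame_idx": example["frame_idx"],
           "centroids": example["centroids"]}
    if return_crops:
        out["instance_image"] = crops  # included only when requested
    return out


example = {"frame_idx": 0, "centroids": [(10.0, 20.0)]}
with_crops = centroid_crop_output(example, crops=["crop0"], return_crops=True)
no_crops = centroid_crop_output(example, crops=["crop0"], return_crops=False)
print("instance_image" in with_crops, "instance_image" in no_crops)  # True False
```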

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify the accuracy of the documentation for the `CentroidCrop` class.

# Test: Search for mentions of `return_crops` in the documentation and code.
rg --type py 'return_crops' -C 5

Length of output: 9523

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Out of diff range and nitpick comments (2)
sleap_nn/inference/inference.py (2)

Line range hint 28-52: Consider handling the case where model_names might not contain expected head types.

+        if not any(head in model_names for head in ["SingleInstanceConfmapsHead", "CentroidConfmapsHead", "CenteredInstanceConfmapsHead"]):
+            raise ValueError("None of the expected head types found in model configurations.")

Line range hint 113-179: Ensure proper error handling and resource management in the video reading loop.

+        try:
+            # Attempt to start the video reader and process frames
+            self.reader.start()
+            done = False
+            while not done:
+                # Process each frame
+                ...
+        except Exception as e:
+            self.logger.error(f"Error in VideoReader: {e}")  # Improved error logging
+            raise
+        finally:
+            self.reader.join()  # Ensure the reader thread is properly joined
Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits Files that changed from the base of the PR and between ca1b802 and 353ef12.
Files selected for processing (3)
  • sleap_nn/inference/inference.py (20 hunks)
  • tests/data/test_providers.py (2 hunks)
  • tests/inference/test_inference.py (15 hunks)
Additional Context Used
Ruff (6)
sleap_nn/inference/inference.py (4)

221-221: Missing blank line before section ("Attributes")


424-424: No blank lines allowed after function docstring (found 1)


808-808: Missing blank line after section ("Returns")


813-813: Missing blank line before section ("Notes")

tests/data/test_providers.py (1)

1-1: Missing docstring in public module

tests/inference/test_inference.py (1)

1-1: Missing docstring in public module

Additional comments not posted (5)
tests/data/test_providers.py (1)

Line range hint 3-23: The test_providers function correctly tests the LabelsReader module. Good use of assertions to validate the functionality.

tests/inference/test_inference.py (4)

153-199: The test_topdown_predictor function correctly tests the TopDownPredictor class for running inference on centroid and centered instance models. Good use of assertions to validate the functionality.


Line range hint 206-299: The test_topdown_inference_model function correctly tests the TopDownInferenceModel class for centroid and centered model inferences. Good use of assertions to validate the functionality.


Line range hint 329-378: The test_find_instance_peaks_groundtruth function correctly tests the FindInstancePeaksGroundTruth class for running inference on centroid model without centered instance model. Good use of assertions to validate the functionality.


Line range hint 385-498: The test_find_instance_peaks function correctly tests the FindInstancePeaks class to run inference on the Centered instance model. Good use of assertions to validate the functionality.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 13

Out of diff range and nitpick comments (5)
sleap_nn/data/pipelines.py (1)

12-12: Ensure imports are organized and unused imports are removed to maintain code cleanliness.

tests/test_model_trainer.py (2)

Line range hint 70-70: Add a docstring to test_trainer to explain its purpose and functionality.

+    """Test the training process with different configurations."""

Line range hint 233-233: Add a docstring to test_topdown_centered_instance_model to describe its testing scope.

+    """Test the TopDownCenteredInstanceModel with various configurations."""
tests/data/test_pipelines.py (1)

Line range hint 1-1: Add a docstring to the module to describe its purpose and contents.

+    """Module tests for data pipelines in SLEAP."""
tests/inference/test_inference.py (1)

Line range hint 1-1: Add a module-level docstring to describe the purpose and contents of this test module.

+ """Module to test inference functionalities in the sleap_nn project."""
Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits Files that changed from the base of the PR and between 353ef12 and 9be464b.
Files selected for processing (20)
  • docs/config.md (1 hunks)
  • sleap_nn/architectures/convnext.py (3 hunks)
  • sleap_nn/architectures/swint.py (3 hunks)
  • sleap_nn/data/instance_cropping.py (3 hunks)
  • sleap_nn/data/pipelines.py (10 hunks)
  • sleap_nn/data/providers.py (4 hunks)
  • sleap_nn/data/resizing.py (3 hunks)
  • sleap_nn/inference/inference.py (34 hunks)
  • sleap_nn/inference/peak_finding.py (1 hunks)
  • sleap_nn/model_trainer.py (1 hunks)
  • tests/assets/minimal_instance/training_config.yaml (4 hunks)
  • tests/assets/minimal_instance_centroid/training_config.yaml (4 hunks)
  • tests/data/test_confmaps.py (3 hunks)
  • tests/data/test_instance_cropping.py (4 hunks)
  • tests/data/test_pipelines.py (20 hunks)
  • tests/data/test_providers.py (2 hunks)
  • tests/data/test_resizing.py (1 hunks)
  • tests/fixtures/datasets.py (5 hunks)
  • tests/inference/test_inference.py (16 hunks)
  • tests/test_model_trainer.py (3 hunks)
Files skipped from review as they are similar to previous changes (5)
  • sleap_nn/architectures/convnext.py
  • sleap_nn/architectures/swint.py
  • sleap_nn/inference/peak_finding.py
  • sleap_nn/model_trainer.py
  • tests/assets/minimal_instance_centroid/training_config.yaml
Additional Context Used
LanguageTool (39)
docs/config.md (39)

Near line 6: Loose punctuation mark.
Context: ...ing a data pipeline. - 2. model_config: Initialise the sleap-nn backbone and he...


Near line 7: Loose punctuation mark.
Context: ...e and head models. - 3. trainer_config: Hyperparameters required to train the m...


Near line 8: Loose punctuation mark.
Context: ... with Lightning. - 4. inference_config: Inference related configs. Note:...


Near line 10: Possible missing comma found.
Context: ...ta_config is used for validation set as well with the key: val. Similarly, the str...


Near line 12: Loose punctuation mark.
Context: ... for val_data_loader. - data_config: - provider: (str) Provider class...


Near line 17: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 18: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 19: Possible missing article found.
Context: ...s replicated along the channel axis. If input has three channels if this is s...


Near line 19: Possible missing comma found.
Context: ...ng the channel axis. If input has three channels if this is set to False, then w...


Near line 27: Loose punctuation mark.
Context: ...e same factor. - preprocessing: - anchor_ind: (int) Index...


Near line 28: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 30: Possible missing comma found.
Context: ...space. Larger values are easier to learn but are less precise with respect to the pe...


Near line 30: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 31: Loose punctuation mark.
Context: ...ion. - augmentation_config: - random crop: (Dict[...


Near line 49: After the expression ‘for example’ a comma is usually used.
Context: ...rizontal and vertical translations. For example translate=(a, b), then horizontal shift...


Near line 59: Loose punctuation mark.
Context: ... to train structure) - model_config: - init_weight: (str) model weigh...


Near line 62: Loose punctuation mark.
Context: ...win_B_Weights"]. - backbone_config: - backbone_type: (str) Backbo...


Near line 102: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 103: Possible missing comma found.
Context: ...space. Larger values are easier to learn but are less precise with respect to the pe...


Near line 103: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 113: Possible missing article found.
Context: ...he batch size. If False and the size of dataset is not divisible by the batch size, the...


Near line 117: Possible typo: you repeated a word
Context: ...ease note that the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and...


Near line 117: Possible typo: you repeated a word
Context: ... the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and the callback is...


Near line 119: Unpaired symbol: ‘'’ seems to be missing
Context: ...l contain the metric name. For example, filename='checkpoint_{epoch:02d}-{acc:02.0f} with...


Near line 147: Possible missing comma found.
Context: ...onitored has stopped decreasing; in max mode it will be reduced when the quantity mo...


Near line 151: Possible missing comma found.
Context: ...tience`: (int) Number of epochs with no improvement after which learning rate will be reduc...


Near line 155: Loose punctuation mark.
Context: ...ely. Default: 0. - inference_config: - device: (str) Device on which t...


Near line 157: Loose punctuation mark.
Context: ... "ideep", "hip", "msnpu"). - data: - path: (str) Path to .slp ...


Near line 164: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 165: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 166: Possible missing article found.
Context: ...s replicated along the channel axis. If input has three channels if this is s...


Near line 166: Possible missing comma found.
Context: ...ng the channel axis. If input has three channels if this is set to False, then w...


Near line 177: Loose punctuation mark.
Context: ... the default. - preprocessing: - anchor_ind: (int) Inde...


Near line 178: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 181: Loose punctuation mark.
Context: ...atial resolution. - peak_threshold: float between 0 and 1. Minimum confid...


Near line 181: The modal verb ‘will’ requires the verb’s base form.
Context: ...hold. Peaks with values below this will ignored. - integral_refinement: If None...


Near line 182: Loose punctuation mark.
Context: ...ill ignored. - integral_refinement: If None, returns the grid-aligned pea...


Near line 183: Loose punctuation mark.
Context: ... regression. - integral_patch_size: Size of patches to crop around each rou...


Near line 184: Loose punctuation mark.
Context: ... integer scalar. - return_confmaps: If True, predicted confidence maps wi...

Ruff (16)
sleap_nn/inference/inference.py (1)

729-729: No blank lines allowed after function docstring (found 1)

tests/data/test_confmaps.py (3)

1-1: Missing docstring in public module


18-18: Missing docstring in public function


69-69: Missing docstring in public function

tests/data/test_instance_cropping.py (2)

1-1: Missing docstring in public module


10-10: Missing docstring in public function

tests/data/test_pipelines.py (3)

1-1: Missing docstring in public module


95-95: Missing docstring in public function


238-238: Missing docstring in public function

tests/data/test_providers.py (1)

1-1: Missing docstring in public module

tests/data/test_resizing.py (1)

1-1: Missing docstring in public module

tests/fixtures/datasets.py (1)

40-40: Missing docstring in public function

tests/inference/test_inference.py (1)

1-1: Missing docstring in public module

tests/test_model_trainer.py (3)

70-70: Missing docstring in public function


233-233: Missing docstring in public function


322-322: Missing docstring in public function

Additional comments not posted (18)
tests/data/test_providers.py (1)

24-24: New assertion correctly checks the max_height_and_width property.

tests/data/test_resizing.py (3)

10-41: New test cases correctly handle dimension constraints in SizeMatcher.


43-57: New test cases correctly test the functionality of Resizer.


60-81: New test cases correctly test the functionality of PadToStride.

tests/data/test_instance_cropping.py (2)

Line range hint 28-70: New test cases correctly test the functionality of InstanceCropper.


71-86: New test cases correctly test the functionality of InstanceCropper with resizing and padding.

tests/data/test_confmaps.py (2)

Line range hint 1-34: New test cases correctly test the functionality of ConfidenceMapGenerator.


Line range hint 35-121: New test cases correctly test the functionality of MultiConfidenceMapGenerator with and without centroids.

tests/assets/minimal_instance/training_config.yaml (1)

Line range hint 1-217: Configuration settings are correctly specified and cover various aspects of the training process.

sleap_nn/data/instance_cropping.py (1)

Line range hint 1-113: InstanceCropper class is well-implemented and the methods are correctly specified.

sleap_nn/data/providers.py (1)

Line range hint 1-209: LabelsReader and VideoReader classes are well-implemented and the methods are correctly specified.

sleap_nn/data/resizing.py (1)

Line range hint 1-208: Resizer, PadToStride, and SizeMatcher classes are well-implemented and the methods are correctly specified.

tests/fixtures/datasets.py (1)

33-36: Ensure error handling for file-not-found exceptions.

sleap_nn/data/pipelines.py (2)

Line range hint 29-71: Verify the logic in make_training_pipeline to ensure that data transformations and augmentations are correctly applied based on the configuration.

Verification successful

The make_training_pipeline method is indeed present in the sleap_nn/data/pipelines.py file, as confirmed by the rg search results. The method is designed to create a training pipeline with input data only, and it involves several transformations and augmentations based on the configuration provided.

Given this confirmation, the logic within the make_training_pipeline method appears to be aligned with the initial review comment, which asked to verify that data transformations and augmentations are correctly applied based on the configuration. The method uses various components like Normalizer, SizeMatcher, Resizer, KorniaAugmenter, and InstanceCropper, all of which are configured based on data_config.

Since the method's implementation details align with the expected behavior described in the review comment, the verification is successful.

Final Response:

The make_training_pipeline method in sleap_nn/data/pipelines.py correctly implements the data transformations and augmentations as per the provided configuration. The method's logic has been verified and found to be consistent with the expectations.

  • File: sleap_nn/data/pipelines.py
  • Method: make_training_pipeline
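The chained-transform structure described in the verification can be illustrated with a minimal sketch. The `Normalizer` and `Resizer` below are hypothetical stand-ins, not the real sleap_nn classes, which are configured from `data_config` and operate on tensors:

```python
# Hypothetical stand-ins for the pipeline components named above.
class Normalizer:
    def __call__(self, ex):
        ex["image"] = [px / 255.0 for px in ex["image"]]  # scale pixels to [0, 1]
        return ex

class Resizer:
    def __init__(self, scale):
        self.scale = scale

    def __call__(self, ex):
        ex["scale"] = self.scale  # record the resize factor on the example
        return ex

def make_training_pipeline(transforms, example):
    # Apply each transform in order, mirroring how the real method
    # chains its configured components.
    for t in transforms:
        example = t(example)
    return example

out = make_training_pipeline([Normalizer(), Resizer(2.0)], {"image": [0, 255]})
```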
Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify the logic in `make_training_pipeline`.

# Test: Check the method's implementation details. Expect: Correct application of transformations.
ast-grep --lang python --pattern $'class TopdownConfmapsPipeline {
  $$$
  make_training_pipeline($_, $_) {
    $$$
  }
  $$$  
}'

Length of output: 476



Script:

#!/bin/bash
# Description: Re-verify the logic in `make_training_pipeline` using the correct class name and refined pattern.

# Test: Check the method's implementation details. Expect: Correct application of transformations.
ast-grep --lang python --pattern $'class DataPipeline {
  $$$
  def make_training_pipeline(self, data_provider) {
    $$$
  }
  $$$  
}'

Length of output: 482



Script:

#!/bin/bash
# Description: Use `rg` to search for the implementation of `make_training_pipeline` to confirm its existence and structure.

# Test: Search for the method's implementation details in the codebase.
rg "def make_training_pipeline" -A 20 -C 3 --type py

Length of output: 1332


Line range hint 125-185: Check for potential data races in make_training_pipeline due to shared mutable state across threads.

Verification successful

The extracted implementation of the make_training_pipeline method from the sleap_nn/data/pipelines.py file shows that the method primarily deals with creating and configuring instances of data processing classes (Normalizer, SizeMatcher, Resizer, etc.). Each instance is configured using parameters that are either passed directly to the method or are attributes of the data_config object.

From the provided code snippet, it appears that the method does not use any shared mutable state that could lead to data races. Each call to make_training_pipeline creates new instances of data processing objects, and there is no indication of shared mutable objects being modified. The data_config object is used to read configuration values, but there is no evidence that it is being modified within the method, which would be a typical source of data races.

Given this analysis, it seems that the concern about potential data races due to shared mutable state across threads in the make_training_pipeline method is unfounded based on the current implementation.

- The method uses local variables and parameters to configure data processing objects.
- The `data_config` object is accessed but not modified, which should not lead to data races.
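A toy illustration of why per-call instantiation sidesteps data races (the `Normalizer` here is again a hypothetical stand-in, not the sleap_nn class): each thread builds its own pipeline object, so no pipeline state is shared between threads.

```python
import threading

class Normalizer:
    def __init__(self, scale):
        self.scale = scale  # instance-local: never shared across threads

def make_pipeline(scale):
    # A fresh object per call, mirroring how make_training_pipeline
    # constructs new components instead of mutating shared ones.
    return Normalizer(scale)

results = []
lock = threading.Lock()

def worker(scale):
    pipe = make_pipeline(scale)  # thread-private pipeline
    with lock:  # the lock guards only the shared results list
        results.append(pipe.scale)

threads = [threading.Thread(target=worker, args=(s,)) for s in (0.5, 1.0, 2.0)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```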
Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Check for potential data races.

# Test: Search for shared mutable state usage. Expect: No shared mutable state.
rg --type python $'make_training_pipeline'

Length of output: 72



Script:

#!/bin/bash
# Description: Re-check for potential data races by searching for shared mutable state in the `make_training_pipeline` method.

# Corrected Test: Search for shared mutable state usage in Python files. Expect: No shared mutable state.
rg --type py 'make_training_pipeline'

Length of output: 1173



Script:

#!/bin/bash
# Description: Extract the implementation of `make_training_pipeline` to check for shared mutable state.

# Extract the method implementation from the file.
ast-grep --lang python --pattern $'def make_training_pipeline($_, $_) { $$$ }' sleap_nn/data/pipelines.py

Length of output: 571



Script:

#!/bin/bash
# Description: Correct the pattern syntax and re-attempt to extract the implementation of `make_training_pipeline`.

# Corrected pattern to extract the method implementation.
ast-grep --lang python --pattern $'def make_training_pipeline($$$) { $$$ }' sleap_nn/data/pipelines.py

Length of output: 565



Script:

#!/bin/bash
# Description: Use `rg` to directly extract the implementation of `make_training_pipeline` from the file.

# Use rg to extract the method implementation directly.
rg --type py --context 20 'def make_training_pipeline' sleap_nn/data/pipelines.py

Length of output: 4052

tests/inference/test_inference.py (3)

40-65: Ensure the docstring format adheres to PEP 257 conventions. There should be a blank line between the summary line and the description, and the closing quotes should be on a separate line.

-    """Returns data loader, trained torch model and FindInstancePeaks layer to test InferenceModels."""
+    """Returns data loader, trained torch model and FindInstancePeaks layer to test InferenceModels.
+
+    """
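A PEP 257-compliant version of such a fixture docstring looks like this (the fixture name and body are hypothetical, for illustration only):

```python
def model_fixture():
    """Return a data loader and trained model for inference tests.

    The summary line is followed by a blank line, then the longer
    description; the closing quotes of a multi-line docstring go on
    their own line.
    """
    return None

doc_lines = model_fixture.__doc__.splitlines()
```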

143-184: Ensure that the configuration updates are correctly reverted in the finally block to maintain state consistency.


Line range hint 215-308: The test function correctly sets up the inference model and performs assertions to validate the outputs. Ensure that the error messages in the assertions are descriptive and helpful for debugging.

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 9

Out of diff range and nitpick comments (1)
sleap_nn/data/instance_cropping.py (1)

Line range hint 83-83: Consider adding a check to ensure instances and centroids keys exist in the input dictionary before deleting them.

+            if "instances" in ex and "centroids" in ex:
+                del ex["instances"]
+                del ex["centroids"]
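An alternative to the membership check is `dict.pop` with a default, which never raises `KeyError`. A minimal sketch (the keys come from the suggestion above; the example data is hypothetical):

```python
def drop_keys(ex, keys=("instances", "centroids")):
    # pop with a default silently ignores keys that are absent
    for key in keys:
        ex.pop(key, None)
    return ex

sample = {"image": "frame0", "instances": [0], "centroids": [1]}
drop_keys(sample)

partial = {"image": "frame1"}  # neither key present; still safe
drop_keys(partial)
```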
Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 9be464b and 385cfa5.
Files selected for processing (7)
  • sleap_nn/data/instance_cropping.py (3 hunks)
  • sleap_nn/data/pipelines.py (10 hunks)
  • sleap_nn/data/resizing.py (3 hunks)
  • sleap_nn/inference/inference.py (36 hunks)
  • tests/data/test_instance_cropping.py (3 hunks)
  • tests/data/test_pipelines.py (20 hunks)
  • tests/inference/test_inference.py (16 hunks)
Files skipped from review as they are similar to previous changes (2)
  • sleap_nn/data/pipelines.py
  • sleap_nn/inference/inference.py
Additional Context Used
Ruff (6)
tests/data/test_instance_cropping.py (2)

1-1: Missing docstring in public module


10-10: Missing docstring in public function

tests/data/test_pipelines.py (3)

1-1: Missing docstring in public module


95-95: Missing docstring in public function


310-310: Missing docstring in public function

tests/inference/test_inference.py (1)

1-1: Missing docstring in public module

Additional comments not posted (4)
tests/inference/test_inference.py (4)

144-185: The changes for testing the VideoReader provider in test_topdown_predictor look good and follow the existing pattern. Ensure that the video file path used in the test is correct and accessible.


218-239: The changes for testing the VideoReader provider in test_topdown_inference_model look good and follow the existing pattern. Ensure that the video file path used in the test is correct and accessible.


Line range hint 443-513: The changes for testing the VideoReader provider in test_single_instance_inference_model look good and follow the existing pattern. Ensure that the video file path used in the test is correct and accessible.


Line range hint 518-628: The changes for testing the VideoReader provider in test_single_instance_predictor look good and follow the existing pattern. Ensure that the video file path used in the test is correct and accessible.
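One way to make the "correct and accessible" check explicit is a small guard the tests could use to skip when the video file is missing. A sketch with a hypothetical path and helper name:

```python
import os
import tempfile

def video_available(path):
    # True only when the file exists and is non-empty; the size check is
    # short-circuited, so a missing file never raises an exception.
    return os.path.isfile(path) and os.path.getsize(path) > 0

# Hypothetical path; the real tests would take this from a fixture.
ok_missing = video_available("tests/assets/example_video.mp4")

# A real, non-empty file passes the guard.
with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as f:
    f.write(b"\x00")
    tmp_path = f.name
ok_present = video_available(tmp_path)
os.remove(tmp_path)
```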

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits

Files that changed from the base of the PR and between 385cfa5 and 7a88cd3.
Files selected for processing (11)
  • docs/config.md (1 hunks)
  • sleap_nn/data/normalization.py (3 hunks)
  • sleap_nn/data/pipelines.py (10 hunks)
  • sleap_nn/inference/inference.py (37 hunks)
  • tests/architectures/test_model.py (1 hunks)
  • tests/architectures/test_swint.py (1 hunks)
  • tests/data/test_augmentation.py (2 hunks)
  • tests/data/test_confmaps.py (4 hunks)
  • tests/data/test_pipelines.py (20 hunks)
  • tests/fixtures/datasets.py (5 hunks)
  • tests/test_model_trainer.py (3 hunks)
Files not reviewed due to errors (6)
  • tests/architectures/test_swint.py (no review received)
  • tests/data/test_confmaps.py (no review received)
  • tests/architectures/test_model.py (no review received)
  • tests/test_model_trainer.py (no review received)
  • tests/data/test_pipelines.py (no review received)
  • sleap_nn/inference/inference.py (no review received)
Files skipped from review as they are similar to previous changes (3)
  • sleap_nn/data/normalization.py
  • sleap_nn/data/pipelines.py
  • tests/fixtures/datasets.py
Additional Context Used
LanguageTool (33)
docs/config.md (33)

Near line 6: Loose punctuation mark.
Context: ... four main sections: - 1. data_config: Creating a data pipeline. - 2. `model_...


Near line 8: Loose punctuation mark.
Context: ...ng a data pipeline. - 2. model_config: Initialise the sleap-nn backbone and he...


Near line 10: Loose punctuation mark.
Context: ... and head models. - 3. trainer_config: Hyperparameters required to train the m...


Near line 12: Loose punctuation mark.
Context: ...with Lightning. - 4. inference_config: Inference related configs. Note:...


Near line 16: Loose punctuation mark.
Context: ... for val_data_loader. - data_config: - provider: (str) Provider class...


Near line 19: Loose punctuation mark.
Context: ...CentroidConfmapsPipeline". - train: - labels_path: (str) Path to ...


Near line 21: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 22: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 31: Loose punctuation mark.
Context: ...e same factor. - preprocessing: - anchor_ind: (int) Index...


Near line 32: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 34: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 35: Loose punctuation mark.
Context: ...ion. - augmentation_config: - random crop: (Dict[...


Near line 63: Loose punctuation mark.
Context: ... to train structure) - model_config: - init_weight: (str) model weigh...


Near line 66: Loose punctuation mark.
Context: ...win_B_Weights"]. - backbone_config: - backbone_type: (str) Backbo...


Near line 106: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 107: ‘with respect to’ might be wordy. Consider a shorter alternative.
Context: ...re easier to learn but are less precise with respect to the peak coordinate. This spread is in ...


Near line 117: Possible missing article found.
Context: ...he batch size. If False and the size of dataset is not divisible by the batch size, the...


Near line 121: Possible typo: you repeated a word
Context: ...ease note that the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and...


Near line 121: Possible typo: you repeated a word
Context: ... the monitors are checked every every_n_epochs epochs. if save_top_k >= 2 and the callback is...


Near line 124: Possible missing comma found.
Context: ... - monitor: (str) Quantity to monitor for e.g., "val_loss". When None, this saves...


Near line 151: Possible missing comma found.
Context: ...onitored has stopped decreasing; in max mode it will be reduced when the quantity mo...


Near line 155: Possible missing comma found.
Context: ...tience`: (int) Number of epochs with no improvement after which learning rate will be reduc...


Near line 157: Possible missing comma found.
Context: ...arning rate of all param groups or each group respectively. Default: 0. - `inferen...


Near line 159: Loose punctuation mark.
Context: ...ely. Default: 0. - inference_config: - device: (str) Device on which t...


Near line 161: Loose punctuation mark.
Context: ... "ideep", "hip", "msnpu"). - data: - path: (str) Path to .slp ...


Near line 168: Possible missing article found.
Context: ...he image has 3 channels (RGB image). If input has only one channel when this ...


Near line 169: Possible missing article found.
Context: ... is set to True, then the images from single-channel is replicated along the...


Near line 181: Loose punctuation mark.
Context: ... the default. - preprocessing: - anchor_ind: (int) Inde...


Near line 182: Possible missing comma found.
Context: ...can significantly improve topdown model accuracy as they benefit from a consistent geome...


Near line 185: Loose punctuation mark.
Context: ...atial resolution. - peak_threshold: float between 0 and 1. Minimum confid...


Near line 186: Loose punctuation mark.
Context: ... be ignored. - integral_refinement: If None, returns the grid-aligned pea...


Near line 187: Loose punctuation mark.
Context: ... regression. - integral_patch_size: Size of patches to crop around each rou...


Near line 188: Loose punctuation mark.
Context: ... integer scalar. - return_confmaps: If True, predicted confidence maps wi...

Ruff (9)
tests/architectures/test_model.py (3)

1-1: Missing docstring in public module


99-99: Missing docstring in public function


119-119: Missing docstring in public function

tests/architectures/test_swint.py (1)

1-1: Missing docstring in public module

tests/data/test_augmentation.py (1)

1-1: Missing docstring in public module

tests/data/test_confmaps.py (1)

1-1: Missing docstring in public module

tests/data/test_pipelines.py (1)

1-1: Missing docstring in public module

tests/test_model_trainer.py (2)

70-70: Missing docstring in public function


233-233: Missing docstring in public function

Additional comments not posted (2)
tests/data/test_augmentation.py (2)

11-11: The function test_uniform_noise is well-structured and covers various aspects of the RandomUniformNoise module. Good job!


39-39: The function test_kornia_augmentation is well-structured and covers various augmentations provided by the KorniaAugmenter module. Ensure to add a docstring to explain its purpose and usage.

@gitttt-1234 gitttt-1234 merged commit c02e6d6 into main May 14, 2024
@gitttt-1234 gitttt-1234 deleted the divya/video_reader branch May 29, 2024 19:00
Development

Successfully merging this pull request may close these issues.

Inference-specialized VideoReader
