Add option to reuse np chunks #136
Conversation
Walkthrough: The changes introduce a new boolean attribute, `use_existing_chunks`, that controls whether a dataset reuses existing numpy chunks instead of generating new ones.
Sequence Diagram(s):

```mermaid
sequenceDiagram
    participant MT as ModelTrainer
    participant DS as Dataset Instance
    MT->>DS: Instantiate dataset(use_existing_chunks)
    alt use_existing_chunks == false
        DS->>DS: Call _fill_cache() to load data
    else use_existing_chunks == true
        DS-->>MT: Skip _fill_cache(), use existing chunks
    end
    MT->>MT: Continue with training setup
```
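The control flow in the sequence diagram can be sketched as a minimal constructor that gates cache generation on the flag. This is an illustrative stand-in, not the actual `sleap_nn` dataset class; the class name and `cache_filled` attribute are hypothetical.

```python
# Hypothetical sketch of the flow above: the dataset only populates its
# cache when `use_existing_chunks` is False.
class Dataset:
    def __init__(self, use_existing_chunks: bool = False):
        self.use_existing_chunks = use_existing_chunks
        self.cache_filled = False
        if not self.use_existing_chunks:
            self._fill_cache()  # generate chunks from scratch

    def _fill_cache(self):
        # Stand-in for the real chunk-generation logic.
        self.cache_filled = True


fresh = Dataset(use_existing_chunks=False)   # generates chunks
reused = Dataset(use_existing_chunks=True)   # skips generation
```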
Actionable comments posted: 1
🔭 Outside diff range comments (1)
sleap_nn/training/model_trainer.py (1)
223-230: Verify that `np_chunks_path` exists when `use_existing_np_chunks` is True.

When `use_existing_np_chunks` is True but there's insufficient memory for in-memory caching, the code creates new paths for chunks without checking whether they exist. This could lead to a `FileNotFoundError`.

Add a check to verify that the chunks path exists when `use_existing_np_chunks` is True:

```diff
 if total_cache_memory > available_memory:
+    if self.use_existing_np_chunks:
+        if not (Path("./train_chunks").exists() and Path("./val_chunks").exists()):
+            raise FileNotFoundError(
+                "Chunks directories not found at ./train_chunks and ./val_chunks. "
+                "Set use_existing_np_chunks=False to generate new chunks."
+            )
     self.data_pipeline_fw = "torch_dataset_np_chunks"
     self.np_chunks = True
     self.train_np_chunks_path = Path("./train_chunks")
     self.val_np_chunks_path = Path("./val_chunks")
     print(
-        f"Insufficient memory for in-memory caching. `npz` files will be created."
+        "Insufficient memory for in-memory caching. Using existing `npz` files."
+        if self.use_existing_np_chunks
+        else "Insufficient memory for in-memory caching. `npz` files will be created."
     )
```

🧰 Tools
🪛 Ruff (0.8.2)

229-229: f-string without any placeholders. Remove the extraneous `f` prefix. (F541)
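As a small illustration of the F541 finding above: an f-string with no placeholders is identical at runtime to a plain string literal, so the `f` prefix is dead weight that Ruff flags. The message text here is taken from the diff above.

```python
# F541: f-string without any placeholders -- the `f` prefix does nothing.
msg_flagged = f"Insufficient memory for in-memory caching."  # flagged by Ruff (F541)
msg_fixed = "Insufficient memory for in-memory caching."     # plain string, no warning

# The two strings are identical at runtime; only the lint result differs.
print(msg_flagged == msg_fixed)
```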
🧹 Nitpick comments (4)
sleap_nn/data/custom_datasets.py (2)
50-50: Document the new attribute in the BaseDataset class docstring.

The new `use_existing_chunks` attribute is added but not documented in the class docstring. Add the following line to the docstring:

```diff
     np_chunks_path: Path to save the `.npz` chunks. If `None`, current working dir is used.
+    use_existing_chunks: If `True`, use existing chunks in the `np_chunks_path` instead of generating new ones.
```
144-151: Add logging when reusing or generating chunks.

The code silently reuses or generates chunks without informing the user. Adding logging statements would improve the user experience:

```diff
 if self.np_chunks:
     sample["image"] = self.transform_to_pil(sample["image"].squeeze(dim=0))
     for k, v in sample.items():
         if k != "image" and isinstance(v, torch.Tensor):
             sample[k] = v.numpy()
     f_name = f"{self.np_chunks_path}/sample_{lf_idx}.npz"
+    if self.use_existing_chunks:
+        print(f"Reusing existing chunk at {f_name}")
+    else:
+        print(f"Generating new chunk at {f_name}")
     np.savez_compressed(f_name, **sample)
```

sleap_nn/training/model_trainer.py (2)
79-80: Document the new parameter in the ModelTrainer class docstring.

The new `use_existing_np_chunks` parameter is added but not documented in the class docstring. Add the following line to the docstring:

```diff
     np_chunks_path: Path to save `.npz` chunks created with `torch_dataset_np_chunks` data pipeline framework.
+    use_existing_np_chunks: If `True`, use existing train and val chunks in the `np_chunks_path` instead of generating new ones.
```
757-768: Add a warning when deleting chunks with `use_existing_np_chunks=True`.

The code deletes the chunks after training without warning the user, even when `use_existing_np_chunks` is True. This could be unexpected, as the user might want to reuse these chunks later. Add a warning when deleting chunks that were reused:

```diff
 if self.np_chunks and delete_np_chunks_after_training:
+    if self.use_existing_np_chunks:
+        print(
+            "Warning: Deleting reused chunks after training. "
+            "Set delete_np_chunks_after_training=False to keep them."
+        )
     if (self.train_np_chunks_path).exists():
         shutil.rmtree(
             (self.train_np_chunks_path).as_posix(),
             ignore_errors=True,
         )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- sleap_nn/data/custom_datasets.py (15 hunks)
- sleap_nn/training/model_trainer.py (10 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Tests (macos-14, Python 3.9)
- GitHub Check: Tests (windows-latest, Python 3.9)
- GitHub Check: Tests (ubuntu-latest, Python 3.9)
- GitHub Check: Lint
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@            Coverage Diff             @@
##             main     #136      +/-   ##
==========================================
+ Coverage   96.64%   97.24%   +0.60%
==========================================
  Files          23       40      +17
  Lines        1818     4141    +2323
==========================================
+ Hits         1757     4027    +2270
- Misses         61      114      +53
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 3
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- sleap_nn/training/model_trainer.py (12 hunks)
- tests/training/test_model_trainer.py (1 hunk)
🧰 Additional context used
🪛 Ruff (0.8.2)
tests/training/test_model_trainer.py
331-331: `pytest.raises(Exception)` should be considered evil (B017)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Tests (macos-14, Python 3.9)
- GitHub Check: Tests (windows-latest, Python 3.9)
- GitHub Check: Tests (ubuntu-latest, Python 3.9)
- GitHub Check: Lint
🔇 Additional comments (1)
sleap_nn/training/model_trainer.py (1)
360-365: LGTM! Good improvement in pin_memory configuration.

The change to derive `pin_memory` from config instead of hardcoding it improves flexibility.
Actionable comments posted: 0
♻️ Duplicate comments (2)
tests/training/test_model_trainer.py (1)
330-350: 🛠️ Refactor suggestion: Enhance test coverage for reusing NumPy chunks.

The test cases can be improved in several ways:

- Use specific exception types instead of bare `Exception`
- Add test cases for both success and failure scenarios
- Verify the exact error message
- Add test cases for validation chunks

Apply this diff to improve the test cases:

```diff
-    ##### test for reusing np chunks path
-    with pytest.raises(Exception):
-        model_trainer = ModelTrainer(
-            config,
-            data_pipeline_fw="torch_dataset_np_chunks",
-            np_chunks_path=tmp_path,
-            use_existing_np_chunks=True,
-        )
-
-    Path.mkdir(Path(tmp_path) / "train_chunks", parents=True)
-    file_path = Path(tmp_path) / "train_chunks" / "sample.npz"
-    np.savez_compressed(file_path, {1: 10})
-
-    with pytest.raises(Exception):
-        model_trainer = ModelTrainer(
-            config,
-            data_pipeline_fw="torch_dataset_np_chunks",
-            np_chunks_path=tmp_path,
-            use_existing_np_chunks=True,
-        )
+    ##### test for reusing np chunks path
+    # Test failure case: non-existent chunks
+    with pytest.raises(FileNotFoundError, match=r"There are no numpy chunks in the path:.*"):
+        model_trainer = ModelTrainer(
+            config,
+            data_pipeline_fw="torch_dataset_np_chunks",
+            np_chunks_path=tmp_path,
+            use_existing_np_chunks=True,
+        )
+
+    # Test failure case: missing validation chunks
+    train_chunks_path = Path(tmp_path) / "train_chunks"
+    train_chunks_path.mkdir(parents=True)
+    file_path = train_chunks_path / "sample.npz"
+    np.savez_compressed(file_path, {"data": np.zeros((10, 10))})
+
+    with pytest.raises(FileNotFoundError, match=r"There are no numpy chunks in the path:.*"):
+        model_trainer = ModelTrainer(
+            config,
+            data_pipeline_fw="torch_dataset_np_chunks",
+            np_chunks_path=tmp_path,
+            use_existing_np_chunks=True,
+        )
+
+    # Test success case: both train and validation chunks exist
+    val_chunks_path = Path(tmp_path) / "val_chunks"
+    val_chunks_path.mkdir(parents=True)
+    val_file_path = val_chunks_path / "sample.npz"
+    np.savez_compressed(val_file_path, {"data": np.zeros((10, 10))})
+
+    model_trainer = ModelTrainer(
+        config,
+        data_pipeline_fw="torch_dataset_np_chunks",
+        np_chunks_path=tmp_path,
+        use_existing_np_chunks=True,
+    )
+    assert model_trainer.use_existing_np_chunks is True
```

🧰 Tools
🪛 Ruff (0.8.2)
331-331: `pytest.raises(Exception)` should be considered evil (B017)

343-343: `pytest.raises(Exception)` should be considered evil (B017)
sleap_nn/training/model_trainer.py (1)
103-119: 🛠️ Refactor suggestion: Improve error handling when checking for existing chunks.

The current implementation can be improved by:

- Using a more specific exception type
- Extracting the validation logic into a helper method
- Using a more specific file pattern check

Apply this diff to improve error handling:

```diff
-if self.use_existing_np_chunks:
-    if not (
-        self.train_np_chunks_path.exists()
-        and self.train_np_chunks_path.is_dir()
-        and any(self.train_np_chunks_path.glob("*.npz"))
-    ):
-        raise Exception(
-            f"There are no numpy chunks in the path: {self.train_np_chunks_path}"
-        )
-    if not (
-        self.val_np_chunks_path.exists()
-        and self.val_np_chunks_path.is_dir()
-        and any(self.val_np_chunks_path.glob("*.npz"))
-    ):
-        raise Exception(
-            f"There are no numpy chunks in the path: {self.val_np_chunks_path}"
-        )
+if self.use_existing_np_chunks:
+    self._validate_chunks_path(self.train_np_chunks_path, "train")
+    self._validate_chunks_path(self.val_np_chunks_path, "validation")
+
+def _validate_chunks_path(self, path: Path, split: str) -> None:
+    """Validate that the chunks path exists and contains .npz files.
+
+    Args:
+        path: Path to the chunks directory.
+        split: Name of the split (train/validation) for error messages.
+
+    Raises:
+        FileNotFoundError: If the path doesn't exist or contain .npz files.
+    """
+    if not path.exists():
+        raise FileNotFoundError(
+            f"The {split} chunks directory does not exist: {path}"
+        )
+    if not path.is_dir():
+        raise FileNotFoundError(
+            f"The {split} chunks path is not a directory: {path}"
+        )
+    if not any(p.suffix == ".npz" for p in path.iterdir()):
+        raise FileNotFoundError(
+            f"No .npz files found in the {split} chunks directory: {path}"
+        )
```
🧹 Nitpick comments (1)
sleap_nn/training/model_trainer.py (1)
80-81: Enhance parameter documentation.

The docstring could provide more details about the parameter's behavior and requirements. Apply this diff to improve the documentation:

```diff
-    use_existing_np_chunks: Use existing train and val chunks in the `np_chunks_path`.
+    use_existing_np_chunks: If True, use existing train and validation chunks from the
+        `np_chunks_path` instead of generating new ones. Both train_chunks and val_chunks
+        directories must exist and contain valid .npz files. Raises FileNotFoundError if
+        the directories don't exist or don't contain .npz files.
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- sleap_nn/training/model_trainer.py (12 hunks)
- tests/training/test_model_trainer.py (1 hunk)
🧰 Additional context used
🪛 Ruff (0.8.2)
tests/training/test_model_trainer.py
331-331: `pytest.raises(Exception)` should be considered evil (B017)

343-343: `pytest.raises(Exception)` should be considered evil (B017)
⏰ Context from checks skipped due to timeout of 90000ms (4)
- GitHub Check: Tests (macos-14, Python 3.9)
- GitHub Check: Tests (windows-latest, Python 3.9)
- GitHub Check: Tests (ubuntu-latest, Python 3.9)
- GitHub Check: Lint
🔇 Additional comments (3)
tests/training/test_model_trainer.py (2)
360-365: LGTM!

The implementation correctly handles the `pin_memory` configuration with proper null checks and a sensible default value.

260-347: LGTM!

The `use_existing_chunks` parameter is consistently propagated to all dataset constructors, maintaining uniformity across different dataset types.
sleap_nn/training/model_trainer.py (1)
360-365: LGTM!

The implementation correctly handles the `pin_memory` configuration with proper null checks and a sensible default value.
This PR adds an option to re-use existing numpy chunks instead of creating new `.npz` files if we're training on the same dataset.
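The pre-flight check the reviewers asked for above (verify a chunks directory exists and actually contains `.npz` files before reusing it) can be sketched in isolation. This is a hedged, self-contained illustration; the helper name `has_npz_chunks` is hypothetical, while the `train_chunks`/`val_chunks` directory names follow the diffs in the review.

```python
# Sketch of the validation performed before reusing numpy chunks:
# a directory qualifies only if it exists and holds at least one .npz file.
import tempfile
from pathlib import Path


def has_npz_chunks(path: Path) -> bool:
    """Return True if `path` is a directory containing at least one .npz file."""
    return path.is_dir() and any(path.glob("*.npz"))


with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    train = root / "train_chunks"
    train.mkdir()
    (train / "sample_0.npz").touch()  # placeholder chunk file

    print(has_npz_chunks(train))                # train chunks present
    print(has_npz_chunks(root / "val_chunks"))  # missing -> must regenerate
```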