Update and rename 2022-6-22-introducing-torchx-fbgemm-and-other-library-updates-in-pytorch-1-12.md to 2022-6-28-introducing-torchx-fbgemm-and-other-library-updates-in-pytorch-1-12.md
_posts/2022-6-28-introducing-torchx-fbgemm-and-other-library-updates-in-pytorch-1-12.md (+6 −6)
@@ -5,13 +5,13 @@ author: Team PyTorch
 featured-img: ''
 ---

-We are bringing a number of improvements to the current PyTorch domain libraries, alongside the PyTorch 1.12 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.
+We are bringing a number of improvements to the current PyTorch libraries, alongside the [PyTorch 1.12 release](https://github.com/pytorch/pytorch/releases/tag/v1.12.0). These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.

 Summary:
 - **TorchVision** - Added multi-weight support API, new architectures, model variants, and pretrained weights. See the release notes [here](https://github.com/pytorch/vision/releases).
 - **TorchAudio** - Introduced beta features including a streaming API, a CTC beam search decoder, and new beamforming modules and methods. See the release notes [here](https://github.com/pytorch/audio/releases).
 - **TorchText** - Extended support for scriptable BERT tokenizer and added datasets for GLUE benchmark. See the release notes [here](https://github.com/pytorch/text/releases).
-- **TorchRec** EmbeddingModule benchmarks, examples for TwoTower Retrieval model and sequential embedding, and demonstrated integration with production components. See the release notes [here](https://github.com/pytorch/torchrec/releases).
+- **TorchRec** - Added EmbeddingModule benchmarks, examples for TwoTower Retrieval, inference and sequential embeddings, metrics, an improved planner, and demonstrated integration with production components. See the release notes [here](https://github.com/pytorch/torchrec/releases).
 - **TorchX** - Launch PyTorch trainers developed on local workspaces onto five different types of schedulers. See the release notes [here](https://github.com/pytorch/torchx/blob/main/CHANGELOG.md?plain=1#L3).
 - **FBGemm** - Added and improved kernels for Recommendation Systems inference workloads, including table batched embedding bag, jagged tensor operations, and other special-case optimizations.
@@ -216,7 +216,7 @@ StreamReader is TorchAudio’s new I/O API. It is backed by FFmpeg†, and allow
 - Handle input forms, such as local files, network protocols, microphones, webcams, screen captures and file-like objects
 - Iterate over and decode chunk-by-chunk, while changing the sample rate or frame rate
 - Apply audio and video filters, such as low-pass filter and image scaling
-- Decode video with NVidia's hardware-based decoder (NVDEC)
+- Decode video with Nvidia's hardware-based decoder (NVDEC)

 For usage details, please check out the [documentation](https://pytorch.org/audio/0.12.0/io.html#streamreader) and tutorials:
 - [Media Stream API - Pt.1](https://pytorch.org/audio/0.12.0/tutorials/streaming_api_tutorial.html)
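The chunk-by-chunk decoding described above can be sketched as follows. This is a minimal sketch assuming torchaudio 0.12+ with FFmpeg available; the file path and the 16 kHz target rate are hypothetical choices for illustration:

```python
def stream_audio_chunks(path, frames_per_chunk=4096, sample_rate=16000):
    """Yield decoded audio chunks from `path`, resampled on the fly."""
    # Deferred import so this sketch only needs torchaudio when actually run.
    from torchaudio.io import StreamReader

    # StreamReader accepts local files, network URLs, devices,
    # and file-like objects.
    reader = StreamReader(path)
    # Decode the audio stream in fixed-size chunks while changing
    # the sample rate to the requested value.
    reader.add_basic_audio_stream(
        frames_per_chunk=frames_per_chunk, sample_rate=sample_rate
    )
    # stream() yields one tuple per configured output stream.
    for (chunk,) in reader.stream():
        yield chunk  # Tensor of shape (frames, channels)
```

Each yielded chunk can be fed to a model incrementally instead of decoding the whole file up front.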
@@ -237,9 +237,9 @@ For usage details, please check out the [documentation](https://pytorch.org/audi
 ### (BETA) New Beamforming Modules and Methods

-To improve flexibility in usage, the release adds two new beamforming modules under torchaudio.transforms: [SoudenMVDR](https://pytorch.org/audio/0.12.0/transforms.html#soudenmvdr) and [RTFMVDR](https://pytorch.org/audio/0.12.0/transforms.html#rtfmvdr). The main differences from the [torchaudio.transforms.MVDR](https://pytorch.org/audio/0.11.0/transforms.html#mvdr) module are:
+To improve flexibility in usage, the release adds two new beamforming modules under torchaudio.transforms: [SoudenMVDR](https://pytorch.org/audio/0.12.0/transforms.html#soudenmvdr) and [RTFMVDR](https://pytorch.org/audio/0.12.0/transforms.html#rtfmvdr). The main differences from [MVDR](https://pytorch.org/audio/0.11.0/transforms.html#mvdr) are:

 - Use power spectral density (PSD) and relative transfer function (RTF) matrices as inputs instead of time-frequency masks. The module can be integrated with neural networks that directly predict complex-valued STFT coefficients of speech and noise
-- Add 'reference_channel' as an input argument in the forward method, to allow users to select the reference channel in model training or dynamically change the reference channel in inference
+- Add \'reference_channel\' as an input argument in the forward method, to allow users to select the reference channel in model training or dynamically change the reference channel in inference

 Besides the two modules, new function-level beamforming methods are added under torchaudio.functional. These include:
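A minimal sketch of how the new inputs fit together, assuming torchaudio 0.12+. The random STFT and mask tensors are placeholders standing in for what a neural network would predict in practice, and the tensor sizes are arbitrary:

```python
import torch


def souden_mvdr_sketch():
    # Deferred import: only needed when the sketch is run.
    import torchaudio

    channels, freq_bins, frames = 4, 257, 100
    # Placeholder multi-channel complex STFT plus speech/noise masks;
    # in a real pipeline a network predicts the masks (or complex
    # STFT coefficients directly).
    specgram = torch.randn(channels, freq_bins, frames, dtype=torch.cfloat)
    mask_speech = torch.rand(freq_bins, frames)
    mask_noise = torch.rand(freq_bins, frames)

    # Build PSD matrices from the masks with the function-level API...
    psd_speech = torchaudio.functional.psd(specgram, mask_speech)
    psd_noise = torchaudio.functional.psd(specgram, mask_noise)

    # ...and pass them, with a reference channel, to the SoudenMVDR module.
    beamformer = torchaudio.transforms.SoudenMVDR()
    enhanced = beamformer(specgram, psd_speech, psd_noise, reference_channel=0)
    return enhanced  # single-channel STFT of shape (freq_bins, frames)
```

Because the reference channel is a forward-method argument, it can be changed per call at inference time rather than being fixed at construction.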
@@ -268,7 +268,7 @@ We increased the number of datasets in TorchText from 22 to 30 by adding the rem
 ### Scriptable BERT Tokenizer

-TorchText has extended the support for scriptable tokenizes by adding wordpiece tokenizer used in BERT. It is one of the commonly used algorithms for splitting input text into sub-words units and was introduced in [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf).
+TorchText has extended support for scriptable tokenizers by adding the WordPiece tokenizer used in BERT. It is one of the commonly used algorithms for splitting input text into sub-word units and was introduced in [Japanese and Korean Voice Search (Schuster et al., 2012)](https://static.googleusercontent.com/media/research.google.com/ja//pubs/archive/37842.pdf).

 TorchScriptability support would allow users to embed the BERT text-preprocessing natively in C++ without needing a Python runtime. As TorchText now supports the CMAKE build system to natively link torchtext binaries with application code, users can easily integrate BERT tokenizers for deployment needs.
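The scripting workflow described above can be sketched as follows, assuming torchtext 0.13+ and a locally available BERT WordPiece vocab file (the path argument here is hypothetical and must point to a real vocab):

```python
def build_scripted_tokenizer(vocab_path):
    """Return a TorchScript-compiled BERT WordPiece tokenizer."""
    # Deferred imports: only needed when the sketch is run.
    import torch
    from torchtext.transforms import BERTTokenizer

    # WordPiece tokenizer as used by BERT; return_tokens=True yields
    # sub-word strings instead of vocabulary ids.
    tokenizer = BERTTokenizer(
        vocab_path=vocab_path, do_lower_case=True, return_tokens=True
    )
    # Because the transform is TorchScript-compatible, the scripted
    # module can be saved and later loaded from C++ without a
    # Python runtime.
    return torch.jit.script(tokenizer)
```

The scripted object can be saved with `torch.jit.save` and loaded from a C++ application linked against the torchtext binaries.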