Releases: sb-ai-lab/RePlay

v0.20.0

17 Oct 15:03

Changelog

v0.20.0 (03/10/2025)

  • Highlights
  • Backwards Incompatible Changes
  • New Features
  • Improvements
  • Bug fixes

Highlights

We are excited to announce the release of RePlay 0.20.0!
In this update, we added Python 3.12 support, dropped Python 3.8 support, introduced conditional imports, minimized the number of dependencies required for installation, and updated the list of extra dependencies.

Backwards Incompatible Changes

This release does not break backward compatibility at the code level, but it does break compatibility in terms of library dependencies. For example, if you trained a model with RePlay 0.19.0, you can still load its weights in the current release, but you will have to update your list of dependencies.

New Features

Python 3.12 support and discontinuation of support for Python 3.8

We keep up with the times and understand the importance of new technologies: they bring new opportunities for increasing performance and scaling solutions. We are therefore pleased to announce that this release fully supports Python 3.12.

In addition, the library discontinues support for Python 3.8.

New dependency versions and conditional imports

The library is used in several modes: research and industrial solutions. In industrial solutions, it is very important to meet requirements on performance, the size of Docker images and, as a consequence, the number of dependencies.

We understand that the library must be flexible enough for all of these modes. Therefore, we have reduced the list of dependencies to the minimum required to install the core version of RePlay. To use library-specific functionality, users must now install the necessary dependencies themselves.

Dependencies on optuna, nmslib, and hnswlib have been removed from the core version of the library. If necessary, install the following packages yourself:

  • optuna - to optimize the parameters of non-neural models
  • nmslib, hnswlib - to use the ANN algorithms
  • torch, lightning - to use neural models. Please note that you can install these dependencies via the [torch] extra
  • pyspark - to process large amounts of data. Please note that you can install this dependency via the [spark] extra.

We verify that compatible versions of these dependencies exist that enable the full functionality of the library.
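
As an illustration, optional dependencies can be guarded with a conditional-import pattern like the sketch below; the helper name and module layout are hypothetical and do not reproduce RePlay's internal code.

# A minimal sketch of the conditional-import pattern; names are hypothetical.
try:
    import torch  # optional dependency, installed via the [torch] extra
    TORCH_AVAILABLE = True
except ImportError:
    TORCH_AVAILABLE = False

def require_torch() -> None:
    """Raise a helpful error if the optional dependency is missing."""
    if not TORCH_AVAILABLE:
        raise ImportError(
            "This functionality requires torch. "
            "Install it with: pip install replay-rec[torch]"
        )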

Improvements

Updating the list of extra dependencies

This release removes the ability to install with the torch-openvino and all extras. In other words, you will no longer be able to do:

pip install replay-rec[torch-openvino]
# or
pip install replay-rec[all]

The release only supports installation with the torch and spark extras.
Note: if you are installing the library with the torch extra and want a CPU-only torch, you need to add an extra index --extra-index-url https://download.pytorch.org/whl/cpu.
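
For example, to install the torch extra with a CPU-only build of PyTorch:

pip install replay-rec[torch] --extra-index-url https://download.pytorch.org/whl/cpu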

If you want to install the library with both extras, simply list them separated by commas:

pip install replay-rec[torch,spark]

Bug fixes

[Experimental] Adapted the DDPG algorithm to torch versions 2.6.0 and higher

v0.19.0

26 May 11:40

RePlay 0.19.0 Release notes

  • Highlights
  • Backwards Incompatible Changes
  • New Features
  • Improvements
  • Bug fixes

Highlights

In this release, we have added ScalableCrossEntropyLoss and ConsecutiveDuplicatesFilter. This release brings a lot of improvements and bug fixes - see the respective sections!

Backwards Incompatible Changes

This release entails changes that are not backward compatible with previous versions of RePlay. We have changed the architecture of the Bert4Rec model to speed it up, so in this release you will not be able to load the weights of models trained with previous versions.

New Features

ScalableCrossEntropyLoss for SasRec model

We added ScalableCrossEntropyLoss, a new approximation of CrossEntropyLoss aimed at solving the problem of insufficient GPU memory when training on large item catalogs. The reference article can be found at https://arxiv.org/pdf/2409.18721.
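
As a rough illustration of the general idea behind such approximations (computing the loss over the target plus a sampled subset of the catalog instead of over all items), here is a generic PyTorch sketch; it is not the RePlay implementation, and the sampling scheme in the paper differs.

import torch
import torch.nn.functional as F

def sampled_cross_entropy(hidden, item_emb, target, n_neg=256):
    """Generic sampled cross-entropy: score the target plus a random
    sample of negatives instead of the full item catalog."""
    batch_size = hidden.size(0)
    negatives = torch.randint(0, item_emb.size(0), (batch_size, n_neg),
                              device=hidden.device)
    candidates = torch.cat([target.unsqueeze(1), negatives], dim=1)
    logits = torch.einsum("bd,bkd->bk", hidden, item_emb[candidates])
    # The true target always sits at position 0 of the candidate list.
    labels = torch.zeros(batch_size, dtype=torch.long, device=hidden.device)
    return F.cross_entropy(logits, labels)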

ConsecutiveDuplicatesFilter

We added a new filter, ConsecutiveDuplicatesFilter, that removes consecutive duplicate interactions from sequential datasets.
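
For intuition, here is what removing consecutive duplicates means, illustrated with plain pandas; this is not the ConsecutiveDuplicatesFilter API itself.

import pandas as pd

df = pd.DataFrame({
    "user_id":   [1, 1, 1, 2, 2],
    "item_id":   [10, 10, 20, 30, 30],
    "timestamp": [1, 2, 3, 1, 2],
}).sort_values(["user_id", "timestamp"])

# Keep a row only if it differs from the previous interaction of the same user.
same_as_prev = df["item_id"] == df.groupby("user_id")["item_id"].shift()
deduplicated = df[~same_as_prev]  # drops (1, 10) at t=2 and (2, 30) at t=2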

Improvements

SequenceEncodingRule speedup on PySpark

We accelerated the transform() method of SequenceEncodingRule when applying it to PySpark dataframes.

Updating the maximum supported version of PyTorch

We updated the maximum supported version of PyTorch, so it is now possible to install RePlay with any PyTorch version below 3.0.0.

Speedup of sequential models

Firstly, we replaced self-made LayerNorm and GELU layers in Bert4Rec with PyTorch built-in implementations. Secondly, we added a CE_restricted loss for Bert4Rec that works like CrossEntropyLoss but uses features of the Bert4Rec architecture to speed up calculations (sparsification: limiting the masks based on the tokens that will be predicted). Thirdly, we replaced some computationally inefficient operations with faster analogues in SasRec and Bert4Rec.

Bug fixes

Fix error with accessing object fields in TensorSchema

We fixed an issue where it was not possible to train a sequential model when Hydra and MLflow are installed alongside RePlay. It was caused by accessing object fields by the wrong names in TensorSchema.

Fix unexpected type casts in LabelEncodingRule with Pandas

We detected unexpected type casts in the transform() method when using Pandas dataframes with LabelEncodingRule and fixed this behaviour.

Fix bugs in Surprisal metric calculation

We fixed incorrect Surprisal behavior with cold items on Polars and missing users on Pandas.

v0.18.1

14 Mar 14:00

RePlay 0.18.1 Release notes

  • Highlights
  • Backwards Incompatible Changes
  • New Features
  • Improvements

Highlights

We are excited to announce the release of RePlay 0.18.1!
In this update we added candidates_to_score support in transformers and an implementation of compiled sequential models for optimized inference on CPU, accelerating output generation by several times. We also added support for categorical and numerical array features as transformer inputs, and a new Discretizer. For smaller features and improvements, see the respective sections.

Backwards Incompatible Changes

No changes.

New Features

Compiled sequential models

We added implementations of Bert4RecCompiled and SasRecCompiled: entities that enable fast, CPU-optimized inference without the need for a GPU. Using a compiled model, inference is 2-5 times faster than with the PyTorch model (depending on system configuration). During compilation, the base model is transformed into an ONNX graph representation and then converted into the format of the compilation engine. Right now there is only one compilation engine, OpenVINO, but the list can easily be extended thanks to the flexible architecture. We have made the packages required for compilation optional dependencies so that they are only installed when needed: use the torch-openvino extra when installing if you need to compile a model.

An example of compilation and efficient inference can be found in the SasRec and Bert4Rec example notebooks.
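
For intuition, the model-to-ONNX step of that pipeline looks roughly like the generic sketch below with a toy model; RePlay performs the actual conversion internally, so this is only an illustration of the mechanism.

import torch

class TinyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(8, 4)

    def forward(self, x):
        return self.linear(x)

model = TinyModel().eval()
example_input = torch.randn(1, 8)
# Export to an ONNX graph; an engine such as OpenVINO can then consume
# the resulting file for CPU-optimized inference.
torch.onnx.export(model, (example_input,), "model.onnx",
                  input_names=["input"], output_names=["output"])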

Candidates for transformers inference

We added the possibility to run transformer inference not only over all items but also over a given subset of candidates. All implementations (Bert4Rec and SasRec on PyTorch, Lightning and OpenVINO) can now calculate predictions using candidates_to_score.

An example of usage can be found in the SasRec and Bert4Rec example notebooks.
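
Conceptually, scoring a subset of candidates means taking only the candidate columns of the full-catalog scores; the snippet below is a generic illustration of the idea, not the exact RePlay call signature.

import torch

logits = torch.randn(2, 1000)  # full-catalog scores, shape (batch, n_items)
candidates_to_score = torch.tensor([5, 42, 777])  # item IDs of interest

# Scores restricted to the candidate subset, shape (batch, n_candidates).
candidate_scores = logits[:, candidates_to_score]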

Support for list features

We added support for the CATEGORICAL_LIST and NUMERICAL_LIST features in FeatureType, Dataset, TensorSchema and SequentialTokenizer. You can now use these features when working with entities such as ItemEmbedder, SasRec and TwoTower.
An example of using these features can be found in the corresponding notebook.

LinUCB

We implemented LinUCB - a recommender algorithm for contextual bandit problems. The model assumes a linear relationship between user context, item features and action rewards, making it efficient for high-dimensional contexts.
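
For reference, the classic (disjoint) LinUCB rule keeps per-arm statistics A and b and scores an arm by theta^T x + alpha * sqrt(x^T A^(-1) x). Below is a minimal NumPy sketch of that textbook rule, not the RePlay class interface.

import numpy as np

class LinUCBArm:
    """Textbook LinUCB (disjoint model) statistics for a single arm."""
    def __init__(self, dim, alpha=1.0):
        self.A = np.eye(dim)    # ridge-regularized design matrix
        self.b = np.zeros(dim)  # accumulated reward-weighted contexts
        self.alpha = alpha

    def score(self, x):
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        # Mean reward estimate plus an upper-confidence exploration bonus.
        return theta @ x + self.alpha * np.sqrt(x @ A_inv @ x)

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x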

Discretizer

By analogy with LabelEncoder and LabelEncodingRule, we added a Discretizer and several DiscretizingRule entities, which contain the implementations of specific discretizing algorithms. Two strategies are available:

  • GreedyDiscretizingRule - discretizes column values according to the greedy binning strategy,
  • QuantileDiscretizingRule - discretizes column values according to the approximate quantile algorithm.

Thanks to this approach, different rules can be assigned to different columns within a single Discretizer object. And of course, these algorithms are implemented to work with Spark, Pandas and Polars.
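
For intuition, quantile discretization splits a column into bins containing roughly equal numbers of rows; here is a plain pandas illustration of the concept, not the Discretizer API.

import pandas as pd

df = pd.DataFrame({"age": [18, 22, 25, 31, 40, 44, 52, 60]})
# Four quantile-based bins with roughly equal row counts per bin.
df["age_bucket"] = pd.qcut(df["age"], q=4, labels=False)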

Improvements

Property of optimizer factory in transformers

We added an optimizer_factory property to transformers that allows getting and setting the optimizer_factory field.

LabelEncoder saving and loading

We added save() and load() methods to LabelEncoder that work without using pickle. These methods are implemented using JSON dumping.
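
The benefit over pickle is that the fitted mapping is stored as plain, inspectable JSON; below is a generic sketch of the idea (the actual LabelEncoder file format may differ).

import json

mapping = {"user_a": 0, "user_b": 1}  # a fitted label-to-ID mapping

with open("label_encoder.json", "w") as f:
    json.dump(mapping, f)  # human-readable, no arbitrary code execution

with open("label_encoder.json") as f:
    restored = json.load(f)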

Deterministic LabelEncoder with SparkDataFrame

We refactored LabelEncoder in deterministic mode so that its result no longer depends on how the data in a SparkDataFrame is partitioned. We also fixed a bug where partial_fit returned non-sequential IDs.

Padding value inside TensorSchemaInfo

We added a new padding_value parameter inside TensorSchemaInfo that allows accessing it from the TensorSchema. padding_value is now set via TensorSchemaInfo, which means you can set a separate value for each column; previously there was one value for all columns.
Corresponding deprecation warnings have been added to SasRecTrainingDataset and Bert4RecTrainingDataset; passing the padding_value parameter to them now has no effect.

v0.18.0

13 Sep 11:15

RePlay 0.18.0 Release notes

  • Highlights
  • Backwards Incompatible Changes
  • Improvements

Highlights

We are excited to announce the release of RePlay 0.18.0!
In this release, we added Python 3.11 support, updated dependency versions to the latest ones, and improved performance of the transformers (Bert4Rec, SasRec).

Backwards Incompatible Changes

No changes.

Improvements

Performance of the transformers

Inside the models, when using torch.nn.MultiheadAttention, all the conditions for the optimized implementation are now met. You can read more about these conditions in the torch.nn.MultiheadAttention class documentation. In addition, memory costs are reduced, so you can use a longer sequence length or increase the batch size during training.
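
Among the documented conditions are, for example, evaluation mode, batch_first=True and need_weights=False; the minimal sketch below shows such a call (see the PyTorch documentation for the full list).

import torch

mha = torch.nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
mha.eval()  # the optimized path applies in inference mode

x = torch.randn(2, 10, 64)  # (batch, sequence, embedding)
with torch.no_grad():
    out, _ = mha(x, x, x, need_weights=False)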

v0.17.1

22 Aug 11:35

RePlay 0.17.1 Release notes

  • Highlights
  • Backwards Incompatible Changes
  • New Features

Highlights

We are pleased to announce the release of RePlay 0.17.1!
In this release, we introduced the item undersampling filter QuantileItemsFilter in replay.preprocessing.filters.

Backwards Incompatible Changes

No changes.

New Features

Undersampling filter QuantileItemsFilter in replay.preprocessing.filters

v0.17.0

07 Jun 07:34

RePlay 0.17.0 Release notes

  • Highlights
  • Backwards Incompatible Changes
  • Deprecations
  • New Features
  • Improvements
  • Bug fixes

Highlights

We are excited to announce the release of RePlay 0.17.0!
The new version fixes serious bugs related to the performance of LabelEncoder and saving checkpoints in transformers. In addition, methods have been added to save splitters and SequentialTokenizer without using pickle.

Backwards Incompatible Changes

Change SequentialDataset behavior

When training transformers on big data, a slowdown was detected that increased epoch time from 5 minutes to 1 hour. It was caused by the model trainer saving checkpoints every 50 steps of the epoch by default; while saving a checkpoint, not only the model but also the entire training dataset was implicitly saved. The behavior was corrected by changing SequentialDataset and the callbacks used with it. As a result, SequentialDataset objects from older versions can no longer be used. Otherwise, no interface changes were required.

Deprecations

Added a deprecation warning related to saving splitters and SequentialTokenizer using pickle. This functionality will be removed in future versions.

New Features

A new strategy in the LabelEncoder

The drop strategy has been added. It allows you to drop tokens from the dataset that were not present at the training stage. If all rows are deleted, a corresponding warning appears.
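
Conceptually, the drop strategy filters out rows whose tokens are absent from the fitted vocabulary; the pandas snippet below illustrates the effect, not the LabelEncoder call signature.

import pandas as pd

fitted_vocabulary = {"item_1", "item_2"}  # tokens seen during fit
df = pd.DataFrame({"item_id": ["item_1", "item_3", "item_2"]})

# Rows with unseen tokens ("item_3" here) are dropped at transform time.
df = df[df["item_id"].isin(fitted_vocabulary)]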

New Linters

We keep up with the latest trends in code quality control, so the list of linters used to check code quality has been updated. Pylint and pycodestyle have been removed; Ruff, Black and toml-sort have been added.

Improvements

PyArrow dependency

The dependency on PyArrow has been relaxed. RePlay can now work with any version greater than 12.0.1.

Bug fixes

Performance fixes at the partial_fit stage in LabelEncoder

The slowdown occurred when using Pandas DataFrames: the partial_fit stage had quadratic running time. The bug has been fixed, and the time now depends linearly on the size of the dataset.

Timestamp tokenization when using SasRec

Fixed an error that occurred when training a SasRec transformer with the ti_modification=True parameter.

Loading a checkpoint with a modified embedding in the transformers

The error occurred when loading a model on another device after the dimensions of the transformer embeddings had been changed. The example of working with embeddings in transformers has been updated.

v0.16.0

20 Mar 10:01

  • Introduced support for dataframes from the polars package. This is available in the following modules: data (Dataset, SequenceTokenizer, SequentialDataset) for working with transformers, metrics, preprocessing and splitters. The new format achieves a multiple speedup of calculations relative to Pandas and PySpark dataframes. You can see more details about usage in the examples.
  • Removed dependencies on seaborn and matplotlib. Removed functions replay.utils.distributions.plot_item_dist and replay.utils.distributions.plot_user_dist.
  • Added functions to get and set embeddings in transformers - get_all_embeddings, set_item_embeddings_by_size, set_item_embeddings_by_tensor, append_item_embeddings. You can see more details about their use in the examples.
  • Added a QueryEmbeddingsPredictionCallback to get query embeddings at the inference stage in transformers. You can see more details about usage in the examples.
  • Added support for numerical features in SequenceTokenizer and TorchSequentialDataset. It becomes possible to use numerical features inside transformers.
  • Auto padding for inference stage of transformer-based models in a single-user mode is supported.
  • Added a new KL-UCB model based on https://arxiv.org/pdf/1102.2490.pdf.
  • Added a callback to calculate cardinality in TensorSchema. Now it is not necessary to pass the cardinality parameter, the value will be calculated automatically.
  • Added the core_count parameter to replay.utils.session_handler.get_spark_session. If nothing is specified, the REPLAY_SPARK_CORE_COUNT and REPLAY_SPARK_MEMORY environment variables are taken into account. If they are not specified either, the value is set to -1.
  • Corrected the behavior of the item_count parameter in ValidationMetricsCallback. If you are not going to calculate the Coverage metric, then you do not need to pass this parameter.
  • The calculation of the Coverage metric on Pandas and PySpark has been aligned.
  • Removed conversion from PySpark to Pandas in some models. Added the allow_collect_to_master parameter, False by default.
  • 100% test coverage has been achieved.
  • Fixed undetected type handling during fit in LabelEncoder. The problem occurred when using multiple tuples with null values.
  • Changes in the experimental part:
    • Python 3.10 is supported
    • Interface updates due to the d3rlpy version update
    • Added a DecisionTransformer

v0.15.0

30 Nov 13:43

  • Bert4Rec and SasRec interface naming was aligned with each other
  • Minor changes in sasrec_example regarding naming

v0.14.0

24 Nov 16:50

  • Introduced support for various hardware configurations including CPU, GPU, Multi-GPU and Clusters (based on PySpark)
  • Part of the library was moved to the experimental submodule for further stabilizing and productizing
  • Preprocessing, splitters and metrics now support pandas
  • Introduced 2 SOTA models: BERT4Rec and SASRec transformers with online and offline inference

Let's start a new chapter of RePlay! πŸš€πŸš€πŸš€