RLE in trip2seq #32

@jlliRUC


Hi boathit,

Thanks for this inspirational work!

I noticed that you apply RLE (Run-Length Encoding) to the cell tokens instead of feeding the full token series as the representation of a trajectory, which you didn't mention in your paper. As far as I know, this is not a standard way to compress trajectory data: if you keep only the single values without the run counts, how can you recover the trajectory from them? This compression may therefore cause some information loss during training. Indeed, when I compared model_rle with model_full, model_full surprisingly outperforms model_rle on all three tasks, especially self-similarity : )
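
For concreteness, here is a minimal sketch (plain Python; the function names are mine, not from the repo) contrasting true run-length encoding with the counts-dropped variant I'm describing above. With the counts, the encoding is losslessly invertible; without them, the original cell sequence cannot be reconstructed:

```python
from itertools import groupby

def rle(tokens):
    """True run-length encoding: (token, run length) pairs, losslessly invertible."""
    return [(tok, len(list(run))) for tok, run in groupby(tokens)]

def rle_decode(pairs):
    """Expand (token, count) pairs back into the original sequence."""
    return [tok for tok, count in pairs for _ in range(count)]

def dedup(tokens):
    """Counts-dropped variant: keep one token per consecutive run.
    Lossy -- the run lengths (how long the trip dwelt in each cell) are gone."""
    return [tok for tok, _ in groupby(tokens)]

cells = [7, 7, 7, 12, 12, 9, 9, 9, 9, 12]
print(rle(cells))                        # [(7, 3), (12, 2), (9, 4), (12, 1)]
print(rle_decode(rle(cells)) == cells)   # True
print(dedup(cells))                      # [7, 12, 9, 12] -- not invertible
```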

In addition, by applying RLE to the cell tokens, trajectory length is reduced by a large margin. When I tried to apply t2vec to other datasets with longer trajectories, using the full trajectories instead of RLE, I couldn't handle trajectories longer than 400 points with the default model parameters (on a GPU server with 24 GB of graphics memory).
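
As a toy illustration of why the length drops so much (the trajectory below is synthetic, not from the repo's data): consecutive GPS fixes that fall inside the same grid cell collapse into a single token, so the deduplicated sequence can easily be several times shorter than the raw one:

```python
import random
from itertools import groupby

random.seed(0)
# Hypothetical trajectory: each of 100 visited cells is sampled 1-8 times
# in a row, mimicking repeated GPS fixes inside the same grid cell.
full = [cell for cell in range(100) for _ in range(random.randint(1, 8))]
short = [cell for cell, _ in groupby(full)]
print(len(full), len(short))  # the full sequence is several times longer
                              # than the deduplicated one (exactly 100 here)
```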

I hope this repo is still active : ) Thanks in advance.
