About "detection adjustment" in the line 339-360 of solver.py #14

@wuhaixu2016

Description

Since some researchers are confused about the "detection adjustment", we provide some clarification here.

(1) Why use "detection adjustment"?

First, I strongly suggest that researchers read the original paper (Xu et al., 2018), which gives a comprehensive explanation of this operation.

In our paper, we follow this convention for the following reasons:

  • Fair comparison: As we stated in the Implementation details section of our paper, the adjustment is a widely-used convention in time series anomaly detection. In particular, on the benchmarks used in our paper, previous methods all apply the adjustment operation during evaluation (Shen et al., 2020). Thus, we also adopt the adjustment for model evaluation.
  • Real-world meaning: One abnormal event typically causes a contiguous segment of abnormal time points. The adjustment therefore corresponds to the "abnormal event detection" task, which evaluates how well a model detects abnormal events from the whole record. This is a very meaningful task for real-world applications: once an abnormal event has been detected, we can send a worker to check that time segment for security.

In summary, you can view the adjustment as an "evaluation protocol" that measures the capability of models in "abnormal event detection".
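
For concreteness, here is a minimal sketch of the adjustment step, assuming binary arrays `gt` (ground truth) and `pred` (raw point-wise predictions); the function name and the arrays are illustrative, not the exact code in solver.py. The idea is that if any point inside a ground-truth anomaly segment is predicted as anomalous, the whole segment is credited as detected, and precision/recall/F1 are then computed on the adjusted predictions.

```python
import numpy as np

def adjust_predictions(gt: np.ndarray, pred: np.ndarray) -> np.ndarray:
    """Point adjustment: if any point inside a ground-truth anomaly
    segment is predicted as anomalous, mark the whole segment as detected."""
    adjusted = pred.copy()
    i, n = 0, len(gt)
    while i < n:
        if gt[i] == 1:
            # locate the end of this ground-truth anomaly segment
            j = i
            while j < n and gt[j] == 1:
                j += 1
            # credit the whole segment if the model fired anywhere inside it
            if adjusted[i:j].any():
                adjusted[i:j] = 1
            i = j
        else:
            i += 1
    return adjusted

# Example: one anomaly segment at indices 2-5; the model only fires at index 4.
gt   = np.array([0, 0, 1, 1, 1, 1, 0, 0])
pred = np.array([0, 0, 0, 0, 1, 0, 0, 1])
adj  = adjust_predictions(gt, pred)
# adj == [0, 0, 1, 1, 1, 1, 0, 1]; F1 is then computed between adj and gt.
```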

(2) We have provided a comprehensive and fair comparison in our paper.

  • All the baselines compared in our paper are evaluated with this "adjustment" as well. Note that this evaluation is widely used in previous papers for experiments on SMD, SWaT, and other benchmarks. Thus, the comparison is fair.
  • For a comprehensive analysis, we also provide a benchmark on the UCR dataset (from the KDD Cup) in Appendix L. The anomalies in this dataset are mostly recorded at only a single time point. Thus, if you want a comparison on single-time-point anomaly detection, this dataset can provide some intuition.

If you still have questions about the adjustment, you are welcome to email me to discuss further ([email protected]).
