liting1024/LLMAtKGE

Repository files navigation

LLMAtKGE

Large Language Models as Explainable Attackers against Knowledge Graph Embeddings

This repository provides the official implementation. It supports deletion and addition attacks with human-readable rationales, and provides ready-to-run scripts for reproducible experiments. For more details, please refer to our paper.

The repository is still being cleaned up; formatting will be refined after the paper is accepted.

Environment

We recommend Python 3.10 or later. To install the dependencies, run:

pip install -r requirements.txt

Preparation

The datasets are already included in kg/. We also provide LoRA weights for two datasets to aid reproduction.

Reproduction

You can find ready-to-run shell scripts in scripts/.

Deletion Attack

  1. hoa_sft.sh performs supervised fine-tuning (SFT) for knowledge alignment via triple classification.
  2. llm_del_filter.sh filters the candidate entities.
  3. llm_del.sh performs the deletion attack.
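Assuming the scripts are invoked from the repository root, the three steps above can be chained in order. The sketch below is a dry run that only prints each step; the actual `bash` invocations are left commented out, since dataset paths and GPU settings inside each script may need adjusting:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Deletion-attack pipeline: SFT alignment -> candidate filtering -> attack.
# Uncomment the `bash` line to actually execute each script inside the repo.
for step in hoa_sft.sh llm_del_filter.sh llm_del.sh; do
  echo "step: scripts/${step}"
  # bash "scripts/${step}"
done
```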

Addition Attack

  1. llm_add_filter.sh filters the candidate entities.
  2. llm_add.sh performs the addition attack.
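The addition attack follows the same pattern, minus the SFT step. As above, this is a dry-run sketch with the real invocations commented out:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Addition-attack pipeline: candidate filtering -> attack.
for step in llm_add_filter.sh llm_add.sh; do
  echo "step: scripts/${step}"
  # bash "scripts/${step}"
done
```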

Acknowledgements

We thank the authors of the following open-source projects for their contributions: AttributionAttack, KoPA, and KG-LLM.
