LLM Unlearning Should Be Form-Independent
- A GPU with at least 48GB of memory is required.
- For the environment, run:
conda create -n ort python=3.10
pip install -r requirements.txt

We provide run_expr_lora.py to run the experiments and summarize.py to summarize the results.
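As a quick sanity check (a minimal sketch, not part of the provided scripts), you can confirm that PyTorch sees a CUDA GPU with enough memory:

```python
# Minimal sanity check (not part of the provided scripts): verify that PyTorch
# can see a CUDA GPU and report its total memory (>= 48 GB is recommended).
import torch

assert torch.cuda.is_available(), "No CUDA GPU detected."
total_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
print(f"GPU: {torch.cuda.get_device_name(0)}, memory: {total_gb:.1f} GB")
```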
All datasets are stored in LLaMA-Factory/data/. You may need to download the datasets before running the experiments.
Download the ORT dataset from [Baidu Netdisk] or [Google Drive].
Additionally, ROCR requires the projection matrices of the target model layers to run the experiments. Our pre-computed matrices for llama3-8b-instruct can be downloaded from [Baidu Netdisk] or [Google Drive].
The complete data directory should look like this:
LLaMA-Factory/data
├── dataset_info.json
├── llama3-8b-instruct
│ ├── null_space_project_layer_4.pt
│ └── null_space_project_layer_5.pt
├── mistral-7b-instruct
└── ORT
└── Target
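Before launching anything, you can optionally confirm that this layout is in place. The snippet below is a sketch only; adjust the expected entries if you use a different model.

```python
# Sketch: confirm the expected data layout exists before running experiments.
# Paths mirror the directory tree above; adjust them if you use a different model.
from pathlib import Path

import torch

data = Path("LLaMA-Factory/data")
expected = [
    data / "dataset_info.json",
    data / "llama3-8b-instruct" / "null_space_project_layer_4.pt",
    data / "llama3-8b-instruct" / "null_space_project_layer_5.pt",
    data / "ORT",
    data / "Target",
]
for path in expected:
    status = "OK" if path.exists() else "MISSING"
    print(f"{status:8s} {path}")

# Optionally load one projection matrix to make sure the download is intact.
proj = torch.load(data / "llama3-8b-instruct" / "null_space_project_layer_4.pt",
                  map_location="cpu")
print(type(proj))
```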
Update MODEL_PATHS in expr_config_global.py to point to your local model paths before running the experiments. You can also modify the hyperparameters for each method in ORT/LLaMA-Factory/expr_config to suit your needs.
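For illustration only (the actual variable layout in expr_config_global.py may differ, and the keys and paths below are hypothetical), the mapping typically looks something like this:

```python
# Hypothetical illustration only -- check expr_config_global.py for the actual
# variable layout. The idea: map each model key used by the scripts (e.g. the
# value passed to --model) to a local checkpoint path or Hugging Face model ID.
MODEL_PATHS = {
    "llama3": "/path/to/Meta-Llama-3-8B-Instruct",   # placeholder path
    "mistral": "/path/to/Mistral-7B-Instruct",       # placeholder path and key
}
```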
Use run_expr_lora.py to run the experiments. For instance, to run the GA method on the ORT dataset, you may run the following command:
cd LLaMA-Factory
python run_expr_lora.py \
--type=ga_lora \
--gpu=0 \
--model=llama3 \
--end_idx=100

Similarly, you can change --type to npo_lora / rt_lora / dpo_lora / rocr to run the corresponding methods. Change --type to original to evaluate the performance of the base model before unlearning.
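To run several methods back to back, a small driver script like the following works. It is a sketch only: it simply shells out to run_expr_lora.py with the flags documented above.

```python
# Sketch: run several unlearning methods sequentially by shelling out to
# run_expr_lora.py with the flags documented above. Run from LLaMA-Factory/.
import subprocess

METHODS = ["ga_lora", "npo_lora", "rt_lora", "dpo_lora", "rocr"]

for method in METHODS:
    subprocess.run(
        ["python", "run_expr_lora.py",
         f"--type={method}", "--gpu=0", "--model=llama3", "--end_idx=100"],
        check=True,
    )
```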
Use summarize.py to generate result summaries. For instance, to summarize the results of NPO, you may run the following command:
cd LLaMA-Factory
python summarize.py --model=llama3 --type=npo_lora

This will automatically generate a CSV file with the summarized results for easy viewing. If the CSV file contains results for the original model, the script will automatically calculate the differences from that baseline.
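If you want to inspect the summary programmatically, a short pandas snippet is enough. This is a sketch: the file name below is a placeholder, since the exact path and columns depend on what summarize.py writes.

```python
# Sketch: load the summary CSV produced by summarize.py with pandas.
# The file name below is a placeholder -- use the path summarize.py actually writes.
import pandas as pd

df = pd.read_csv("npo_lora_summary.csv")  # placeholder file name
print(df.to_string(index=False))
```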
The code we use to conduct our experiments and part of our ORT dataset are based on RWKU and AlphaEdit. We thank them for their contributions!
If you find this work helpful for your research, please cite it:
@misc{ye2025llmunlearningformindependent,
title={LLM Unlearning Should Be Form-Independent},
author={Xiaotian Ye and Mengqi Zhang and Shu Wu},
year={2025},
eprint={2506.07795},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2506.07795},
}