ihanwen99/SEFRQO


SEFRQO: LLM-Based Query Optimization

We are currently preparing the final version of the code for the camera-ready submission, so the fine-tuning part is still incomplete. If you would like to replay the experiments, please use the following model checkpoints: https://drive.google.com/drive/folders/13IcyAW-zPrhVkQFZ6Ho45htY_zhGfWfD?usp=sharing

This branch contains the implementation of SEFRQO. The experimental data is available in the exp_original_results folder.

You can reproduce the experiments described in our paper using either:

  • Local LLMs (as specified in the paper), or
  • APIs provided by services such as OpenAI or DeepSeek.

⚠️ Make sure to adjust any necessary paths in the Python scripts before running the experiments.
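If you take the API route instead of a local LLM, the request is a standard chat-completion payload. The sketch below is illustrative only: the function name `build_hint_prompt`, the prompt wording, and the example hint format are assumptions, not code from this repository.

```python
# Hedged sketch: assembling a chat-style prompt that pairs retrieved
# (query, hint) examples with a new target query, as one might send to an
# OpenAI- or DeepSeek-compatible endpoint. All names here are illustrative.
import json


def build_hint_prompt(sql: str, examples: list[dict]) -> list[dict]:
    """Build a chat message list from retrieved examples plus the target SQL."""
    context = "\n\n".join(
        f"-- Query:\n{ex['sql']}\n-- Hint:\n{ex['hint']}" for ex in examples
    )
    return [
        {
            "role": "system",
            "content": (
                "You are a query optimizer. Given example queries with plan "
                "hints, emit a hint for the new query."
            ),
        },
        {"role": "user", "content": f"{context}\n\n-- New query:\n{sql}\n-- Hint:"},
    ]


messages = build_hint_prompt(
    "SELECT * FROM t WHERE a = 1;",
    [{"sql": "SELECT * FROM t WHERE b = 2;", "hint": "/*+ SeqScan(t) */"}],
)
print(json.dumps(messages, indent=2))
```

The resulting `messages` list can be passed as-is to any chat-completions client; only the endpoint URL and API key differ between providers.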

Environment

  • Python 3.10.16

Example: Replay Static CEB Workload with 3B Model

To run the static CEB workload using the 3B model:

```shell
python /your_path/src/local_llm/local_version_general_online_record_sft_3B.py \
    > /your_path/test.log 2>&1
```

About

A Self-Evolving Fine-Tuned RAG-Based Query Optimizer
