This repository contains the implementation of LaMP-Val, a framework that uses Large Language Models (LLMs) to incorporate users' personalized semantic preferences into their valuation process in auction scenarios.
LaMP-Val addresses a critical gap in auction research by focusing on personalized valuation rather than only bidding strategies. Our theoretical and empirical analysis shows that valuation errors significantly impact overall utility: a 1% valuation error can lead to roughly a 10% utility loss.
- Personalized Valuation: Captures individual user preferences from semantic descriptions
- LLM Integration: Leverages fine-tuned language models for nuanced preference modeling
- Novel Evaluation Metrics: Introduces Personalized Utility (PU) and Personalized Value (PV) metrics
- Comprehensive Framework: End-to-end system from data processing to auction simulation
LaMP-Val consists of three main components:
- Data Component: Constructs a novel dataset for LLM fine-tuning in personalized valuation modeling
- Learning Component: Implements diversity templates to enhance LLMs' capacity for modeling fine-grained personal valuation patterns
- Evaluation Component: Establishes a closed-loop system where LLM-generated valuations interact with bidding strategies and auction mechanisms (a one-round sketch follows this list)
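To make the closed loop concrete, the sketch below walks one auction round through the three components. It is purely illustrative: the function names (`llm_valuation`, `vickrey_clear`), the competitor bid, and the returned valuation are assumptions, not the repository's actual API.

```python
# Minimal, self-contained sketch of one pass through the closed loop.
# All names and numbers here are illustrative assumptions, not LaMP-Val's API.

def llm_valuation(user_profile: str, item_description: str) -> float:
    """Stand-in for the fine-tuned LLM: map semantic text to a valuation."""
    return 120.0  # in practice this comes from the Learning Component

def vickrey_clear(bids: dict) -> tuple:
    """Second-price auction: highest bidder wins, pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Data Component supplies the text; Learning Component supplies the valuation.
value = llm_valuation("user who favors durable hiking gear", "ultralight tent, 2-person")

# Evaluation Component: bid (here truthfully), clear the auction, score the outcome.
bids = {"target_user": value, "competitor": 95.0}
winner, price = vickrey_clear(bids)
utility = value - price if winner == "target_user" else 0.0
print(winner, price, utility)
```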
```bash
git clone https://github.com/sunjie279/LaMP-Val
cd LaMP-Val
pip install -r requirements.txt
```

```bash
bash script.sh
```

The framework uses the Epinions dataset as the primary data source and applies LLM-driven augmentation to create a comprehensive valuation dataset with:
- 923 unique item types
- 23,065 individual instances
- Train/validation/test split ratio of 6:1:3 (a split sketch follows this list)
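As a concrete illustration of the 6:1:3 ratio over the 23,065 instances, the snippet below performs the split on placeholder records. The field names are hypothetical; the real schema comes from the released dataset files.

```python
import random

# Illustrative only: field names are placeholders, not the dataset's real schema.
records = [
    {"user_history": f"review text {i}", "item": f"item {i % 923}", "valuation": 0.0}
    for i in range(23_065)
]

random.seed(0)
random.shuffle(records)

# 6:1:3 train/validation/test split over the 23,065 instances.
n = len(records)
n_train = int(n * 6 / 10)
n_val = int(n * 1 / 10)
train, val, test = records[:n_train], records[n_train:n_train + n_val], records[n_train + n_val:]
print(len(train), len(val), len(test))   # roughly 13839 / 2306 / 6920
```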
LaMP-Val supports multiple base models:
- LLaMA-3-8B-Instruct
- Mistral-7B-Instruct
- Custom fine-tuned models
The training process uses supervised fine-tuning with diversity instruction templates to enhance model robustness.
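Conceptually, the diversity templates phrase the same (user history, item, valuation) example in several different instruction formats so the model learns the valuation task rather than a single prompt wording. The snippet below is a hedged sketch of that idea; the template texts are invented here and do not reproduce the repository's actual templates.

```python
import random

# Hypothetical diversity instruction templates (not the repository's actual wording).
# Each template asks the same question in a different phrasing so the fine-tuned
# model learns the valuation task rather than a single prompt format.
TEMPLATES = [
    "Given the user's reviews:\n{history}\n\nEstimate how much this user values:\n{item}",
    "User preference summary:\n{history}\n\nItem for sale:\n{item}\n\nPredict the user's personal valuation.",
    "Based on the following purchase history, output a valuation for the item.\n"
    "History: {history}\nItem: {item}",
]

def build_sft_example(history: str, item: str, valuation: float, rng: random.Random) -> dict:
    """Format one supervised fine-tuning pair with a randomly chosen template."""
    prompt = rng.choice(TEMPLATES).format(history=history, item=item)
    return {"prompt": prompt, "completion": str(valuation)}

rng = random.Random(0)
example = build_sft_example("Loves mechanical keyboards; dislikes loud switches.",
                            "Low-profile silent mechanical keyboard", 89.0, rng)
print(example["prompt"])
print(example["completion"])
```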
The framework includes:
- Individual Pacing Algorithm for strategic bidding under budget constraints (a simulation sketch follows this list)
- Vickrey Auction Mechanism for fair auction simulation
- Novel Metrics: Personalized Utility (PU) and Personalized Value (PV)
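The sketch below shows, under stated assumptions, how these pieces fit together over repeated rounds: bids are the LLM valuation scaled by a budget-pacing multiplier, the auction clears at the second-highest bid, and PU/PV are computed here as the surplus and the total personalized value of items won. The pacing update rule, the valuations, and the competitor bids are illustrative; the exact algorithm and metric definitions are given in the paper.

```python
# Illustrative multi-round loop: budget pacing + Vickrey clearing + PU/PV accounting.
# The pacing update rule, valuations, and competitor prices below are assumptions.

valuations = [120.0, 60.0, 140.0, 80.0, 100.0]   # LLM-predicted personalized values
competitor_bids = [90.0, 70.0, 150.0, 50.0, 95.0]

budget = 200.0
alpha = 1.0            # pacing multiplier: bid = alpha * valuation
spent = 0.0
pu = 0.0               # assumed: Personalized Utility = sum of (value - payment) on wins
pv = 0.0               # assumed: Personalized Value = sum of values of items won

for value, rival in zip(valuations, competitor_bids):
    bid = min(alpha * value, budget - spent)     # never bid beyond the remaining budget
    if bid > rival:                              # win the Vickrey auction ...
        payment = rival                          # ... and pay the second-highest bid
        spent += payment
        pu += value - payment
        pv += value
    # Simple pacing feedback: slow down as the budget is consumed.
    remaining_frac = max(budget - spent, 0.0) / budget
    alpha = 0.5 + 0.5 * remaining_frac

print(f"spent={spent:.0f}  PU={pu:.0f}  PV={pv:.0f}")
```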
LaMP-Val demonstrates superior performance compared to baseline methods:
| Model | PU ↑ | PV ↑ | Weighted F1 ↑ | MAE ↓ | RMSLE ↓ |
|---|---|---|---|---|---|
| LLaMA | -1072 | 92787 | 0.6493 | 2251 | 2.6781 |
| Mistral | 1199 | 84231 | 0.6692 | 2463 | 2.5653 |
| GPT-3.5 | 2231 | 100680 | 0.8652 | 2431 | 2.1146 |
| GPT-4 | 896 | 79488 | 0.8784 | 2203 | 1.7756 |
| LaMP-Val | 5872 | 102004 | 0.9084 | 536 | 0.4818 |
- Novel Problem Formulation: First systematic approach to text-based personalized valuation in auctions
- Comprehensive Dataset: Addresses value-price paradox, preference distribution skewness, and rationale absence
- Innovative Metrics: Introduces PU and PV for personalized auction evaluation
- Strong Empirical Results: Significant improvements in both valuation accuracy and profit generation
LaMP-Val prioritizes privacy by:
- Supporting local deployment of open-source models (see the sketch after this list)
- Implementing data desensitization processes
- Avoiding dependency on cloud-based APIs for sensitive auction data
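For the local-deployment point above, a fine-tuned open-source checkpoint can be served entirely on local hardware, for example with Hugging Face transformers. This is a generic sketch, not a documented LaMP-Val entry point; the checkpoint path is a placeholder.

```python
# Generic local-inference sketch (placeholder path; not a documented LaMP-Val entry point).
from transformers import pipeline

# Load a locally stored, fine-tuned checkpoint so no auction data leaves the machine.
generator = pipeline(
    "text-generation",
    model="./checkpoints/lamp-val-llama3-8b",   # hypothetical local path
    device_map="auto",
)

prompt = (
    "User preference summary:\nPrefers energy-efficient appliances.\n\n"
    "Item for sale:\nInverter refrigerator, 300L.\n\n"
    "Predict the user's personal valuation."
)
output = generator(prompt, max_new_tokens=16, do_sample=False)
print(output[0]["generated_text"])
```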
- Extension to a broader range of open-source auction mechanisms
- Development of higher-quality, semantically rich datasets
- Integration of adversarial training for robustness against prompt injection attacks
```bibtex
@article{sun2025lamp,
  title={LaMP-Val: Large Language Models Empower Personalized Valuation in Auction},
  author={Sun, Jie and Zhang, Tianyu and Jiang, Houcheng and Huang, Kexin and others},
  journal={Findings of The 2025 Conference on Empirical Methods in Natural Language Processing},
  year={2025}
}
```

This project is licensed under the MIT License - see the LICENSE file for details.
This research is supported by the National Natural Science Foundation of China (92270114, 62302321).
For questions and issues, please contact the corresponding authors or open an issue in this repository.