LaMP-Val: Large Language Models Empower Personalized Valuation in Auction

This repository contains the implementation of LaMP-Val, a framework that uses Large Language Models (LLMs) to incorporate personalized semantic preferences into users' valuation processes in auction scenarios.

Overview

LaMP-Val addresses a critical gap in auction research by focusing on personalized valuation rather than bidding strategies alone. Our theoretical and empirical analysis shows that valuation errors significantly degrade overall utility: a 1% valuation error can translate into roughly a 10% utility loss. Intuitively, utility is the margin between value and price, so a small relative error in valuation can become a much larger relative error in utility.
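As a back-of-the-envelope illustration (the numbers below are hypothetical, not the paper's setting), a 1% valuation error against a 10% utility margin eats 10% of the utility:

value, price = 100.0, 90.0
utility = value - price              # 10.0: the utility margin at stake
valuation_error = 0.01 * value       # 1.0: a 1% error in the valuation
print(valuation_error / utility)     # 0.1 -> 10% of the utility margin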

Key Features

  • Personalized Valuation: Captures individual user preferences from semantic descriptions
  • LLM Integration: Leverages fine-tuned language models for nuanced preference modeling
  • Novel Evaluation Metrics: Introduces Personalized Utility (PU) and Personalized Value (PV) metrics
  • Comprehensive Framework: End-to-end system from data processing to auction simulation

Architecture

LaMP-Val consists of three main components:

  1. Data Component: Constructs a novel dataset for LLM fine-tuning in personalized valuation modeling
  2. Learning Component: Implements diversity templates to enhance LLMs' capacity for modeling fine-grained personal valuation patterns
  3. Evaluation Component: Establishes a closed-loop system where LLM-generated valuations interact with bidding strategies and auction mechanisms

Installation

git clone https://github.com/sunjie279/LaMP-Val
cd LaMP-Val
pip install -r requirements.txt

Usage

Quick Start

bash script.sh

Dataset Construction

The framework uses the Epinions dataset as the primary data source and applies LLM-driven augmentation to create a comprehensive valuation dataset with:

  • 923 unique item types
  • 23,065 individual instances
  • Train/validation/test split ratio of 6:1:3
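A minimal Python sketch of the stated split. The function name, seed, and shuffling scheme are illustrative assumptions; the repository may ship its own split files:

import random

def split_6_1_3(instances, seed=42):
    """Shuffle and split a dataset into train/validation/test at a 6:1:3 ratio."""
    rng = random.Random(seed)
    data = list(instances)
    rng.shuffle(data)
    n_train = int(len(data) * 0.6)
    n_val = int(len(data) * 0.1)
    return data[:n_train], data[n_train:n_train + n_val], data[n_train + n_val:]

train, val, test = split_6_1_3(range(23_065))
print(len(train), len(val), len(test))  # 13839 2306 6920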

Model Training

LaMP-Val supports multiple base models:

  • LLaMA-3-8B-Instruct
  • Mistral-7B-Instruct
  • Custom fine-tuned models

The training process uses supervised fine-tuning with diversity instruction templates to enhance model robustness.
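A minimal sketch of how diversity instruction templates can be rendered into supervised fine-tuning pairs. The template wordings and field names below are illustrative assumptions; the actual templates ship with the repository:

import random

# Illustrative paraphrases only; the real diversity templates are in this repo.
TEMPLATES = [
    "User history:\n{history}\n\nEstimate this user's valuation of the item.\nItem: {item}\nValuation:",
    "Based on the reviews this user has written:\n{history}\nHow much would they value the item below?\nItem: {item}\nAnswer:",
    "Profile: {history}\nItem: {item}\nPredict the user's personal valuation:",
]

def build_sft_example(history, item, valuation, rng):
    """Render one supervised pair using a randomly chosen instruction template."""
    prompt = rng.choice(TEMPLATES).format(history=history, item=item)
    return {"prompt": prompt, "completion": f" {valuation}"}

rng = random.Random(0)
print(build_sft_example("Loved the ergonomic keyboard; hated the flimsy mouse.",
                        "mechanical keyboard", 85.0, rng))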

Evaluation

The framework includes:

  • Individual Pacing Algorithm for strategic bidding under budget constraints
  • Vickrey Auction Mechanism for fair auction simulation
  • Novel Metrics: Personalized Utility (PU) and Personalized Value (PV)
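Below is a minimal sketch of one sealed-bid Vickrey (second-price) round with a PU-style utility, assuming utility is the winner's personalized value minus the price paid. The exact PU/PV definitions and the Individual Pacing Algorithm live in the repository code:

def vickrey_round(bids):
    """Sealed-bid second-price auction: the highest bidder wins and pays
    the second-highest bid (zero if there are no rivals)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]] if len(ranked) > 1 else 0.0
    return winner, price

bids = {"alice": 72.0, "bob": 65.0, "carol": 80.0}
personal_value = {"alice": 75.0, "bob": 60.0, "carol": 90.0}
winner, price = vickrey_round(bids)
print(winner, personal_value[winner] - price)  # carol 18.0 (value 90 - price 72)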

Results

LaMP-Val demonstrates superior performance compared to baseline methods:

Model      PU ↑    PV ↑      Weighted F1 ↑   MAE ↓   RMSLE ↓
LLaMA      -1072    92787    0.6493          2251    2.6781
Mistral     1199    84231    0.6692          2463    2.5653
GPT-3.5     2231   100680    0.8652          2431    2.1146
GPT-4        896    79488    0.8784          2203    1.7756
LaMP-Val    5872   102004    0.9084           536    0.4818

Key Contributions

  1. Novel Problem Formulation: First systematic approach to text-based personalized valuation in auctions
  2. Comprehensive Dataset: Addresses value-price paradox, preference distribution skewness, and rationale absence
  3. Innovative Metrics: Introduces PU and PV for personalized auction evaluation
  4. Strong Empirical Results: Significant improvements in both valuation accuracy and profit generation

Privacy Considerations

LaMP-Val prioritizes privacy by:

  • Supporting local deployment of open-source models
  • Implementing data desensitization processes
  • Avoiding dependency on cloud-based APIs for sensitive auction data
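A minimal sketch of local deployment with Hugging Face Transformers, so that no auction data leaves the machine. The model ID and prompt are illustrative (LLaMA-3 weights are gated on the Hub and require access approval):

from transformers import AutoModelForCausalLM, AutoTokenizer

# All inference below runs on local hardware; nothing is sent to a cloud API.
model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

prompt = "User profile: ...\nItem: ...\nPredict the user's valuation:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output[0], skip_special_tokens=True))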

Limitations and Future Work

  • Extension to broader open-source auction mechanisms
  • Development of higher-quality, semantically rich datasets
  • Integration of adversarial training for robustness against prompt injection attacks

Citation

@inproceedings{sun2025lamp,
  title={LaMP-Val: Large Language Models Empower Personalized Valuation in Auction},
  author={Sun, Jie and Zhang, Tianyu and Jiang, Houcheng and Huang, Kexin and others},
  booktitle={Findings of the 2025 Conference on Empirical Methods in Natural Language Processing},
  year={2025}
}

License

This project is licensed under the MIT License - see the LICENSE file for details.

Acknowledgments

This research is supported by the National Natural Science Foundation of China (92270114, 62302321).

Contact

For questions and issues, please contact the corresponding authors or open an issue in this repository.
