Fine-Grained Emotion Recognition via In-Context Learning

This repository contains the code for the paper "Fine-Grained Emotion Recognition via In-Context Learning", published at CIKM 2025. For more detailed results and analysis, please refer to the paper.

Table of Contents

  • Environment Setup
  • Data Download
  • Data Setup
  • Configuration
  • Running Instructions
  • Performance
  • Citation

Environment Setup

Prerequisites

  • Python 3.7+
  • CUDA-capable GPU (recommended for local models)

Installation

  1. Clone this repository:
git clone https://github.com/zhouzhouyang520/EICL.git
cd EICL
  2. Install the required dependencies:
pip install -r requirements.txt
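
Optionally, the dependencies can be installed in an isolated environment first; a minimal sketch using Python's standard venv module:

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt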

Data Download

Download the required data from one of the following sources:

Data Setup

  1. Extract data: After downloading, extract data.zip and place it in the EICL root directory.

  2. EmpatheticIntents (Optional, only needed if you need to rebuild data):

    • Extract EmpatheticIntents.zip and place it in the EICL root directory.
  3. Build the models folder with the following structure (a shell sketch of these setup steps follows this list):

    models/
    ├── LLMs/
    │   ├── Llama3.1_8b/
    │   ├── Mistral_Nemo/
    │   └── Phi3.5_mini/
    └── pre_trained_models/  (Optional, only needed if you need to rebuild data)
        ├── EmpatheticIntents/
        ├── all_mpnet_base_v2/
        └── roberta_large_goEmotions/
    
    • LLMs: Download three large language models and place them in models/LLMs/ with the exact folder names: Llama3.1_8b, Mistral_Nemo, and Phi3.5_mini.

    • pre_trained_models (Optional): If you need to rebuild data, download three pre-trained models:

      • EmpatheticIntents
      • all_mpnet_base_v2
      • roberta_large_goEmotions
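
A minimal shell sketch of the setup steps above, assuming data.zip (and, optionally, EmpatheticIntents.zip and the downloaded model folders) is already in the EICL root directory:

# Run from the EICL root directory.
unzip data.zip                       # required data
unzip EmpatheticIntents.zip          # optional: only needed to rebuild data

# Folder names under models/ must match exactly.
mkdir -p models/LLMs/Llama3.1_8b models/LLMs/Mistral_Nemo models/LLMs/Phi3.5_mini
mkdir -p models/pre_trained_models/EmpatheticIntents \
         models/pre_trained_models/all_mpnet_base_v2 \
         models/pre_trained_models/roberta_large_goEmotions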

Configuration

Before sending requests to Claude or ChatGPT, please fill in the “url” and “api_key” fields in configs/run.json.
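
As a minimal sketch, the relevant fields in configs/run.json should look something like the following (the values are placeholders to be replaced with your own endpoint and key; any other fields already present in the file should be left as they are):

{
  "url": "https://your-api-endpoint/v1/chat/completions",
  "api_key": "YOUR_API_KEY"
}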

Running Instructions

Basic Usage

Run experiments using the main script:

python main.py \
  --auxiliary_model EI \
  --experiment_type EICL \
  --dataset ED \
  --models Phi-3.5

Arguments

  • --auxiliary_model: Auxiliary model type, choices: EI, GE (can specify multiple)
  • --experiment_type: Experiment type, choices: baseline, ICL, EICL, zero-shot (can specify multiple)
  • --dataset: Dataset name, choices: ED, EDOS, GE, EI (can specify multiple)
  • --models: LLM model names, e.g., Phi-3.5, Llama3.1_8b, Mistral-Nemo, ChatGPT, Claude, gpt-4o-mini (can specify multiple)

Examples

Run EICL experiment with Phi-3.5 on ED dataset:

python main.py --auxiliary_model GE --experiment_type EICL --dataset ED --models Phi-3.5

Run multiple experiments:

python main.py \
  --auxiliary_model EI GE \
  --experiment_type ICL EICL \
  --dataset ED EDOS \
  --models Phi-3.5 Llama3.1_8b

Run with GPU:

CUDA_VISIBLE_DEVICES=0 python main.py \
  --auxiliary_model GE \
  --experiment_type EICL \
  --dataset EDOS \
  --models Phi-3.5

Using the provided shell script:

sh eicl.sh 0  # 0 is the GPU ID

Performance

Important Note: Due to the accidental deletion of the original data and code, we reconstructed the codebase in a different environment. The current implementation therefore differs from the original paper in the following aspects:

  • Data Re-splitting: The datasets were re-partitioned, which may cause slight performance differences.

  • Updated Models: The original EICL experiments used Claude-Haiku and ChatGPT-3.5-Turbo, which are now outdated. To provide more up-to-date and comparable baselines, we report results with the newer Claude-Haiku-4.5 and GPT-4o-mini instead.

Despite these differences, EICL continues to deliver excellent performance, demonstrating its robustness and effectiveness.

The following tables present the performance results (Accuracy and Macro F1) on different datasets using different auxiliary models. All results are reported as percentages.

Results with EI Auxiliary Model

Dataset Metric | Phi-3.5-mini            | Mistral-Nemo            | Llama3.1-8b             | Claude-Haiku-4.5        | GPT-4o-mini
               | Zero-shot  ICL    EICL  | Zero-shot  ICL    EICL  | Zero-shot  ICL    EICL  | Zero-shot  ICL    EICL  | Zero-shot  ICL    EICL
EDOS    Acc    | 33.29      48.28  56.38 | 28.98      44.37  56.15 | 23.09      30.79  41.87 | 36.03      49.27  57.14 | 32.89      44.20  54.75
        F1     | 34.39      47.84  56.37 | 26.69      44.67  56.24 | 21.31      34.08  45.33 | 34.21      48.29  57.04 | 30.19      44.79  54.27
ED      Acc    | 35.69      44.92  48.08 | 37.27      38.44  45.08 | 32.42      34.02  38.97 | 46.03      50.20  51.36 | 39.30      47.04  50.31
        F1     | 34.04      44.30  47.94 | 34.73      38.48  44.94 | 25.71      36.27  42.20 | 43.76      49.32  51.09 | 35.24      45.73  49.72
GE      Acc    | 27.51      40.38  35.85 | 30.62      38.05  30.83 | 23.94      22.98  27.57 | 34.07      44.65  44.16 | 28.73      45.44  45.93
        F1     | 27.84      33.34  32.79 | 26.46      31.45  28.90 | 22.15      20.46  27.43 | 31.51      37.36  36.36 | 26.08      36.67  36.96

Results with GE Auxiliary Model

Dataset Metric | Phi-3.5-mini            | Mistral-Nemo            | Llama3.1-8b             | Claude-Haiku-4.5        | GPT-4o-mini
               | Zero-shot  ICL    EICL  | Zero-shot  ICL    EICL  | Zero-shot  ICL    EICL  | Zero-shot  ICL    EICL  | Zero-shot  ICL    EICL
EDOS    Acc    | 52.46      60.27  58.13 | 47.54      52.08  52.96 | 38.34      35.68  50.94 | 51.07      63.05  69.48 | 52.84      61.41  65.45
        F1     | 50.41      59.85  59.79 | 45.07      51.20  57.24 | 38.86      39.23  55.54 | 49.36      62.44  69.11 | 51.29      60.89  65.80
ED      Acc    | 45.10      47.13  47.40 | 48.76      45.07  51.93 | 44.87      30.79  45.70 | 55.06      57.26  60.82 | 52.66      57.66  59.25
        F1     | 46.10      48.33  50.26 | 49.36      46.69  53.46 | 45.94      35.65  51.05 | 56.26      57.94  61.57 | 54.07      58.79  60.54
EI      Acc    | 47.47      63.86  60.56 | 53.33      61.55  65.30 | 42.05      41.93  51.03 | 58.01      68.59  70.09 | 55.58      68.34  68.34
        F1     | 46.62      64.34  61.89 | 52.23      62.03  66.39 | 43.30      48.32  57.48 | 57.38      69.40  70.70 | 54.64      68.44  68.83

Citation

If you find this project helpful, please cite our paper:

@inproceedings{Ren_2025,
  series={CIKM '25},
  title={Fine-Grained Emotion Recognition via In-Context Learning},
  url={http://dx.doi.org/10.1145/3746252.3761319},
  DOI={10.1145/3746252.3761319},
  booktitle={Proceedings of the 34th ACM International Conference on Information and Knowledge Management},
  publisher={ACM},
  author={Ren, Zhaochun and Yang, Zhou and Ye, Chenglong and Sun, Haizhou and Chen, Chao and Zhu, Xiaofei and Liao, Xiangwen},
  year={2025},
  month=nov,
  pages={2503--2513},
  collection={CIKM '25}
}
