Stars
DITING: A Multi-Agent Evaluation Framework for Benchmarking Web Novel Translation
This is the repo for developing reasoning models in the financial domain, aiming to enhance models' capabilities in handling financial reasoning tasks.
A novel medical large language model family with 13B/70B parameters, achieving SOTA performance on various medical tasks
Models, data, and codes for the paper: MetaAligner: Towards Generalizable Multi-Objective Alignment of Language Models
ICE-PIXIU: A Cross-Lingual Financial Large Language Model Framework
LAiW: A Chinese Legal Large Language Models Benchmark
An LLM training and evaluation benchmark for credit scoring
The official repo of TimeLlama, an instruction-finetuned Llama2 series that improves complex temporal reasoning ability.
This repository introduces MentaLLaMA, the first open-source instruction following large language model for interpretable mental health analysis.
Shaping Language Models with Cognitive Insights
This repository introduces PIXIU, an open-source resource featuring the first financial large language models (LLMs), instruction tuning data, and evaluation benchmarks to holistically assess finan…
Code and data for the paper "Readability Controllable Biomedical Document Summarization" in the Findings of EMNLP 2022
SmoothNLP Financial Text Datasets (Public): public financial datasets for NLP research only
A multilingual instruction dataset on code, used to train large language models.
A repository of datasets in the domain of code for instruction fine-tuning.
CodePro: A Large-scale High-quality Codebase for Realistic
Databricks’ Dolly, a large language model trained on the Databricks Machine Learning Platform
⚡LLM Zoo is a project that provides data, models, and evaluation benchmarks for large language models.⚡
This codebase provides an implementation of the Select and Trade paper, which proposes a new paradigm for pair trading using hierarchical reinforcement learning. It includes the code for the …
Code and data for crosstalk (comic dialogue) text generation tasks, exploring whether large models and pre-trained language models can understand humor.