Large Language Model based Multi-Agents: A Survey of Progress and Challenges
Updated Apr 24, 2024
An extensible benchmark for evaluating large language models on planning
Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache
Awesome LLM Self-Consistency: a curated list of Self-consistency in Large Language Models
The data and implementation for the experiments in the paper "Flows: Building Blocks of Reasoning and Collaborating AI".
A framework for evaluating the effectiveness of chain-of-thought reasoning in language models.
Belief-Bias evaluation of local LLMs
Analysis of the Impact of Markdown Standardization on Cognitive Load and Task Performance
Heterogeneous Causal Analysis for Anxiety-Intervention Subgroups
[ACL 2025] Learning to Reason Over Time: Timeline Self-Reflection for Improved Temporal Reasoning in Language Models
Participating in Explainable AI for Educational Question-Answering with Symbolic Reasoning
Mixture-of-Experts Framework for Explaining Anxiety States
An estimated confidence measure for the non-hallucinatory nature of outputs generated by Large Language Models.
Advanced Data Analysis with Causality and Reinforcement Learning
Robust Analysis of Anxiety Interventions Using Missing-Data Handling Techniques
Unified Pipeline for Financial Decision-Making with Multimodal, Decentralized Agents
Advanced Anxiety-Intervention Analysis with SHAP
A collection of ML-related active learning datasets, including algorithms, .ipynb pipelines, .py scripts, and curated, ethically aligned synthetic data.
Anxiety-Intervention Analysis with Causal Discovery
Latent-Explorer is the Python implementation of the framework proposed in the paper "Unveiling LLMs: The Evolution of Latent Representations in a Dynamic Knowledge Graph".