@Lexsi-Labs

Lexsi.ai

Aligned and safe AI

https://www.lexsi.ai

Paris 🇫🇷 · Mumbai 🇮🇳 · London 🇬🇧

Lexsi Labs conducts frontier research in aligned and safe AI. Our goal is to build AI systems that are transparent, reliable, and value-aligned, combining interpretability, alignment, and governance to enable trustworthy intelligence at scale.

Research Focus

  • Aligned & Safe AI: Frameworks for self-monitoring, interpretable, and alignment-aware systems.
  • Explainability & Alignment: Faithful, architecture-agnostic interpretability and value-aligned optimization across tabular, vision, and language models.
  • Safe Behaviour Control: Techniques for fine-tuning, pruning, and behavioural steering in large models.
  • Risk & Governance: Continuous monitoring, drift detection, and fairness auditing for responsible deployment.
  • Tabular & LLM Research: Foundational work on tabular intelligence, in-context learning, and interpretable large language models.

Popular repositories

  1. TabTune

    TabTune: A Unified Library for Inference and Fine-Tuning Tabular Foundation Models

    Python · 65 stars · 5 forks

  2. Orion-MSP

    Python · 37 stars · 4 forks

  3. aligntune

    Aligntune: A Modular Toolkit for Post-Training Alignment of LLMs

    Python · 32 stars · 2 forks

  4. DLBacktrace

    DL Backtrace is a new explainability technique for deep learning models that works for any modality and model type.

    Python · 21 stars · 5 forks

  5. xai_evals

    Evaluation Metrics for Explainability Methods

    Python · 13 stars · 1 fork

  6. Orion-BiX

    Python · 10 stars · 2 forks


