
Jada42/README.md

Well, hello there! πŸ‘‹

I do self-driven AI research and experiment with small language model architectures. Fan of hybrid models and gating.

πŸ”¬ Research Interests

  • AI Alignment & Transparency
  • Value-Aligned AI
  • (Hybrid) Language Model Architectures (SSMs, Hopfield networks, latent transformers)
  • Time-Series Modeling of Psychological Behavior
  • Autoregressive Time-Series Simulation of Astronaut Stress Data, incl. Temporal Fusion Transformers

Deployed

  • 🎸 GearChat - a RAG-powered chatbot that lets musicians chat with their instrument manuals (synthesizers, drum machines, etc.)
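The retrieval half of a RAG chatbot like GearChat can be illustrated with a toy, dependency-free sketch: rank manual chunks against the user's question, then build the prompt from the best match. Everything here (the bag-of-words "embedding", the sample manual snippets) is a hypothetical stand-in; a real deployment would use dense embeddings with a vector store such as FAISS or ChromaDB.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding': a term-frequency counter."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=1):
    """Return the k manual chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

# Hypothetical manual snippets, standing in for chunked PDF manuals.
manual_chunks = [
    "To save a patch, hold WRITE and turn the value dial.",
    "The filter cutoff is controlled by knob 3 on the synthesizer.",
    "Connect the drum machine via MIDI channel 10.",
]
context = retrieve("how do I save a patch on my synth?", manual_chunks)
prompt = f"Answer using this manual excerpt:\n{context[0]}"
```

The retrieved excerpt is then prepended to the user question and sent to the LLM, which keeps answers grounded in the actual manual.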

πŸ“š Recent Work

  • 🌌 CALM-Seal - A hybrid implementation of Shao et al.'s Continuous Autoregressive Language Modeling (CALM), fused with state space models (SSMs), Hopfield networks, gating, and rectified flow.

  • 🧠 Hamba - A novel Hopfield + Mamba inspired hybrid architecture combining SSMs and Hopfield networks with attention (BPE PPL of 16.2 on FineWiki), two hierarchical reasoning passes (Segmented Reasoning), and rectified flow.

  • πŸ›οΈ MSC: RLAIF/RLHF for Public Value Alignment Enhancing Transparency in LLMs - Master's thesis on aligning Mistral-7b with democratic transparency values (VU Amsterdam, 2025)

  • πŸ“Š Time-Varying VAR/GAM and Temporal Fusion Models - Extension of my BSc extrapolated to astronauts on a 1 year mission. Predicting psychological behavior from simulated Astronaut data over a year with 3% shock event data (positive & negative) (VU University Amsterdam, 2024)

  • More can be found in my repos :-)
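The gating idea recurring in these hybrids can be sketched in a few lines: a sigmoid gate blends, per token and per channel, the output of a diagonal SSM recurrence with a modern-Hopfield-style pattern readout. This is a minimal illustrative toy under my own assumptions (dimensions, gate parameterization, the simplified recurrence), not the actual CALM-Seal or Hamba code.

```python
import math
import random

random.seed(0)
d = 4  # toy model width

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ssm_step(h, x, a=0.9):
    """Diagonal state-space recurrence: h_t = a * h_{t-1} + x_t."""
    return [a * hi + xi for hi, xi in zip(h, x)]

def hopfield_readout(x, memory, beta=1.0):
    """Modern-Hopfield-style retrieval: softmax(beta * <x, m>) over stored patterns."""
    scores = [beta * sum(xi * mi for xi, mi in zip(x, m)) for m in memory]
    mx = max(scores)
    w = [math.exp(s - mx) for s in scores]
    z = sum(w)
    return [sum(wi * m[i] for wi, m in zip(w, memory)) / z for i in range(d)]

def gated_hybrid(seq, memory, gate_w):
    """Blend the SSM path and the Hopfield path with a learned sigmoid gate."""
    h = [0.0] * d
    out = []
    for x in seq:
        h = ssm_step(h, x)
        hop = hopfield_readout(x, memory)
        g = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in gate_w]
        out.append([gi * hi + (1 - gi) * pi for gi, hi, pi in zip(g, h, hop)])
    return out

seq = [[random.gauss(0, 1) for _ in range(d)] for _ in range(6)]      # 6 tokens
memory = [[random.gauss(0, 1) for _ in range(d)] for _ in range(8)]   # 8 stored patterns
gate_w = [[random.gauss(0, 0.3) for _ in range(d)] for _ in range(d)]
y = gated_hybrid(seq, memory, gate_w)
```

The gate lets the model lean on the recurrent (SSM) path for local sequential structure and on the associative (Hopfield) path for pattern recall, rather than committing to either one.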

Older ML Work

πŸ› οΈ Tech Stack

  • ML/AI: PyTorch, JAX/Flax, Transformers, TRL, LangChain, FAISS, ChromaDB, Mamba, Energy Transformers, CALM, Optuna
  • Data Science: R, pandas, statsmodels, mgm, time series, Markov chains, time-varying VAR, time-varying generalized additive models
  • Deployment: cloud GPUs, REST APIs, vector databases

Random fact: curious about Mojo.

πŸ“« Connect

[LinkedIn](will add later) Email


"Technical sophistication alone cannot resolve fundamental tensions in democratic governance"
β€” From my thesis on AI transparency

Pinned

  1. calm-seal - Hybrid Continuous Autoregressive Language Model with self-adaptive alignment mechanism
  2. Hamba - A Hopfield-Mamba inspired hybrid LM, called Hamba
  3. Dynamic-Gated-SSM-Transformer - Mixes SSM layers into vanilla models as a wrapper for longer sequences (Python)
  4. music-gear-chat - RAG chatbot for musicians' instrument manuals and workflow setup (Python)