From Random Forests to RLVR: A Short History of ML/AI Hello Worlds
A timeline of beginner-friendly 'Hello World' examples in machine learning and AI, from Random Forests in 2013 to modern RLVR models in 2025.
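For context on the 2013-era starting point, here is a minimal sketch of what a Random Forest "hello world" typically looks like: Iris classification with scikit-learn. The dataset and hyperparameters are illustrative choices, not taken from any specific post in this list.

```python
# Minimal Random Forest "hello world": classify Iris flowers with scikit-learn.
# Dataset and settings are illustrative, not tied to any particular tutorial.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```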
Analysis of the rising prominence of Chinese AI labs like DeepSeek and Kimi in the global AI landscape and their rapid technological advancements.
A curated list of key LLM research papers from January to June 2025, organized by topic including reasoning models, RL methods, and efficient training.
A tutorial on building a transformer-based language model in R from scratch, covering tokenization, self-attention, and text generation.
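The tutorial itself is written in R; as a language-agnostic companion, the sketch below shows single-head scaled dot-product self-attention in NumPy with randomly initialized weights. It is purely illustrative and not the tutorial's code.

```python
# Illustrative sketch of single-head scaled dot-product self-attention (NumPy).
# Projection weights are random here; a real transformer learns them.
import numpy as np

def self_attention(x, d_k=16, rng=np.random.default_rng(0)):
    """x: (seq_len, d_model) token embeddings -> (seq_len, d_k) context vectors."""
    d_model = x.shape[1]
    W_q = rng.normal(size=(d_model, d_k))
    W_k = rng.normal(size=(d_model, d_k))
    W_v = rng.normal(size=(d_model, d_k))
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax over keys
    return weights @ V                              # attention-weighted mix of values

tokens = np.random.default_rng(1).normal(size=(5, 32))  # 5 tokens, d_model = 32
print(self_attention(tokens).shape)                     # -> (5, 16)
```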
Explains the differences between Machine Learning and Generative AI, with examples and industry applications.
A course teaching how to code Large Language Models (LLMs) from scratch to deeply understand their inner workings and fundamentals.
An introduction to reasoning in Large Language Models, covering concepts like chain-of-thought and methods to improve LLM reasoning abilities.
Explores the critical challenge of bias in health AI data, why unbiased data is impossible, and the ethical implications for medical algorithms.
A technical guide exploring IBM's Granite 3.1 AI models, covering their reasoning and vision capabilities with a demo and local setup instructions.
Explores four main approaches to building and enhancing reasoning capabilities in Large Language Models (LLMs) for complex tasks.
A researcher reflects on 2024 highlights in AI, covering societal impacts, software tools like scikit-learn, and technical research on tabular data and language models.
A curated list of notable LLM and AI research papers published in 2024, providing a resource for those interested in the latest developments.
Introduces Label Studio, an open-source tool for annotating text, image, audio, and video data for AI/ML projects, highlighting its ease of use and features.
The author shares detailed experiences and study tips for passing both the AWS Machine Learning Engineer Associate and the AWS Machine Learning Specialty certification exams.
Explores whether large language models like ChatGPT truly reason or merely recite memorized text from their training data, examining their logical capabilities.
A developer shares their experience taking the AWS Certified AI Practitioner beta exam, covering study methods, key topics, and exam structure.
An animated exploration of UMAP, a state-of-the-art dimensionality reduction algorithm, applied to the classic MNIST dataset of handwritten digits.
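A minimal sketch of the kind of embedding such an exploration is built on, using the umap-learn package and scikit-learn's small 8x8 digits set as a lightweight stand-in for full MNIST. This is illustrative only and not the article's animation code.

```python
# Project handwritten digits to 2D with UMAP and plot the result.
# Uses scikit-learn's 8x8 digits as a small stand-in for MNIST; requires umap-learn.
import umap
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)

plt.scatter(embedding[:, 0], embedding[:, 1], c=y, cmap="tab10", s=5)
plt.title("UMAP projection of handwritten digits")
plt.show()
```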
Analyzing whether a Codenames bot can win using only card layout patterns, without understanding word meanings.
A technical article exploring deep neural networks by comparing classic computational methods to modern ML, using the sine function as a worked example implemented in Kotlin.
Announcing skrub 0.2.0, a library update simplifying machine learning on complex dataframes with new features like tabular_learner.
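A hedged sketch of how the tabular_learner helper is typically used on a heterogeneous dataframe; the dataset fetcher and cross-validation setup are illustrative, and exact defaults may differ across skrub versions.

```python
# Illustrative use of skrub's tabular_learner on a mixed-type dataframe.
# Defaults and dataset fetcher are assumptions; check the skrub docs for your version.
from skrub import tabular_learner
from skrub.datasets import fetch_employee_salaries
from sklearn.model_selection import cross_val_score

dataset = fetch_employee_salaries()    # heterogeneous dataframe X, numeric salary target y
model = tabular_learner("regressor")   # ready-made preprocessing + gradient-boosting pipeline
scores = cross_val_score(model, dataset.X, dataset.y, cv=3)
print("cross-validated R^2:", scores)
```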